The way we feel about brands is changing. Shoppers increasingly see brands not simply as purveyors of products, but as cultures and universes in their own right, with the goods and experiences they sell serving as entry points.

At the same time, brands are also rearchitecting their worlds to be more welcoming. Where once the industry was accused of catering only to a narrow tranche of skin tones and body types, now the planet’s biggest brands are pushing for – or being dragged towards – comprehensive inclusivity, adaptability, and size diversity.

Taken together, these two concurrent forces are setting a clear mandate for fashion brands and retailers to present themselves as being open to everyone. Which, in turn, means offering more complete assortments and more inclusive size ranges that allow anyone to visualise themselves as part of that brand’s universe.

An example of digital try-on produced using Zyler's unique approach.
image credit – zyler

But with this mandate comes a great deal of behind-the-scenes complexity in sizing – where non-linear grading and more complex size ranges are becoming common – and heightened expectations for the customer experience, with a much more diverse set of consumers wanting to try clothing from brands they may not have bought from before. As a direct result of these demographic changes, most apparel businesses believe that the industry’s already-high return rates will continue to grow between now and 2025, compounding its significant struggles with sustainability and overproduction.

The fashion industry has expended a lot of effort in trying to solve these interlinked problems through digital try-ons: allowing online shoppers to select a garment and, in one way or another, experiment with how it’s likely to fit them without ever setting foot in a changing room. These projects were dramatically accelerated by the pandemic – with contactless and eCommerce being the watchwords of fashion during lockdowns – but even before COVID, brand and retail businesses were looking for new ways to not just replicate, but enhance, the physical try-on experience through digital channels.

To date, fashion brands have tackled this in one of two ways: by expanding their traditional product photography workflows to showcase their products on as many different body types as possible, increasing the odds of shoppers finding a model that represents them; or with digital avatars, which use a front and profile photo from a smartphone to synthesise a 3D model of the consumer with the goal of creating precise fit recommendations and exacting visualisations.

Digital try-on using pre-existing product photography can unlock new levels of diversity and inclusion.
image credit – zyler

Both approaches have drawbacks.

Traditional product and lifestyle photography is expensive and time-consuming to scale, and even the most exhaustive approaches to inclusive modelling still hinge on segmentation; it’s impossible for every potential customer to see themselves accurately reflected in photographs of people who aren’t them.

And while at-home body scanning can create very accurate results from fairly limited input, it remains an intrusive process – and one that some senior retail executives have confirmed simply isn’t palatable for their consumers. Shoppers are required to strip down to their underwear to take front and profile photos, a barrier that some will simply refuse to cross. And the resulting 3D model, while precise from a sizing point of view, also reduces a multi-faceted, subjective, emotional decision-making process to cold objectivity.

The use of digital avatars – which has rapidly become the new standard for in-house digital product creation strategies, and is now percolating across to consumer experiences – is also typically pitched to shoppers as being for verification purposes, as a way of identifying whether a garment they are already interested in will objectively fit them or not. Little allowance is made for subjective fit considerations such as the decision to buy oversized clothes, and the experience often does not extend beyond fit validation for a single garment and into exploration of wider assortments.

Machine learning allows for turnkey adjustment of product photography into digital try-on experiences.
image credit – zyler

Crucially, for a digital avatar to “try on” a garment, that garment must also exist digitally. This places a significant burden on the brand or retailer to either design all their products in 3D from the outset, or to scan them in afterwards so that they can be draped on the consumer’s newly-created avatar. Neither is cost-effective, and designing the full complement of products digitally, natively, is a stage that only a vanishingly small number of the world’s brands have reached.

But there is potentially a third way – one that blends the best of both approaches, allowing shoppers to make confident choices based on both the objective and subjective elements of fit, and one that allows the consumer to see themselves reflected precisely, at the individual level, in any brand’s products.

That third way is a route that deep learning company Zyler – a new initiative from a company that’s previously established itself as a leader in AI photo manipulation – is premiering. It allows shoppers to upload a single photo of their head and upper body (in as flattering a pose as they wish) and, after asking them to input some basic measurements, blends pre-existing product photography with that photo. Once the photo is imported, Zyler’s machine learning model matches skin tone, skeletal structure, and a host of other algorithmically-mapped variables to present the shopper with an image of themselves wearing the garment – with no additional input required from brand or consumer.
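To make that flow concrete, here is a deliberately simplified sketch of how a photo-compositing try-on pipeline of this kind might be orchestrated. Every function, type, and value below is a hypothetical placeholder for illustration only – Zyler’s actual model and API are proprietary and not public.

```python
# Illustrative sketch only: a hypothetical photo-compositing try-on flow.
# None of these names correspond to Zyler's real (proprietary) API.
from dataclasses import dataclass

@dataclass
class ShopperInput:
    photo_path: str    # single head-and-upper-body photo
    height_cm: float   # basic self-reported measurements
    chest_cm: float

@dataclass
class BodyFeatures:
    skin_tone: tuple[int, int, int]   # sampled RGB skin tone
    keypoints: list[tuple[int, int]]  # estimated skeletal landmarks

def estimate_body_features(photo_path: str) -> BodyFeatures:
    """Hypothetical stand-in for the learned feature-mapping stage."""
    return BodyFeatures(skin_tone=(201, 168, 140), keypoints=[(120, 80)])

def blend(product_photo: str, shopper: ShopperInput,
          features: BodyFeatures) -> str:
    """Hypothetical stand-in for the learned compositing stage: match skin
    tone and skeletal structure, then merge shopper and product photos."""
    return f"composite({product_photo}, {shopper.photo_path})"

def render_try_on(shopper: ShopperInput, product_photo: str) -> str:
    features = estimate_body_features(shopper.photo_path)
    return blend(product_photo, shopper, features)

# One photo plus a handful of measurements in; a personalised image out.
print(render_try_on(ShopperInput("me.jpg", 170.0, 92.0), "dress_on_model.jpg"))
```

The salient design point is the shape of the inputs and outputs: a single photo and a few self-reported measurements go in, a personalised product image comes out, and all of the heavy lifting is hidden inside the learned stages.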

image credit – zyler

In the same way that other innovations and best practices have recently made their way from other sectors into the apparel industry, Zyler’s approach is built on many years’ worth of technological progress that’s taken place outside fashion. Using generative methods, it has been possible for several years now to create a believable-looking image of a non-existent, synthetic person, and that technology has been deployed in film, advertising, and a range of other applications. Readers of The Interline may be familiar with generative adversarial networks – or GANs – which specialise in producing believable results from large existing datasets. To see this kind of deep learning in action, try refreshing This Person Does Not Exist – a web front-end for a deep learning model that generates photo-real images of people who, as the name suggests, do not exist and, therefore, can’t be photographed.
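For readers who want a feel for the mechanics, the sketch below shows the adversarial training loop at the heart of a GAN in a few dozen lines of PyTorch. It is a toy for intuition only: the generator and discriminator here are tiny fully-connected networks operating on flattened images, whereas production face generators such as the StyleGAN family behind This Person Does Not Exist are vastly larger and trained on millions of photographs.

```python
# Toy GAN training step: a generator learns to synthesise images that a
# discriminator can no longer distinguish from real photographs.
import torch
import torch.nn as nn

latent_dim = 128
image_dim = 64 * 64 * 3  # flattened 64x64 RGB image

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # The discriminator learns to separate real photos from synthetic ones.
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1))
              + loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # The generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One step on a batch of stand-in "real" images scaled to the Tanh range.
train_step(torch.rand(8, image_dim) * 2 - 1)
```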

But while this type of randomised synthesis is, effectively, a solved problem, specificity is much harder. And for virtual try-on to really deliver the results that brands expect – conversions, reduced returns, and new experiences that customers love – the chasm between creating a believable result of “a person wearing a specific item from your assortment” and generating a believable image of “the customer wearing anything from your collection” needed to be overcome.

This is where Zyler have focused their attention, building on foundations already proven elsewhere, and devoting considerable effort to creating a solution that’s uniquely suited to fashion. And the results speak for themselves. The images seen throughout this article were not created by simply overlaying a customer’s photo on pre-existing product photography, or by synthesising a randomised person, but rather by taking the customer’s photo and applying a much more specific machine learning model to it. That model maps and manipulates body structures, limb positions, skin tones, and even surroundings to merge the customer’s photo with the product photography in a way that’s consistent, organic, and – vitally – engaging.
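The conceptual leap from random synthesis to this kind of specificity can also be expressed in code: instead of mapping pure noise to an image, a conditional generator ingests two real inputs – the customer’s photo and the product photo – so its output is specific to that exact pairing. The sketch below shows only the generic conditional pattern; Zyler’s actual architecture is not public, and a production model would be far deeper and trained adversarially, as above.

```python
# Generic conditional-generation pattern: the output image is a function of
# two real input images rather than random noise. Illustrative only.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        # The customer photo and product photo are stacked along the
        # channel axis; the network emits a single composite image.
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, customer: torch.Tensor, product: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([customer, product], dim=1))

# Toy forward pass: two 256x256 RGB images in, one composite image out.
gen = ConditionalGenerator()
customer = torch.rand(1, 3, 256, 256)
product = torch.rand(1, 3, 256, 256)
print(gen(customer, product).shape)  # torch.Size([1, 3, 256, 256])
```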

image credit – zyler

The end results for brands? An uptick in conversion and a significant increase in order value: Zyler’s latest live deployments showcase a threefold increase in click-throughs for emails introducing new products that are personalised to show the individual customer wearing those products. A significant uptick in buying intent – approaching 20% – for products that are displayed using the same personalisation. And a sample size of more than half a million product visualisations that demonstrates that consumers are fifty times more willing to share a product across their social networks if they can do so by showing themselves “wearing” it. The brand equity built through this kind of organic sharing is – to put it bluntly – something that not even the biggest brands can buy.

At the root of these compelling results is the fact that, with Zyler’s approach, consumers are able to make use of both objective and subjective measurements to arrive at not just an accurate depiction of themselves wearing a garment they’ve already selected, but of themselves wearing any garment from the brand or retailer’s current assortment. In live market testing, Zyler has already seen customers spending significantly longer on brand and retail websites, virtually trying on tens – or even hundreds – of styles, and experimenting with garments they might not otherwise have considered.

And the end result for consumers? Greater immersion. Where both traditional product photography and digital avatars place the consumer and the brand at one remove from one another, the ability to put the consumer into the brand’s lifestyle without the effort, expense, and intrusion of 3D scanning brings the two parties as close together as possible outside of physical try-on. And unlike digital avatars, which prioritise precision over emotion, the approach Zyler is already testing in the aforementioned live deployments trades on the same level of trust that a memorable physical retail try-on would create.

What’s more, the same technology can also potentially open several doors in digital fashion, where fitting garments to flat photographs is still mostly a manual task – and one that introduces latency into what should be an instant process. This is something that could be automated by bridging the benefits of avatar-based and photo-based try-on methods, and that could, in turn, unlock some of the unrealised potential of a more turnkey model of digital-only fashion.

So where does virtual try-on go from here? The Interline subscribes to the idea that it should be about much more than just matching shoppers with the objectively “perfect size”. Scientific, avatar-based virtual fitting has reduced returns and will continue to do so, but a different approach could be about to marry a similar level of technical progress with fashion’s drive towards diversity and inclusivity, and brands’ desire to create engaging, memorable experiences that are simple to access for consumers, and easy to deploy for retailers.

There is, to put it another way, a gap in the market for a new method of digital try-on that’s accessible, personal, and captivating in a way that previous approaches – product photography and digital avatars – have not quite managed so far. And if a brand can allow consumers to see themselves in anything, anywhere, at any time, then it will be making tangible progress in all the technical, social, and commercial areas that matter.

About our partner: Zyler helps the biggest brands connect to their customers online. Its patented technology makes your customer the model, generating beautiful, personal results. A virtual dressing room, personalized marketing, ready-made models — Zyler has the solution to all your virtual needs.