There is a tipping point, when a new technology first gets into consumers’ hands, that can make or break the chances of that technology seeing broad adoption. Or, as the old saying goes, first impressions last.
I had a recent, relevant taste of this first-hand, when my phone reminded me of the importance of getting the consumer-facing aesthetics and experience of technology exactly right. A message popped up on my iPhone one day, inviting me to ‘personalise’ an emoji by making its face look like my own. [Apple’s “Memojis” were initially released in 2018 on a small range of iPhones, but became available to a wider audience with the release of iOS 13, in late 2019 – Editor.] For many people, emojis – those tiny cartoon-like images that create a kind of visual shorthand when inserted into text – are part of everyday language. A picture paints a thousand words.
Not being a big emoji user myself, I dismissed the notification at first, but I had to admit that the general idea seemed like a good one: emojis are generic and can be very flat and bland, so wouldn’t it be a whole lot more fun to have one, in animated 3D, with a face that resembled the sender? So, knowing that a large number of people were probably receiving this invitation at the same time I was, I expected I’d soon start receiving an avalanche of ‘personalised emojis’. I’m still waiting.
After the Christmas holidays – normally prime time for emoji-happy communicators – came and went without my receiving a single personalised emoji, I was curious to know why the idea appeared to have bombed. So I looked into it myself and found that the personalisation process was rudimentary, doing only a very basic job of capturing an individual’s actual features. No wonder people weren’t sending personalised emojis: they just weren’t personal enough.
But this experience got me thinking about some of the wider implications of personalisation and identity when it comes to digital representations of us – our avatars. Day to day, I work on fit technology for the fashion industry, so I have a professional interest in the subject of digital representations of people, and I’m especially interested in how giving consumers the ability to personalise those representations might play into the subjective, emotionally driven process of buying clothes.
Everyone knows a lot of clothes get returned, but only we insiders really know how deep the problem goes. Anything from 25% to 70% of all apparel bought online gets returned, depending on the retailer. This has created a race to develop technology that will remedy the main cause of these failed sales: a significant proportion of the clothes bought online simply do not fit the customers who ordered them. Some of the fit technology currently being developed involves a visual simulation of the ‘fitting room’ experience on the consumer’s own phone, tablet or laptop screen. With this tech, prior to purchase, as in a ‘real world’ fitting room, a retail customer can see a garment’s qualities – colour, drape, fit and so on – as ‘worn’ by an avatar that mirrors their own body shape to a strong degree of accuracy.
Right now, those avatars tend to be abstract or based on scans. They are mannequins. Mannequins with the precise measurements of the customer, either faceless, or with faces of unflattering accuracy.
So in this sense, a digital avatar can currently tell you what a piece of clothing looks like on something that’s your shape, but not on something that you actually feel looks like you. Is this limiting the uptake of the technology? Perhaps. It’s certainly easy to assume that, just because a technology is effective in objective terms – which 3D fit simulation absolutely is – it will be widely used for what remains a subjective purpose: buying clothes that feel ‘right’.
Would it make sense, then, for the fashion industry to try and bridge that emotional gap by letting consumers personalise their avatars? I think the Memojis demonstrate that things aren’t going to be that easy.
Putting the tools in customers’ hands to create a personalised avatar for e-commerce fit (or developing tech that automatically creates a more ‘user friendly’ image) is, however you look at it, an extremely serious undertaking. It would be a mistake to underestimate the emotional link that people form with digital representations of themselves: if customers do not relate to the mirror being held up to them, that alone can make the difference between a consumer base that engages with the tech and one that obstinately ignores it. If customers see what they consider an inaccurate representation of themselves as individuals (body and, just as importantly, face), they are unlikely to believe anything else the avatar tells them and will lose confidence in the fit tool. Worse, if they feel the image is unflattering, they may even feel insulted – something that can damage their overall loyalty to the brand that has developed or licensed the technology.
With the kind of system where people create their own image (such as the aforementioned Memojis), what is on offer is a crude selection of hair colours, skin tones, eye shapes and suchlike. Users may then choose between differing facial shapes, hairstyles, glasses, headwear and a few other features. In the past, I’ve looked at fit tools that worked in a similar way, and the resulting avatar was never satisfactory: if anything, it was worse than the faceless mannequins, because it was no longer generic but neither was it recognisable as being enough like the individual who created it.
I think this approach may end up being a dead end. In practice, it’s likely that nothing short of an infeasibly gigantic library of different facial components and styles will get us close to the level of individuality that customers want. And the other logical step – giving customers access to more complex sliders and variables that alter the avatar’s appearance at a very granular level – will prove far too time-consuming to be useful for virtually trying on clothes. Most customers will, in all likelihood, abandon a fit tool rather than expend too much effort customising it.
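To make that trade-off concrete, here is a deliberately toy sketch – in Python, with invented feature counts and timings rather than figures from any real fit tool – of the two customisation approaches: picking from a preset component library versus adjusting continuous morph sliders.

# Illustrative only: the class, feature counts and per-slider timing below are
# assumptions invented for this example, not taken from any real fit tool.
from dataclasses import dataclass
from math import prod

@dataclass
class ComponentLibrary:
    """A Memoji-style 'pick from presets' customiser."""
    options_per_feature: dict  # feature name -> number of preset choices

    def total_combinations(self) -> int:
        # One choice per feature, so the space is the product of the option counts.
        return prod(self.options_per_feature.values())

library = ComponentLibrary({
    "face_shape": 12, "skin_tone": 20, "hair_style": 40,
    "hair_colour": 15, "eye_shape": 10, "nose": 8, "mouth": 8,
})
print(f"{library.total_combinations():,} preset combinations")
# Tens of millions of combinations in raw numbers, yet almost none will look
# like a given individual, because real faces vary continuously rather than in
# a dozen face shapes and eight noses.

# The granular alternative: continuous morph sliders. Expressive, but every
# slider is a decision the customer has to make before trying anything on.
MORPH_SLIDERS = 60        # assumed number of facial morph parameters
SECONDS_PER_SLIDER = 10   # assumed time to judge and adjust each one
minutes = MORPH_SLIDERS * SECONDS_PER_SLIDER / 60
print(f"roughly {minutes:.0f} minutes of manual tuning before a single garment is tried on")

The specific numbers are made up; the point is the shape of the problem. Preset libraries explode in size long before they become personal, while per-feature control shifts the cost onto the customer’s patience.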
So, on the assumption that most people would find an avatar with a too-generic or entirely blank face spooky or unrelatable, the facial visuals of sales avatars are instead being developed from actual images of the consumer, taken during the photographic or body-scanning process that is usually part of this tech. But thinking of this as a simple solution is also naïve: as customer engagement experts will attest, it is a subject fraught with complexity. People can become consumed by finding the “ideal”