The most effective fashion brands have always sold a lifestyle. Marketing opens a door into a different world – one that’s either attainable or aspirational depending on the brand’s market positioning – and invites shoppers to step through.
In the past, that invitee list was narrow: brands sold a particular vision to a specific consumer demographic, whether consciously or unconsciously. And while people outside that demographic might be caught in the vision’s orbit, they were not purposefully courted in the way the brand’s core audience was.
Today, inclusivity headlines every marketing playbook. Recognising that significant segments of the potential market were being shut out of experiences they might otherwise have engaged with, brands are overhauling everything from communications to design to make their lifestyles and their products accessible and appealing to everyone.
But while it’s widespread, this drive for inclusivity is by no means complete. Creating products and experiences that resonate with more demographics is not a case of genericising them. Different customer demographics bring different lived experiences to their side of the table, and the challenge for the brand is to meet them not with a broad-brush approach that paints over everybody at once, but with a strategy that makes the maximum number of people feel individually invited and individually seen.
The opposite – not feeling seen – can take many forms, all of which represent a locked door between the shopper and the brand or retailer. Size ranges that don’t accommodate your body type. Marketing that doesn’t represent your identity. Corporate stances on environmental issues and basic human rights that are out of step with your own.
It’s also not a feeling that I can pretend to have experienced first-hand. I like to think of myself as an advocate of diversity and inclusivity (even if I know I’m probably not nearly proactive enough about it), but equally I recognise that I don’t really know what it’s like to be excluded. That’s because I’m part of a demographic that’s seen, and invited to participate in brand lifestyles, by default.
As a straight, cisgender, white, middle-aged man of median weight and only slightly above-average height, I don’t encounter many locked doors. Most size ranges accommodate me. Most brand marketing includes people who look like me. A growing number of sustainable and mass market brands share my values and allow me to buy with a (relatively) clear conscience. And, crucially, a lot of brand and retail businesses are owned and operated by people whose lived experiences are similar to mine.
But despite being right in the middle of a lot of customer datasets, I definitely do not represent everybody. And there’s a whole lot of everybody out there.
According to United States census data released this month, America’s multiracial population has grown by 276% in the last decade, to nearly 34 million people – in addition to nearly 47 million Black or African American people, 62 million Hispanic or Latino people, 24 million Asian people, and many more. These are huge markets with considerable spending power, and they only represent the racial makeup of a single country. Similar figures could be cited for gender identity, sexual orientation, and many other, equally vital, datapoints in the US, the UK, and other countries.
Between 2010 and 2020, the US Diversity Index jumped six points, a clear trend in a more inclusive direction. Or, to couch this in purely commercial terms: marketing and selling to a single group is a demonstrably less viable strategy than it was just ten years ago. And in case this comes off as something the industry can tackle passively, over time, according to the State of Fashion 2020 report, “more than half of 21-to-27-year-olds in the US believe that retailers have a responsibility to address wider social issues with regard to diversity.” This is not a wave that any business can simply let wash over it.
Those wider social issues will probably also be the engines behind a rapid rewrite of the entire social fabric well within our lifetimes. Ten years may not feel like very long on a personal timeline; on a society-wide timeline it’s extremely meaningful. The last 18 months alone have seen social upheaval and long-overdue reckonings on a massive scale, on both sides of the Atlantic. And in the wake of a world-spanning pandemic and gigantic demonstrations against injustice, a lot of the right questions are now being asked, and the right institutions are being interrogated or held to account.
We live in a time where a lot of tradition is being revealed for what it is: ingrained, institutionalised exclusivity. And while some brands and retailers have been willing to turn over that rock and engage with what’s underneath, the industry’s response at an overall level has, according to independent research, been scattershot.
All of which leads to the inescapable conclusion that, for brand and retail businesses that do not get inclusivity right, the outcome could be worse than just lost potential sales. It could mean landing on the wrong side of history.
When is a people problem not a people problem?
There are, to be clear, many individuals within the fashion ecosystem who are working to make their brands, their workforces, the management of their businesses, and their products more inclusive. Either because they have experienced the opposite for themselves, or simply because they recognise that, commercially and personally, it’s the right thing to do.
But whether we confine ourselves to marketing initiatives, or take in the broader process landscape of product design and development, we quickly realise that many of the systems that contribute to the overall inclusivity/exclusivity profile of a brand or retail business have been taken out of human hands and entrusted to AI. And this is rapidly becoming a problem, because AI is not typically being taught the importance of everything I have just written.
Before we go any further, it’s important to note that this is not a fashion-only problem. AI has very quickly become a background part of our everyday lives. From the neural nets that look for faces and landmarks in our photographs, to generative models that are assisting with the digitisation of materials and the suggestion of new style options, AI-umbrella technologies are in extremely widespread use.
As a case in point, the AI industry is predicted to reach the $300 billion mark this year, taking in both apparent novelties and applications that are fundamentally changing the social fabric in unquestionably positive ways. But many – me included – would argue that the wide proliferation of AI has happened too quickly for society to properly reckon with its implications, and we are now faced with numerous scenarios – from the mundane to the life-altering – where human decision-making happens only after an algorithm has been trusted to make an initial recommendation.
Consider insurance – which may sound a little dull, but bear with me. Headlines were made in 2017 when the first insurance company replaced human underwriters with an algorithm. Fast forward four years, and some of the world’s most influential consultants are discussing how AI has already “rewritten the rules” of life insurance, and how organisations operating in that industry should react.
Just like social change, technological change happens faster than we often realise. Four years may not feel like a huge span of time in your personal life – especially in a world where the best part of two years has vanished from underneath us – but mapped to the timeline of technology’s relentless march, it’s long enough for machine learning to move from edge case to essential component.
Crucially, the key question raised around the time of those initial insurance headlines was never properly addressed before the roll-out continued, leaving the sector scrambling to understand the scale of an impact that has already happened.
Are the models, algorithms, and deep-learning networks being deployed to automate tasks (and often to accelerate them well beyond human capability) fallible? If so, are the models themselves at fault, or are the datasets they’re being trained on, and the goals they’re being given, to blame?
And shouldn’t we have figured all this out before letting them run unregulated?
A growing body of evidence suggests that we should have. This month saw the news that automated hiring software was worsening the current hiring crisis by rejecting qualified candidates before they ever reached an interview – a prime example of an algorithmic approach to automation taking a decision out of human hands and then making that decision incorrectly, such that the human in the chain never gets the chance to vet the model’s recommendation.
And the same blithe trend towards adopting automation without proper scrutiny has charted a similar trajectory in even more damaging areas. Courts began using AI to create sentencing guidelines in criminal trials around the same time – 2017 – with experts and commentators arguing passionately for greater oversight and regulation before the sector was allowed to progress any further. Three years on, the expansion of AI into the justice system was still continuing, and, unsurprisingly, a computer vision model specialising in facial recognition had already contributed directly to the arrest of the wrong person. At least two further wrongful arrests followed, and an ongoing lawsuit alleges that the root cause was an algorithm that had not been properly trained to distinguish the faces of people of colour.
These examples of AI making faulty recommendations (or prescribing incorrect actions) are obviously extreme, but they are also representative of what AI ethics author Brian Christian, in his book The Alignment Problem, calls “putting the world literally and figuratively on autopilot”. Which is a glib way of saying that, whatever the application or the industry, humans are demonstrating a strong appetite to devolve what we see as trivial decision-making to models we believe we can trust – even if that trust is consistently being shown to stand on very shaky ground.
And the fallibility of AI as it’s trained and deployed today is also evident in situations that may not be matters of life-and-death, but which are nevertheless directly perpetuating a lack of inclusivity and diversity in many industries – including ours.
How intelligence is being inadvertently taught to ignore inclusivity.
Fashion is, to put it simply, where insurance was four years ago: at the point where applications of AI are becoming more common, but without the scrutiny that should be applied to ensure that they are fit for purpose and that they are reflecting the modern, diverse face of fashion rather than its exclusionary past.
Those AI applications range from behind-the-scenes assistance with trend analysis and generative design, to consumer-facing experiences like virtual try-on of clothing and cosmetics, and smart sizing recommendations. And in those downstream applications, independent research has begun to uncover accidental, but very real, bias that could already be undermining the inclusive image that brands are currently working to convey:
“I have spent most of my career working with colleagues to design AI systems with accuracy as the foremost objective, but a few years ago we realised that not a lot was being done to make sure that those systems were not inheriting biases and leaving certain groups of people under-represented,” said Parham Aarabi, a professor at the University of Toronto, and one of the founders of AI ethics startup HALT, who I spoke to about the themes of this article.
“When we started testing different systems,” Aarabi went on, “we discovered that there was a lot of bias ingrained in them. And while that’s something we were able to test for after the systems were launched – and then to quantify and measure for different demographic groups – we found that most people were not running these kinds of tests, either before or after their systems launched.”
The work Aarabi and his team are doing with HALT has so far focused on the beauty industry, where the findings have been stark. Following an analysis of the top advertising campaigns of 2021 in the United States, they found that, while racial diversity had improved year over year, the industry’s promotions were still falling short of reflecting the real demographic makeup of the country. Hispanic people appeared in just 7% of adverts despite making up 18% of the population, and a similar discrepancy existed between the presence of Asian people in ads and their share of the US population.
Most tellingly, only 4% of the models featured in those top campaigns were plus-sized or had larger body types – in stark contrast to statistics suggesting that the majority of US women wear plus sizes, and to the overall prevalence of larger body types in the country.
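To make the scale of those gaps concrete, here is a minimal sketch of one simple way to quantify them. The figures are the ones cited above; the “parity ratio” metric (share of adverts divided by share of population) is an illustrative choice of mine, not HALT’s published methodology, and the plus-size population share is an assumption standing in for “the majority of US women”.

```python
# Minimal sketch (not HALT's methodology): quantifying representation gaps
# using the figures cited above. A parity ratio of 1.0 means a group appears
# in adverts in proportion to its share of the population.

AD_SHARE = {"Hispanic": 0.07, "plus-size": 0.04}
POPULATION_SHARE = {
    "Hispanic": 0.18,   # cited above
    "plus-size": 0.50,  # assumption: "the majority of US women" read as ~half
}

def parity_ratio(group: str) -> float:
    """Share of adverts divided by share of population."""
    return AD_SHARE[group] / POPULATION_SHARE[group]

for group in AD_SHARE:
    print(f"{group}: parity ratio {parity_ratio(group):.2f}")
# Hispanic: parity ratio 0.39
# plus-size: parity ratio 0.08
```

Even by this crude measure, plus-size representation trails parity far more dramatically than the racial gaps do.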
In terms of visual content and ads, then, the beauty industry was not as diverse as it needed to be – preferring to promote its products, on average, using slim, white, young models. But what does this mean for AI in fashion?
A lot, once we realise that the non-representative demographic data used to inform these advertisements is, in many cases, the same dataset (or at least part of a common pool of inaccurate customer data) used to train the algorithms that underpin some of fashion’s most prominent AI applications.
“One of the biggest uses of AI in fashion right now is sizing recommendations, and if you do not have a representative dataset when it comes to the real sizing distribution of your target market, there’s the very real potential for those recommendations to be skewed in a way that creates bias that the customer will notice,” explained Aarabi. “And we can extrapolate the same effect anywhere that sizing data is used to train a model, such as virtual try-on. If the clothes you’re allowing your shoppers to simulate on their bodies were designed to fit slim people, the body projection mapping will work well for slim people but fail to fit anyone who doesn’t fit that sizing bracket. And the implications of being invited, as a customer, to try on clothing virtually, only to find that the application does not accommodate you, can be profoundly negative – creating a feeling of being excluded that will work against the brand’s desire for inclusivity.”
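To illustrate the kind of skew Aarabi describes, here is a minimal sketch of a pre-training sanity check: comparing the size distribution in a training dataset against the distribution of the target market. The sizes, numbers, and tolerance threshold are all hypothetical, invented for illustration – this is not any vendor’s actual pipeline.

```python
# Illustrative sketch only: a pre-training check that a dataset's size
# distribution matches the target market. All numbers are hypothetical.
from collections import Counter

def size_distribution(sizes: list[str]) -> dict[str, float]:
    """Convert a list of observed sizes into per-size proportions."""
    counts = Counter(sizes)
    total = sum(counts.values())
    return {size: n / total for size, n in counts.items()}

def flag_skew(train_dist: dict[str, float],
              market_dist: dict[str, float],
              tolerance: float = 0.05) -> dict[str, float]:
    """Report sizes whose training share deviates from the market share by
    more than `tolerance` (an arbitrary threshold for this sketch)."""
    return {
        size: round(train_dist.get(size, 0.0) - market_share, 2)
        for size, market_share in market_dist.items()
        if abs(train_dist.get(size, 0.0) - market_share) > tolerance
    }

train = size_distribution(["S"] * 50 + ["M"] * 35 + ["L"] * 10 + ["XL"] * 5)
market = {"S": 0.20, "M": 0.30, "L": 0.28, "XL": 0.22}
print(flag_skew(train, market))
# {'S': 0.3, 'L': -0.18, 'XL': -0.17} -- slim sizes heavily over-represented
```

A check this simple obviously doesn’t capture everything a production audit would, but it shows how cheaply the most basic skew can be surfaced before a model is ever trained.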
It’s logical to assume, then, that what has been observed in the beauty industry will also apply to fashion. Aarabi and his team at HALT have created a secondary AI designed to test AI systems across industries for precisely this kind of inadvertent bias, by tracking deviations between population-level demographic distribution and how effectively an application caters to those demographics. If, for instance, a virtual try-on application for cosmetics is found to be less effective at recognising and mapping to Hispanic faces than it is for white faces, then HALT’s measurements will help the brand to quantify and address that bias before the system is released to the public.
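In the same spirit, an output-side audit can be surprisingly simple. The sketch below is my own illustration rather than HALT’s actual tool: it runs a deployed application against a demographically labelled test set and flags any group whose success rate trails the best-served group by more than an arbitrary threshold. The group names and numbers are hypothetical.

```python
# Illustrative output-side bias audit (hypothetical data, arbitrary threshold).
# `results` maps demographic group -> (successful mappings, attempts) from a
# labelled test set run through, say, a virtual try-on system.

def audit(results: dict[str, tuple[int, int]],
          max_gap: float = 0.02) -> dict[str, float]:
    """Flag groups whose success rate trails the best-served group by more
    than `max_gap`, returning the size of each shortfall."""
    rates = {group: ok / n for group, (ok, n) in results.items()}
    best = max(rates.values())
    return {
        group: round(best - rate, 3)
        for group, rate in rates.items()
        if best - rate > max_gap
    }

trial = {
    "group_a": (970, 1000),  # 97.0% successful mappings
    "group_b": (890, 1000),  # 89.0% -- should be flagged
    "group_c": (955, 1000),  # 95.5% -- within tolerance
}
print(audit(trial))  # {'group_b': 0.08}
```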
And that timeline – catching bias before release rather than after – is where the most pressing problem is going to lie, especially if fashion follows in the footsteps of other industries and continues to roll out AI applications that have not been properly audited for inclusivity.
Other industries provide cautionary examples of the negative consequences of algorithms that, while never designed to work against inclusivity, wound up exhibiting bias once they were made available to the public. The most famous of these was Twitter’s image cropping algorithm, which had not been trained on a sufficient volume of non-white faces, and consequently went on to preferentially crop images to preserve white faces even when darker faces were more prominent in the image. Shortly after Aarabi and I spoke, HALT received an award in Twitter’s bug bounty for bringing this problem to light, and just days after that Facebook hit the headlines for a similarly biased (but far more concerning) application of computer vision.
It would be very easy to imagine applications in fashion that fall foul of the same accusations of inbuilt bias, unless the industry commits to ensuring that its uses of AI are designed with inclusivity in mind.
The opportunity to act.
As with many things, the best time to address a problem in an AI application is before anyone notices. If you, as a brand, are experimenting with augmented reality try-on, or looking to sell digital-only fashions to be worn using AR lenses, the worst time to discover that those applications don’t accommodate the people you want to feel welcome is when they’re already in the hands of the public.
This creates a clear mandate for the fashion industry to examine not only how it’s using AI, and why, but how that AI is being taught to recognise one of the industry’s (and society’s) most pressing concerns. And that examination is not going to take place if the industry adopts the same laissez-faire attitude to AI that other sectors have, and which has allowed bias to become ingrained in machine learning solutions that are now making some of the most important decisions in the world.
I am not a data scientist, a software engineer, or an ethicist. I just happen to be a technology commentator and analyst whose lifespan has coincided with some of the most significant advancements in machine learning, and who has seen world-changing technology rolled out with almost reckless abandon – in our industry and outside it.
I’m also certainly not proposing that AI in fashion merits anything like the scrutiny that needs to be applied to the uses of algorithms in justice, employment, and other areas. But what I – along with Aarabi and a number of other AI ethics figures and organisations – am suggesting is that fashion could be about to let an opportunity slip through its fingers.
As an industry, fashion is doing a lot – most of it unevenly, but still – to correct past wrongs and to open its doors as wide as possible, for all the reasons contained in the opening of this article. Nobody in this industry is purposefully designing a computer vision model that favours lighter skin, or a body mapping application that only works on slim people, but nevertheless, if those models are fed incomplete data, and left unsupervised, then our chance to encourage greater inclusivity in AI will be missed.
With any new technology, the simplistic question has always been whether it will be used for good or ill. With AI, things are much more nuanced; a huge amount of power has been uncorked, and globally it’s being used to develop new medical treatments, support the creation of COVID vaccines, and also allow deadly drones to automatically acquire and kill targets overseas.
Between those two extremes sit the (comparatively) mundane applications of AI: back-end process automations, and front-end customer experiences that, despite being more prosaic, are still going to influence the way creators and consumers feel about AI, and how that reflects on the brands who use it.
The challenge for fashion, then, is to figure out what that means for us – an industry that seeks to align itself as closely as possible with consumer sentiment, and in many great cases to proactively drive change. Can we design systems that accurately reflect the world we want to see? In theory the answer is yes, but practically speaking there’s still work to be done to ensure that those systems are built with inclusivity in mind, rather than accidentally contributing to the kind of exclusivity that fashion wishes to leave behind.
“AI has become such a fundamental part of our everyday lives that sooner or later everyone is going to run into a situation where an algorithm makes a decision that impacts us,” Aarabi concluded. “If there are unconscious biases that go into that decision-making process, then bias will be what the end user perceives, and over time those biases can become ingrained and further the idea that there is only one type of look that people should be trying to attain. That, to my mind, can quickly become quite damaging, so the best thing the fashion industry can do would be to shed a light on it before it reaches that stage. Because diversity in representation, and universal access to the same tools and the same experiences, is what allows people to feel comfortable in their own skin.”
And that, for me, is what fashion should be all about.