Penned by The Interline’s Editor-in-Chief, this article was originally published in the first-ever DPC Report 2022. For more on digital product creation in fashion, download the full DPC Report 2022 completely free of charge and ungated.


Every original pixel in the images accompanying this post was made by AI. The products they depict don’t exist, but the datasets used to train those AIs probably included real, IP-protected products in a way that’s impossible to unpick.

Take the sneakers, for example. Do they incorporate elements that you recognise from one or more real brands? If so, that might be because the machine learning models that created them ingested key characteristics from existing footwear (i.e. real product photography or renders gleaned from web scraping) as part of their training.

The immediate reaction: how big of a problem is that, ethically and legally speaking, and should (or can) anything be done about it? The deeper question: what implications do accessible generative AI models have for an industry where creativity and originality are everything?

Before we get too far into that quagmire, though, what exactly is generative AI?

In its most barebones form, it’s a specialised deep learning model that has been fed a large corpus of data, and then encouraged to create novel, acceptable results through a cycle of generation (raw output) and automated and manual discrimination (testing those raw outputs against a desired outcome, and eliminating the ones that don’t pass muster).

When that training process is complete, you’re hopefully left with a model you can prompt to produce something new in its narrow, specialised field, and you will receive a result that is believable or useful as an example of a deliverable in that field. The AI has generated something new, hence the label “generative”.
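
For readers who want to see the shape of that cycle in practice: the generate-and-discriminate loop described above is the classic generative adversarial setup (today's headline image models are diffusion-based rather than adversarial, but the training intuition carries over). Below is a minimal, illustrative PyTorch sketch – every network size and the stand-in data are toy assumptions, not any production model.

```python
import torch
import torch.nn as nn

# Toy sizes, assumed purely for illustration.
LATENT_DIM, DATA_DIM = 16, 64

generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)
    noise = torch.randn(batch, LATENT_DIM)

    # Discrimination: score real samples high, raw generated outputs low,
    # eliminating the ones that don't pass muster.
    d_opt.zero_grad()
    fake = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generation: nudge the generator so its next outputs fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# e.g. training_step(torch.randn(32, DATA_DIM)) with stand-in "real" data
```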

In most real-world applications, it takes the form of an open text field: you type in a request, and moments later you receive a result that makes you stop, think, and question how this all came so far, so fast. It’s a feeling a lot of people have had over the last twelve months or so, as generative AI made a major splash in the visual arts.

From the simple (“a painting of a tree against a blue sky”) to the hyper-specific (“oak tree + [matte blue background] + [fine natural realistic textures, photorealistic, ambient occlusion, cinematic light, ray tracing, 4k, Octane, redshift, Colour Grading]”), plain text inputs are now being translated into incredible-looking images across a broad spectrum of artistic and photoreal styles by engines like DALL-E 2, Midjourney, and Stable Diffusion, in huge volumes every day.
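
Of the engines named above, only Stable Diffusion is openly available to script against. As a hedged illustration of how little code now sits between a plain-text prompt and a finished image, here is a minimal sketch using Hugging Face's open-source diffusers library – the model identifier and settings are reasonable public defaults, not a fixed recipe.

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released Stable Diffusion v1.5 weights
# (several GB downloaded on first run; identifier may change over time).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The same plain-text prompting pattern described above, simple or hyper-specific.
prompt = ("oak tree, matte blue background, fine natural realistic textures, "
          "photorealistic, cinematic light, 4k")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("oak_tree.png")
```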

(The same approach is also being taken to text, voice, animation and more, but for the purposes of this article I’m focusing on static images.)

Made with Midjourney, DALL-E 2, and other AI tools

The odds are good that you’ve heard of at least one of these generative services, since they broke into the headlines numerous times in 2022 by winning art contests (with the use of AI undisclosed) or appearing at the heart of landmark copyright debates.

And if you haven’t yet had the opportunity to experiment with using an AI service to generate images for yourself, the barrier to entry has never been lower. In addition to Midjourney and DALL-E exiting their invite-only phases and becoming open to the general public – for a fee, as cloud services – the Stable Diffusion model has been publicly released and packaged to run on consumer hardware.

If that still sounds like too much uncharted territory, Microsoft will happily put you on the waitlist for their turnkey Designer service (which has a generative component that runs on DALL-E 2), and popular creative platform Canva have already rolled out a similar service to their paying users (running on Stable Diffusion).

Suffice it to say that, despite only truly escaping the bottle this year, this is not a genie that will be going back in. AI-generated visual content is here to stay, and it’s something that fashion will need to reckon with sooner rather than later.

But several unanswered questions hang over the promise of anyone, anywhere being able to bring their creative fashion ideas to life by simply typing them. First, just how easy is it to get good results using AI? And second, who does the output actually belong to? Do they remain your creative ideas – or mine – if a generative model designed them with minimal human intervention, and used potentially copyrighted works from other creatives in the process?

The first question is comparatively simple to answer. Unless you get lucky, it is not especially easy to get amazing results from an AI – even with the release of “v4” of Midjourney. The images that appear in news stories about the threat posed by AI art are very carefully selected from top-rated community works; they do not show the many, many dead-ends that the prompter and the model hit along the way. And while rare images do pass detailed scrutiny as being “real”, it’s still a routine occurrence to find parts of the whole that do not fit: hands with too many fingers, architectural lines that lead to nowhere, or facial characteristics that fall into the uncanny valley.

Made with Midjourney, DALL-E 2, and other AI tools

As with art created using traditional tools, the final output often fails to tell the story of the failures that occurred along the way.

So can you expect to sign into Midjourney and start instantly generating brilliant, cohesive, internally-consistent designs for apparel, footwear, or accessories? You might, but the odds are against you. The likelihood is that you will need to put in work.

How much work? It will vary depending on your objective. If you’re looking to create quick inspirations to then take into a traditional concept process, the right output could come fairly quickly. If, as we tried to do with these images, your aim is to create artificial product photography of a collection of non-existent products that are believable at-a-glance, you can expect to spend much longer. And the chances are that you will need to employ the services of several different AIs.

Our workflow for each of these images was the same. The products themselves were created by Midjourney, using the version 4 modifier and a granular set of inputs that were adjusted over hundreds of requests. Those square images were then exported and taken into DALL-E 2, where the canvas was broadened using AI-outpainting to extend the environments, again with tens or hundreds of generations per finished image. The final composition was then tidied up using more run-of-the-mill machine learning in a popular photo editing package to inpaint errors and fix visible seams and colour joins, before colour grading and lighting were tweaked by hand.
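
Midjourney and DALL-E 2 are closed services, so their internals can't be shown, but the outpainting step in this workflow can be approximated with the publicly released Stable Diffusion inpainting model. A rough sketch, assuming a square product render on disk, and using the diffusers convention that white areas of the mask are repainted – the filenames and prompt are invented for illustration:

```python
# pip install diffusers transformers torch pillow
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# "Outpainting": paste the square product render onto a wider canvas,
# then ask the model to fill everything the mask marks in white.
product = Image.open("product_square.png").convert("RGB").resize((512, 512))
canvas = Image.new("RGB", (768, 512), "white")
canvas.paste(product, (128, 0))

mask = Image.new("L", (768, 512), 255)               # white = repaint
mask.paste(Image.new("L", (512, 512), 0), (128, 0))  # black = keep the product

result = pipe(
    prompt="studio product photography backdrop, soft ambient light",
    image=canvas,
    mask_image=mask,
    height=512,
    width=768,
).images[0]
result.save("product_extended.png")
```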

I would not describe the final results as fully believable, but these specific images emerged as the best outputs from our experiment because they most closely resembled real products that look as though they could be manufactured and worn, and they came the closest to fitting a consistent aesthetic. And although it took a meandering road and multiple different solutions to get there, that’s not so different to the circuitous route that a lot of 3D assets take to reach the end user, from geometry to texturing to staging and rendering. The fashion industry is already accustomed to using many different tools to get the job done.

But even though the amount of time and effort involved is sometimes quite a bit more than AI art’s reputation would have you believe, the stark fact remains that it’s eminently possible to ask one or more AIs to design and present footwear and apparel concepts for you, and to stage them in environments and settings that would be cost- and time-prohibitive to achieve any other way. And provided you’re comfortable committing the time to learn how best to wrangle those models to produce the desired output, you will eventually get results you can use for concepting, building mood boards, inspiring material choices, innovating on existing designs, and much more.

WHAT ARE THE LEGAL RAMIFICATIONS OF AI HAVING SO RAPIDLY REACHED THE STAGE WHERE CONCEPT GENERATION ISN’T AN EDGE CASE, BUT AN EVERYDAY OCCURRENCE?

So what about the second question? What are the legal ramifications of AI having so rapidly reached the stage where this kind of concept generation is not just a feasible edge case, but an everyday occurrence?

To be clear: this is probably the most potent example of technology outpacing regulations and expectations to have occurred in my lifetime. I wrote a piece last autumn about the ways that different types of bias (conscious and unconscious) become ingrained in artificial intelligence models. And although I was dissecting the broad swathe of ethical and philosophical problems that come from blithe adoption of AI, back then it hadn’t even occurred to me that an AI could be quickly taught to clone a commercial artist’s entire style. Now that’s reality.

This is how fast things are moving.

Are brand design languages all that different from other creative fingerprints? Is it wild to consider that a model could be trained, in very short order, to create genuinely believable new footwear or apparel in a particular brand’s style? Not that long ago it would have been. Today? Definitely not. In fact it’s likely to be only a matter of time.

And while any physical products that emerge from this kind of genesis will eventually be tamped down by traditional anti-counterfeiting and brand protection tools, we also (not coincidentally) happen to be entering an era where brands are working to sell virtual creations and to engage independent designers and creators to build communities around those assets. How long until AI design works its way into that ecosystem? And once it has, how quickly will it change things, and how hard will it be to excise? Based on what’s taking place in the art community, the answers are fast and impossible, respectively.

These are hypothetical (if likely) examples, of course, but they raise a common concern: that nobody really knows who owns the output of AI art services, and that neither legislative bodies nor society at large have had the time to consider the possibilities before being confronted by them.

Tellingly, AI art services themselves are largely ducking the issue. Canva currently suggest that this is “an open question” with “no easy answer”. And Midjourney – which pools every image created using its model into a community category for searching, remixing and re-use – has established an enterprise pricing bracket that gives corporates the chance to exclude their generated images from that pool, effectively giving those top-tier subscriptions a way to sidestep the issue by not having to reveal that their images were made by AI at all.

In the medium term, things are likely to become even more complex. The landmark copyright case I mentioned earlier is now being walked back and qualified, with the potential outcomes being that artists making use of generative AI in their works may have to disclose that fact, as well as needing to prove a “degree of human authorship” in order to exercise their right to claim ownership.

Quite where the line between majority-human work and majority-AI work will be drawn is difficult to predict, and could potentially stray into areas where machine learning has been accepted (such as retouching photos). Is human authorship implied by a long, iterative cycle of crafting text prompts? Is AI authorship guaranteed if the first prompt is effective enough that a further set isn’t required? If the person creating the prompts takes the results and uses more well-established machine learning tools, like inpainting, to improve the final image, does that threaten their claim to own the final output?

Made with Midjourney, DALL-E 2, and other AI tools

With all that uncertainty looming, it certainly seems as though the sensible approach for any brand would be to wait and see how the legal situation develops, but turning away from AI entirely also means shutting the door on some genuine behind-the-scenes use cases. A lot of creative time is spent on moods, concepts, and ideas, and AI excels at rapidly generating all of these things. And as dramatic as the power of ideating and experimenting in 3D can be (hence this publication), it still requires training, time, and knowledge to an extent that prompting an AI just doesn’t. For quick visualisation of a fully-staged, entirely novel idea that can then be taken up as inspiration for product design and development the same way a concept board would be, there is very little that can match the speed and unbridled generative scope of AI.

Practically speaking, brands are unlikely to want to continue using off-the-shelf, general solutions like Midjourney for long, but the same principles and the same machine learning models could be trained on a much narrower set of brand-specific data to ensure that the results they generate are on-model more often than not.
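
One accessible route to that kind of narrow, brand-specific training is textual inversion: teaching an open model a new placeholder token from a small set of brand-owned images, without retraining the whole network. A sketch of the usage side with diffusers, assuming a hypothetical pre-trained embedding – the token and filename here are invented for illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical: "brand_style.bin" is a textual-inversion embedding trained
# in-house on a narrow, brand-owned image set, bound to the token "<brand-style>".
pipe.load_textual_inversion("./brand_style.bin", token="<brand-style>")

# The learned token can then be used like any other word in a prompt.
image = pipe("concept sneaker in <brand-style>, studio product shot").images[0]
image.save("on_model_concept.png")
```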

Overall, then, how big of a deal do we think generative AI is going to be for fashion? Big enough that we let it design this inaugural report’s front cover (although we went with a more abstract result, for obvious reasons) and big enough that I would not be at all surprised to see creative designers, merchandisers, and other job roles adding it into their workflows by the time we next produce one of these reports.

In the slightly longer term, there is also a strong chance that generative AI could become an integral cog in the overall DPC ecosystem, even if it doesn’t generate usable digital assets itself. The same way that companies have trusted in the optimisation algorithms that power automated nesting and improve their material yields, they could soon place the same degree of trust in generative AI as a way to kickstart the creative process and complement workflows that were previously manually-intensive.

It feels almost naive to talk about generative AI in this way, though. Platforms and solutions that make use of it are certainly going to land in brands’ and retailers’ digital product creation toolkits, but compartmentalising the whole spectrum of possibilities ignores the fact that we’re currently living through a genuine tectonic shift in where the burden of effort and invention lies between person and machine.

In the past, I think a lot of the hand-wringing around whether AI is going to devalue human craft and creativity has been overblown. I don’t feel that way about generative AI. And there remains an open question, for me, as to whether it’s likely to turn creativity into a commodity in a destructive way, or put power in the hands of people who have ideas but not necessarily the manual skills to realise them.

Either way, though, the intelligent bet right here, right now, is to figure out how to incorporate generative AI into both your personal toolset and your brand’s digital-native workflow before it finds its seat at the table some other way. Because if the latter happens, it’s going to flip the table much faster than any of us are ready for.


DISCLAIMER: For the avoidance of doubt, The Interline makes no claim of ownership to the images contained in these pages. They were created using Midjourney and DALL-E 2 from extensive prompting and re-working, and are subject to the same uncertainty that governs all AI-generated creative works at this point in time.