(Header artwork by Eduard Caliman, made with Unity ArtEngine.)

Digital materials are critical components of the 3D digital product creation and product visualisation workflows that brands and retailers of all shapes and sizes are currently building. But material digitisation can be costly, complex, and deeply involved – none of which is what anyone wants to hear at a time when designers desperately need ways to rapidly fuel their creativity, rather than reasons to learn a new discipline.

So the question becomes: how can brands quickly bring their ideas to life, and visualise their materials and products in 3D without the high costs and steep learning curve? For some, the answer may lie in adopting an example-based workflow, supported by tools like Unity ArtEngine that leverage AI and machine learning to give creative designers an easy on-ramp to skills and processes that have traditionally been kept out of their reach.

Before we look at the potential of machine learning to open material digitisation to a new audience, it’s worth first imagining what the perfect material digitisation workflow might look like. It might start with immaculate cuts of fabrics already held in inventory being digitised at source, with a top-flight scanner, at 8K archival resolution – then being used in an end-to-end 3D ecosystem that runs all the way from concept design to downstream customer experiences. What a beautiful world this would be.

In reality, it’s a lot harder and costlier to actually build that sort of workflow, so a lot of fashion brands are starting from much simpler beginnings. Small, ten-centimetre-square swatches of fabrics they can’t readily source in larger volumes. A less capable scanner, or even a smartphone camera. And a 3D strategy that, so far, has extended only to in-house experimentation.

Image provided by Eduard Caliman, made with Unity ArtEngine.

For these brands, the barriers to high-fidelity material digitisation are high – from the prohibitive price of the most accurate material scanners, to the need to turn imperfect analogue fabric samples into pristine digital ones. Far from being ready to accomplish the ideal, a lot of design teams would currently be satisfied with a quick and easy way to visualise what a material they’ve found in the real world – possibly from a less-than-optimal source – would look like on a style they’re creating digitally.

One alternative to digitising physical materials is, of course, to flip the script: to author or procedurally generate materials in-house, rather than starting with a physical swatch or roll and capturing it.  This is a totally valid approach, but it’s not without its drawbacks.  Authoring new materials in a procedural environment can quickly get complicated – making it a task for dedicated 3D material artists rather than creative apparel or footwear designers who are already being stretched by the requirement to work in 3D, remotely.  And because the materials begin life as digital entities, designers are absolutely reliant on their mills, weavers, and suppliers being able to recreate their composition and characteristics physically if their vision is going to come to life.

This adds a layer of abstraction to a process that, for the most common purposes at least, is about making sure the digital reflects the real as closely as possible.

For critical design tasks like rapidly visualising a new style, or building a line or collection plan, an example-based workflow – starting with the physical fabric swatch – is the logical one.  And the goal is clear: to empower creative and technical designers to bring real-world materials from their desks, from online image searches, and from the streets around them into their digital product creation cycles.  But those time, cost, and complexity challenges are still in the way.

Image provided by Eduard Caliman, made with Unity ArtEngine.

So it’s hardly surprising that many brands have filed material digitisation away as being beyond their grasp, and settled for sourcing exclusively from digital material libraries. But like a lot of problems that seem insurmountable, it’s worth asking if there might be another way around. And in the case of digital materials in fashion, AI and machine learning could be the answer – just as they have been for other industries with a similar need to visualise their products quickly.

Before we get to that answer, though, it’s important to make sure we’re asking the right question: what do you need this particular digital material for? Because the cost of digitisation can be brought down through hardware choices before we even start to apply software to the problem.

For quick-look visualisation and validation of a vision, a lower-fidelity capture – even a single image from the web or a smartphone – could be enough, especially when we see how machine learning can erase many of the limitations of relatively low-detail capture. For consumer-facing product visualisation, and for products that will be displayed prominently in print or digital advertising, methods that capture more surface detail, such as photometric stereo or photogrammetry, will be necessary.
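Of those higher-detail methods, photometric stereo is simple enough to illustrate: photograph a flat swatch several times under different, known light directions, and per-pixel surface normals fall out of a least-squares solve. The sketch below is the textbook Lambertian formulation in NumPy, and is purely illustrative; production scanners and commercial pipelines are considerably more sophisticated.

```python
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """Classic Lambertian photometric stereo (Woodham, 1980): recover
    per-pixel surface normals from three or more photos of a static swatch,
    each lit from a different known direction.

    images: (k, h, w) grayscale captures, one per light
    lights: (k, 3) unit light-direction vectors
    returns: (h, w, 3) unit surface normals
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)          # stack pixels: (k, h*w)
    # Lambertian model: I = L @ g, where g = albedo * normal for each pixel.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, h*w)
    g = g.T.reshape(h, w, 3)
    albedo = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.clip(albedo, 1e-8, None)       # normalise to unit length
```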

Both approaches to photoconversion share the common objective of generating physically based rendering (PBR) materials, and capturing all the requisite material characteristics for photoreal representation, but the cost and time involved can be dramatically different.
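For context, a PBR material isn’t a single image but a set of texture maps that together describe how a surface responds to light. The exact map set varies by renderer and workflow, but a common metallic-roughness layout looks like the sketch below (the filenames are hypothetical).

```python
from dataclasses import dataclass

@dataclass
class PBRMaterial:
    """A common metallic-roughness PBR map set. Map names and channel
    packing vary between renderers, so treat this as illustrative."""
    base_color: str         # albedo texture (sRGB)
    normal: str             # tangent-space normal map (linear)
    roughness: str          # microsurface roughness (grayscale)
    metallic: str           # metal/dielectric mask (grayscale)
    height: str             # displacement/parallax map (grayscale)
    ambient_occlusion: str  # baked self-shadowing (grayscale)

# Hypothetical output of digitising a denim swatch:
denim = PBRMaterial(
    base_color="denim_albedo.png",
    normal="denim_normal.png",
    roughness="denim_roughness.png",
    metallic="denim_metallic.png",
    height="denim_height.png",
    ambient_occlusion="denim_ao.png",
)
```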

Once you have decided on the method of data input, what then? Rather than seeing the captured material as the end result, let’s consider how machine learning can take that raw input and create not just better-quality single materials, but an entirely new possibility space for experimentation and AI-assisted artistry.

At this point you may be thinking that if material digitisation is out of your grasp, then machine learning must be a step further. In fact, AI-aided workflows like those enabled by the standalone, interoperable Unity ArtEngine application are now as accessible and intuitive as other off-the-shelf 3D solutions aimed at a fashion audience. Indeed, machine learning’s real potential lies in its ability to make tools and workflows that have traditionally been off-limits to creatives – locked behind hundreds or thousands of hours’ experience and a near-vertical learning curve – more accessible, more intuitive, and less daunting. Today, with the help of templates and easy-to-grasp node-graph workflows, great results are just minutes away, and available to non-technical artists.

Take a lower-quality material scan from a smartphone as an example.  Or even an image captured from the web, or from a low-resolution texture archive.  That input can be increased in fidelity through neural network “up-resing” to achieve a higher-resolution result, which can also automate the removal of blurring, compression artifacts, warping, and hard shadows.  
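For a sense of what “up-resing” means structurally: most learned super-resolution networks extract features with convolutions, then rearrange channels into extra pixels with a sub-pixel (pixel-shuffle) layer. The PyTorch skeleton below shows that shape; it is untrained and illustrative only, and is not ArtEngine’s actual network.

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """Skeleton of an ESPCN-style super-resolution net: conv layers learn
    features, PixelShuffle rearranges channels into a higher-res image.
    Untrained here, so purely illustrative of the architecture."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale**2, kernel_size=3, padding=1),
        )
        self.upsample = nn.PixelShuffle(scale)  # (C*s^2, H, W) -> (C, sH, sW)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(x))

# A 256x256 crop becomes 1024x1024 after the learned 4x upsample.
out = TinySuperRes(scale=4)(torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 1024, 1024])
```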

The same degree of AI automation can be applied to removing the seams that occur when a small sample of a physical material is digitised and then needs to be tiled. And for materials where tiling and repeats aren’t necessary, such as large pieces of leather, machine learning mutation can instead expand a small material sample to fill target dimensions at a desired resolution.
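To see why seams appear in the first place, there is a standard diagnostic: wrap the texture by half its width and height, so the edges that meet when it tiles land in the centre of the frame. The NumPy sketch below (with hypothetical filenames) does just that; actual seam removal, whether by hand-cloning or an ML node, is far more involved than this check.

```python
import numpy as np
from PIL import Image

def expose_seams(path: str) -> Image.Image:
    """Shift a texture by half its size with wraparound, so the edges that
    meet when the texture tiles land in the middle of the image. Any visible
    cross-shaped discontinuity is the seam a tiling workflow must remove."""
    tex = np.asarray(Image.open(path).convert("RGB"))
    h, w = tex.shape[:2]
    wrapped = np.roll(tex, shift=(h // 2, w // 2), axis=(0, 1))
    return Image.fromarray(wrapped)

expose_seams("swatch_scan.png").save("swatch_seam_check.png")
```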

Image provided by Eduard Caliman, made with Unity ArtEngine.

But how good can the result really be? And how does it compare to a native 8K asset?
“ArtEngine has hugely impacted my workflow by allowing me to quickly and effortlessly process my scans in a new AI-assisted way. As an example, ArtEngine’s Seam Removal node performs mutations in a completely revamped way that well surpasses all the other software on the market that I have tested. When this is done manually in other applications, one is often able to tell where a brush has been used when trying to make something tileable,” says Eduard Caliman, who runs an architecture visualisation studio in the UK. Eduard uses ArtEngine to digitise physical fabric samples sent by his clients, and he values both the efficiency and the quality improvements that an AI-assisted digitisation pipeline can provide.

In workflows such as this, where high-quality materials matter, but high-quality scans aren’t always available at the point of design or development, the turnkey ability to optimise, refine, and extend the materials you have to hand can be transformative. And for brands that are in the early stages of their digital product creation journey, AI automation can allow designers to focus on the creative uses of digital materials, rather than needing to immerse themselves in their creation.

Flokk chairs, rendered in Unity with Unity ArtEngine materials.
Images provided by Piotr Bieryt / Forte Digital, for Flokk. Made with Unity ArtEngine.

But machine learning’s potential in digital materials is not limited to quality-of-life improvements, or to quick and easy scanning. With a higher-quality initial input as the source, a neural network like the ones used in Unity ArtEngine can become a playground for innovation. As well as extending material dimensions, mutation can also be used to generate variations; combined with smart colour matching and manipulation, a single input can create a wide range of possibilities from colour to texture – all without manual intervention.

“The best feature in ArtEngine is the Mutation Structure node,” says Piotr Bieryt, 3D Artist at Forte Digital, who leveraged ArtEngine and a photometric stereo workflow to create digital twins for furniture company Flokk for use on their consumer-facing website. “It’s magic actually. I can’t even quantify the amount of time it saves us versus other solutions because other tools don’t have the same ability to intuitively and automatically add subtle variation. ArtEngine recognizes each small part of the fabric so that we can, for example, scan 10x10cm samples and easily make them look like much larger samples, say 1x1m at 8K, with no visible repetition.”
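To appreciate what structure-aware mutation has to achieve, compare it with the crudest possible expansion: tiling an output canvas with patches sampled at random from the small source swatch. The sketch below (with hypothetical filenames and sizes) does exactly that, and the hard patch borders and visible repetition it produces are precisely the artefacts that smarter, ML-driven synthesis exists to avoid.

```python
import numpy as np
from PIL import Image

def naive_expand(src: np.ndarray, out_h: int, out_w: int, patch: int = 64,
                 rng=np.random.default_rng(0)) -> np.ndarray:
    """Fill an (out_h, out_w) canvas with patches copied at random from a
    small source texture (which must be at least `patch` pixels on each
    side). No blending or structure matching: the hard borders this leaves
    are what smarter synthesis must eliminate."""
    h, w = src.shape[:2]
    out = np.zeros((out_h, out_w, 3), dtype=src.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            sy = rng.integers(0, h - patch + 1)   # random source offset
            sx = rng.integers(0, w - patch + 1)
            ph, pw = min(patch, out_h - y), min(patch, out_w - x)
            out[y:y+ph, x:x+pw] = src[sy:sy+ph, sx:sx+pw]
    return out

src = np.asarray(Image.open("leather_10cm.png").convert("RGB"))
Image.fromarray(naive_expand(src, 2048, 2048)).save("leather_expanded.png")
```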

The same approach to manipulation and experimentation can be applied to procedurally generated and authored materials, for brands that decide to bring material creation skills in-house, and even to materials sourced from digital libraries. As products move downstream, towards consumer-facing visualisation, machine learning can assist in cleaning up materials and helping to present products in their best possible light in virtual photography and staging platforms such as Unity Forma. And as fashion begins to make deeper use of real-time engines such as Unity, and to explore the intersection of videogames and digital fashion, designers will be able to easily import a material from the real world and apply it to a character or digital model, confident that machine learning can automate the digitisation process and provide a high-quality end result.

Most importantly, as more apparel and footwear brands begin to forge ahead with 3D design and digital product creation, the need to get on the ladder of material digitisation is only going to increase, but significant barriers to entry still remain.  If machine learning can live up to its potential, making material digitisation as accessible to untrained artists and creative designers as it is to experienced material authors, then it has a vital role to play in breaking those barriers – and in fuelling further digital transformation for brands and retailers of all shapes and sizes.

About our partner: Unity is the world’s leading platform for creating and operating real-time 3D (RT3D) content. Creators, ranging from game developers to artists, architects, product designers, filmmakers, and others, use Unity to make their imaginations come to life.

Unity ArtEngine is a standalone application that takes physical scans of real-world materials and processes them into high-quality textures that are ready for use with 3D models. ArtEngine helps artists across industries to bring assets to life.