For our DPC Report 2026 fashion cover we set out to commission an artist whose credentials sat squarely in the middle of that Venn diagram: bringing 3D products and scenes to life as part of a cohesive narrative, and pushing the frontiers of integrating and experimenting with different tools to achieve an end result.
Julian Blockschmidt is a Berlin-based 3D artist, animator, and generalist whose work spans industries – from music to footwear – and who has recently made strides in applying his style and his toolkit to fashion, including a campaign with Adidas and Foot Locker.
We tasked him with applying this multi-sector experience to the goal of bringing a fully 3D streetwear scene to life in a way that captures dynamic motion in a series of stills… a tall order!
Blockschmidt’s pipeline is also a testament to the cutting edge of full digital workflows for product creation, visualisation, animation, and rendering. To create this year’s fashion cover, Blocki worked with CLO, the latest point release of Unreal Engine (5.6), the MetaHuman platform, Adobe Substance 3D, Rokoko’s performance capture tools and more.
Alongside his own perspective on how each of those tools contributes to the level of fidelity and world-building he set out to accomplish, we have also captured perspectives from the people behind two of those tools: Sallyann Houghton, Senior Business Development Manager at Epic Games, and Alex Kim, Software Product Manager at CLO Virtual Fashion.
As anyone who spends time in the DPC community will know, there are a lot of “polymaths” in the space: people who are designers, animators, texture artists, environment artists, and so on. And while the narrower, more focused returns on investment we expect to define the future of DPC will lead to more specialisation within large organisations, 3D generalists have a lot to teach the industry about how they think, how the tools are evolving, and why 3D matters.
The Interline: Blocki, walk us through your career as a 3D artist. What was your first brush with 3D as a creator, and what have been the big unlocks – either technical, cultural, or in your learning journey – since then?
Blockschmidt: I started creating under the name Blockschmidt in 2020, initially working in a purely digital but two-dimensional space. At the time, I was creating 2D collages, piecing together album covers and magazine-style visuals – blending existing photographic material. All of this was just a hobby, and I enjoyed that phase, but after a while, it started to feel limiting. I was creating something new, and I still like collage as a medium, but the process was always based on someone else’s imagery, and that began to bother me.
Around that time, a 3D colleague and friend, Chris Deroy, animated the original Blockschmidt logo I’d created for my website. Seeing that animation pushed me in a new direction. What really stuck with me was not just the look of it, but how much control he had over the entire process – and it was just a little animation, moving pixels, but it was all new to me, since my prior work had been static 2D visuals. That filled a creative gap I had been feeling with my collages, but couldn’t really put into words before. With that initial drive, I asked him how I could learn this as well, and he told me to sign up for an online Blender course. That’s what I did, and I jumped into 3D with zero prior knowledge. That beginner phase was important. It taught me how to move through digital space, how objects, light, and materials work together, and how a very basic 3D workflow is structured. Back then, I would never have imagined that this path could lead to directing a cinematic sneaker campaign for Adidas and Foot Locker or creating and coordinating mainstage visuals for Splash!, Germany’s biggest hip-hop festival. Looking back, it brings me joy to see the development from those basic first renders to where I am now.
One of the biggest early unlocks for me was realising that my motivation was not only tied to the fun of working with digital art tools, but to the desire for full creative ownership. This desire was based on wanting to tell stories through 3D animation. After that first Blender course, I added another class on digital avatar modeling. I didn’t finish this class; I quit when the sculpted avatar had to be retopologized. I shared my frustration with Marcikola, another 3D colleague, and he suggested that DAZ3D would let me create human avatars while eliminating tedious workflow steps. And because my digital humans also deserve to wear cool outfits, I started integrating CLO3D into my workflow for garment creation and simulation.
Ultimately, clothing was no longer just something I added at the end; it became an integral part of my focus and workflow.
At the end of 2022, I began looking for more organic ways to bring those garments and characters to life, and discovered Rokoko’s motion capture tools. Capturing my own body and facial movements and transferring that data onto digital avatars created new possibilities and new opportunities for me.
A major unlock came through a collaboration with Arseni Novo on the Spotify visualizer for A$AP Rocky’s RIOT. He directed the visualizer, and I contributed digital avatars with motion-capture-driven body and facial animation combined with garment simulations. That project not only gave me confidence that the pipeline I was building could work at an industry level, but it also opened new doors with a dream portfolio project like this in my pocket. From there, things truly started to build momentum. The A$AP Rocky project led to garment visuals for Champion’s Eco-Future campaign and numerous other capsule streetwear visuals. Around that time, I pivoted more towards stage context, creating stage visuals for tlinh, OVO’s Roy Woods, and Sido’s arena tour.
At the beginning of 2024, I upgraded my workflow and committed to diving into Unreal Engine. From there, I continued refining my pipeline by combining CLO3D with Unreal Engine and MetaHuman workflows, opening up new possibilities for real-time iteration. That workflow was put to a real test for Splash! Festival 2024. I coordinated the mainstage visuals and created a Splash!-specific MetaHuman. Using CLO3D together with Unreal through the LiveSync plugin, I produced MetaHuman-based merchandise animations for the mainstage. Seeing the pipeline hold up and the visuals come to life with 30,000 people on the ground was a personal highlight.
In early 2025, the Megaride campaign was released on Instagram as a collaboration between Foot Locker and me. I directed this Adidas x Foot Locker shoe campaign, focusing on a cinematic, storytelling approach within a commercial context. This project was fully assembled and rendered in Unreal.
Between then and my answering these questions in December 2025, Splash! Festival went into a second project season, I contributed stage visuals for Coachella, and I had the pleasure of delivering numerous keynotes within the 3D and event industries, from Unreal Fests to CLO Summit.
Now I have the pleasure of wrapping up 2025 with this interview and the visuals created for the report. What has remained consistent throughout all of this is the initial spark that pulled me toward 3D. I still feel it today, especially when I hit technical walls, which often become the starting point for the next unlock. Across that timeline, I’d argue that simply staying at it made the most difference for me – that, plus building your network and surrounding yourself with people who share the same passion.
The Interline: When did you start to gravitate towards working in 3D specifically for fashion? What was the catalyst?
Blockschmidt: I started gravitating toward fashion in 3D around 2021, when I began working with CLO3D. The initial catalyst was a desire to create visuals for artists whose work inspired me. I started building human character look-alikes and dressing them in garments that reflected specific aesthetics. One example was translating the visual language of Ye during his DONDA album era, where I recreated his masked outfits in 3D animation.
At the same time, I was not booked commercially for footwear projects, so I decided to create my own. I designed and visualised a Blockschmidt shoe as a personal project, treating it as if it were already part of a real campaign. Years later, that work became a direct reference for the Adidas x Foot Locker collaboration, after the right people had seen this body of work. From there, the shift happened organically, with brands beginning to ask me to visualize capsule collections and fashion concepts in 3D.
Looking back, fashion was never a planned pivot. I do not have a formal fashion background beyond the passion I developed along the way. It emerged naturally from my interest in artists, identity, and digital characters, and from the desire to translate cultural aesthetics into three-dimensional visuals.
The Interline: For previous DPC Report covers, we’ve focused on the construction and creation of clothing and footwear in relative isolation. This year we chose to mirror what we see in the wider direction for 3D strategies in-industry, and to lean further into contextual storytelling – staging digital products in a way that demonstrates the flexibility and the power of using 3D assets to capture brand and product narratives. What makes 3D stand out when it comes to brand and product storytelling?
Blockschmidt: To me, 3D animation stands out in brand and product storytelling because it sits at the intersection of realism and imagination. With digital twins, you can achieve a level of material accuracy and detail that feels real and trustworthy, while still having full control over context, timing, and perspective. Products are no longer isolated objects. They can exist inside worlds that reinforce their identity and narrative.
Personally, I am very drawn to fidelity and detail, whether they are so pronounced that they immediately stand out, or so well crafted that they almost go unnoticed. Both approaches can be powerful, and both can leave a lasting impression. I make use of twisted realities in 3D, since they allow me to move seamlessly between these states: from an extreme close-up of a shoe where materials and surface behavior feel tangible, to a wider narrative moment where a MetaHuman leaps downward in slow motion while the environment shifts around it. Elements like directional sunlight can evolve, creating a sense of motion and atmosphere that blends realism with something slightly unreal.
That balance is what makes 3D such a strong storytelling tool. We see how digital humans experience their world, and we link it to our own view. Furthermore, 3D avatars ground products in something familiar, while the surrounding world can subtly bend reality without breaking credibility. The result is not just a showcase of assets, but a visual experience that challenges expectations, creates emotion, and allows brands to tell stories that feel both believable and unbelievable at the same time.
The Interline: What were the inspirations behind this year’s fashion cover? What story were you aiming to tell through garments, accessories, and environment?
Blockschmidt: The inspirations for this year’s fashion cover grew out of both my ongoing work and the initial moodboard and direction provided by The Interline team. Building on that foundation, I refined the moodboard by curating the garments more deliberately and developing the setting through an alley-based environment. The garments’ focus shifted toward a gorpcore-inspired aesthetic, blending technical outdoor gear with everyday streetwear and emphasizing functionality, layering, and performance within an urban context.
“One of the biggest early unlocks for me was realising that my motivation was not only tied to the fun of working with digital art tools, but to the desire for full creative ownership.”
BLOCKSCHMIDT
A key reference point was the Blockschmidt MetaHuman I originally developed for the Adidas x Foot Locker Megaride trailer. I saw this as a strong opportunity to bring that character back in a new project, placing him in a different world while keeping his core identity intact. The cover also allowed me to update the avatar using newer Unreal Engine features, refining details while preserving what makes the character recognizable.
For the environment, the direction leaned toward a nighttime urban setting. I based the scene on an alley, using its depth and wall arrangement to naturally frame the viewer’s gaze toward the hero character. Details such as Blockschmidt and The Interline graffiti tags help establish context while reinforcing the urban visual language.
Gorpcore aesthetics and urban environments already coexist, making the alley setting feel intuitive rather than contrasting. Hence, the mix of outdoor garments and streetwear reflects how these pieces are worn in everyday city contexts. Garments, accessories, and color were used deliberately to support this narrative. A fitted inner layer combined with a longer outer layer emphasizes movement and functionality. Gloves and running sneakers ground the performance look, while cream tones separate the character from the darker environment. Green acts as a secondary accent, with subtle red highlights adding contrast.
Overall, the cover depicts a character in motion, where urban environment and outdoor-inspired fashion naturally intersect to form a cohesive digital fashion narrative. Details like the performance nose bandage and the flushed cheeks link the MetaHuman to the depicted story within that gorpcore performance context. Environmental details like the puddles on the ground link back to the functional outfit, and a wet-map treatment on the digital lens completes this train of thought, grounding the scene in a lived-in, performance-driven space. A slight Dutch angle heightens the sense of motion, while the MetaHuman’s gaze into the key light creates a focused, cinematic moment within the frame.
The Interline: We also set a challenge for you: to capture the essence of movement in a still image. Tell us how you approached it, as someone who’s previously worked with both traditional animation and performance capture.
Blockschmidt: To approach the challenge of capturing movement in a still image, I combined elements from both traditional animation and motion capture workflows and adapted them to my needs. For me, movement feels most convincing when it is grounded in something physically plausible, even if the final result is a single frame.
For the body, I started with a dynamic running animation from Epic’s Game Animation Sample project, which I retargeted onto the MetaHuman. Rather than using the motion at full speed, I deliberately slowed it down and cleaned the finger poses. That shift in timing allowed me to focus on transitional moments within the movement, such as a sidestep or change in direction, where the body is slightly off-balance. Those in-between states often carry more energy and tension than a perfectly clean pose, which also translates onto the garment’s behavior.
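As a rough, tool-agnostic illustration of that retiming step – a minimal sketch with hypothetical data, not Blockschmidt’s actual setup – slowing a baked clip down amounts to scaling keyframe timestamps before resampling the curve:

```python
# Minimal sketch: retiming baked keyframes to half speed before
# resampling. Purely illustrative - not tied to any specific DCC API.
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float   # seconds
    value: float  # e.g. one channel of a joint rotation

def retime(keys: list[Keyframe], speed: float) -> list[Keyframe]:
    """Scale keyframe times; speed=0.5 plays the clip at half speed."""
    return [Keyframe(k.time / speed, k.value) for k in keys]

def sample(keys: list[Keyframe], t: float) -> float:
    """Linearly interpolate the retimed curve at time t."""
    if t <= keys[0].time:
        return keys[0].value
    for a, b in zip(keys, keys[1:]):
        if a.time <= t <= b.time:
            w = (t - a.time) / (b.time - a.time)
            return a.value + w * (b.value - a.value)
    return keys[-1].value

run_cycle = [Keyframe(0.0, 0.0), Keyframe(0.5, 45.0), Keyframe(1.0, 0.0)]
slowed = retime(run_cycle, speed=0.5)  # the clip now spans 2 seconds
print(sample(slowed, 1.0))  # 45.0 - the mid-stride pose lands later
```

The point of the sketch is the one he makes above: slowing the clip stretches out those off-balance, in-between states so a single frame can land on them.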
On top of that body motion, I incorporated facial performance capture using a Rokoko headcam, working with Unreal Engine 5.6’s mono video ingest to extract facial capture data. This added a subtle layer of expression and effort to the character, reinforcing the sense of physical exertion without overpowering the still image. Once the underlying motion was established, it became the driver for the garment simulations. The slowed-down movement created the right conditions for clothing to react naturally to shifts in weight and direction, allowing fabric tension, drag, and flow to sell the sense of motion.
To further enhance this feeling, I explored motion blur directly within the digital camera settings. By experimenting with slower shutter speeds (longer exposures), I created softer, dreamlike motion blur in some of the additional renderings, while keeping the final cover image more restrained. The goal was to freeze motion in time while leaving just enough blur to suggest movement, without losing the clarity needed to showcase garment details and facial expression. The result is a still image that feels like a paused moment within a larger action, inviting the viewer to imagine what came just before and what follows next.
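The physics being exploited here is compact enough to sketch: a streak’s length is roughly screen-space velocity multiplied by exposure time, and cinema’s shutter-angle convention maps that exposure back onto the frame rate. The figures below are illustrative, not taken from the project:

```python
# Minimal sketch of the exposure/blur trade-off described above:
# the streak a moving object leaves is roughly its screen-space
# velocity multiplied by the exposure time. Illustrative numbers only.

def shutter_angle(shutter_speed_s: float, fps: float = 24.0) -> float:
    """Cinema convention: 360 degrees = shutter open for the full frame."""
    return 360.0 * shutter_speed_s * fps

def blur_length_px(velocity_px_per_s: float, shutter_speed_s: float) -> float:
    """Approximate streak length in pixels for a linearly moving subject."""
    return velocity_px_per_s * shutter_speed_s

# A sprinting character crossing the frame at 1200 px/s:
for shutter in (1 / 250, 1 / 48, 1 / 12):
    print(f"1/{round(1 / shutter)} s -> {shutter_angle(shutter):5.1f} deg, "
          f"{blur_length_px(1200, shutter):6.1f} px of blur")
```

At 1/250 s the subject stays crisp (under 5 px of blur); at 1/12 s the same motion smears across roughly 100 px – the restrained-versus-dreamlike range the answer describes.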
The Interline: Both covers for this year’s report make extensive use of MetaHumans. In the case of the beauty cover, the focus is very much on facial fidelity, but for this cover we aimed to try and showcase dynamic motion, which hinges much more on the ability to dress a character in clothing created in one of the major 3D platforms, and on the way that clothing then moves as the character animates. How did you approach creating a character that fit your intent here? And what was the process of styling them, knowing that simulating movement was a key objective?
Blockschmidt: Overall, the approach was about reintroducing the Blockschmidt MetaHuman within a new context, while evolving it to serve a different narrative goal. My intent here was to create a MetaHuman that could carry energy through the body and translate that energy into believable garment behavior. Knowing that the hood would be up and that the face would be the main point of contact, expression, lighting, and subtle performance cues became especially important.
Styling was used to reinforce that performance-driven intent. Details such as the performance-style nose bandage and slightly flushed cheeks help communicate exertion and movement. The face was lit with a warmer key light, balanced by a cooler fill, to create focus and depth within a single frame. To avoid clipping during simulation, I removed the eyes and smoothed the facial regions and ears in Blender, ensuring clean deformation once the hood and garments were in motion.
From a pipeline perspective, character and clothing were treated as one connected system. The animated running MetaHuman was sent into CLO3D via the LiveSync plugin, where the outfit was styled, colored, textured, and simulated directly on the moving character. Safety pins were used in CLO3D to keep the hood in place during dynamic motion and prevent it from sliding off. The garments were then brought back into Unreal Engine as USD, where I further tweaked materials and surface behavior.
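For readers unfamiliar with that USD hand-off, a garment export can be inspected programmatically before material work begins. The sketch below uses Pixar’s pxr Python bindings; the file path is a hypothetical stand-in, not a project file:

```python
# Sketch: inspecting a garment USD export before material tweaks.
# Requires Pixar's USD Python bindings (pxr); the file path is a
# hypothetical example, not an actual project asset.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("exports/hooded_outfit.usd")  # hypothetical path

for prim in stage.Traverse():
    # Simulated garment layers come across as mesh prims.
    if prim.IsA(UsdGeom.Mesh):
        mesh = UsdGeom.Mesh(prim)
        points = mesh.GetPointsAttr().Get()
        print(f"{prim.GetPath()}: {len(points)} points")
```

One appeal of USD in a round-trip like this is that scale, hierarchy, and material bindings survive the hop between applications, which is exactly what Blockschmidt credits it with later in the interview.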
Layering played a key role in making motion readable. A fitted inner layer paired with a longer outer layer allowed the clothing to react clearly to changes in direction, with front and back zippers revealing the underlayer as the outer layer glides away during a sidestep. To keep the setup efficient, socks were not added as a separate garment layer but painted directly onto the exported MetaHuman body using Substance Painter. This required additional material adjustments, including tweaking the scatter map to remove subsurface scattering in the ankle region, so the Blockschmidt monogram socks behave like fabric rather than skin.
I also adjusted the opacity map in the master body material to eliminate minor clipping artifacts that can occur with dynamic garment simulations. Finally, Substance Painter was used to blend the Blockschmidt base skin textures with Unreal’s MetaHuman textures and scanned assets from The Scan Store, adding another layer of surface fidelity alongside the nose bandage.
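To make the scatter-map tweak concrete, here is a minimal sketch of the underlying operation – suppressing subsurface scattering wherever a mask marks the painted-on socks, so those texels shade like fabric rather than skin. It uses Pillow and NumPy with hypothetical filenames, rather than Substance Painter itself:

```python
# Sketch of the scatter-map idea: zero out subsurface scattering under
# a sock mask so the painted-on socks read as fabric, not skin.
# Filenames are hypothetical; uses Pillow + NumPy.
import numpy as np
from PIL import Image

scatter = np.asarray(Image.open("body_scatter.png").convert("L"), dtype=np.float32)
sock_mask = np.asarray(Image.open("sock_mask.png").convert("L"), dtype=np.float32) / 255.0

# Where the mask is white (1.0), suppress scattering entirely.
adjusted = scatter * (1.0 - sock_mask)

Image.fromarray(adjusted.astype(np.uint8)).save("body_scatter_socks.png")
```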
The result is a character where motion, garments, and material response are inseparable, allowing the still image to communicate performance, realism, and narrative rather than just appearance.
The Interline: Sallyann, putting together this scene relied both on Blockschmidt’s own multi-hyphenate, hybrid skillset, and on the position that Unreal Engine has at the core of a growing ecosystem for real-time tools. From your perspective, what does it mean to offer useful integrations between your core engine and the widening suite of digital product creation tools, as well as expanding the interoperability, availability, and utility of platforms like MetaHuman in a way that serves real industry use cases?
Sallyann Houghton: At Epic, we want real-time workflows to feel genuinely usable, which means Unreal Engine can’t stand alone. It needs to sit within an ecosystem where assets, characters, environments, and tools move easily between each stage of creation. Strong integrations and interoperability are what make that possible.
In fashion, this is especially important. Designers work across multiple applications to sketch, simulate materials, and visualise collections. By connecting Unreal Engine, MetaHuman, RealityCapture, Fab, and leading DCC tools, we help those steps function as one continuous pipeline rather than a patchwork of separate processes.
MetaHuman shows this in practice. By making high-fidelity digital humans more accessible and compatible with industry tools, tasks that once required specialist skills or long renders become far easier. That’s why creators like Blockschmidt can bring characters, garments, and environments together so fluidly.
Ultimately, integration is about meeting real industry needs — reducing samples, supporting virtual collections, improving collaboration, and enabling new types of experiences. As real-time becomes more central to fashion’s workflow, our role is to offer tools that are open, connected, and ready for production.
The Interline: Blocki, in a wider sense, walk us through your pipeline for this whole project – from the initial assets through to the final pixels. What different solutions and assets needed to work together to create the vision you had in mind? And how did you finalise the resulting render?
Blockschmidt: For this project, my goal was to build a flexible pipeline rather than reinvent every asset from scratch. I built on existing solutions where it made sense, allowing me to focus on direction, integration, and refinement. Leveraging high-quality assets from other skilled artists is an important part of how I work efficiently without compromising the final result.
The process began with asset curation. Garments were sourced from the CLO-SET platform, where I selected a cargo pant and jacket as a base and then adapted them to fit the character and performance. Additional elements, such as gloves, were sourced through Fab, while the shirt and sneakers were matched to pieces I already owned and had worked with before.
The MetaHuman was developed in Unreal Engine 5.6 using the updated MetaHuman Creator, increasing overall facial and skin fidelity. In parallel, the environment was built from an alleyway scene sourced on Fab, which I then customized with graffiti elements created in Adobe Photoshop and additional details like puddles to ground the scene.
Once the assets were in place, everything came together in Unreal Engine. The MetaHuman was animated using retargeted body motion and facial capture recorded with a Rokoko headcam. The animated character was sent into CLO3D via LiveSync, where the garments were assembled, colored, textured, and simulated directly on the moving body. Embroidered logo details were developed with Substance tools before the simulated garments were sent back into Unreal as USD and aligned with the character and environment.
Lighting and camera setup were handled directly in Unreal. I built a custom lighting rig with a warm key, cooler fill, additional lights for garment detail, a rim light for separation, and a soft overhead source to control exposure. Camera settings such as focal length, aperture, shutter speed, motion blur intensity, lens flare, bokeh, and subtle wet-map effects were used to shape the final look.
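Those camera choices rest on standard thin-lens optics. As a hedged illustration – with made-up values, not the scene’s actual rig – the depth of field implied by a focal length and aperture can be estimated like this:

```python
# Sketch: standard thin-lens depth-of-field maths behind focal length
# and aperture choices like those described above. Values illustrative.
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance for a full-frame circle of confusion (~0.03 mm)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm: float, f_number: float, subject_mm: float,
                  coc_mm: float = 0.03) -> tuple[float, float]:
    """Near/far limits of acceptable sharpness around the focus distance."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# An 85 mm portrait lens at f/2.0, subject 3 m away:
near, far = dof_limits_mm(85, 2.0, 3000)
print(f"sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
```

With those illustrative numbers, only about 15 cm around the subject stays sharp – the kind of shallow focus that isolates a hero character from an alley backdrop.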
The final images were rendered as ACES EXRs and finished in DaVinci Resolve, where color, illumination, and contrast were refined to match the intended mood. Overall, the pipeline brought together CLO3D, Unreal Engine, MetaHuman, Substance, Rokoko, and DaVinci Resolve.
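One reason to render linear ACES EXRs is grading headroom: highlight detail survives until a display transform rolls it off. As a stand-in for the far more capable Resolve grade, here is Krzysztof Narkowicz’s widely used ACES filmic approximation applied to hypothetical linear pixel values:

```python
# Sketch: why linear ACES EXRs leave grading headroom. Applies
# Krzysztof Narkowicz's widely used ACES filmic approximation to
# linear values - a stand-in for, not a replica of, a Resolve grade.
import numpy as np

def aces_approx(x: np.ndarray) -> np.ndarray:
    """Narkowicz ACES filmic curve: linear scene light -> display [0, 1]."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

linear = np.array([0.05, 0.18, 1.0, 4.0, 16.0])  # hypothetical pixel values
exposure = 1.5                                   # a grade-time exposure tweak
print(aces_approx(linear * exposure))            # highlights roll off, not clip
```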
“3D animation sits at the intersection of realism and imagination — products are no longer isolated objects, they can exist inside worlds that reinforce their identity and narrative.”
BLOCKSCHMIDT
The Interline: Alex, as well as direct skills and core software capabilities, this project pulled on a lot of community and integration threads: the LiveSync bridge between CLO and Unreal Engine, for example, and the breadth and licensing structures of the CONNECT marketplace side of CLO-SET. There’s a lot of extensibility at work there, but it’s important to remember that, although 3D is showing up in more different ways than ever, each of these should be grounded in producible reality and accurate product data. How do you think about building and maintaining that common foundation between all the different places that 3D garments can show up?
Alex Kim: You’re absolutely right. As 3D content continues to surface across an increasing variety of channels, it’s essential that each instance is rooted in accurate product data and reflects a reality that can be manufactured.
CLO is built on this principle. The digital garments created in CLO are not just visual representations but precise digital twins. They capture construction logic, material behavior, and fit with high fidelity. This ensures that wherever the garments are used—whether in Unreal Engine, digital marketplaces, or virtual showrooms—they all remain connected to a single, reliable source of truth.
Instead of creating separate standards for each output channel, we believe the best way to maintain consistency is to let everything originate from the CLO asset itself. This way, extensibility becomes a strength, offering flexibility that is always grounded in validated, production-ready content.
The Interline: Blocki, what was your physical setup for this project? What local hardware were you running? What input device(s) did you use?
Blockschmidt: For this project, I worked across two local setups: a primary desktop workstation and a high-performance laptop, switching between them depending on the task and stage of production. The main workload was handled on my desktop workstation, running an AMD Ryzen 7 5800X, 64 GB of RAM, and an NVIDIA GeForce RTX 3080 Ti with 12 GB of VRAM. This machine was used for the heavy lifting, including MetaHuman development, garment simulation, lighting, and final rendering in Unreal Engine. I work across multiple SSDs for active projects and larger drives for archiving, which helps keep iteration responsive even with complex scenes.
Alongside that, I used a laptop equipped with a 13th Gen Intel Core i9-13980HX, 32 GB of RAM, and an RTX 4090 mobile GPU with 16 GB of VRAM. This setup allowed me to stay flexible and parallelize tasks. In practice, I could color grade and review renders in DaVinci Resolve on the laptop while the desktop was busy rendering, which sped up decision-making toward the final stages of the project. For facial performance capture, I used a Rokoko headcam, feeding directly into the Unreal Engine pipeline.
And while the laptop is already running an RTX 4090 mobile and holding up extremely well on the go, I’m still quietly hoping that Santa might consider an RTX 5090 for the main workstation. After this year’s workload, I’d say I’ve been reasonably well behaved.
The Interline: The implications of a project like this one, that focuses on the bleeding edge of 3D staging and storytelling, go further than the traditional “DPC community,” and stretch into a much broader range of in-house, partner, and consumer audiences. What do you believe it’s going to take for fashion in particular (as well as other product-centric industries) to extend the reach of the work they’ve done in 3D so far to the places that other industries have already gone?
Blockschmidt: I think fashion has already started to shift in how it handles and reuses 3D visuals and assets. While the industry has opened up pre-production processes through tools such as CLO3D, it remains largely resource-intensive and, in many cases, wasteful. That reality can’t be ignored. At the same time, I do see a positive direction emerging, especially in how fashion is beginning to understand the value of universal, cross-pipeline, multi-use 3D assets.
Looking at other industries, the transition from physical to digital has already arrived at the center of pop culture and the broader zeitgeist. Artists like Playboi Carti and Kim Kardashian existing inside the Fortnite universe are clear examples. It no longer feels unusual to spend real money on digital garments, investing in a playable avatar and an online persona. In-game assets have become culturally accepted, allowing people to express identity and value within digital worlds.
We’re also seeing the inverse direction gain momentum, where digital assets move back into the physical world through 3D-printed elements and digitally driven production methods. Garments and accessories that originate in 3D can now be prototyped, customized, or produced physically, reinforcing the idea of 3D as a continuous loop rather than a one-way process. This is where fashion and other product-centric industries can learn most from gaming, especially through cross-connections with entertainment and music that create immersive, narrative-driven experiences.
At the same time, consumers increasingly want more than just a product. They want a story they can step into, something that links their physical life to a digital experience. In my own work, I digitized festival merchandise for Splash! and translated those pieces into 3D animations that were showcased on the festival’s mainstage screens, turning fashion into a shared live moment. Similar approaches can be seen in 3D-scanned footwear experiences on mobile devices. The line between physical and digital identity is already blurred and will continue to blur further, and I’m genuinely excited to see how fashion continues to expand the reach and relevance of its 3D work.
The Interline: As someone who’s worked in 3D for other media, and who’s specifically worked with cross-industry tools for this project, what opportunities do you see for fashion to use 3D as a way to authentically collaborate with other sectors?
Blockschmidt: For me, the biggest opportunity lies in treating 3D as a shared language rather than a fashion-specific tool. I realized while answering some of the earlier questions that I had already started touching on this by accident, which says a lot about how naturally these ideas connect in practice. Having worked across music, live events, and entertainment, I’ve seen how powerful it becomes when different sectors meet on the same technical and creative ground. When teams work from shared digital assets and real-time environments, collaboration feels fluid instead of fragmented.
Fashion is in a strong position here because garments and products are inherently expressive and adaptable. Through 3D, a single asset can live across multiple contexts without being rebuilt. In the Megaride project, for example, the same 3D shoe asset used for the online website campaign was also worn by the MetaHuman in the visual campaign experience. That continuity allowed the product to exist consistently across platforms while adapting its role and storytelling to each environment.
Ultimately, I see 3D as a way for fashion to collaborate more authentically with other industries and participate more actively in culture. When 3D is introduced early and shared across creative, technical, and experiential teams, assets can travel naturally between fashion, music, entertainment, and digital spaces. When those collaborations are built on shared systems rather than surface-level integrations, 3D becomes a meaningful connector that enables richer stories, broader reach, and more immersive experiences.
The Interline: Finally, from a technical perspective, what do you want to see being developed further in the end-to-end 3D pipeline for fashion?
Blockschmidt: From a technical perspective, I would like to see the gaps between tools continue to close across the entire pipeline, especially between garment creation, character animation, and real-time usage. Plugins like LiveSync have already paved the way for how I work today. My Unreal Engine and CLO3D projects would not exist in their current form without that bridge. Cross-software connections like this make it significantly easier to move from creative intent to execution, and I hope to see them developed even further.
Garment simulation in motion is one of the most important areas for continued growth. We are already seeing strong progress, especially with GPU-accelerated simulation and cinematic-quality cloth behavior inside CLO, which opens up new possibilities for faster workflows. Tighter integration between animation data and cloth simulation, particularly for layered garments and fast, directional movement, would make performance-driven fashion work more predictable and robust. The closer the simulation stays to the character’s actual motion data, the more convincing the result becomes.
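To make concrete why simulation fidelity tracks the motion data driving it, here is a toy Verlet-style particle step of the kind cloth solvers build on – a teaching sketch, emphatically not CLO’s solver – where a “fabric” particle pinned to an animated attachment visibly trails its motion:

```python
# Toy sketch of the principle: a cloth particle pinned to an animated
# attachment follows that motion through simple Verlet integration.
# A teaching example, not CLO's (or any production) solver.
import math

GRAVITY = -9.8
DT = 1 / 60

def verlet_step(pos, prev, accel):
    """Position Verlet: new = 2*pos - prev + accel*dt^2 (per axis)."""
    return tuple(2 * p - q + a * DT * DT for p, q, a in zip(pos, prev, accel))

def pin_to(anchor, pos, rest_len):
    """Hard constraint: keep the particle within rest_len of the anchor."""
    dx = tuple(p - a for p, a in zip(pos, anchor))
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    if dist <= rest_len:
        return pos
    s = rest_len / dist
    return tuple(a + d * s for a, d in zip(anchor, dx))

pos = prev = (0.0, -0.1)
for frame in range(60):
    anchor = (frame * DT * 2.0, 0.0)        # a shoulder moving at 2 m/s
    new = verlet_step(pos, prev, (0.0, GRAVITY))
    prev, pos = pos, pin_to(anchor, new, rest_len=0.1)
print(pos)  # the particle trails the moving anchor - fabric "drag"
```

Feed this kind of solver a retimed or mismatched anchor path and the lag changes character entirely, which is exactly the coupling between motion data and cloth behavior the answer is asking tool vendors to tighten.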
Asset exchange and consistency are equally critical. Leaning further into USD-based workflows is a major step forward, and it is encouraging to see this direction reflected in developments like LiveSync 2.0. Stable data exchange helps preserve material intent, scale, and hierarchy across tools, reducing the need for manual fixes at later stages.
More broadly, I would like to see more built-in, artist-focused tools that reduce technical overhead while increasing real-time quality. When systems communicate clearly and reliably, creatives can focus on storytelling, performance, and design rather than technical problem-solving. The strong partnership between CLO and Unreal Engine is a great example of this direction, and I’m genuinely happy to be working at a point where MetaHuman cinematic-quality fashion pipelines are becoming a practical reality rather than an exception.
