Key Takeaways:
- This week, Roblox introduced Cube 3D: a foundational generative AI model designed for creating 3D objects and environments from text prompts, both on and off the platform. The model is open-source, allowing developers and researchers to fine-tune and expand its capabilities, including usage outside of the Roblox ecosystem. Also this week, Chinese multinational technology conglomerate Tencent made its own 3D generation models both more widely available and more powerful.
- While gaming and other digital-heavy industries are adopting generative modelling to accelerate asset creation and reduce production costs, fashion’s biggest challenge isn’t speed, but trust. The industry still struggles with the perception that digital representations lack the fidelity needed for decision-making, meaning AI-generated assets may deepen existing scepticism rather than solve core problems.
Generative modelling (text-to-3D and image-to-3D AI models) is making waves across 3D design this week, and depending on your role in the fashion and footwear ecosystems, your reaction could range from excitement to concern. It is an undeniably cool technology; there is also a very diverse set of potential takes on whether it can really add value.
For those who rely on digital assets – whether in marketing, merchandising, or other downstream functions – the ability to have 3D objects created for them, quickly and at scale, might seem like a breakthrough: faster turnaround times, reduced costs, and an expanded creative pipeline with comparatively minimal manual effort.
But for 3D designers, digital artists, and those who have championed 3D adoption over the years, the use of generative AI modelling tools may well be raising some alarm bells.
If you work directly in 3D design or simulation, or you head a department or business unit where 3D adoption has been a long-term journey – one characterised by a lot of change management and cultural push and pull, and where the hands-on challenges are clearer – you might be thinking that, far from making things more streamlined, generative 3D models could deepen an existing challenge: appreciation for the craft of 3D. They could also further stretch the already finite supply of trust that other teams are willing to place in 3D assets.
The catalyst for this soul-searching? This week, Roblox introduced Cube 3D: a foundational generative AI model designed for creating 3D objects and environments from text prompts, both on and off the platform. The model is open-source, allowing developers and researchers to fine-tune and expand its capabilities, including usage outside of the Roblox ecosystem.
In their official press release, Roblox puts it this way: “Imagine building a racetrack game. Today, you could use the Mesh Generation API within Assistant by typing in a quick prompt, like “/generate a motorcycle” or “/generate orange safety cone.” Within seconds, the API would generate a mesh version of these objects. They could then be fleshed out with texture, color, etc. With this API, you can model props or design your space much faster—no need to spend hours modelling simple objects. It lets you focus on the fun stuff, like designing the track layout and fine-tuning the car handling.”
The big draw the company is pitching to developers, artists, and other 3D experts is that the AI could speed up the creation of simpler and non-critical objects, giving creative teams more time for game design and refinement.
The Interline does have some concerns with this line of reasoning (environment artists aren’t usually the ones designing gameplay elements or tweaking ‘gamefeel’ and interactivity, so this new ‘free time’ would rightly be reassigned elsewhere), but the principle is sound: if a 3D artist is spending a huge amount of time creating simple objects at scale, then automating part of that workflow should be a valid target for AI.
Cube 3D operates by tokenising 3D objects, much like how language models tokenise text. Instead of predicting the next word, Cube predicts the next shape token to generate fully formed 3D objects. Cube’s architecture is based on an autoregressive transformer model, enabling it to generate single 3D objects, perform shape completion (filling in missing parts of an object), and generate scene layouts (predicting arrangements of multiple objects).
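To make the language-model analogy concrete, here is a minimal toy sketch of that autoregressive loop in Python. Everything in it – the vocabulary, the token names, the uniform stand-in ‘model’ – is hypothetical and purely illustrative; Cube 3D’s actual tokeniser encodes learned chunks of geometry rather than human-readable labels.

```python
# Toy illustration of autoregressive shape-token generation.
# The vocabulary and "model" below are hypothetical; a real system like
# Cube 3D uses a learned 3D tokeniser and a trained transformer.
import random

# Stand-in "shape token" vocabulary. In practice each token encodes a
# chunk of geometry, not a named part.
VOCAB = ["<start>", "cube", "cylinder", "wheel", "handlebar", "seat", "<end>"]

def next_token_distribution(context):
    """Stand-in for a trained transformer: returns a probability
    distribution over the vocabulary given the tokens so far."""
    candidates = [t for t in VOCAB if t != "<start>"]
    # Uniform probabilities purely for illustration.
    return {t: 1.0 / len(candidates) for t in candidates}

def generate_shape(max_tokens=8):
    """Autoregressive loop: sample one shape token at a time, feeding
    the growing sequence back into the model, until <end> appears."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        choice = random.choices(list(dist), weights=list(dist.values()))[0]
        if choice == "<end>":
            break
        tokens.append(choice)
    return tokens[1:]  # drop the <start> marker

print(generate_shape())  # e.g. ['wheel', 'seat', 'cylinder']
```

The point of the analogy is that swapping the prediction target – words for shape tokens – lets the same next-token machinery that powers language models assemble geometry one piece at a time.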
Also this week, Chinese multinational technology conglomerate Tencent made its own 3D generation models both more widely available and more powerful. The company released five open-source models based on its previously-closed Hunyuan3D-2.0 technology, including so-called “turbo” versions designed to produce high-quality 3D visuals in under 30 seconds. The underlying model first launched in January, and Tencent claims it surpasses other available models in texture vibrancy, geometric precision, and visual detail.
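For readers who want to experiment with the open release, the sketch below follows the Python interface documented in the public Hunyuan3D-2 repository at the time of writing; the module, class, and checkpoint names are taken from that README as an assumption, and may have changed since.

```python
# Hedged sketch of the open-sourced Hunyuan3D-2 shape-generation pipeline,
# based on the interface shown in Tencent's public repository; exact names
# may differ in current releases.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

# Load the open-sourced shape-generation weights.
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")

# Generate an untextured mesh from a single reference image and save it
# in a format that downstream 3D tools can open.
mesh = pipeline(image="reference.png")[0]
mesh.export("output.glb")
```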
All of this sounds a lot like progress. And for some industries and roles, it probably is – especially if we take a step back and put ourselves in the shoes of executives and departmental decision-makers tasked with stripping bloat out of existing workflows. 3D modelling and texturing are labour-intensive, and the appeal of AI-driven shortcuts is compelling – which is exactly why fashion has seen 3D budgets cannibalised by AI initiatives. With a cold, commercial hat on, it’s simply faster to generate a static image with AI than to craft a 3D asset from scratch, even if the AI-generated output won’t have the same technical depth – provided it’s sufficient for range-building and other first-stage activities.
Other industries, particularly gaming, stand to benefit significantly from generative 3D, as ballooning production budgets demand more efficient ways to populate virtual worlds. The logic is simple: if a 3D artist is spending countless hours creating mundane objects, automating those tasks with AI seems like an obvious win for the efficiency of the studio as a whole.
But here’s where the problem statements of fashion diverge from other sectors. On aggregate, fashion is not struggling with how quickly it can produce simple 3D objects; it’s struggling to build trust in the idea that complex, multi-faceted 3D objects and scenes can represent the real thing with a sufficient level of fidelity for different teams to make decisions based on them.
The reason 3D initiatives in fashion have met resistance isn’t that artists aren’t working fast enough (again, on average – The Interline is sure that there are 3D departments pursuing greater efficiency in pure modelling and texturing workflows); it’s that decision-makers don’t fully believe digital representations are accurate enough to replace physical samples.
AI, with its probabilistic nature, seems as though it’s only going to complicate this issue. If traditional 3D workflows are already met with scepticism, introducing generative AI (especially in early-stage modelling) risks exacerbating mistrust and sacrificing accuracy at the altar of a speed and scale problem that the fashion industry isn’t really feeling yet.
Footwear, however, might be an exception. In this category, there’s potentially more value in generating starting shapes and then progressively sculpting them into finished forms in tools like Gravity Sketch, instead of always beginning with basic primitives. But for the broader industry, the business case for generative 3D feels weaker than in gaming or other fields where speed and scale are the top priorities.
That doesn’t mean, of course, that generative AI and 3D can’t work together, but the real opportunity isn’t in replacing human-led 3D modelling, it’s in augmenting the workflows that have already grown up and become codified into best practices by digital product creation pioneers. Using AI for conceptual ideation or colourway experimentation, then 3D tools for precision, and AI again for marketing refinement is a process that’s already being stress-tested in live environments.
But in fashion, for the foreseeable future, The Interline believes that the 3D part of the DPC workflow still belongs to people. Automated modelling might be the rallying cry in other industries, but in fashion trust is the real 3D currency – and generative AI faces a steeper climb to earning it.
Best from The Interline:
Kicking off this week, we spoke to Guy Yaniv, President EMEA at Kornit Digital, on the practicalities of connecting virtual designs with real-world production.
Next up, our first weekly analysis: even as consumer demand for digital content grows, brands might want to think twice about using AI to create content at historic scale without careful consideration of who the real end audience is.
Then we heard from Afsha Iragorri, Head of 3D Technical Design and Co-founder at 3D Fashion Solutions, on ‘Separating DPC Skepticism From DPC Success’.
Next, Romain Japy of The New Face shared his ‘Digital Roadmap To Immersive Luxury Experiences’. The journey from transactional shopping experiences to immersive ones has not been straightforward, but the right combination of 3D assets and real-time technology could be the key – provided fashion has the right ambitions.
And closing out the week, The Interline announced that our most hotly-anticipated downloadable report, covering the full spectrum of artificial intelligence in fashion and beauty, will be released in the second quarter of 2025.