Key Takeaways:
- Nvidia’s new trillion-dollar valuation reflects a growing recognition that the future of computing lies off-site, and that the market for infrastructure designed to support complex, specialised compute tasks like AI is likely to dominate the hardware landscape in the new era.
- The possibility space opened up by access to AI has some fascinating applications in fashion – from real-time character rendering paired with chatbot and natural language capabilities, to the training of bespoke, brand-specific models.
- Apple’s upcoming VR headset will likely further blur the line between professional and consumer hardware audiences, creating new opportunities for creators in the fashion industry – but it will probably take multiple cycles of hardware refinement and redirection before mass consumer adoption on the scale of the iPhone becomes possible.
Nvidia: striking a deeper vein of gold at the AI frontier
For anyone of a certain age (read: most of the team here at The Interline) the Nvidia logo conjures up memories of videogames, since the company built its reputation making dedicated graphics cards (or GPUs) for PCs and, later, consoles and other consumer and professional hardware. So strong is that association that many people still think of Nvidia as a gaming company – a myth that was dispelled in the strongest possible terms this week, when Nvidia briefly reached the lucrative milestone of a $1 trillion market cap on Tuesday morning.
This put the company on a similar footing – at least temporarily – with titans like Apple and Alphabet, and brought Nvidia into a tiny cohort of just nine companies to have ever achieved a trillion dollar valuation. (Nvidia shares need to remain above $404.86 to retain a trillion dollar market cap, and at the time of writing the stock had slipped far enough that the overall capitalisation had already dipped back below the trillion-dollar mark, into the high hundreds of billions.)
This milestone made major headlines, but it didn’t come out of nowhere. While that reputation as a chipmaker for gaming applications and consumer electronics has persisted, Nvidia has, for some time now, also been consolidating its position as a provider of enterprise hardware on a gigantic scale: data centre infrastructure, specialised hardware for high-performance computing, robotic controllers, scalable cloud ecosystems, and much more.
Today, Nvidia’s website describes the company as “the world leader in AI computing,” and this shift in emphasis is down to both that steady progression from consumer to enterprise infrastructure, and to the company’s just-unveiled “Grace Hopper” specialised AI superchip (which, to be clear, pairs an ARM-based Grace CPU with a Hopper GPU on a single module, rather than being a graphics card in the traditional sense). That chip is designed to be deployed in a range of configurations – from relatively standalone systems, up to the 256-chip cluster that makes up Nvidia’s new shared-memory, ARM-based AI supercomputer.
(The missing link in that journey from consumer entertainment to enterprise powerhouse is, of course, the importance of dedicated GPUs in design and simulation workflows. While the most common creative applications for fashion users do run – to some extent – on integrated chipset graphics, and on Apple’s new silicon, real 3D workflows in design, development, and final pixel rendering have always relied on having dedicated graphics hardware in PCs, which is an area Nvidia has also come to dominate.)
But this is more than just a history lesson, or a company retrospective. Nvidia’s new, stratospheric valuation stems from a near-universal recognition that the future of the most intensive computing lies off-site, and that dominance in the market segment that creates hardware designed to support specialised, complex compute tasks is going to translate into massively increased share of the computing market in general.
And when it comes to the most intensive computing tasks, and the most specialised applications, there’s a Venn diagram that wraps perfectly around generative AI. To quote one Wall Street trading firm: “It looks like the new gold rush is upon us, and Nvidia is selling all the picks and shovels”.
Or, to put it another way, in the race to bring generative AI to every industry, in every conceivable application, there is a tremendous amount of money to be made in being the company that sells the hardware platforms that run AI models. And the Nvidia keynote that sparked this week’s fire positions the company as doing exactly that.
Which brings us to two questions: why does AI need so much consolidated computing power? And what is the fashion industry going to be doing with that power?
The first question is simple to answer, on the surface, by example. Behind Microsoft’s splashy backing of OpenAI and its subsequent reveal of Bing and Windows Copilot, the company also bankrolled the creation of AI server infrastructure to help ChatGPT run and scale – to the tune of hundreds of millions of dollars. And it’s important to note that Nvidia hardware was a major cornerstone in that investment.
These kinds of specialised hardware clusters are going to prove essential to two elements of AI workloads: training and inference. Handling the sheer volume of requests that people currently make of services like ChatGPT is costly. And training new models – even if they’re technically narrower than the major LLMs – is costlier still.
While progress is being made on running pre-trained language and image generation models on local consumer hardware, making AI work at the scale that predictions say we’re aiming for – across industries – is going to hinge on packing a tremendous amount of computing power into data centres.
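To make that split concrete, here is a minimal sketch of what “running a pre-trained model on local consumer hardware” looks like in practice, using the open-source Hugging Face transformers library. The model name and prompt are purely illustrative; the point is that inference like this can happen on a laptop, while the training that produced the model is what demands data-centre-class hardware.

```python
# A minimal local-inference sketch using a small, pre-trained open model.
# The model name ("distilgpt2") and prompt are illustrative, not a recommendation.
from transformers import pipeline

# Downloads the checkpoint once, then runs entirely on the local machine.
generator = pipeline("text-generation", model="distilgpt2")

# Inference on consumer hardware; no data-centre cluster involved.
result = generator("The future of fashion retail is", max_new_tokens=30)
print(result[0]["generated_text"])
```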
As for what fashion is aiming to achieve with AI on the scale that Nvidia has built its new reputation on? The links are both direct and indirect.
The most obvious direct link is Nvidia’s Avatar Cloud Engine (ACE), which is pitched as a suite of AI components aimed at making digital avatars and digital humans workable at scale – from animation and speech synthesis, to conversational AI. While the immediate applications of virtual model creation and simulation are obvious, the more compelling possibility here comes from the combination of real-time rendering and the existing capabilities of large language models to engage in dialogue with users.
Fashion has already flirted with both chatbots and virtual humans, but bringing the two together into real-time experiences is a different prospect. Picture walking into a (re)creation of a brand’s chosen environment (cyberpunk cities like the ones seen in the Nvidia demo and in previous fashion / game crossovers are a popular touchstone) and having a voice conversation with a model and sales representative in a way that encourages immersion and engagement.
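As a purely hypothetical illustration of how those pieces could be wired together – none of the names below belong to a real SDK, Nvidia’s ACE included – one conversational turn in an experience like that might reduce to a loop of this shape:

```python
# Hypothetical sketch of one turn of a voice conversation with a virtual
# model / sales assistant. Every component (asr, llm, tts, avatar) is a
# stand-in for whichever speech, language, and rendering services a brand
# actually wires together; none of these names refer to a real API.

def conversational_avatar_turn(audio_in, brand_context, asr, llm, tts, avatar):
    """Turn a shopper's spoken question into a rendered, spoken answer."""
    text = asr.transcribe(audio_in)                    # speech -> text
    reply = llm.respond(prompt=brand_context + text)   # text -> brand-aware answer
    speech = tts.synthesise(reply)                     # answer -> audio
    avatar.animate(speech)                             # audio -> lip-synced, real-time avatar
    return reply
```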
Indirectly, we expect to see fashion – like other industries – starting to make use of AI infrastructure to take what seems like the inevitable next step in generative AI: training bespoke, brand-specific models to not just engage shoppers, but to serve as creative and commercial assistants. And this is in addition to curios like Nvidia’s just-announced “Neuralangelo” model, which aims to reconstruct 3D geometry from flat video.
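For a sense of what “training a bespoke, brand-specific model” might involve at its simplest, the sketch below fine-tunes a small open language model on a hypothetical file of brand copy using the Hugging Face libraries. The dataset path, model choice, and hyperparameters are all placeholder assumptions; a production-grade run would be many orders of magnitude larger – which is exactly why this workload lands on the kind of infrastructure Nvidia sells.

```python
# Heavily simplified, assumption-laden sketch of fine-tuning a small open
# language model on a brand's own copy. "brand_copy.txt" is a hypothetical
# text file with one example per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Load and tokenise the (hypothetical) brand corpus.
dataset = load_dataset("text", data_files={"train": "brand_copy.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="brand-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```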
In terms of dominance, does one company holding the keys to all those possibilities represent cause for concern? Only time will tell. But it seems certain that, like most industries, fashion has little choice other than to put all its AI eggs in one basket.
Apple’s VR reveal, and where audiences divide
The rumour mill has kept churning this week, with chatter that Apple will finally unveil its VR headset at its Worldwide Developers Conference (WWDC) on the 5th of June. The conference traditionally serves as a platform for Apple to introduce the upcoming iterations of its various operating systems to an audience of developers and avid fans, and this year is expected to follow the same pattern, with a range of new features in macOS, iOS, iPadOS, watchOS, and tvOS promoted alongside the possible headset announcement.
All exciting – for Apple enthusiasts, at least – but the headset will be the thing that draws worldwide interest if it does indeed get a concrete street date.
But even if it does make its long-awaited appearance, is the headset (potentially dubbed the Reality Pro) going to have the impact of the first iPhone or the Mac? That depends on how you define impact. While the Mac was an immediate success, neither the iPhone nor Apple Watch emerged fully-formed, and most people would argue that it took several iterations before these devices started to “bed in” and progressively redefine the way we think about mobile computing, the web, software, and a range of other areas.
The early indications are that analysts (and probably Apple itself) expect a similar cycle this time around. Earlier this year, Bloomberg’s Mark Gurman wrote that “Apple’s next big bet – mixed-reality headsets – won’t be anything like its previous hits”. Or at least not straight out of the gate. The suggestion here is that the product will be a major shift in strategy for Apple because, unlike its other ventures (music players, phones, tablets and watches), AR and VR are not technologies with existing mainstream appeal.
For everyone who cares about VR, there are thousands of people who either haven’t experienced it, or who’ve tried a limited version of it and then written off the entire idea. And despite being part of the tech stack behind popular lenses and filters, the vast majority of people simply don’t care about AR either.
So rather than perfecting an existing idea – something Apple is well-known for – this time around the difficulty is going to be in creating an entirely new category (new to most people at least) and generating interest in it. So it’s no surprise to learn that Apple is being cautious, too: only expecting to produce about 1 million units in the first year. Perhaps wise considering the $3,000 price tag of the device and the difficult reception that other professional-focused devices from Microsoft and Meta have had.
That word – professional – is doing a lot of heavy lifting, but it’s also where the key difference lies between Apple and those other companies. Meta has very quickly forgotten its Quest Pro headset, which was pitched exclusively at a very niche enterprise market. It has also just announced its upcoming Quest 3, which is aimed squarely at the VR videogame market, with extra features like full-colour passthrough representing added value for anyone who’d like to use the headset as a work device.
There is, for Meta (and for Microsoft in their HoloLens projects), a very clear line between “pro” and “consumer” devices.
That’s also a line that Apple has purposefully blurred across all its categories. From MacBooks to iPhones to Watches to earbuds, Apple has a well-documented history of selling to a very particular type of “prosumer” with its “pro”-branded products, as well as a recent history of doing a comparatively poor job of courting the enterprise audience.
In practice, this means that the introduction of Apple’s headset is also going to represent a further blurring of the lines. We all know someone who uses a “Pro” Apple device despite not technically being a professional in the necessary field – if that’s even a useful definition – and the positioning of this headset is likely to strike a similar balance between being pitched at creators, but sold to consumers.
Who are these creators? Based on prior presentations for Apple Silicon devices (notably the Mac Studio) we expect to see architects, musicians, and fashion designers being represented. And, crucially, the delineation between professionals and “prosumers” in the last of those industries has all but disappeared. So while brands and studios are sure to have budgets allocated for technology investments, we’re fully expecting to see considerable appeal for the growing base of creators who are not part of the establishment, but who are likely to become early adopters, creating a ripple effect to wider adoption and recognition as the new device goes through a cycle of iteration.
As for how VR itself is going to become part of fashion? In this case, the likelihood is that the device will drive the use case (or not) rather than the other way around. We don’t expect to be writing a rapid retrospective of the unveiling that’s packed with transformative possibilities for fashion design and development, but we do believe we could be writing a similar story in a couple of years’ time.