[Featured image elements courtesy of Nike A.I.R]
Key Takeaways:
- Unveiled as part of the brand’s Paris Olympics event, Nike’s AI-designed A.I.R collection serves as a reminder that AI might lead design while remaining as distant as ever from design’s practical realities.
- Despite a promising concept and a presence on the fashion runway, Humane’s wearable AI Pin has faced significant criticism from reviewers – a glimpse at the challenges in developing AI-powered devices and agents that people can reliably use for everyday tasks.
- On the opposite end of the AI spectrum, a new group spearheaded by the Linux Foundation, and an open-source AI tool designed to identify instances and risks of forced labour, represent the more practical, pragmatic face of AI – even if reliability and accuracy remain concerns.
The iconic AIR name meets AI, and wearable AI gets off to an inauspicious start
At Nike’s preview for the Paris Olympics, the brand debuted “A.I.R” (short for Athlete Imagined Revolution, and a word-play on both its popular cushioning technology and product line and the involvement of AI in its creation). A.I.R is a new collection of 13 concept sneakers, created through a co-creation process between teams of Nike designers and innovators, 13 of the brand’s athletes, and a dose of the aforementioned AI.
The design and engineering process for the sporty-couture sneakers (biomechanically sound for distance running they are not) included immersive 3D sketching alongside traditional hand sketching, computational design, generative AI, 3D printing and simulation, and “rapid prototyping”. To reach the final 13 designs, the athletes were interviewed about their vision for their particular pair of shoes, including the people, places and things that inspire them. The athletes’ responses were translated into AI prompts, which were fed into a generative image model, and numerous visuals were produced for each athlete. The Nike A.I.R design team then collected feedback, transitioned away from AI engines, and returned to 2D and 3D tools and workflows to develop the technical specifications for the concept kicks.
Nike’s Chief Innovation Officer, John Hoke, commented that the goal of the project was to think beyond the present, and beyond Paris 2024 – instead stirring up “a sense of unlimited potential” for Nike’s customers. While these sneakers are not intended to be practical, commercial products, they do represent the current visual edge of generative AI – which tends towards wild creative swings – and they certainly seem designed to address the criticism Nike has recently received for a lack of innovation, which led to its worst financial performance in the past ten years.
Opinions about the shoes are divided, but perhaps not along the lines you might expect. While some people definitely take issue with the fact that AI was involved in the design process at all, there is also a growing contingent of designers who believe that the apparel and footwear industry should be moving beyond the idea that AI is a tool for unfettered creativity that’s unmoored to reality, and moving towards more grounded uses – deploying AI in service of products that strike a more careful balance between form and function.
Another company experimenting with AI this week, though far less successfully, is Humane. To refresh your memory, Humane is the creator of the AI Pin: a small, square wearable device that connects to cloud-based language models to answer questions, and is also capable of taking photos and videos, and sending messages.
The AI Pin made a splash in fashion at a Coperni runway show, and the device has the dubious honour of being the first in a coming wave of “AI gadgets” that, depending on your perspective, could be a new frontier for wearable technology, or an entirely unnecessary attempt to replicate features that are already present in smartphones – especially phones with ChatGPT or a similar LLM frontend installed.
At the time the AI pin was announced, scepticism rightly swirled around how its touted features were actually going to work, but between strong industrial design, the aforementioned fashion partnerships, and the vast possibility space of AI, a lot of people – including some of us here at The Interline – wanted to believe that a new category of ambient, wearable tech could be about to land.
In practice, reviewers who have managed to get hold of the AI Pin have been almost unanimously critical of it. David Pierce of The Verge writes: “After many days of testing, the one and only thing I can truly rely on the AI Pin to do is tell me the time”. Ouch. Another writer (and AI Pin tester and reviewer), Chris Velazco of the Washington Post calls Humane’s device “a promising mess you don’t need yet.”
It’s also in this review that Velazco touches on one of the main issues with AI at the moment: accuracy and dependability. Anyone who has tried to interact with an LLM for long enough will recognise the need to occasionally argue with it to accomplish basic tasks, but more importantly they will also know how easily, frequently, and confidently AI models make simple mistakes.
This issue of mistrust could improve as solutions like RAG (Retrieval Augmented Generation – essentially a fancy label for grounding model outputs in separate, authoritative data drawn from external sources) are more widely deployed, but for the time being, the fact that generative AI assistants are not ready to be trusted with basic personal tasks certainly doesn’t bode well for more complex professional ones.
As a counterpoint, though, quiet progress was seemingly made this week in the use of AI for auditing the ethics of business partners, and for other, equally sensitive or rigorous, enterprise uses.
Recently, a partnership between Sheffield Hallam University and Northeastern University has resulted in Supply Trace: a tool that draws on hundreds of millions of data points to connect buyers with the providers of their goods. It uses both machine learning and human intelligence to establish connections between buyers and their suppliers, identifying possible supply chain relationships that may contain problematic points, and it offers references to corroborating evidence, such as online media articles or research from Sheffield Hallam University, where applicable. This is particularly useful for everyone in the fashion industry, given that the Uyghur Forced Labor Prevention Act prohibits the importation into the United States of goods manufactured wholly or in part with forced labour in Xinjiang.
The creators of Supply Trace, Laura Murphy and Shawn Bhimani, are on record as saying that they wanted to provide a starting point for companies to begin their due diligence, so that they can carefully, and more easily, select which partners to work with. The intention of Supply Trace isn’t to supplant manual supply chain tracing, but rather to offer a thorough data source for pinpointing entities that may pose a compliance problem within intricate supply chains. It’s also not aimed at publicly accusing and embarrassing companies; instead, it enables those doing due diligence to fulfil their obligations to comply with the law. But a crucial part of the Supply Trace AI process is human verification by a sourcing expert, and Bhimani acknowledges that when it comes to AI, there is always a chance of false positives.
Also operating in the AI-for-practicality space is the Linux Foundation, which this week announced the launch of the Open Platform for Enterprise AI (OPEA), a group that will push for an open-source AI ecosystem for business use. Its first task will be to develop open standards for Retrieval-Augmented Generation (RAG), which will be key to adapting AI models for integration into corporate environments. Through this technique, pre-trained models can retrieve internal documents to enhance their generated outputs, allowing large language models (LLMs) to contextualise their understanding of business operations without extensive and expensive retraining or fine-tuning. An additional advantage is that the underlying data can be updated in real time. The ten premier members include AWS, Huawei, Intel, IBM, Microsoft and SAS.
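To make the technique a little more concrete, here is a minimal, illustrative sketch of the retrieval half of RAG. It uses simple word-overlap scoring in place of the embedding-based vector search a production system would use, and every document, function name, and query below is invented for illustration – none of it reflects OPEA’s actual standards or any product mentioned above.

```python
import re

# A toy RAG retrieval pipeline: score internal documents against a query,
# pick the most relevant ones, and assemble them into a grounded prompt
# that would then be sent to a pre-trained language model.

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Count how many distinct query words also appear in the document."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the user's question, so the model can
    answer from internal data it was never trained on."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents a retailer might hold.
internal_docs = [
    "Q3 returns policy: footwear can be returned within 60 days.",
    "Supplier onboarding requires a signed code-of-conduct form.",
    "Office closed on public holidays.",
]

prompt = build_prompt("What is the returns policy for footwear?", internal_docs)
```

The assembled prompt carries the relevant internal document alongside the question, which is why RAG sidesteps retraining: the knowledge travels in the prompt, not in the model’s weights, and updating the document store updates the answers in real time.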
So what will it take for the fashion industry to fully trust AI? And will personal, consumer-facing AI and professional AI diverge and then reconverge, the way consumer and enterprise technology have?
Guessing at an exact timeline for trust is a challenge. We know that the reliability of AI systems will continue to improve as technology, data quality, and algorithm development advance, and there are more factors at play too: ongoing research, regulatory measures, and the integration of ethical considerations into AI development. But while we can certainly expect AI systems to become increasingly accurate, achieving full reliability may prove to be impossible.