Key Takeaways:

  • Microsoft and Google are driving a shift towards “agentic AI” and conversational interfaces, with innovations like Microsoft’s NLWeb and Google’s “Shopping with AI” mode, allowing for more intuitive and context-aware interactions that could reshape e-commerce and creative workflows.
  • Tech giants are prioritizing design and integration for wearable AI, as seen with Google’s partnership with Gentle Monster and Warby Parker for Android XR glasses and OpenAI’s acquisition of Jony Ive’s design firm “io”, emphasizing that aesthetic appeal is crucial for mainstream adoption.
  • The true evolution of AI is not in the flashy announcements but in its increasing efficiency, cost reduction, and accessibility, enabling on-device computing and fostering a new era of personal and professional interaction.

Upcoming Webinar: Tariffs Today, Unknown Threats Tomorrow – How Europe Can Plan For The Global Supply Chain Shift

On June 18th, the Interline and TradeBeyond will host a focused, expert-led webinar designed for sourcing, logistics, and supply chain professionals navigating a changing global landscape.

Built for professionals working at the intersection of sourcing, policy, and technology, this webinar delivers real-world insight from experts already adapting to modern supply chain realities, offering a grounded, actionable view of what’s changing and how to respond effectively.

At the end of a major week for AI announcements, the most lasting story is being told between the lines.

The Interline has spent months reporting on the themes that continue to shape the future of fashion and commerce: AI changing product discovery and shopping by essentially eating the web; AI transforming creative and operational work; and the slow, often stuttering march of wearable AI tech, where fashion perhaps has the biggest visible stake.

This week, all of those threads collided, but perhaps the most telling story didn’t sit in any keynote – it was happening between the slides, beneath the press releases, and just off the edge of the screen. One thread quietly ties the rest together: the high-speed march of AI efficiency, cost reduction, and accessibility, and the impact this has on the viability of new models of on-device computing.

First up, though: yes, this was a week of showy launches. Microsoft Build. Google I/O. OpenAI’s announcement of its acquisition of Jony Ive’s hardware firm. But underneath the noise, a quieter truth is setting in: the way we interact with technology – by clicking, tapping, and typing through GUIs – could have a shorter shelf life than people are giving it credit for. Because, as much as the big model race draws headlines, with Anthropic capping the week with the availability of Claude 4, the real and lasting money isn’t being made by the company that builds the smartest model. It’s being made by whoever builds the layer that sits between you and the rest of your digital life.

Frontier AI is a fascinating place for a group of technology commentators like us to visit, but it’s not where most of us live. We live in the everyday, where the best AI model for the job is the one that’s fast, cheap, good enough, and available wherever we are – on or offline.

On that basis, the flashy announcements this week shouldn’t really be read as standalone stories about the bleeding edge of AI, but as precursors to how and where AI will show up in products, services, hardware, software, and experiences in our personal and professional lives. 

According to the industry-wide reference point that is the Stanford AI Index, the cost of running AI models (specifically GPT-3.5-level systems, so something roughly analogous to the launch version of ChatGPT) decreased more than 280x between that initial launch and October 2024. The trend continued this week, with a new version of Google’s Gemini 2.5 Flash (a model for everyday tasks that emphasises speed and cost) offering roughly the same capabilities as the launch version of DeepSeek at a quarter of the inference cost.
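
For a sense of scale, that arithmetic is easy to check. A minimal sketch, using the per-million-token prices the AI Index report itself cites (treat the exact figures as the report’s, not ours):

```python
# Back-of-envelope check on the Stanford AI Index figure: the report cites
# GPT-3.5-level inference falling from roughly $20.00 per million tokens
# in late 2022 to about $0.07 per million tokens by October 2024.
launch_cost = 20.00    # USD per million tokens, November 2022 (report's figure)
late_2024_cost = 0.07  # USD per million tokens, October 2024 (report's figure)

reduction = launch_cost / late_2024_cost
print(f"Cost reduction: ~{reduction:.0f}x")  # ~286x, i.e. "more than 280x"
```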

So, judged according to that rubric, this week’s big tech circuses had a lot to say between the lines, if you’re able to look past some more moments of ‘future shock’.

Take Microsoft. Their Build event leaned heavily into AI agents: semi-autonomous tools that don’t just answer prompts but carry out multi-step reasoning, remember what they’ve done, and come back with results. Right now these are aimed at developers, but anyone working in design, creative direction, or production should be paying attention. Fashion’s next big round of productivity gains likely won’t come from better tools, but from handing repetitive workflows off to a constellation of behind-the-scenes bots. In effect, Microsoft is proposing an entirely new, largely automated technical and creative workforce, and they’re by no means alone in suggesting that this is where work is headed: towards professionals directing and managing armies of AI workers, rather than doing that work themselves.
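
To ground the idea, here’s a deliberately toy sketch of the loop those agents run – plan a step, call a tool, remember the result, repeat. Every function and tool name here is hypothetical, not Microsoft’s actual agent API:

```python
# A deliberately simplified agent loop: plan, act, remember, repeat.
# Every name here (call_model, TOOLS, run_agent) is hypothetical -
# this illustrates the pattern, not any vendor's actual framework.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned plan so the
    sketch runs end-to-end without any external service."""
    if "search_assets" not in prompt:
        return "search_assets:black dress hero shots"
    return "finish:"

TOOLS = {
    # Toy tools an agent might be handed, e.g. a DAM lookup or a PLM update.
    "search_assets": lambda q: f"found 3 assets matching {q!r}",
    "update_tech_pack": lambda note: f"tech pack updated: {note}",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # what the agent has already done
    for _ in range(max_steps):
        # Ask the model for the next step, grounded in its own history.
        decision = call_model(f"Goal: {goal}\nDone so far: {memory}\nNext?")
        tool, _, arg = decision.partition(":")
        if tool == "finish":
            break
        result = TOOLS[tool](arg)                      # carry out the step
        memory.append(f"{tool}({arg!r}) -> {result}")  # remember the outcome
    return memory

print(run_agent("Pull hero shots for the black dress launch"))
```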

More quietly, Microsoft also announced NLWeb, a framework for embedding natural language search into individual websites. That may sound like backend plumbing, but it has very real front-end consequences. It means customers will be able to type, or say, something like “show me black dresses for under $100 that would be good for an evening function,” and get a useful, context-aware result grounded in just that website’s data – something that feels like a conversation, not a database query. It’s the kind of shift that makes interfaces feel less like systems and more like assistants. And while that raises the bar for brands, it also opens the door to a much better experience. Done well, it could mean customers actually find what they’re looking for, rather than abandoning a clunky search experience halfway through.
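
To make that concrete, here’s a toy sketch of the kind of query such a system has to resolve against a single site’s own catalogue – a hand-rolled illustration, not NLWeb’s actual interface, with the product data and the parsed query slots invented for the example:

```python
# Illustrative only: resolving a conversational query ("black dresses
# under $100 for an evening function") against one site's own data.
# Nothing here reflects NLWeb's real implementation or API.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    colour: str
    price: float
    occasions: set[str]

CATALOGUE = [
    Product("Column midi dress", "black", 89.0, {"evening", "work"}),
    Product("Slip dress", "black", 120.0, {"evening"}),
    Product("Shirt dress", "navy", 75.0, {"casual"}),
]

def answer(colour: str, max_price: float, occasion: str) -> list[Product]:
    """Filter this site's catalogue by slots an LLM would extract
    from the shopper's free-text query."""
    return [
        p for p in CATALOGUE
        if p.colour == colour and p.price < max_price and occasion in p.occasions
    ]

# In a real deployment the model would parse the slots from free text;
# here we hand it the parsed intent directly.
print(answer(colour="black", max_price=100.0, occasion="evening"))
```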

Google, of course, went even bigger at I/O. Most people clocked the flashiest part for fashion’s purposes: a slick demo of Shopping with AI mode, which now lets users try clothes on using their own photo. It’s a polished extension of virtual try-on tech that’s been improving for years, and while it still doesn’t look perfect, it’s edging closer to something that might actually be useful in real shopping flows. There was also Flow, an AI filmmaking tool that looks (and sounds, since it incorporates video and audio into a single prompt) undeniably impressive, and that’s already creating some concerningly real parody ads! Brands are already exploring the potential of generative video, but mostly through experimentation rather than production. Flow looks set to join the same space – a playground for speculative storytelling, driven by emerging prompt artists making work for campaigns that don’t exist yet. Whether it can move from an impressive demo to a reliable production tool is still an open question.

And then there were the Android XR glasses, infused with Gemini and styled in collaboration with Gentle Monster and Warby Parker. These partnerships signalled something significant: Google is no longer pretending it can brute-force wearable tech into everyday life without design. The glasses may still be early-stage, but Google has absorbed a simple, brutal truth: if AI is going to live on your face, it had better look like something you’d actually wear.

Which brings us to OpenAI. Their move this week was stranger and harder to parse – an acquisition of “io,” the Jony Ive hardware firm they’d already been collaborating with. The announcement was perhaps deliberately vague – gestural language paired with an evocative video that talked a lot but said very little about what the partnership would lead to in terms of a specific product. Read between the lines, though, and the intent is obvious: OpenAI doesn’t want to live in a browser tab. It wants to be more than an app. Not something you open on your phone, but something built into the physical tools and surfaces of your life – something ambient, constant, and embedded, in ways your mobile device simply isn’t.

That only works if the tech underneath can deliver. And here’s where the quieter part of the week’s blowout gets loud. While everyone was looking at new demos and wearable partnerships, Google announced new improvements to its open-source Gemma 3 model series – specifically Gemma 3n.

Part of a new wave of ultra-efficient AI models, Gemma 3n is built to run on a single GPU and is being optimised for on-device performance on everyday consumer hardware. It’s a move toward fast, private AI that can work locally, in places where the cloud can’t, or in parts of our lives where it shouldn’t. In practical terms, this means your AI assistant could run on a phone, a wearable, or a lightweight computer without constantly needing to connect to a remote server to answer everyday queries. It means faster responses, more privacy, and functionality that keeps working even if the signal drops. For AI to feel ambient – something that lives with you rather than waits for instructions – it has to live close to you. Gemma 3n is a step toward that.
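
For developers curious what that looks like in practice, here’s a minimal sketch of local inference with a small open model via Hugging Face’s transformers library – note that the model identifier and pipeline task below are assumptions on our part, so check the official Gemma 3n model card before relying on them:

```python
# A minimal sketch of local, offline text generation with a small open
# model. The model identifier is an assumption - verify the actual
# Gemma 3n checkpoint names and licence terms on Hugging Face first.
from transformers import pipeline

MODEL_ID = "google/gemma-3n-E2B-it"  # assumed identifier

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    device_map="auto",  # use a single GPU if present, otherwise CPU
)

# After the initial download, this call needs no network connection:
result = generator(
    "Suggest three ways to style a black slip dress for an evening event.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```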

As AI becomes more embedded, it doesn’t just assist users. It reshapes how choices are made. As we covered less than a month ago, maybe your homepage never needs to load. Maybe your brand story is reduced to one line of summary. Not out of malice, just because the system found a quicker route. That’s the trade-off being engineered in real time. The interface isn’t disappearing. It’s just being slowly replaced by something thinner, faster, and more opinionated about how it can help you and mediate your experiences.

So yes, the press releases this week were loud. But the real story lives in the margins, and the really important shifts are happening in places most people don’t look: in how fast a model runs, in whether it works without the cloud, in whether it shrinks small enough to live on your body without you noticing. And crucially, this is the foundation that determines whether your company, your content, your experience even makes it to the user. It might not be as exciting to watch as a new Flow demo, but it’s absolutely going to be critical to determining which AI use cases win out, not just which tech giant is behind the curtain.