Thirteen Years On, The Future Of Wearables Still Looks Suspiciously Like Glasses

Key Takeaways:

  • Google’s confirmation of AI-enabled smart glasses for 2026 provides a clearer anchor point for a category that has struggled to find stable footing. Many recent wearables failed to gain meaningful adoption, and Google’s embrace of familiar eyewear suggests a shift toward formats that fit more naturally into everyday life.
  • Adoption has consistently favoured devices that resemble objects people already wear, which explains the traction of the Ray-Ban Meta glasses and the resistance facing pins, pendants and headsets. The past year has shown that wearability is shaped less by technical capability than by how comfortably a device sits within established fashion categories.
  • While design secures adoption, the underlying AI model drives utility. The current “code red” race between Google’s Gemini and OpenAI’s ChatGPT at the basic product level is key. The ultimate winner will be the model that offers the most compelling capabilities and personality, incentivising mass purchase of the wearable hardware, regardless of which company produces the device.

In May, at its annual I/O conference, Google confirmed it was working on smart glasses built on its Android XR platform, but left many details, including final form and timing, undeclared. This week the company filled in some of those gaps, setting a 2026 release target and restating the design partnerships it announced in the spring. 

When those details appeared back in May, we wrote about them in our own pages and made a simple observation that felt both obvious and important at the time: any attempt to place AI on the face would benefit from being packaged in a familiar form factor.

What stands out now is how closely Google’s latest language echoes that same line of thinking – as well as how it raids the company’s own back catalogue to remind us that, for all its faults and cultural friction, Google Glass might have had the right ideas about what people want from wearable technology, even if the people behind it couldn’t have seen LLMs coming.

In a recent blog post Google described how AI and XR become genuinely useful when the hardware fits naturally into a person’s life and aligns with their sense of style, and it expanded on this by explaining that users should have freedom over the balance of weight, style and immersion. In other words: wearable technology should be something that people buy with both objective criteria (as we do with most consumer tech, where specs still rule) and subjective feelings about what they actually want to wear.


The past year offered a steady reminder that people rarely adopt objects that sit awkwardly on the body, and the struggles of devices like the Humane AI Pin, the Rabbit R1 and the Friend device made that point pretty clear. Each device arrived with its own pitch (and its own problems), but they all hit a similar wall: they didn’t look like things people either already wear or might want to wear. Long before performance became part of the discussion, the everyday appearance of these devices, and the simple act of imagining where they might fit on a user’s body, introduced a level of hesitation.

Glasses, by contrast (forgive the pun), don’t require any of that introspection. Yes, AI-enabled glasses are bulkier than typical frames – especially the Ray-Ban Display – but regular spectacles are so well established that smart versions aren’t likely to invite many questions about form and function.

So if Google did anything wrong back in 2012, when it pioneered Google Glass (and inadvertently walked into a cultural quagmire about surveillance, mediating reality, and being a “glasshole”), it was that it didn’t lean further into making the product look like ordinary eyewear.

When we think about how platforms like Android XR will interact with the fashion community, through collaborations primarily, but as AI layers on the body as well, it’s important to also consider the other end of the spectrum: wearables that put the technology first and the fashion a distant second.


Apple’s Vision Pro was the opposite case in point. The display, the fidelity, the sense of immersion, all of it showed what Apple can achieve when it pushes the hardware as far as it can go, yet the admiration for its technical depth never translated into comfortable public use. The weight, the bulk, the sheer visual presence of the headset made it hard to imagine anyone wearing it outside a controlled space, and the internet was full of clips that showed exactly why – some of which very much reminded us of the mocking that the original Google Glass promotions invited. 

It’s telling that, according to insiders, Apple has since shifted its focus toward lighter glasses, a move that feels like an acknowledgment of something Vision Pro never managed to overcome. In the end, the form felt too alien, too far outside established fashion categories, and that single fact outweighed everything the device managed to achieve on the inside.

China offers perhaps the most straightforward clue about where this category is heading, mainly because manufacturing moves to where demand already is, not where people hope it might be. Production lines for lightweight smart glasses have been expanding, and the frames coming off those lines look far closer to everyday eyewear than to the experimental shapes we might be expecting to see when the next big tech company shows up at a European Fashion Week.

Factories tend to follow confirmed interest, and when they retool at this scale it usually means the market has already made up its mind. So on the basis of pure capacity being booked, the focus for wearables has landed firmly on glasses rather than pins, pendants or handheld boxes.

Do these glasses, or the next generation of them, stand a chance of becoming as ubiquitous as the smartphone? Time will tell, but from a form perspective it seems increasingly clear that when the Trojan Horse for AI does land on our bodies, it will be in a shape that none of us find remarkable.

But focusing solely on design ignores the part of the competitive playing field that’s much harder to predict: which AI model eventually corners enough consumer adoption to persuade people who might not otherwise buy into wearables to do so, no matter how familiar the form factor.

Recent reporting shows Gemini moving quickly, and the change in fortunes is notable: ChatGPT’s launch was widely reported to have caught Google wrong-footed, yet today it is OpenAI declaring a “code red” and rerouting all available resources to compete with the Gemini app at the basic product level.

What makes this moment even more interesting is the position OpenAI now finds itself in, because its hardware project appears to be on a very different path. After spending a substantial amount to bring Jony Ive’s design firm into its orbit, the company has been working through reports of development hurdles, competing ideas about the device’s purpose, and early rumours that the hardware might take the shape of a small, screenless box. None of this means the project won’t land, but it does place the work slightly at odds with the direction the rest of the market is moving in, since a new product category is a harder sell than an existing one.

Yet for all the talk about model sizes and breakthrough interfaces, the most reliable truth in this space still feels almost embarrassingly simple: people wear what looks good, and they talk to the AI model that has the right personality and set of capabilities or integrations for them. Somewhere in the middle of those two sliders is the balance that people expect to kickstart the next wave of wearables – even if it does pick up a baton that Google itself dropped more than a decade ago.
