Key Takeaways:

  • Google has introduced the Agent Payments Protocol (AP2), a new open standard to facilitate AI agent-led commerce. Developed with over 60 partners, including Mastercard and PayPal, AP2 uses cryptographically signed “mandates” to create a verifiable, auditable trail of user consent, which will be crucial for secure, AI-driven transactions. This effort is focused on the unseen backend of e-commerce, ensuring trust as AI systems are given the authority to make purchases.
  • At its recent Connect event, Meta unveiled new wearable hardware, including the $799 Meta Ray-Ban Display smart glasses. The device features a full-colour display in the lens and pairs with a Neural Band wrist controller. By partnering with EssilorLuxottica and sticking to a familiar eyewear design, Meta aims to avoid the stigma of Google Glass and the bulk of Apple’s Vision Pro, making AI wearables easier to accept.
  • While seemingly disparate, these two developments represent complementary pushes on AI adoption. Google is building the invisible, standardised infrastructure for agents to transact, while Meta is creating the visible, fashion-focused hardware to bring AI directly into the consumer’s daily life.

Some weeks, a single story dominates our analysis and demands all our focus. This week we’ve got two signals heading in very different directions. One is buried in the machinery of digital commerce and unlikely to get much oxygen in the mainstream. The other is loud, staged, and already ricocheting across social media feeds. They don’t need to connect, but together they show where AI is being pushed: deep into invisible systems that nobody really thinks about, on the one hand, and into public attention and fashion’s agenda, on the other.

Google Agent Payments Protocol (AP2)

Let’s start with the quiet one. This week, Google introduced a new protocol designed to standardise how AI agents make purchases. Developed with more than sixty partners across payments and retail (including Mastercard, PayPal, and American Express), the Agent Payments Protocol (AP2) is an open standard (rather than a Google-only scheme) that gives AI agents a shared, auditable way to make purchases across different providers. Some hesitation is always sensible when a tech giant proposes a non-proprietary standard, since these often end up benefitting a closed ecosystem in the long run, but for the time being we need to analyse what’s in front of us.

At the centre of the proposal are what Google calls “mandates”. An intent mandate sets out what the user has authorised an agent to do and the limits that apply. A cart mandate confirms the specific purchase once a choice is made, including items, price, and merchant. Both are cryptographically signed and verifiable by the agent platform, the merchant, and the payment provider, so every party works from the same record of consent, even when those parties are all automated systems themselves. Separating “intent” from “cart” enables staged approval and a clean audit trail, and gives each party a way to confirm that an agent acted within the authority it was actually granted; a rough sketch of how this could look follows below.
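To make the two-mandate split concrete, here is a minimal, hypothetical sketch in Python. Everything in it is illustrative: the field names (max_spend, intent_ref, and so on), the JSON canonicalisation, and the HMAC signature standing in for whatever verifiable-credential scheme AP2 actually specifies are our assumptions, not the published spec.

```python
import hashlib
import hmac
import json

# Stand-in symmetric key; a real deployment would use asymmetric,
# verifiable-credential-style signatures rather than a shared secret.
SHARED_KEY = b"demo-key"

def sign(payload: dict) -> str:
    """Sign a canonical JSON encoding so any tampering is detectable."""
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Re-derive the signature and compare in constant time."""
    return hmac.compare_digest(sign(payload), signature)

# Intent mandate: what the user has authorised the agent to do, with limits.
intent = {
    "type": "intent_mandate",
    "user": "user-123",
    "instruction": "buy running shoes",
    "max_spend": 150.00,
    "currency": "USD",
    "expires": "2025-10-01T00:00:00Z",
}
intent_sig = sign(intent)

# Cart mandate: the specific purchase, confirmed once a choice is made.
cart = {
    "type": "cart_mandate",
    "intent_ref": intent_sig,  # chains the cart back to the authorising intent
    "merchant": "example-store",
    "items": [{"sku": "SHOE-42", "qty": 1, "price": 129.99}],
    "total": 129.99,
    "currency": "USD",
}
cart_sig = sign(cart)

# Any party in the chain (agent platform, merchant, payment provider) can
# re-check both records, so everyone works from the same record of consent.
assert verify(intent, intent_sig) and verify(cart, cart_sig)
assert cart["total"] <= intent["max_spend"]
print("cart is within the user's signed authorisation")
```

The detail doing the real work is the chain from cart back to signed intent: it is what would let a payment provider refuse a purchase that exceeds the authority the user originally granted, without ever needing to see the conversation that produced it.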

Google is positioning AP2 to sit alongside other agent standards – most notably the Model Context Protocol – rather than replacing them, making it part of a growing stable of tools that can be placed in the sandbox available to general-purpose systems like Claude and the ChatGPT family.

Payments have never been the most outwardly compelling part of the product journey, but they are a key point where the rubber of back-office systems meets the road of consumer experience, brand perception, and engagement. For retailers, AP2 could mean smoother subscription models or replenishment systems that don’t need human clicks to authorise. For consumers, it will likely mean less visible friction and more transactions happening before you even notice. Nobody will be wowed by a protocol spec sheet, but what it could enable sits at the heart of a much wider push into agentic commerce, one with the potential to take us into some interesting, potentially problematic, places.

If all of that sounds a little abstract, Meta’s Connect event went in the opposite direction: stuffing AI into the real world, with varying degrees of success. The hardware (the latest version of the tech giant’s Ray-Ban smart glasses) now includes a small, full-colour screen that sits in the lens, while a new Neural Band wrist device pairs with it and reads subtle muscle signals, letting users control the interface without having to hold their hands somewhere visible and pinch, the way outward-camera headsets such as the Vision Pro and the Quest line work. The bundle starts at $799 (considerably cheaper than other glasses-styled headsets, but a marked price increase on the audio-only Meta Ray-Bans sold so far) and, according to Meta, arrives September 30 in select US retailers, with broader availability to follow. Battery life is quoted at up to six hours of mixed use, with a carry case topping that up across the day. Alongside it, Meta also launched the Oakley Meta Vanguard, a display-free, athlete-oriented model that trades HUD features for 3K capture and fitness integrations.

Oakley Meta Vanguard glasses

From where we’re sitting, this announcement feels like Meta revealing its bet on what comes after the phone as the everyday home for AI. Humane tried to make the pin work and stumbled, while OpenAI, flush with cash, handed Jony Ive a vast budget to dream up something still completely unseen (though some details have recently leaked). Everyone is betting that the phone is not the final word in personal computing, and that whatever comes next could be as valuable to people (and to the companies extracting value from those people) as the last fifteen years of smartphones have been.

Meta’s gamble is to pick a form factor people already know, and one where brand, self-expression, and fashion are already known quantities in purchase decisions. Glasses are easier to accept than headsets or pins. They have aesthetic legitimacy baked in, especially with Ray-Ban branding on the temple. Add a display, a camera, and an assistant, keep the gestures discreet, and this still feels like the most viable future we have for wearable technology… even if the on-stage showing didn’t exactly go smoothly.

Practical limits will still set the bounds, though. If the glasses feel heavy after prolonged use, need a midday charge, sit too far from the price of regular eyewear, or fall into the strange morass of fashion/tech brand collaborations that end up languishing as e-waste, the vaunted platform shift could wind up being for enthusiasts only. Google Glass, after all, aimed for everyday eyewear and ran into social friction (the stigma of “Glassholes” still hasn’t worn off completely), while Apple’s Vision Pro pursued maximal ambition and met a different wall: weight (it was heavy, we know first-hand), social isolation, and, perhaps most significantly of all, price. Neither result closes the door on head-mounted computing, but they do show how hard it is to make something people will actually wear on their faces.

Meta Ray-Ban Display glasses

In that light, though, Meta’s long-running, investment-backed partnership with EssilorLuxottica seems to have already avoided most of these pitfalls. The first-generation glasses looked like Ray-Bans and had features that people seem to genuinely value, and the second-generation models look like slightly chunkier Ray-Bans with even more features that largely align with what the consumer market wants from wearable technology.

None of this guarantees a future where people actually want AI projected into their vision, but it does underscore the fact that the primary objections to a device like this are going to be the reliability and ethics of the AI, rather than anything to do with the device itself. 

On the surface, there’s little that unites a new payments protocol and a new wearable computing platform, but both are the kinds of steps that accumulate. Over time, standards bed in, devices normalise, and what once seemed speculative starts to feel ordinary – which is where deeper transformation tends to come from.