Podcast: How Fashion And Beauty Show Up In AI Search

Hey, and welcome back to The Interline Podcast. 

Now, if you’re a regular listener, you’re going to have heard me have a couple of conversations over the last few months with tech executives about how AI is changing the web and, with it, the entire journey of brand and product discovery, engagement, and shopping.

Those two prior episodes, with Maria from Daydream and Jonathan from New Generation, focused on the product and the retail experience in a very broad context. Today, I’m picking the same conversation back up and we’re really zeroing in on search and discovery. 

As you probably know from your own interactions with apps like ChatGPT, Gemini or Claude, AI models are heavily reliant on searching for contemporaneous information. Because it turns out that a lot of people’s queries are about real-time concerns, not history. So yes, you know, technically speaking, GPT-5.2, Claude 4.6, and all their different variants are all-knowing oracles of a kind we’ve never had before, and you can get answers from them across the whole corpus of human history and knowledge. But what people actually want from most of their interactions with LLMs is what they wanted from Google. They want sports scores, they want updates on war and politics, and most relevant for us, they want product recommendations. 

To answer those kinds of real-time queries, LLMs go out to the internet through a variety of different means, from structured and aboveboard and approved to brute force unapproved scraping. And the answers they come back with determine when, how, and why different brands and different products get presented to the user. So for those brands, it’s obviously vital right now to be looking at search and understanding how that influences the way that AI sees the brand and presents them. But unlike traditional SEO, this isn’t just a dry technical conversation for a shrinking audience over time. It’s also about the future of consumer behaviour. It’s about truth, it’s about trend intake, it’s about culture, it’s about a whole lot that manifests itself in the web and in search, but has much wider implications. 

To do that all justice, I’ve invited on Malte Landwehr, who is the Chief Product Officer and the Chief Marketing Officer for Peec AI, which bills itself as the leading solution for AI visibility, and which raised more than $20 million in a Series A round towards the end of last year.

Malte was game for me quizzing him about a lot, from his own personal use of AI to the long-term outlook for generative content creation. This is a slightly longer show than usual on that basis, so let’s get started.

NB. The transcript below has been lightly edited.


Okay, Malte Landwehr, welcome to the Interline Podcast.

Thanks for having me.

Not at all. The pleasure is all on this side of the table. 

Now, we usually start these shows with definitions when I feel like there are definitions we need to get out of the way. So we’ve previously done a couple of episodes on the inroads that AI is making into e-commerce. And we started those ones by defining things like GEO and AEO and what have you. I actually want to start on a different tack this time. And I want to start by talking about behaviour, because this has been on my mind a lot: working in an office space and seeing how other people use AI versus how I use it, and having children and seeing how they communicate around it in the education system now.

And I want to know how you use AI day to day. When do you interact with it? Is it on your laptop, on your phone? How are you doing it and what are you using it for? 

So just as a bit of clarity, from my side, I don’t use AI to write, for a mix of quality and ethical concerns. I do use it more and more in admin as kind of a first layer between me and the web, and I’m also increasingly going prompt first as one of the primary modes of interfacing with my computer, which is not an unusual sort of behaviour pattern, I don’t think, at the minute. I don’t use it for shopping. I don’t know why, but at the moment I don’t. I’m keen to hear what you use it for though and why.

Yeah, sure. So it’s a little bit different for work and private reasons. So for work, I use it on my computer. I mainly use ChatGPT because I just have the ChatGPT app on my MacBook. I’ve been using the app very, very intensively – ChatGPT in general on various devices very intensively for the last two years. It knows a lot about me. It’s just part of my workflow now for basically everything.

Recently, I’ve started to use Gemini more and more. By recently, I mean the last couple of months, because I have the feeling that for data analysis and transforming data, it often actually does a better job than ChatGPT. Using ChatGPT is mainly a habit, and I like the user interface of ChatGPT more than Gemini’s. I don’t use it for writing either, but I often write something and then copy it into ChatGPT and say, give me feedback. I intentionally do not ask it to rewrite it; I say, tell me what sentence I should make shorter, look for typos, grammatical errors. And usually it gives quite good feedback. If I tell it to just rewrite this, it usually isn’t a great outcome, because it sounds too little like me. It sounds too much like AI.

At home, I mainly use it on my phone, and with voice mode a lot. It very, very often happens that my wife and I sit on the sofa, we have a conversation, and then we are like, wait a minute, what exactly is this or that? And then I just pull out the ChatGPT app, I ask something, and I get an answer.

And since you mentioned shopping, I do use it for shopping. For example, I bought a new electric toothbrush just last week. And I described to ChatGPT what my requirements are, what I want, and to please suggest a few models. And then it suggested four models from two different brands. And then I was like, okay, I mainly pay attention to this, or I don’t understand the difference between these two. Please ask me a couple of questions that are very simple to come to a conclusion. And that way I actually then reached a decision which electric toothbrush to buy.

But I mean, then I went to the manufacturer’s website and bought it directly because I don’t want to accidentally buy a fake product by using Amazon or another marketplace. But for making the decision for buying, I also do use AI.

Interesting and I honestly couldn’t tell you why I don’t use it for shopping. Part of it is I have three children, one of which is a three month old baby, and most of my shopping is baby stuff these days rather than anything for me. So that’s a big part of it. 

I use a lot of different AI models. Most of my work and personal stuff is – so I have an application called Raycast, which runs on macOS and Windows, and through that there’s an AI chat which I can plug into a bunch of different models. So the Gemini family, the Claude family, the GPT family, Kimi, MiniMax, that kind of stuff. A lot of that is professional curiosity: understanding how these different things work from a tool-calling point of view, how well they work from a writing and research perspective. And it’s interesting for me to see the progression in the models themselves.

But I feel like where most people land with this is where you’ve landed, which is you go with the one with the best user interface. The model quality is almost academic. You go with the one that feels the best to use, that lives on the devices you want it to live on. And crucially, the thing you said about memory: you go with the one that knows the most about you. Because user habit and user knowledge are, I think, the two keys.

Yeah, I agree.

Okay. And then you mentioned shopping. Have you ever used AI to discover new brands? So you have fashion and luxury clients listed on Peec. So I’m not asking you to give away any of the secrets about how they think about where they show up, but you personally, other people around you, have you used AI in a specific kind of fashion context, aside from the electric toothbrush, to get into or to be story-told around brands? Because there’s a difference, I think, between wanting to find a very specific utilitarian product – your electric toothbrush, or I might need a shovel for the garden or something like that – and the more subjective stuff in fashion, which is a blend of product attributes to some extent, but also brand storytelling and lifestyle and everything else.

Yeah. So for the vast majority of fashion products and brands, I have to admit, I discover them via my wife, who either buys them for me or shows me a picture and then orders them. Two disclaimers: my wife works in the fashion industry and has done all her life. And I have a rather severe form of aphantasia, so I don’t have visual thinking. If you show me two items of clothing, I cannot form a picture in my mind of what they look like together. So that’s why my wife takes care of that part.

But there are fashion items that I buy for specific purposes or reasons. And, for example, a laptop bag is something where I have used AI in the past to describe what I want, what my requirements are. And then I have discovered a lot of brands that I otherwise would never have discovered that way.

Or I now went down to wearing only one specific kind of t-shirt, like the same t-shirt from the same brand. I searched for that for a very long time. And there I also used AI in the process to describe what kind of t-shirt I want, what material it should be made of. So for very, very specific things, I do use AI. I believe a lot of fashion is more discovery based. Like you don’t have a concrete need in your head, but you see something that inspires something and then maybe AI is part of the journey. 

But I do not do fashion buying regularly with AI. 

I was just gonna say, just to pick up on the one thing you mentioned there, I think there are some categories of fashion that are more objective attribute and performance driven. So you think about a pair of running shoes, an outdoor jacket, something where the conversation might kick off from the perspective of, I’m gonna be in this outdoor scenario. I’m gonna be running this distance. What is the best functional fit for that? 

But that’s really only those kinds of categories. It’s athletics, it’s performance, it’s workwear, things along those lines. The vast majority of it fits more into the category of what you’ve described, which is a much more subjective and semantic and wooly sort of questioning.

Yeah, but for example, when I bought my sneakers, I already knew I wanted to go with On, because people tell me they’re very comfortable. I like the brand, I think they are cool, but I had no idea which ones to buy, and going to the On website, you are bombarded with the Cloud-this, the Cloud-that. This is comfy, this is super comfy. What does it mean? So I went to ChatGPT and I asked it: here are six shoes that I like, which one is actually the most comfortable? Which one is the best for walking 10,000 steps, 20,000 steps per day? And that is actually how I then decided on the model of shoes that I wanted to buy from On.

It’s interesting, I think it says something about On’s product strategy almost as much as it says about the capabilities of AI. 

Or about my inability to understand marketing speak about sneakers.

One of those. To be honest, I think I’ve had similar experiences with On. I do own some On trainers and I do remember coming away thinking, there’s an awful lot of platforms here and I’m not sure which one is the correct one for me. 

I do want to zero in on what we’ve just talked about in terms of the way people interact with AI and why they pick what they pick. I said that I’ve experimented with a lot of different models. That continues to be true, but I don’t think that’s common. We’re all familiar with the user figures for ChatGPT, you know, in excess of 850 million, Gemini’s not massively far behind. And then you’ve got apps like Claude, which is having a moment and overtaking ChatGPT for a variety of cultural reasons. Perplexity, Mistral, those sorts of things are kind of small fry.

That behavioural stuff is interesting to me when I think about The Interline’s audience of fashion and footwear brands, retailers. When they’re considering how to target AI users, which they increasingly are, you know, they’re increasingly interested in the idea that the discovery journey, the buying journey is more likely to start with a chat-based interaction in a third-party application from their point of view than it is through first party channels, social media, and so on. 

Should those brands be thinking about targeting AI users as just a big blob? Or should they be thinking about: we need to target Claude users; we need to target ChatGPT users. Or are we assuming that people just move from model to model and that kind of behaviour is going to remain the case? 

I know you’ve got the intermediary layers like UCP and things that kind of complicate stuff, but it’s interesting to me to think about targeting the AI user base, like how homogenous is that? And how does it break down?

Yeah, though one question is how well we can actually target right now, since in most of these models you can do organic things, but you have very limited options to buy ads. There’s the ads test at OpenAI, and there was something on Perplexity for a while.

So the way I see it, let’s first talk about the models and then MCP, UCP, etc. Model-wise, the biggest one – the one the most people in the world see – is probably still AI Overviews from Google, which now sit on top of the results for a large percentage of searches. And then if we go by size, it’s ChatGPT, it’s Gemini. Those are the very, very big ones. In recent weeks, I almost want to say days, Claude has been catching up like crazy, but Claude is very, very B2B. And until a few weeks ago, it was very “prosumer” focused, so the regular consumer would not use Claude. It’s also the only model where you have to make an account before you can run even a single prompt. Almost everybody else allows you to at least run one prompt without logging in, just to show it to you.

This is now changing because there is a lot of negative sentiment against OpenAI. Whether that will be gone in a week or will continue, nobody knows. What I would almost ignore is Mistral and Perplexity. They were very much in the conversation about two years ago, but I think Mistral is similar to Claude at a much smaller scale, and just doesn’t care too much about its main user interface. They want to sell to companies. And Perplexity has been overtaken by Claude, even by Grok, by many others.

Now, if we think about optimisation, the fundamentals are actually more or less the same. Of course, if I want to be on Grok, I need to be on X, because that’s like half the input. These models use different search engines for their search grounding. But again, who is optimising for Bing? Who is optimising for Brave Search? Nobody, right? Everybody’s optimising for Google, and that usually also works for the other search engines. And the only thing that you could do specifically for one model is to check what the sources are in Grok, in Claude, in Gemini, and then specifically try to get your brand mentioned in those.

But if you optimise for, let’s say, ChatGPT, you will automatically also optimise for all the others. So unless somebody has very strong information about what the future market share of these models will be, I would not overly focus on optimising for one specific one. I would probably start tracking ChatGPT, Google AI Overviews, and Gemini, and see where that takes me. I would consider AI Mode if I were in a business model, in a company, that is currently very dependent on traditional SEO, because it’s very likely that AI Mode on Google will grow and grow. And that is how I would think about the models.

Mm-hmm.

So we’ve covered UCP, we’ve covered MCP, we have covered ACP to some extent, but there are a lot of CPs; there are a lot of protocols, a lot of proposed surfaces and interfaces for models interacting with databases and models interacting with other models. Just give me a flyover of how that all stands today.

Yeah. So, I mean, you said you already covered MCP, UCP, ACP, so I will not go too deep into those. There’s also A2A from Google, an agent-to-agent protocol to allow agents to talk to one another. We also have AP2, the agent payments protocol, also from Google, for agents to make payments to a merchant. We have, more of a concept than a protocol, the agentic checkout from OpenAI. And then, to make it super complicated, we have ACP again. The ACP people usually know is the agentic commerce protocol from OpenAI, which allows agents to communicate with merchants. But there is also ACP from IBM, the agent communication protocol, which allows agents to talk to agents. And very recently Google launched WebMCP, which basically brings MCP functionality to your website. That is more focused on an agentic browser, where you are on a website in your browser, you see it visually, and then it’s easier for the browser to have agentic interactions with the website.

Which one of these will be the standard? I can’t tell you, and many of these can actually be combined. What I’m very sure of is that if OpenAI has something called ACP and IBM has something called ACP, OpenAI’s is the one we will talk about, because most people don’t even know about the agent communication protocol from IBM. So that is the one I would probably ignore. Sorry. Sorry, IBM.

I think you are not alone in noticing that IBM is not the dominant force in consumer computing in particular these days. That’s fine. 

That is super helpful. And I think, like with any kind of multi-standard environment, adoption will just dictate which of those wins out over time. It seems like the common principle is providing more atomised, atomic, whatever you want to call it, fine-grained tools to AI models and agents: to allow them to interact with databases, which for fashion and e-commerce purposes let’s just call your e-commerce catalog, and to interact with website front ends and the things that surface those. That seems to be the general direction that stuff is headed.

Yes. So in an abstract fashion, I would think about: how do I make it simple for an agent to decide that my product is the best, to buy my product, to use my comparison feature, whatever I usually offer to humans? How can I offer this to agents in the future? But I would not expect significant revenue to come via this channel for the next 12 months. Percentage-wise, though, I think the growth rate is already very, very, very high.

I think Shopify has recently published some numbers, though they are involved in the development of some of these protocols, or were the first partner. But yeah, if organic traffic from Google is how people discover my brand and find my website right now, I would definitely at least think about these protocols, and maybe start a first prototype just to be part of the ecosystem in one small way.

Yeah, now this is going to sound like a basic question, and I know we’ve talked around it already, which is: how does an LLM actually use search? So you’ve mentioned Gemini already. That’s the major AI lab that also basically owns all the search volume in the world outside of China. But then you have other search services like Exa, Firecrawl, Tavily, and Linkup that all promise a better AI search experience. And you’ve got concepts that people in traditional SEO won’t be familiar with, like fan-outs and agent swarms, that are different to how traditional web crawlers have worked.

Just humour me for a second on the basics. When an AI model goes out to the web, typically what search index is it using and how much of what we think of as optimising for that type of search is going to carry over to the future? 

Yeah. So I think we need to differentiate between a simple web search that is conducted when I ask ChatGPT a question, and when I go to Manus and start a six-hour research task. So let’s start with the simple thing. Somebody goes to the Gemini app or the ChatGPT app and asks a question. What happens is that, if web search is triggered, that question or command or essay that I wrote is split up, and fan-out queries are generated out of it.

So if it’s a very long prompt that I wrote, a couple of short queries will be created – short in relation to the prompt. If I wrote a very short prompt like “best running shoes” and I hit enter, then context will be added to these fan-out queries, like “best running shoes 2026”, or maybe context from my chat history, like “best running shoes for a man”, or the location, like “best running shoes in Germany”. And then these fan-out queries are sent to various search indices – I will talk about which ones in a second – and search results come back, not to me, but to the LLM. Out of these, it decides which ones it wants to use as sources. Most of the time they are then also quickly downloaded, and some of them are used for the citations that really influence, word for word, what is written in the answer.
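To make that fan-out step concrete, here is a minimal sketch of the idea in Python. This is not how any real model implements it – the function name, the context keys, and the word-count threshold are all invented for illustration – but it captures the behaviour Malte describes: short prompts get enriched with year, chat-history, and location context before being sent to a search index.

```python
# Illustrative sketch of fan-out query generation. All names and
# thresholds here are hypothetical, not taken from any real system.
def fan_out(prompt: str, context: dict) -> list[str]:
    """Expand a short prompt into several grounded search queries."""
    queries = [prompt]
    # Very short prompts get enriched with context the model already holds.
    if len(prompt.split()) < 5:
        if "year" in context:
            queries.append(f"{prompt} {context['year']}")
        if "user_profile" in context:
            queries.append(f"{prompt} for {context['user_profile']}")
        if "location" in context:
            queries.append(f"{prompt} in {context['location']}")
    return queries

print(fan_out("best running shoes",
              {"year": 2026, "user_profile": "a man", "location": "Germany"}))
# -> ['best running shoes', 'best running shoes 2026',
#     'best running shoes for a man', 'best running shoes in Germany']
```

Each of those expanded queries would then be dispatched to one or more search indices, and the results handed to the LLM for source selection.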

And this process is sometimes done with official partnerships. For example, Microsoft Copilot and Yahoo! Scout have Bing as an official partner; they use some kind of official API from Bing. All the Google models can either use the Google grounding API, which is an API you can buy access to, or maybe a special version of some internal API call at Google to interact with the search index. With ChatGPT, when they started, they had this official partnership with Bing. But nowadays they seem to use third-party providers to scrape Google search results against Google’s will, similar to how SEO tools do it.

And research that one of my coworkers did actually showed – and this is very interesting for the e-commerce part – that there is sometimes this product slider or product grid in ChatGPT. And there are a few partners – I believe Etsy, Shopify, probably by now Walmart, probably by now Target – that have some form of direct integration there. But everybody else ends up in there because ChatGPT is also creating e-commerce fan-out queries, and they don’t put those into Google. They put them, via a third-party provider, into Google Shopping. And then they take the top Google Shopping results, do a little re-sorting, sometimes throw out a result, and that is what they display.

So unless you are on Shopify or you are one of these big partners of ChatGPT, then right now, if you appear in this shopping grid on ChatGPT, it’s just because you were in Google Shopping, where you can get in via the Google Merchant Center. And ChatGPT has just stolen that, probably again against Google’s will.
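The pipeline Malte describes – take the top Google Shopping results, drop a few, lightly re-sort, render the rest – can be sketched in a few lines. To be clear, this is a guess at the shape of the logic, not OpenAI’s actual code; the field names and the rating-based sort are invented for illustration.

```python
# Hypothetical sketch of a shopping-grid filter/re-sort step.
# Field names ("price", "image", "rating") are illustrative only.
def build_shopping_grid(shopping_results: list[dict], max_items: int = 8) -> list[dict]:
    # Throw out results that can't render in a product grid.
    usable = [r for r in shopping_results if r.get("price") and r.get("image")]
    # Light re-sort: prefer items with review ratings; Python's sort is
    # stable, so otherwise the original shopping order is preserved.
    usable.sort(key=lambda r: -(r.get("rating") or 0))
    return usable[:max_items]

results = [
    {"title": "Shoe A", "price": 120, "image": "a.jpg", "rating": 4.6},
    {"title": "Shoe B", "price": 90, "image": "b.jpg"},
    {"title": "Shoe C", "price": 110},  # no image: dropped from the grid
]
print([r["title"] for r in build_shopping_grid(results)])  # -> ['Shoe A', 'Shoe B']
```

The practical takeaway is unchanged by the details: if your products are not in the upstream feed (here, Google Shopping via the Merchant Center), no amount of re-sorting downstream will surface them.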

Yeah, and I think people don’t necessarily realise just how much scraping is going on right now. I’ve experimented with Firecrawl as the platform that I’ve used for just testing against our own content to see how it works, testing against broader web content. It is a very brute force method of getting answers. We also see from our own traffic just how much scraping goes on of our content. We’re a free to read publication, so we are a pretty logical target for this. Nothing that we have is paywalled. We actually permit AI agents scraping for a bunch of reasons. There’s just a tremendous amount of that happening. 

And by the same token, I don’t think people necessarily realise just how predictive the quality of a web search – even a multi-part, blended first-party and third-party search strategy – is of the quality of output that you get from an AI model. You can have the largest model out there with the biggest number of parameters, but if it doesn’t know something, in the sense that that thing is not part of its training data, it is going out to the web to get an answer. And the quality of the information that it gets back heavily influences the experience that people get on the other end. And I reckon that a lot of people who have been dissatisfied with their chatbot experiences are not actually unhappy with the output of the models; they’re unhappy with the input those models are getting from their searches.

I think a lot of people in general are also not happy with search. Like completely independent of AI, a lot of people are not happy with the results that they get from Google. How do you see that side of things? Do you think there’s any kind of reset opportunity where we can maybe just sidestep the fact that a lot of people aren’t happy with search, a lot of people aren’t happy with the way that search is showing up and do something differently? Do you think we get better answers just in AI over time through training? I’m keen to get your take. 

Yeah. So the fact that people are unhappy with Google search results has been the case for a while. But could anybody really gain a lot of market share? Like, yes, a little bit with Bing, a little bit with Brave Search, which is probably more privacy-driven than result-quality-driven. But being 10% better than Google would not help anyone; you probably have to be a hundred percent better. And maybe AI can be that layer where people actually stop using Google. But the fact that ChatGPT, which had this partnership with Microsoft, largely or completely replaced Bing with Google shows that right now Google probably still has the best search results, especially if you want to do it globally.

And I don’t think it’s that easy to build an equally good search engine, because otherwise ChatGPT would have just done it, right? OpenAI could have just said, here is a billion dollars, somebody build us something like Google. But they haven’t so far. They are obviously trying, but either it’s not their priority project or it’s just hard. And Google has this big advantage that nobody blocks the Googlebot, because you want traffic from Google search.

If a new search engine pops up now that sends you zero clicks per month, you are much more likely to block it. So this is probably a multi-year project, getting people to allow you to crawl their websites on a daily basis. I think that is also why all of these AI companies are pushing things like MCP, etc., because it could be a different way to obtain data, a different way to interact with documents on the internet.

Maybe you don’t need to crawl the New York Times website. Maybe you just send a search query to the New York Times MCP and it gives you the answer. It’s not even a search query; it’s like a prompt – look up the best statistics for this – and then you pay 0.1 cents for it and you’re fine. I think that is the only way we might get away from the Google index as the foundation.
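A toy version of that pay-per-query idea might look like the following. Everything here is invented for illustration – the class, the lookup, and the billing are stand-ins, and a real publisher endpoint would run retrieval and summarisation rather than a keyword match – but it shows the economic shape Malte is sketching: the agent sends a prompt, gets an answer, and the publisher accrues a tiny fee instead of serving a crawler.

```python
# Hypothetical sketch of a publisher-hosted, pay-per-query endpoint.
# All names are invented; this is the concept, not a real MCP server.
PRICE_PER_QUERY_USD = 0.001  # the "0.1 cents" from the conversation

class PublisherTool:
    """Stand-in for a publisher's MCP-style query tool."""
    def __init__(self, archive: dict[str, str]):
        self.archive = archive  # topic -> answer, faking a content archive
        self.revenue = 0.0

    def query(self, prompt: str) -> str:
        self.revenue += PRICE_PER_QUERY_USD  # bill per answered prompt
        # A real server would do retrieval + summarisation; we fake a lookup.
        for topic, answer in self.archive.items():
            if topic in prompt.lower():
                return answer
        return "No matching material."

tool = PublisherTool({"inflation": "Inflation statistics: 2.4% (illustrative)."})
print(tool.query("look up the best statistics for inflation"))
print(f"owed to publisher: ${tool.revenue:.4f}")  # prints owed to publisher: $0.0010
```

Whether the unit economics of fractions of a cent per query actually work for publishers is, of course, exactly the open question.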

Now, you also asked how many of these SEO activities are still relevant. And there I would say almost everything, because one goal that you have is probably to have your own website cited as a source. That is probably what every brand wants. And for that, all the traditional SEO things still work. There are things you can do on top, but there’s no reason to stop doing SEO just because of the switch to AI search, because the first step is to rank in Google.

Yeah, I would agree with that. One question I did have: when we think about that side of things, how you show up now in search – and what we mean by that is how you show up at the point of inference, right? How you show up as a brand at the point where somebody is having a conversation with ChatGPT or Gemini or so on, and it goes out in real time to fetch information from the internet, and that influences its responses.

There’s the other side of things, which is that over time, if you are appearing more prominently in those kinds of interactions, is there a pathway where you as a brand can also then appear more prominently during training runs? Because if you disconnect a model from the web completely, which you can do – in my case by just depriving it of a web tool – and get it to give you answers, it will still talk about and recommend brands even if it doesn’t have access to that real-time information. It’s doing it based on the training data that it was given.

Is that a reliable way to think about things, or is that more of a historical view – that a lot of the pre-training and web-scraping stuff is done, and actually it’s the inference and the search that matter?

I think there are many instances where you also want to be in the training data. To give you one example, if you ask ChatGPT for the best running shoes without any additional context, sometimes it will start a search for best Adidas running shoes. And this is of course a huge advantage to Adidas over any other brand. We see this behaviour where, on Sundays in certain countries, ChatGPT sometimes does fewer web searches on average. Then the brands that have been around for a long time increase in visibility, and the newer brands go down in visibility, because they only show up via the web search and this grounding process. So it is a huge advantage if you can get into the training data as a prominent, well-received brand.

Now the problem is, who knows what the training data for GPT-6 will be, because there’s a real risk that OpenAI says: too many people are manipulating data on the internet; we are only going with highly, highly trusted online sources, and otherwise we will use GPT-5 to create synthetic training data for GPT-6. This is something you cannot do endlessly, because most people believe that at some point the model will collapse, and there is research suggesting that if you only train on your own data, the model will collapse. But if GPT-6 and 7 and 8 are different in their approach from GPT-3, 4, and 5, maybe this chain of training data can actually work. And there will always be some new trusted data that they can put in there.

But I don’t think it will be as easy anymore as it was for GPT-3. If you had known in advance how GPT-3 would be trained, you could have just created your own subreddit, linked every single page on your website, and upvoted each one three times. And then every page from your website would have been included in the training data with very, very high weighting.

With GPT-4, they completely stopped revealing anything. I think one reason is that OpenAI doesn’t want smart SEOs to reverse-engineer how it works and then manipulate it. But they also want to avoid lawsuits about using material they didn’t really have the copyright to, whereas for GPT-3 they just said the data sources were Books1 and Books2. And there was this case where employees at Meta, Facebook, downloaded books via BitTorrent, which is kind of illegal, and they did it to train some version of Llama. And there are a lot of accusations that somebody might have trained on data they didn’t have a license for. Like, there’s a rumour that Sora, the video model from OpenAI, was trained on YouTube, where they didn’t have a license.

On YouTube. Yeah.

They keep their mouth shut and they don’t reveal anything. 

So it’s very much a black box from a model-training point of view. And I think if you’re a Nike or an Adidas, you have to accept that you will show up in that respect just by dint of how over-indexed you are in the sneakers category, for example. That’s the reason that, with at least the early waves of image generators, if you asked them to generate a sneaker, they would put a swoosh on it, because that’s just over-represented in their data.

One thing I wanted to hark back to briefly: we’ve talked about the difference between objective information and some of the more subjective sorts of queries. I feel like as a brand or a retailer, somebody selling product through the internet, you want both. So when people put a query into ChatGPT, they might want to know performance attributes like we’ve talked about. They might want objective information such as price or shipping date, things that are not open to change and are not subjective. But they’re also going to be asking much harder-to-define things about reputation and trust and fit and so on.

I’m curious if that is actually manifesting itself in the way that people input queries into LLMs? Are people actually learning to behave differently now that AI is everywhere? Because I see this generationally in terms of how people interact with Google. Like, I’m oldish, I’m in my mid-40s, I’ve always put very dry kind of queries into Google because I know that it used to dispense with all of the linking words like ‘and’ and what have you. I see my kids use Google and they ask it full questions with question marks on the end. And I’m interested to see how that behavioural side of things is manifesting itself and how that then translates into what gets served up to them from that blend of objective and subjective criteria.

Yeah. So the only reliable statistic I have there is that prompts are, on average, significantly longer than Google searches or traditional web searches. And for at least the last ten years, younger people have been asking longer questions, even of Google, and more often in natural language. You and I were raised, as you mentioned, to type something like ‘Prince Philip age’, because all the other words don’t matter, but younger people just ask: how old is he? The same trend has shown up with voice versus typing. When you use voice as an interface, even just dictating, you tend to ask longer questions in natural language, grammatically correct, full questions. And this is continuing with AI, of course. But that’s the only real data I have on it. Everything else would be guesswork, interpretation, or talking from a small amount of data.

That’s interesting. There’s another part of this. So unlike something like beauty, an industry where claims are really tightly regulated, where brands do not put things online that they cannot substantiate because the reputational and enforcement downside to that is real and pronounced and punitive – fashion isn’t there yet. It is not a tightly regulated industry. It’s changing, but it’s not there.

That feels like a dangerous thing to be taking into the AI era of the web, because as a brand, you can just say anything, really. You can say you use better cotton, or you can say that your t-shirt fits better than the competition, outside the enforcement realm. And then your retail partners, if you distribute that way, can repeat that claim. Your marketplace partners can repeat that claim. And AI can pick up on that and report it pretty authoritatively to the user, even if it’s not true, or at the very least not data-backed.

What’s the outlook there? We seem to very much live in an era of you can just say things and those things will become fact if they are repeated sufficiently. And AI does not really distinguish between them.

Yes, that is definitely happening. But also, your competitor could create a hundred Reddit accounts and write something about you that isn’t true, and if it’s written in enough Reddit threads, many AI systems will just repeat it. I have a good example of this. We have a lot of competitors who create listicle-type content, like ‘10 best AI visibility tools’, and they always rank themselves number one and put us somewhere else on the list. One of them used AI to write such a listicle, and it hallucinated a fact about the company I work for. It said Peec AI has the best PDF reports. We do not have PDF reports. And since many other people use AI, there are now four or five of these listicles out there on our competitors’ websites, probably using that first made-up listicle as a source, saying we have the best PDF reports. I haven’t seen the LLMs pick this up yet, but if it continues, at some point they will say it about us, and then we’ll probably have to build the feature because people expect it.

So, yes, AI can pick up things that are not true. And I have a customer example: a car manufacturer. In different countries, there are different standards and requirements for cars, and what sometimes happens is that the AI uses a different country’s version of the content to cite its facts. If your car in South America is less safe than your car in Europe, because the regulations are different, people are more price sensitive, and people care less about safety features, that can have a huge impact. If a potential buyer in Europe searches and is incorrectly told that your car doesn’t have a passenger airbag, for example, or a side airbag, that is a huge problem. And these things are happening.

Yeah. And I think the part you said there, about having to build PDF reports at some point, is telling. There will be a tipping point even though that demand wasn’t real: nobody was actually asking for it, or maybe they were, but through different channels. You’re going to have to end up doing it anyway. And from a brand point of view, that’s fascinating for R&D and design and development.

If we think back to On again as our earlier example, you could easily see something similar happening, where AI describes a marriage of upper and cushioning that doesn’t actually exist in their product catalogue at the moment. I don’t know enough about On’s catalogue or platforms to know how that would play out, but hypothetically speaking, it could happen to them or any other footwear manufacturer, and that blend of midsole, cushioning, and upper is an easy analogue here. If enough AI says this exists, that you can get that cushioning with this sort of flexible, knitted upper and this kind of performance, and it starts to show up enough in that self-reinforcing loop, then all of a sudden you have to make that product. Or at least actually research how possible it is.

Yeah, it could happen.

Yeah, okay. You mentioned your company, Peec AI. When we think about how web search picks up different signals and hits different endpoints, across a brand’s own channels, partnerships, and mentions, a fashion brand shows up in a lot of places across a lot of channels. That also means there are a lot of people potentially responsible for owning AI optimisation within a brand.

Who’s your typical user? And have you seen that user base changing?

I mean, on a company level, it’s 50% agencies, 50% brands. And the big shift there is that at the beginning it was mainly agencies, and now it’s more and more brands who are getting serious about the topic, starting to invest and to track it. As a user, it’s primarily SEO teams; they’re usually the ones who get tasked with this. But we also see more and more PR and communication teams caring about it, especially on the agency side. There are PR agencies that never offered traditional SEO services, or only offered them reluctantly when a client asked, like, yeah, okay, we can do something, but it’s not our bread and butter, we’re not experts. And those same agencies now go out and do AI search optimisation, and do it very successfully. So it’s a new business opportunity for them.

I’ve also built and sold SEO software in the past, and I think the biggest difference is that the conversation now starts higher. It’s often CEOs and CMOs initiating it. Of course, in the end, the decision is still made by the SEO team most of the time, but with traditional SEO that top-down attention never happened, and I think that level of management interest is a big difference. We even have some clients where the lead literally came in from an investor: the investor sent an email to us and the CEO and said, you should look at this software. You should track this. You should measure this. You should optimise for this.

Okay, that’s super interesting. I’m surprised you didn’t mention e-comm teams in there from a brand point of view and that it tends to be more in the marketing and the SEO side of stuff.

I mean, if it’s a person with the job title SEO manager who sits in the e-comm team, then I wouldn’t necessarily know. And honestly, maybe I have a blind spot there, where sometimes I think I’m talking to somebody from the SEO team but it’s actually the e-commerce team.

A lot of blending goes on in those roles. Just to bring us to a close, I’ve got two last questions. Thinking about that audience we’ve just talked about, your CMOs and on downwards: is there anything people are obsessing over today, when it comes to optimising for LLMs, for the AI age of the web, whatever you want to call it, that you think is not going to last? A short-term distraction or a red herring?

Yeah, there are two things people are doing at the moment that work incredibly well but are clearly not going to work long term. One is churning out giant amounts of AI-generated content. It’s the best thing you can do if you only care about the next couple of weeks: it will give you more visibility in Google, more visibility in all of the LLMs. But there’s now a name for this, ‘Mount AI’, because if you look at the visibility curves of these domains in an SEO tool or an AI search tool, they go up like a mountain, but they also come down like a mountain. We recently did an analysis of the reference customers of a piece of software that creates AI content, and half of them have already lost their visibility. And those are the reference customers, so I’m not going to name names, but think about what the average customer might look like.

And the second thing people are overdoing is creating the listicles I mentioned earlier. The idea is you go to your own website, you create a new page, I don’t know, ‘top 10 podcasts in the fashion space’, and you put yourself in position one. Every human would think that’s a bit weird. Is it ethical? Is it maybe a bit cringe? But the LLMs right now just go: this is ranking on Google, it lists the top 10 podcasts, wow, I’ll take it as a source. So these self-promotional listicles are quite effective right now. But there’s early research showing that if you overdo it, Google will take away your rankings, and then you don’t show up in the LLMs anymore either, and you’ve also lost the Google rankings, which had value in the first place. So if you spam too much with these listicles and AI-generated content, you risk losing all of your Google rankings and, as a consequence, all of your appearances in AI search. Many people are doing this way too aggressively at the moment, including some very, very large brands, and many of them will lose visibility in, I would say, the next six to twelve months.

Mm-hmm.

Yeah, I have feelings about high volume AI content churn that we don’t have time to get into today. From a publisher point of view, not a fan. And it’s good to know that it doesn’t have a long tail on it. 

Final question. If we were to sit down and record this episode again in a year’s time, so early 2027, what do you think we’ve either tackled or missed that’s going to be more important? So we’ve just talked about what we think has a short shelf life. What do we think has a longer shelf life? And, of that, do you think the bigger change over the next year or so is going to come in how consumers and brands interact with AI or in the AI models and applications themselves?

I think there’s going to be a shift towards people using agents. The hype around OpenClaw, previously Clawdbot, has, I think, exposed many people to the idea of automating things and combining automation and prompts, even though that has been possible with n8n or Zapier for many, many years. It’s similar to the 3D printing moment, where many, many more people were exposed to a technology that had been around for decades, and that unlocks a lot of creativity and a lot of solutions.

And I do believe that in a year more people will have their own personal AI agent running on their own computer or cloud-hosted somewhere. I don’t know if it will be OpenClaw or something else, but I believe more people will have that kind of agentic interface. And then they might just tell it: hey, buy me new running shoes. It will spin up a couple of agents: one does market research, the next selects the right product, the next does the price comparison. Maybe there will be some MCPs in there, maybe not; maybe it will all still be based on scraping, with the last step just using the website in a fake browser window. That’s the biggest change I expect.
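The research-then-select-then-compare pipeline Malte describes could be sketched very loosely like this. This is purely illustrative: the agent functions, product names, and prices are all hypothetical stubs standing in for what would really be LLM calls, web searches, and scrapes.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    cushioning: str

def research_agent(query: str) -> list[Product]:
    # In a real agent this step would combine an LLM with web search
    # or scraping; here it returns a stubbed candidate list.
    return [
        Product("CloudRunner X", 139.0, "high"),
        Product("RoadFlex 2", 99.0, "medium"),
        Product("TrailLite", 119.0, "high"),
    ]

def selection_agent(products: list[Product], want_cushioning: str) -> list[Product]:
    # Filters the candidates against the user's stated preference.
    return [p for p in products if p.cushioning == want_cushioning]

def price_agent(products: list[Product]) -> Product:
    # Price comparison step: picks the cheapest remaining candidate.
    return min(products, key=lambda p: p.price)

def buy_running_shoes(query: str, want_cushioning: str) -> Product:
    # The orchestrating "personal agent": one prompt in, a chain of
    # specialised agents run in sequence, one purchase decision out.
    candidates = research_agent(query)
    shortlist = selection_agent(candidates, want_cushioning)
    return price_agent(shortlist)

choice = buy_running_shoes("running shoes", "high")
print(choice.name, choice.price)  # TrailLite 119.0
```

The point of the sketch is the shape, not the stubs: each stage could be a separate model call, an MCP tool, or a scraper, and the orchestrator only cares about the hand-offs between them.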

I think the interesting thing there is that’s asynchronous behaviour in a way AI interaction currently isn’t. If I interact with an AI now, for the most part it happens then and there: I ask a query, inference runs, I get a result. Sometimes it takes a while, there are multiple tool-call steps or what have you, but I’m not setting work off and coming back to see if it’s done. I know that’s not true in software engineering; that is more the paradigm there, multi-agent models and orchestration and so on. But from a consumer point of view, that’s a big behavioural change: the idea of workers in the cloud, the way Cloudflare talks about them, triggering price searches and long-running watches, and saying, buy me this, but only buy it when it drops below a certain discount threshold. That’s a very different way of thinking about AI to the current one.

Yes, yes, but especially the last use case you just mentioned. There are already so many tools that send you a price alert via email if your desired product drops under a certain price. And I think that is the most obvious thing where you should just have an agentic purchase experience in that very moment. Makes a ton of sense to do that, I think.
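That “act in the very moment” version of a price alert could be sketched as a tiny watcher loop. Again, this is a hypothetical illustration, not any real product’s API: `get_price` and `purchase` are stand-ins for whatever price feed and checkout integration an agent would actually use.

```python
def watch_and_buy(get_price, threshold, purchase, max_checks=10):
    """Poll a price source and trigger the purchase the moment the
    price drops to or below the threshold, instead of emailing a
    human and hoping they act within the discount window."""
    for _ in range(max_checks):
        price = get_price()
        if price <= threshold:
            return purchase(price)
    return None  # threshold never hit within the polling budget

# Simulated price feed that drops over successive checks.
prices = iter([129.0, 119.0, 94.0, 89.0])
result = watch_and_buy(
    get_price=lambda: next(prices),
    threshold=99.0,
    purchase=lambda p: f"bought at {p:.2f}",
)
print(result)  # bought at 94.00
```

In production this loop would be an event subscription rather than polling, but the behavioural point is the same: the trigger and the purchase happen in one automated step.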

Versus receiving an email and then having to act on it yourself within a small window. It changes the way brands think about markdowns and everything else. 

Hey, there’s a ton we can talk about here. I tell you what, let’s actually sit down and record this episode again in a year’s time. That feels like a good way to loop back on some of this. 

For now though, Malte, thank you so much for your time today. This has been a good one.

Yes.

Thank you for having me.


And that’s the end of my conversation with Malte. If this has given you a bunch to think about, I’d encourage you to actually check out the website for Peec. They’re not a sponsor or anything and I don’t endorse the product, but theirs is one of those websites where scrolling top to bottom gives you a really good overview of the environment and the problem, rather than just telling you about the solution. So if you’re at all curious about AI analytics, AI search visibility, it’s worth giving that page a scroll. And their blog actually, and Malte’s own LinkedIn, have been really good reference points for me over the last couple of months for staying on top of how AI is interacting with the web and keeping current with how brands, websites, and e-commerce catalogs get indexed and become visible in people’s interactions with AI. 

We’ll be back really soon with a very different topic, next week or the week after.

Thanks for listening and I’ll speak to you again really soon.
