Key Takeaways:

  • New data from Zylo and HBR suggests that the current wave of AI integration isn’t liberating workforces, but burdening them with “shadow work” and unexpected costs, complicating the narrative for IT leaders who are paying premium rates for features that may be driving burnout rather than efficiency.
  • Google’s issuance of “century bonds” to fund a massive CapEx roadmap acts as a stark financial signal that the company views AI not as a passing trend, but as a generational infrastructure shift that requires a funding horizon longer than the lifespan of most technology monopolies.
  • As semantic interfaces threaten to make traditional software UI irrelevant and agentic coding becomes viable, the fashion and retail sectors face a future where the concept of the “software seat” dissolves, potentially driving a shift back towards bespoke, proprietary tools built in-house rather than bought off the shelf.

Summarise and debate with AI:

Take the content and context of this article into a new, private debate with your AI chatbot of choice, as a prompt for your own thinking. (Requires an active account for ChatGPT or Claude. The Interline has no visibility into your conversations. AI can make mistakes.)

    In what we think might be a record turnaround from prediction to reality, two of The Interline’s defining trends for 2026 are already taking shape, in ways that are moving markets and unsettling workforces. And we’re just a few working weeks into the new year.

    Unfortunately, neither is exactly a positive move if you’re in the business of selling traditional software, or of the opinion that AI should be easing workday pressure rather than piling it on. On the (slightly sarcastic) plus side, though, this happened to be a great week if you want to make the longest-horizon bet on the capital expenditure flowing into AI eventually paying off, even if you’ll probably have to do the victory chant from your coffin.

    So what’s going on?

    Adding AI to fashion’s enterprise tools isn’t a straight shot to value, and even getting it right might end up damaging talent.

    First, we have the lowest-stakes prediction from our 2026 agenda: the integration of AI into core enterprise systems. Anyone, of course, could have seen this one coming. It doesn’t take any sort of expert to notice that the SaaS solutions we interact with, for both personal and professional purposes, are all having chatbots stuffed into them. From basic, broad productivity suites to more domain-specific solutions, there’s now a conversational interface hovering in the corner, begging you to call on it to query an underlying database or… get help with summarising and rephrasing text, maybe?

    Now, depending on the data model underneath it, and the work put into preparing it, the addition of a natural language AI layer to a given application can be incredibly powerful, or it can just as easily be borderline useless. 

    If we take a system of record like PLM as an example, then our thoughts from the beginning of the year still hold water: “the [platform] where all mission-critical product data lives is the one that would benefit the most from the addition of a “copilot” that was grounded in everything from slot plans to supplier scorecards.”

    This remains true, at least in principle. By rights, the system that consolidates information, context, and a chain of communication that spans the product lifecycle is the one that has the most potential to deliver a step-change in user experience if that system can shift from menu-centric navigation to semantic search and conversation – without compromising on data integrity in the process.

    However you feel about AI in general, there are very few people who would say, with a straight face, that they’d prefer to click about, drill down, roll up, and report to get an answer to a question when they could just ask it by typing or through voice. Provided, of course, that they could trust the response.

    And therein lies the rub: a lot of the time, existing systems that are being augmented with AI capabilities are being approached in an additive, sideways way, rather than treated as an opportunity to rearchitect.

    If we consider that hypothetical, traditional PLM database, and everything it should house, from POMs to production calendars, the simplest route is to replicate the data that already exists in a SQL-family relational database and represent it as vectors in a separate database, rather than rebuilding the whole system on Postgres for a feature that nobody’s certain the user community actually wants. 

    This route gives the providers of existing solutions a relatively uncomplicated way to build semantic search experiences on top of their current interfaces, but it also creates state-representation challenges that can quickly undermine user confidence, as well as reinforcing the idea that “AI” is a modular component that exists orthogonally to core systems – one that will sometimes give you the right answer, but sometimes won’t. That approach is not, The Interline expects, the route to long-term uptake.
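    To make the state-representation problem concrete, here is a minimal sketch – assuming a hypothetical PLM record and a stand-in embedding function, not any vendor’s actual architecture – of what happens when a relational system of record and a separate vector copy drift apart:

```python
# Illustrative sketch: a PLM record lives in a relational store, while a
# separate vector index holds an embedded snapshot for semantic search.
# If the two aren't synchronised on every write, the AI layer answers
# from stale state. All names and data here are invented.

def fake_embed(text: str) -> tuple:
    """Stand-in for a real embedding model: deterministic and meaningless."""
    return tuple(ord(c) % 7 for c in text[:16])

relational_db = {"STYLE-001": {"status": "in development", "target_cost": 42.00}}
vector_index = {}  # doc_id -> (embedding, snapshot_of_text_at_index_time)

def index_record(doc_id: str) -> None:
    """Embed a snapshot of the record into the vector store."""
    text = f"{doc_id} {relational_db[doc_id]}"
    vector_index[doc_id] = (fake_embed(text), text)

index_record("STYLE-001")

# A planner updates the system of record...
relational_db["STYLE-001"]["status"] = "approved for production"

# ...but the vector copy was never re-embedded, so a semantic query over
# vector_index would still "see" the style as in development.
_, stale_snapshot = vector_index["STYLE-001"]
print("in development" in stale_snapshot)  # True: the snapshot is stale
```

    The fix – re-embedding on every write, or querying live data directly – is exactly the kind of rearchitecting work that the additive approach sidesteps.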

    On the flipside, the technology companies that do it right, and build new data architectures for core systems, as well as adding logical tool calls via MCP to other platforms that can then be reliably queried (or even written to) alongside essential data, do have the potential to offer their users something novel and empowering.
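    The tool-call pattern is worth sketching, because it is the structural alternative to the stale-copy problem: instead of embedding a snapshot, the core system exposes a typed function the AI layer invokes against live data. The following is a simplified, hypothetical illustration in plain Python – the tool name, schema shape, and scorecard data are invented, and this is not the actual MCP SDK:

```python
# A simplified illustration of the tool-call pattern: the system of
# record publishes a schema describing what can be asked, and a handler
# that answers from live data. Names and figures here are invented.

import json

SUPPLIER_SCORECARDS = {  # stand-in for the live system of record
    "ACME-TEXTILES": {"on_time_delivery": 0.93, "defect_rate": 0.012},
}

TOOL_SCHEMA = {
    "name": "get_supplier_scorecard",
    "description": "Return the latest scorecard for a supplier by ID.",
    "input_schema": {
        "type": "object",
        "properties": {"supplier_id": {"type": "string"}},
        "required": ["supplier_id"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call against live data and return a JSON result."""
    if name == "get_supplier_scorecard":
        card = SUPPLIER_SCORECARDS.get(arguments.get("supplier_id", ""))
        return json.dumps(card if card else {"error": "unknown supplier"})
    return json.dumps({"error": f"unknown tool: {name}"})

print(handle_tool_call("get_supplier_scorecard", {"supplier_id": "ACME-TEXTILES"}))
```

    Because the handler reads the live store at call time, there is no second copy of the data to fall out of sync – which is why this route can be “reliably queried (or even written to)” in a way the snapshot approach can’t.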

    This disparity between “AI I might use for a bit and then abandon” and “AI that helps me do my job” should, in theory, be creating the richest buyer’s market in tech history! Consumers and enterprises should be able to pick between incumbent products that have had meaningful AI capabilities added to them, incumbent products that have paid lip service to AI, and new products that have a principled stance on AI – whether that’s to embrace it as an axiom, or to shun it. There’s a fertile delta here.

    In reality, the cramming of AI into existing enterprise products looks like it’s proving to be the worst kind of “tide that raises all boats”. According to research published this week by Zylo (The 2026 SaaS Management Index), more than 60% of SaaS companies are now embedding AI as a “supporting feature” versus just 36% that see it as being core to their product offering. And a resounding 90%+ have either already deployed, or intend to deploy, AI features as additions to their existing products.

    So AI is in everything. So what? 

    Well, someone, somewhere has to pay for it, whether it works or not.

    There will be no small number of readers of this article who work for Google Workspace-native companies that swallowed price increases bundled with access to Gemini – access that was previously partitioned behind a separate tier – because they weren’t given any choice. Some, to be clear, have turned that access to their advantage! But many now have Gemini in their documents, spreadsheets, emails, meetings, and calendars simply because the baseline business pricing tier includes it by default.

    To wit: the same Zylo report also revealed that close to 80% of IT leaders believed that they had been “unexpectedly charged” for AI features that had been added to existing products – many of which also introduced token-based pricing that could, if the company really took to the new features, shovel even more cost increases on top if usage thresholds were passed.

    And other research, also released this week (by the Harvard Business Review) suggests that the compact between user and solution provider – the promise that these new AI features would justify their cost by making work easier – is proving to be a bit of a devil’s bargain. 

    Now, this study is very narrow in scope, being based on AI usage at a single decent-scale company, but at least some of its findings will feel familiar to anyone who’s immersed themselves in using AI at work. And as more technology providers approach AI integration with enterprise tools the right way, it will be important to remember that giving people a way to unshoulder administrative burdens – without also changing organisational structures to account for talent-compression and role-blurring – could lead to burnout in areas where skills are already scarce.

    To quote the study’s conclusions, based on an eight-month monitoring period of 200 employees who have adopted generative AI tools: “we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so”.

    This may be new behaviour in one sense, but it’s also redolent of the timeworn story of new technology systems promising to automate work and remove friction, only to shuffle that friction to somewhere else in the product journey, or to place additional work on resources that are now theoretically time-rich again.

    As with previous “AI is doomed” studies (the 95% pilot failure research being the most prominent) there’s a lot to criticise about how this research was run, and it’s certainly not clear that the findings from one company can be extrapolated industry-wide. But it’s still safe to say that the addition of AI into existing enterprise systems and workflows is something that neither technology providers nor brand customers should take lightly.

    The build-out of AI infrastructure is a generational bet in more ways than one.

    As a quick palate-cleanser, this week gave us perhaps the most interesting reminder yet of the sheer scale of the capital expenditure commitment behind the training, deployment, and integration of generative AI models. Google – whose capex spending in 2026 is expected to reach $185 billion, up from just $52 billion two years ago – is now offering so-called “century bonds,” denominated in British Pounds Sterling (the one currency symbol The Interline doesn’t have to remember the keyboard shortcut for!) as an alternative to shorter-term US dollar bonds.

    This is remarkable both as a financial story – the last tech company to sell a 100-year bond was IBM, in 1996 – and as a barometer for institutional investment sentiment around the potential impact of AI.

    While The Interline is not a financial publication, you don’t need four stock chart displays arrayed around you to realise that the reason technology companies don’t often raise debt on this kind of horizon is that tech is a fairly transient investment opportunity – and that even the biggest technology companies, in both the consumer and enterprise sectors, rarely remain at the top of the heap for more than a couple of decades.

    Google itself, in fact, is a prime example of the kind of rot that tends to set in around technology products over time, when you consider just how quickly sentiment turned against Google Search. That product feels like it’s been with us forever, but the timeline from first general availability to today is less than 28 years.

    And – not least because it gives us an opportunity to link this fascinating urban exploration video of one of the company’s abandoned corporate hospitality facilities – consider the waxing and waning fortunes of IBM, which has seen several cycles of dominance and slide-back: from the “Big Blue” era of the 80s, to filing the largest corporate loss in US history in the early 90s, followed by a climb back up, and then a halving of market cap between 2011 and 2020.

    Without wanting to write a full history of IBM (there’s an exhaustive and very dry one of those still in print), the purpose of this analogy is to remind us that even 50 years is an extremely long time in technology. So Google offering a 100-year bet – at a 6% coupon rate, no less! – is a bold statement that AI is widely considered to be the undergirding force behind not just this generation of technology, but the two generations to come.
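    Some back-of-envelope maths on the figures above shows why the horizon matters so much. Assuming a standard fixed-coupon bond priced against the article’s 6% rate (an illustration only, not a model of Google’s actual sterling issuance):

```python
# Rough arithmetic on a hypothetical 100-year, 6% coupon bond.
# Price = discounted coupon stream + discounted principal repayment.

def bond_present_value(face: float, coupon_rate: float,
                       discount_rate: float, years: int) -> float:
    """Present value of a fixed-coupon bond with annual payments."""
    coupon = face * coupon_rate
    pv_coupons = coupon * (1 - (1 + discount_rate) ** -years) / discount_rate
    pv_principal = face * (1 + discount_rate) ** -years
    return pv_coupons + pv_principal

face = 100.0
total_coupons = face * 0.06 * 100        # 6.0 per year, for a century
print(round(total_coupons))              # 600: six times face in coupons alone

# At a discount rate equal to the coupon, the bond prices at par...
print(round(bond_present_value(face, 0.06, 0.06, 100), 2))  # 100.0

# ...but a century of duration makes the price very rate-sensitive:
print(round(bond_present_value(face, 0.06, 0.07, 100), 2))  # well below par
```

    In other words, buyers are committing to a century of fixed payments whose value hinges entirely on long-run confidence – which is exactly why an issuance like this reads as a statement of conviction.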

    If you feel the same way, then there’s now a chance – via brokerages and the secondary market – to make a bet that your grandchildren can assess the acuity of.

    Software ate the world. Is AI about to make it the next course?

    Finally, the second of our 2026 predictions: eroding technology footprints and AI as the “great leveller”. Just last month we had this to say:

    “When the same underlying intelligence can operate across multiple parts of the workflow, the distinction between tools starts to matter less than what data they can access and what they’re actually allowed to do with it. And when AI interfaces become the predominant mode of interacting with software, then the ability for platforms to stand out based on their design and usability also disappears.”

    “Where that line of thinking leads is still an open question – whether it collapses categories entirely or just makes them less tidy – but we do fully expect to see the lines that have traditionally separated software beginning to break down this year.”

    This is not a new idea in software circles, but it is one that The Interline believed was going under-acknowledged. Today, in the open software market for fashion and beauty, we see a lot of existing products with new AI capabilities (see the first story in this week’s analysis), and we see a lot of new solutions that promise to revolutionise old categories by being AI-native versions of them. But until this week, we’d been seeing less recognition that software categories may simply cease to matter in a world where anyone can seemingly roll their own applications.

    There is, to be clear, a lot of prematurity to what’s happening here. It’s a long hop from vibe coding an idiosyncratic personal to-do list app in Swift to developing a fully-fledged, homegrown alternative to big enterprise software. But it’s no longer the unfathomable leap it was until very recently.

    Consider the fact that the major AI labs now “dogfood” in an extreme way, by using their own models to help develop the application environments they’ll run in. This is still a high-wire experiment, but it’s evidence that AI-assisted, AI-directed, or even full-blown agentic software engineering workflows are churning out workable products in sensitive areas. And while The Interline can’t name names, there are certainly conversations happening in brand and retail boardrooms to the tune of “if we can’t find a solution that does what we need, don’t we have a brand new opportunity to make one?”.

    It remains early days when it comes to translating that kind of speculation into MVPs, but there is a gnawing sentiment emerging that industries – fashion and beauty among them – may be able to witness a third ground-shift in the way software is made, deployed, and sold. 

    The Interline is staffed with people old enough to remember boxed personal computer software on shelves, which was itself a big shift from mainframe applications. And most of our readers were also around for the move from boxed products (or on-premise deployments) to cloud-native, SaaS tools. 

    That second transition saw the phrase “software is eating the world” get coined, because the combination of ubiquitous deployment, piecemeal pricing, and low-interest-rate funding led to an explosion of solutions that wound up becoming essential to the operation of many, many different industries and sectors.

    Even with that wide distribution, though, the reins of software development stayed in the hands of founders, CEOs, and the technical experts they worked with. It was easier than ever to get, for example, a CRM system up and running, but the barriers to launching your own were still largely in place. And needless to say, fashion and beauty have lived through heavy consolidation periods that followed the move to the cloud, to the point where the shelves carrying essential platforms might be virtual, but they’re just as dominated by big players as their physical counterparts would have been a couple of decades ago.

    But this week provided some evidence that AI could actually disrupt the basic idea of what it means to make and sell software, with SMEs and huge enterprises alike becoming caught in a sell-off that’s being driven, by all accounts, by fears that AI development could make traditional software less valuable.

    And big tech CEOs themselves are certainly not pouring any water on the fire, with Databricks’ top executive giving an interview this week that includes the line “once the interface is just language, the products become invisible”. Which sounds familiar!

    Time will tell how far this trend extends, but we’ll end this week’s analysis by citing a paragraph from our own AI Report 2025: “I suspect that, sooner than a lot of us are comfortable with, a new class of fashion software seats will have artificial teammates sitting in them by default.”

    When we look back from an agentic AI viewpoint, that prediction still holds. But this week it has a different resonance for people selling software, since there’s an equal chance that the idea of a “seat” just vanishes. It also has fresh implications for the companies buying seats, since we’re now walking into a potentially new era of the “build vs. buy” conversation. And it has some unpleasant echoes for end users, because it seems as though AI might just be building a different seat for them, and chaining them to it for longer.