Key Takeaways:
- The cultural push and pull of AI continues, evidenced by Disney and Universal’s copyright infringement lawsuit against Midjourney. The legal challenge, aimed at a platform with over $100 million in subscription revenue, signals a critical juncture for creative industries like fashion and beauty as they confront the increasingly blurred line between AI “inspiration” and outright appropriation.
- Governments are beginning to formalise AI’s role across sectors. The UK’s new guidance encouraging teachers to use AI for lesson planning and admin tasks suggests that institutional adoption is not only accelerating but becoming policy-backed. What began as experimental is rapidly becoming standard.
- At the same time, the tension between AI’s reach and its consequences is becoming harder to ignore. Klarna’s AI-powered hotline, voiced by a cloned version of its CEO, launched just weeks after he warned publicly about AI-driven job losses. And in his recent blog post, Sam Altman proposed that digital superintelligence may already be emerging, an idea that reframes the conversation around AI’s trajectory, and places new urgency on questions of control, access, and alignment.
Ahead of next week’s release of The AI Report 2025, it’s been another week of push and pull between the promise (and, this week in particular, the over-promising) of AI and how people across different disciplines actually feel about it. It began with a court filing. Then came a policy nudge. Then a voice, familiar but synthetic, speaking on the other end of a hotline. The week ended with a quiet note from the most powerful AI lab in the world, declaring that superintelligence is almost here. Which… well, let’s just say there’s a lot of ground still to cover between here and there, if “there” is even an achievable thing.
None of these moments happened in fashion or beauty, directly. But the questions they raise reach into the heart of both industries (and beyond): what counts as authorship, who gets to control identity, and how far automation can go before it frays the relationship between brand and audience.
Not coincidentally, these are all also angles our writers, partners, brand friends and others are going to be tackling directly in next week’s report.
On Tuesday, Disney and Universal filed a lawsuit against Midjourney, accusing the AI image generator of producing unauthorised visuals of characters like Elsa, Darth Vader, and the Minions. The argument is that these were not abstract interpretations or inspirations: they were recognisable, commercially exploitable images that could easily be mistaken for (and erode the market value of) the originals. The suit alleges that Midjourney built its model on scraped, copyrighted material, then sold access to the system without compensating the original creators. And with more than $100 million in subscription revenue last year, the platform has become a clear target for rights holders who see this as industrial-scale infringement.
The Interline does not claim to know anything about the likely legal outcome here, but as we’ve written before, we remain surprised that no fashion brand has yet pursued a similar suit, considering how keen generative image models are to create recognisable brand marks, materials, and other protected elements.
The legal fight ahead will centre on both how these systems are trained and what they produce. Generative AI has often been framed as a tool for inspiration and ideation, but the Midjourney case forces a more pointed question: where is the line between influence and appropriation, especially when a model can draw on decades of visual IP with no licence, no compensation, and, critically, no oversight? For creative industries like fashion and beauty, where distinctive visual identity is a core asset, the implications are clear. If a campaign’s composition or signature silhouette can be ingested and echoed by a model, it raises real questions about originality, ownership, and enforcement. The case moves those questions out of the think pieces, and into the courtroom.
Across the Atlantic, the UK government has issued new guidance for schools, inviting teachers to use AI tools to support lesson planning, low-stakes marking, and administrative work. The framing was cautious: this isn’t a replacement for educators, just a way of lightening the load. A practical response to workload, one that will free teachers up for more important aspects of the job. Many teachers have already been using these tools in the background; this move simply acknowledges that reality and offers a framework for using them more transparently. What had once been quiet experimentation is now officially encouraged. Even in a domain as sensitive as education, where the stakes are generational, AI is moving from tool to policy. And fashion education will be no different – especially considering just how much of it is now knowledge-based, rather than being aimed at the transfer of manual skills.
There’s a broader story here, one that signals that AI isn’t just being adopted, it’s being authorised – and by governments, no less. And once a tool is permitted, it tends to persist. What starts as support often becomes standard practice. In fashion and beauty, where generative AI is already being used to write emails, model clothing, test tone of voice, and mock up packaging, the same arc is underway. The framing may vary, but the function – making the experimental feel inevitable – is repeating itself everywhere.
Which brings us to Klarna. This week, the company launched a phone line that lets the public speak to a cloned voice model of its CEO, Sebastian Siemiatkowski. The AI listens, responds, and then files feedback. On the surface, it’s a tacky PR stunt. A little weird, sure, but clever enough to make a news story from – and, even better, to share. The thing is, earlier this same week, Siemiatkowski publicly warned that AI could trigger a recession by displacing white-collar workers at scale. And here he is, volunteering himself as proof.
That contradiction isn’t a footnote. It’s the actual story. A founder warns that jobs are at risk, then offers up his own cloned voice as a demonstration of how it might happen. This story sits in a strange place. It’s part branding, part systems design, but also part warning. A warning that points to a version of the company where one person gets stretched across the whole business. One founder doing everything, then turning that into a product. Klarna is still a big company, but this stunt pulls us back to something we’ve asked before: could you build a fashion unicorn – run entire campaigns, launch products, grow fast – with just one or two people and the right AI stack?
The tools are there. The products can be generated, the story written, the storefront built, and the outreach automated. In this way, the founder becomes the brand, and the brand becomes a system. No internal teams needed. Klarna’s phone line was packaged as a stunt, but the implications run deeper long term, raising a much bigger question along the way: if AI can extend one person’s reach this far, how far can the systems themselves go?
That was the undercurrent of OpenAI’s message this week. Not a product demo or a new feature reveal: just a blog post from Sam Altman, quietly published, inviting reflection more than reaction.
“We are past the event horizon; the takeoff has started,” he wrote. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
It’s a strange sentence – and hardly a believable one. But what he’s getting at is this: systems like ChatGPT might already be more powerful than any one person who’s ever lived. Not because they’re smarter, necessarily, but because of what we’re using them for, and how many of us are doing it. The concept of “capability overhang” in AI isn’t new, and you’ll hear it expounded upon by CTOs from most of the big tech labs. But the idea that exceeding human capability simply means making better, wider-scale use of tools is difficult to square with the reality that large language models are not, by any traditional definition, aware or smart in the way we mean those words when we talk about human intelligence.
Altman’s tone wasn’t alarmist, but it wasn’t relaxed either. He called alignment an “unsolved problem,” and talked about the need for real guardrails, not just against what these systems can do, but against what they might do unless we steer them carefully. He also raised a harder question, one we’re all in some way dodging: if superintelligence does arrive, who gets to use it? Presumably the people who can pay upwards of $200 a month. Who are, at a guess, not the same people AI research labs expect to lose their jobs to those selfsame models.
Taken together, these four stories don’t offer resolution, but they do offer shape. Each touches a different edge of the same unfolding truth: AI isn’t on the horizon, it’s right here, in the room. And depending on who you believe, and how fervent an evangelist or detractor you are, it’s moving faster than the rules we traditionally hew to when we need to quantify and trade in creativity, knowledge transfer, identity and, critically, trust.
In fashion and beauty, where image is the product, the implications are sharper than most. This isn’t about being for or against AI; that binary is outdated. What matters now is clarity. Which tools are influencing decisions? Where do they sit in the process? And what assumptions have we already made without realising it?
This week didn’t break anything, but it didn’t resolve anything either. What it did was add four new data points to a growing picture: AI is not a moment, but a presence. Something to work with, define against, build around. Something that’s already here.