Key Takeaways:
- The new Really Simple Licensing (RSL) standard is a proposed framework for licensing content for generative AI training, ensuring publishers are compensated. It builds on earlier web standards like RSS and uses metadata tags to tell AI systems how content may be used, aiming to create a formal market for content licensing and royalties.
- Legal precedents are emerging: Anthropic’s recent $1.5 billion settlement with authors highlights a growing legal and financial pushback against the uncompensated use of copyrighted material for AI training, and signals that the open-web “free-for-all” may be ending.
- The cultural impact for fashion could be significant. While the industry has a long history of remixing and reinterpretation, AI’s ability to scale this process blurs the line between reference and reproduction. This new framework could provide clear boundaries for intellectual property, a critical shift for brands that invest heavily in their visual identity.
We’ve been publishing The Interline on the open web for almost six years. During that time everything we produce has remained free at the point of consumption, because our team believes that better-educated and better-connected fashion and beauty industries will be better able to capitalise on the potential of technology and drive it in the right direction.
We have also supported RSS (Really Simple Syndication, one of the earliest feed standards for the web) from the beginning, serving up full stories in a way that allows readers to make our content – and the web as a whole – work for them, rather than always being beholden to algorithmic recommendations. We understand why paywalls and content gates exist, but they don’t work for our aims.
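For readers unfamiliar with the format, a full-content RSS feed is just an XML document. The sketch below is purely illustrative (the publication, URLs, and story are invented); it shows how the widely used `content:encoded` extension carries the complete article body alongside the usual teaser summary, which is what “serving up full stories” means in practice:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Example Publication</title>
    <link>https://example.com</link>
    <description>Illustrative feed only</description>
    <item>
      <title>An Example Story</title>
      <link>https://example.com/an-example-story</link>
      <!-- description holds the short summary most aggregators show -->
      <description>A short teaser summary.</description>
      <!-- content:encoded carries the full story, so readers are not
           forced back to the website to finish reading it -->
      <content:encoded><![CDATA[<p>The complete article HTML…</p>]]></content:encoded>
    </item>
  </channel>
</rss>
```

Feeds like this are what let readers pull a publication into the reader of their choice, rather than relying on a platform’s recommendations.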
But publishing content to the open web is also a fast-changing game, and, according to Google’s recent legal filings, a dying art. It’s important to us to show up wherever people are asking questions and seeking practical information, and as we barrel towards the end of 2025, that increasingly means making sure that our authority comes through when people turn to AI – something that 5% (and growing) of our readership is already doing.
That makes this week’s unveiling of the new “Really Simple Licensing” (RSL) standard particularly interesting to us – not just as fashion and beauty commentators and analysts, but as publishers ourselves. This is one of those rare occasions where what matters to fashion is also, directly, what matters to us.
So what is RSL? First up: it’s being spearheaded by one of the co-creators of RSS. And the aim, on the surface, is simple: to give publishers a way to license their work for inclusion in generative AI training and AI outputs – and, crucially, to be compensated for it.
It’s also an attempt to give structure to a relationship that has so far, outside of standalone agreements negotiated between the biggest publishers and the biggest AI labs, been one-way. AI models have taken from the open web without permission or payment during training, and they largely continue to do so when they turn to web search to answer questions that are not covered in their training data, or that are better answered through up-to-date research. And this is true across text generation models, image generation models, audio generation models… essentially the entire off-the-shelf AI ecosystem.
At a technical level, RSL proposes a way for publishers to tag their content with metadata telling AI systems how it can be used. It will then be up to the companies running the models to decide whether or not to honour that request. And it’s important to realise that even though the open web has run on an effective honour system for decades – made up of syndication standards like RSS, and handshake agreements like robots.txt files, which live in a website’s public root directory and instruct web crawlers how to behave – there has never been a formal structure in place to reward adherence to those agreements, or to disincentivise ignoring them.
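The handshake nature of that system is easy to see in code. The minimal sketch below uses Python’s standard-library `urllib.robotparser` to parse a hypothetical robots.txt (the bot names are invented for illustration). Note what it demonstrates: the parser can tell a well-behaved crawler what the site asked for, but nothing in the stack enforces compliance – which is exactly the gap RSL is trying to address:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as a site might publish it at
# https://example.com/robots.txt (user-agent names are illustrative).
robots_txt = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# The parser only reports the site's stated wishes; honouring them
# is entirely voluntary on the crawler's part.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

RSL layers licensing terms (and the expectation of payment) on top of this kind of purely advisory signal, rather than replacing it.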
At a business level, RSL is aiming to create a market for content as used by AI during training and inference, with a view to aligning a lot of different interests around standards and rates for licensing that are roughly analogous to royalties in other sectors.
For our own purposes, The Interline is not rushing to adopt or reject this proposal; it’s still far too early and it will require widespread buy-in to mean much. And it’s also worth remembering that there are numerous AI-related lawsuits working their way through the courts that are also attempting to do the same thing: define what the market for content used by AI is, and set rates that are acceptable to the parties on both ends of the deal.
Now, publishing and fashion have precious little in common in many respects, but this time around there’s more that aligns the two worlds than separates them. Both sectors create a constant stream of content. Lookbooks, product images, marketing campaigns, videos, social posts… it’s not an exhaustive list, but it illustrates the sheer volume and overlap of “stuff” these industries must create to remain relevant both culturally and financially. A publisher’s output is written first and visual second, whereas a fashion or beauty brand’s output is balanced the opposite way. But in both cases, all of that material lives on the open web, and all of it is fair game for AI training unless proposed standards like RSL take hold. This matters going forward more than it does looking in the rearview, because generative systems already carry the imprint of the industry’s biggest brands and creative leaders, and there seems to be little recourse for unpicking that – even if Anthropic has recently agreed to pay $1.5 billion to authors to settle a long-running copyright case, in what is reported to be the largest copyright settlement in history.
Of course, long before AI, brands were borrowing and re-selling ideas. Fashion has always been self-referential to a degree that few other industries approach. Merchandising teams around the world would go on organised shopping trips to capture ideas by browsing the competition. Designers still study runway shows not only for inspiration but for cues that will trickle down into mass market collections. Street style has been documented, reinterpreted, and remixed, and will carry on being so for the foreseeable future. The cycle of borrowing, reinterpreting, and re-selling has long been built into how fashion works – but automating and scaling that process to the extent that AI does represents a different kind of threat.
It’s no longer a matter of traveling to Paris or Milan to see what’s new. It’s now a prompt box capable of producing thousands of synthetic interpretations of an image that was, more than likely, scraped from the web.
Given the breakneck speed of this new technology, it’s no surprise that legal frameworks have yet to catch up. The boundary is much less clear today than it was before the widespread rollout of generative AI models trained on the world’s creative output. A generated image may include elements from many brands at once, but it remains effectively impossible to prove that what AI does constitutes “copying,” especially with closed-weights models.
That’s why initiatives like RSL could resonate beyond publishing, and why it’s encouraging to see the publishing sector taking a more pragmatic and realistic approach to AI, since the chances of the genie being put back in the lamp, at this stage, are effectively zero. If publishers are starting to marshal collective action, and to build structures that allow them to ask for compensation when their work fuels AI systems, it is not far-fetched to imagine brands wanting the same when their catalogues, campaigns, and social content become the DNA of generated fashion imagery.
The business logic is clear. The largest brands have invested heavily in visual identity. If that identity becomes raw material for competitors, there’s at least an expectation that they will push back. The creative logic is a little fuzzier. Fashion thrives on remixing and reinterpretation, and a licensing regime that locks down inspiration risks cutting against that tradition.
So where does that leave us? From our perspective, the immediate impact is cultural rather than legal. The reality is that brands can’t stop others from referencing them. But as generative AI blurs the line between reference and reproduction, the appetite for clear boundaries will likely grow – and if publishing can develop new frameworks for licensing compensation, it is reasonable to expect fashion to follow.
For the foreseeable future, The Interline will keep supporting the open web – both as part of our commitment to our readers, and as an exercise in continuing to document, from the inside, what it means to publish content to an internet that’s increasingly being consumed by AI. Gating our content remains inconceivable to us, but we would advise our fashion and beauty audience to keep tabs on how content licensing frameworks for AI develop, since the wider web certainly seems to be heading in a very different direction.