Key Takeaways:
- OpenAI’s Sora app is seeing rapid adoption despite a growing backlash, climbing to the top of Apple’s App Store charts and dominating social feeds with user-generated videos. This rapid viral uptake underscores that the debate over generative social video likely won’t halt its mainstream entry.
- The debate around generative AI is shifting from cultural critique to legal governance, evidenced by the Motion Picture Association’s warnings about unprecedented infringement and Disney’s decision to “opt out” of the Sora dataset. This highlights a critical need for systems to offer “more granular control” over data and likenesses for rightsholders.
- Fashion brands continue to balance public hesitation with internal efficiency gains from generative AI. Campaigns from Mango, Guess, and J.Crew show a recurring pattern: criticism may follow each experiment, but the workflows remain. The evidence so far suggests that time and cost pressures are proving more durable than cultural resistance.
Last week, we said OpenAI’s new social video app would force a round of introspection for fashion and beauty. The launch of Sora 2 felt like a moment of friction: a technology capable of cinematic realism arriving before the industries that trade in image could decide how to respond. A week later, those questions haven’t gone away; they have already begun to draw industry responses.
The backlash, perhaps unsurprisingly, came quickly. The Motion Picture Association warned that Sora enables unprecedented infringement and urged OpenAI to stop allowing copyrighted material to appear in generated videos, while Hollywood talent agencies added pressure, calling for tighter control over how likenesses are used. In response, Sam Altman published an update on his blog promising to give rightsholders “more granular control” over character generation, alongside additional provenance tools.
But while the backlash has been gathering pace, adoption has kept rising – putting paid to any suggestion that social generative video will simply go away because people don’t like it. Evidently people like it a fair bit: the companion app, itself called Sora, climbed to the top of Apple’s App Store charts, where it remains at the time of writing, and social feeds have been filled with clips made using it – Tupac on holiday, Stephen Hawking speculating, Michael Jackson stealing chicken. The tone varied from awe to outrage, but the behaviour remained consistent, and people kept sharing. Whether those shares came from a place of genuine amusement or shock is probably beside the point; attention remains the universal currency, and Sora appears to have entered circulation faster than any generative platform before it.
Now that it seems clear that social AI is here to stay, the conversation about where its limits should sit has started to move from culture into law. Altman’s “granular control” commitment is being welcomed by some rightsholders: Reuters confirmed that Disney has opted out of the Sora dataset, and other studios are said to be reviewing their positions. In contrast, Altman has also claimed that copyright holders are begging for their characters to be included in Sora – presumably because presence is better than absence where virality is concerned.
Even the week’s other major AI flashpoint followed a similar script. Taylor Swift, who has repeatedly warned against AI mimicry, faced backlash after fans found generative imagery inside her new “treasure-hunt” campaign. The discovery, as is usually the case with internet sleuths, was quick, and the reaction predictably intense, but as a result the campaign’s reach was enormous.
For fashion, this moment should feel a touch familiar. The industry has spent much of the last 12 months balancing the same contradiction: public hesitation paired with internal efficiency gains. Brands such as Mango, Guess, and J.Crew have all tested generative imagery in campaigns and product visuals. Each time, the responses have followed a similar curve: curiosity, critique, and probable continuation, albeit with revised public usage, so far at least. Mango produced a second AI campaign even after its first received backlash. J.Crew’s Vans collaboration was accused of “counterfeiting its own vibe,” yet the shoes at the centre of it sold through, while Guess’s AI-generated Vogue ad provoked criticism but produced no visible corporate reaction or policy shift, and no clear evidence of financial impact.
The consistency of that pattern feels telling. It suggests the resistance isn’t functional, but is instead cultural. The debate turns on meaning and legitimacy, while the workflows turn on time and cost. That arithmetic is what keeps AI in the workflow even as the cultural debate continues to flare around it.
This is where the story for fashion and beauty becomes specific. The sector isn’t wrestling with fictional likeness in the same way Hollywood is, but it does trade in cultural reference. A world where certain names, characters, or aesthetics are locked behind permissions could complicate the creative process. AI systems may soon need to learn to distinguish between influence and infringement – a distinction that’s proving anything but trivial.
At the same time, generative tools are now part of everyday creative infrastructure. Fashion houses use them for pre-visualisation, styling tests, and campaign ideation. Marketing teams use them for lookbook variations and localisation. The use cases are multiplying faster than the compliance layers that will eventually govern them. Sora’s rapid rise doesn’t so much offer a lesson as a reminder that governance and creativity now grow side by side.
Last week’s story asked whether Sora’s creative potential outweighed the unease it provoked. This week’s developments keep that question open, and expand upon it. The outcome may depend less on enthusiasm or resistance and more on how capability, caution, and governance learn to coexist inside the same creative systems.
Where it all goes from here is hard to call. Maybe that’s why the calls for clarity feel louder than usual. The optimistic reading is that this tension produces maturity rather than paralysis, a move from novelty to normalisation. The sceptical one is that, in tightening the rules, we risk narrowing the imagination that drew people to these tools in the first place. Both could turn out to be true.