Key Takeaways:

  • In software development, the emergence of “Vibe Code Cleaning Experts” – professionals who fix AI-generated code prompted by non-programmers, or by developers working with a copilot – may prefigure a scenario in which patternmakers and technicians become critical translators between AI-generated designs and producible reality.
  • Instead of eliminating expertise through automation, AI appears to be relocating it. By opening the creative funnel to anyone with an idea, and lowering the barrier to entry for creative work, it is also increasing demand for seasoned professionals who can distinguish between what looks good on screen and what can actually be manufactured, and who have the technical skills to undertake that translation.
  • How the work of “AI cleanup” will be distributed and acknowledged will also raise some uncomfortable questions about multi-stakeholder working relationships that are already lop-sided. If AI allows brands to capture efficiency gains in theory, but in reality shifts the burden onto upstream partners, then AI has the potential to become simply another hoop for under-recognised technical experts to jump through.

A few months back, we speculated on what might happen if “vibe coding” expanded beyond its place in software development and became a bigger factor in fashion. The upshot of that piece was that AI can certainly help to lower the barriers to entry to specialist domains, but experts from those domains are still likely to be essential for distinguishing between ideas and practical reality.

Over the past week, that idea has started to crystallise, thanks to scattered reports of a new job description (both informal and official) in the software space: the Vibe Code Cleaning Expert. The task is exactly what the name suggests: taking code generated with AI by non-programmers (or at least programmers with limited experience), and working to make it viable, secure, and scalable.

How much of that role is really “cleaning” versus re-engineering, or simply throwing out bad code and rewriting it from scratch, is going to vary case-by-case. But that’s a spectrum of different outcomes that will also sound familiar to any seasoned patternmaker or technical professional who’s received an AI brief, accompanied by a generated image, and found themselves needing to “translate” it into producible reality… a job that could prove to be a light touch or a full recreation.

This is, in other words, a sober reminder that, behind the potential of generative AI to open the floodgates of creativity, there will still need to be an army of professionals who know their craft and can take the results the final mile – or who might be needed to step in when generative models produce results that look visually compelling but can’t actually be created.

Back to the software engineering example: in recent months, multiple reports have shown that code produced through heavy vibe coding (i.e. programming that is all but automated, whether it’s conducted by novice programmers or seasoned professionals with a copilot) can harbour inefficiencies and security flaws, and can wind up breaking down under real-world use. In these cases, a tool that was meant to accelerate the job, or to make that job do-able by people who haven’t had the education or professional experience that used to gate-keep participation, has taken the user part of the way before effectively abandoning the race.
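To make that concrete, here is a minimal, hypothetical sketch of the kind of flaw a vibe code cleaner routinely encounters: a generated database lookup that works in a demo but interpolates user input straight into a SQL string, leaving it open to injection. The function names are invented for illustration; the fix is the standard parameterised-query pattern.

```python
import sqlite3

def find_user_vibe_coded(conn, name):
    # Typical generated code: user input is formatted directly into the
    # query string. Fine in a happy-path demo, exploitable in production.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_cleaned(conn, name):
    # The "cleaned" version: a parameterised query, so the driver treats
    # the input as data rather than as part of the SQL statement.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "' OR '1'='1"
print(len(find_user_vibe_coded(conn, malicious)))  # matches every row: 2
print(len(find_user_cleaned(conn, malicious)))     # matches nothing: 0
```

The point is not this specific bug, but the shape of the work: the original code “runs”, so nothing flags it until someone with domain experience reads it with adversarial input in mind.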

In practice, AI here is not reducing time to market so much as it is compressing it in one place (initial development) and then shifting the same burden (or perhaps an even greater one) to seasoned experts who then need to debug, rewrite, or even restart projects.

These so-called “AI babysitters” are now a real task force in some software development workflows, either watching over outputs that can look fine at first but fall short once tested, or being tasked with giving an idea a fundamental do-over. The work, viewed after the fact, still gets done, but the path to delivery is bumpier, and the job is completed in a way that re-centres expertise and leans back into multi-party reconciliation and manual alignment – running counter to the promise of AI as a force for streamlining and automation.

So how far can we currently draw parallels between what’s happening in software creation, and what could happen in fashion? The obvious analogue is that a generated design might look good, but somewhere along the line it has to undergo a process of technical development, cutting, sewing, testing, and shipping. 

If you believe that fashion has a bottleneck in creativity, then that workflow might feel worth pursuing, with the understanding that the bridge between digital image and physical garment runs through pattern rooms, technicians, and producers – all of whom have expertise that AI models currently do not. 

These teams are, as a matter of fact, already used to performing the invisible translation between idea and reality – and between creative ambition and commercial reality. Does their job become more difficult if the volume of ideas increases but the technical accuracy doesn’t? General purpose AI tools can be quick to start the ball rolling, but the promise of full automation only stretches so far before intervention is still required – at least until fashion-specific pattern generation tools, and other models trained in the technical side of product creation, can demonstrate their reliability.

The parallel between fashion and software only goes so far, obviously, but it does underline a bigger idea: that expertise doesn’t vanish when AI arrives. In fact, experts are set to become even more sought-after as the early stages of the creative funnel widen massively, placing huge demand on the scarce sets of people with the skills to identify the right ideas and then realise them.

It is worth noting, too, that this shift is not always evenly rewarded in software – and fashion is hardly an industry known for universally acknowledging the input of designers and patternmakers. In some cases, engineers taking on vibe cleanup work have been able to charge premiums for their services; in others, they’re no doubt being pulled from other projects, without thanks, to paper over problems. In fashion, is the same kind of cleanup work more likely to be absorbed quietly, treated as part of the process, or venerated as evidence of just how much skill, craft, and expertise matter?

There is also a broader context to keep in mind. Demis Hassabis (CEO of DeepMind), one of the people steering AI’s development at the “big research lab” end of the funnel, has said that the critical skill of the next generation will not be mastery of a single discipline but the ability to learn how to learn: the flexibility and neuroplasticity to take on technical questions and skills in multiple areas at once, and so become able to discern good output from bad.

“Taste”, in other words.

Hassabis also pointed out that unless people see personal benefit from AI, and unless the distribution of gains feels fair, scepticism will rise. Fashion offers plenty of places where that imbalance is already showing up. If AI-driven design makes life easier for brands but leaves consumers with flimsy, ill-fitting products, the upside is one-sided. And if the hidden work of fixing those designs falls on factories already working under incredible cost and time pressure, then once again the gains flow upwards while the strain spreads outwards.

It’s clear, now, that the tools will continue to accelerate. AI development is not slowing down, and AI strategies and applications are still drawing the lion’s share of software spending and talent budgets. But it’s also becoming clearer that the last mile between the vibe and the garment is not closing as quickly as the hype suggested. 

The lesson from software may not map neatly, but it does offer a caution worth keeping in mind. When the vibe wears off, the cleanup begins. And the cleanup still falls to the people who know what they’re doing.