Released in The Interline’s AI Report 2025, this executive interview with Hyland is one of an eight-part series that sees The Interline quiz executives from companies who have either introduced new AI solutions or added meaningful new AI capabilities into their existing platforms.

For more on artificial intelligence in fashion, download the full AI Report 2025 completely free of charge and ungated.


Key Takeaways:

  • AI adoption is still in its early stages for most businesses, with few achieving mature, sophisticated use. Despite rapid evolution, companies often lack the time, financial means, and resources to accelerate beyond initial exploration into advanced AI integration.
  • The explosion of content across diverse consumer channels has amplified content management challenges, yet AI is emerging as a critical solution. It intelligently handles content creation, classification, metadata enrichment, search, discovery, and targeted curation, significantly streamlining operations and addressing the sheer scale of modern content.
  • While AI offers substantial potential, organisations must carefully manage expectations and be aware of hidden costs. Key success areas include semantic search, metadata enrichment, and automated asset production. However, businesses must guard against the expenses of AI, the lack of human judgment in autonomous content creation, and potential security and IP compromises when leveraging external large language models with sensitive internal data.
Where do you believe we currently are on the progression curve from AI as an extremely broad set of capabilities and promises, to AI as the foundation for applications and services that can deliver a measurable return on investment in well-defined areas?

Most clients are either in the “early adopter” or “early majority” stage of the technology adoption curve. And I say that because, as you describe, AI is an extremely broad set of capabilities that most companies haven’t completely rationalized. When it comes to GenAI and commoditized chatbot services, almost everyone is in the early adopter category, but when it comes to the most sophisticated uses of AI, there are very few companies that have achieved that level of maturity. The pace of evolution we are seeing in this space is unparalleled. Very few companies have the time, financial means, and resources to keep up and accelerate from exploration through to advanced-stage adoption.


Generally speaking, we are at the very forefront of having well-defined areas of measurable return that would make AI adoption a no-brainer and a default investment mandate for every organization. We are seeing a common set of capabilities and popular AI adoption trends, but ROI results remain mixed and, in some cases, difficult to quantify, so not everyone is jumping in head first.

That pertains to the end user side of the market, but almost every technology and service provider is leading with AI as the foundational value proposition of their offering, and Hyland is no different in that transformation.

AI has the potential to operate across a lot of different surfaces, but there’s perhaps no surface wider than content at the moment. From the stories brands and retailers tell consumers, to the assets and objects that internal teams rely on for communication and collaboration, there are more channels to populate with more content than ever. How have you seen that content management problem evolve? And how have you approached applying AI to it?

You are absolutely right that content continues to proliferate and that consumer channels are expanding in ways that couldn’t have been envisioned, but in many ways the content management problem hasn’t changed that drastically. The scale and the gravity of the challenge are what have evolved most prominently. What I mean is that organizations have always been faced with the need to manage diverse content types, multiple content versions, multiple content sources, and various data models for process automation and reporting. They have also had to deal with governance, security, and access controls.

The difference now is the scale of those content dynamics. Organizations are burdened with the pressure of creating more content than ever before, and along with that growth comes the need to intelligently manage more content than ever before. With channel proliferation, the number of places where content is stored, distributed, and consumed is broader than ever, making discovery, governance, reporting, and auditing a more pronounced challenge.

AI can play a major role in every aspect of that content scale and operational complexity. AI can be used to create new content; auto-classify content; enrich metadata and content lifecycles; trigger content rules and automation scenarios; assist in search and discovery; and recommend and curate content for syndication, distribution, and process insights.

At Hyland, we are delivering on all of those AI strategies to amplify intelligence for both structured and unstructured content to help our customers better serve their customers.
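To make the auto-classification and metadata-enrichment ideas above concrete, here is a deliberately simplified sketch. In a real deployment an AI service would score taxonomy labels against the content; the keyword heuristic, taxonomy, and asset fields below are all hypothetical stand-ins chosen purely to show the flow of enriching an asset’s metadata and flagging items for human review.

```python
# Illustrative sketch only: a keyword heuristic stands in for the
# AI/ML classification service a content platform would actually call.

TAXONOMY = {
    "apparel": {"jacket", "denim", "knitwear"},
    "footwear": {"sneaker", "boot", "sandal"},
    "campaign": {"lookbook", "editorial", "launch"},
}

def classify(text: str) -> list[str]:
    """Return taxonomy labels whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(label for label, kws in TAXONOMY.items() if words & kws)

def enrich(asset: dict) -> dict:
    """Attach auto-generated labels; flag unlabeled assets for review."""
    labels = classify(asset.get("description", ""))
    return {**asset, "labels": labels, "needs_review": not labels}

asset = {"id": "IMG-001", "description": "Denim jacket lookbook shot"}
print(enrich(asset))
```

The useful pattern here is the `needs_review` flag: automated enrichment handles the bulk of the volume, while anything the classifier cannot label is routed to a human, which mirrors the governance and oversight themes discussed throughout this interview.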

A major challenge of any asset management initiative is data structuring and governance. Fashion companies are only capable of deploying technology to manage the information they have properly centralised and available. What’s your perspective on the readiness of companies, large and small, to really take advantage of AI from that perspective? Or is AI itself a solution to the challenge of what information and assets live where?

You often hear that AI is only as valuable as the data you put into it. And there are elements of truth to that sentiment, but successfully leveraging AI requires iteration and experimentation. You have to start somewhere and leverage the data that you currently have access to. Leveraging crowdsourced large language models is a good place to start. View those public sources as a launch point vs. a barrier to entry. Aspiring to create a perfect data set, or looking for a low-cost way to train a model on an organization’s custom vocabulary, isn’t practical, particularly if you are a smaller organization where resources and budgets will always be constrained.

This segues to the readiness aspect of the question. Readiness is a multi-faceted consideration. Contemplating whether your organization is ready for AI enablement should be viewed from different perspectives, regardless of the size of your company: Are your processes mature enough to support the activation of AI intelligence? Crawl-walk-run methodologies aren’t circumvented simply because of technology advancements. Is your enterprise data accessible and qualified for AI activation? Do you have teams or third-party resources dedicated to AI oversight, governance, and ongoing monitoring? AI shouldn’t be treated differently than any other technology program simply because of its popularity or because the C-suite is telling you to prioritize it.

Is AI a solution to the challenge of what information and assets live where? I would say it’s a partial solution because it’s only a piece of the equation.

You’ve been deploying AI for long enough now to have seen both the potential pitfalls and the success stories of customers who have implemented it. Where have you seen them achieving the most success?

The clients I have seen achieve the most success have properly managed their leadership’s expectations and haven’t made KPI commitments without assessing what’s feasible given the level of effort the organization is willing to put into the AI program(s). I stress the word program in that last sentence because it’s important to recognize that AI is not a project that you simply kick off and end with a specified set of deliverables. AI requires all of the core tenets of any other program: vision, sponsorship, staffing, cross-functional collaboration, change management, and accountability.

The organizations that are achieving the most success with AI appreciate these program prerequisites and have implemented Centers of Excellence around these efforts, with an intent to experiment, learn, and adapt, with AI as an undisputed contributor in their ongoing transformation initiatives.

Examples of successful use cases include:

  • Semantic search – searching beyond the limits of existing tags and keyword metadata to capture results too broad or too subtle for traditional metadata attribution.
  • Metadata enrichment and classification – auto-generated metadata and automated custom taxonomy metadata population, classification, tagging.
  • Content discovery – content analysis, summarization and extraction from diverse sets of content and integrated enterprise datasets.
  • Asset production automation – automated image editing of repetitive tasks (e.g. background knockouts) and focused, guideline-based content variation generation.
  • Design visualization support – product assortment visualization and exploration.
  • Content validation and approval – guideline- and template-based validation and regulatory reviews that trigger automations and workflows.
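The semantic search use case above rests on ranking content by vector similarity rather than exact keyword matches. The sketch below is a toy illustration of that mechanic only: production systems use learned embeddings from a model, whereas here a tiny hand-made “concept” vocabulary stands in for the embedding step, and the asset descriptions are invented examples.

```python
# Toy semantic search: rank assets by cosine similarity of stand-in
# "embeddings" built from a hand-made concept vocabulary. A real system
# would replace embed() with calls to an embedding model.
import math

CONCEPTS = {
    "outerwear": {"jacket", "coat", "parka"},
    "denim":     {"denim", "jeans"},
    "summer":    {"linen", "sandal", "swim"},
}

def embed(text: str) -> list[float]:
    """Map text to a vector of concept-match counts (stand-in embedding)."""
    words = set(text.lower().split())
    return [float(len(words & kws)) for kws in CONCEPTS.values()]

def cosine(a, b) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, assets: list[str]) -> list[str]:
    """Return assets ranked by similarity to the query, best first."""
    q = embed(query)
    return sorted(assets, key=lambda a: cosine(q, embed(a)), reverse=True)

assets = ["Denim jacket product shot", "Linen swim editorial", "Parka coat lookbook"]
print(search("jeans and a coat", assets))
```

Note that the query “jeans and a coat” surfaces the denim jacket asset even though neither “jeans” nor “coat” appears in its description, which is exactly the beyond-keyword behavior the use case describes.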
On the opposite end: what should our readers avoid or what potential negative impacts of AI initiatives should they be aware of?

I really appreciate this question because it’s not talked about enough, and it’s rarely a topic on most AI meeting agendas. The first elephant in the room is that AI is not free; in fact, it’s rather expensive. Most organizations are not well versed in AI cost models and what the expense drivers are over time. The irony is that most organizations are looking to save money by turning to AI, and a high-yield return is often expected from the AI program’s inception.

The second impact that I would call attention to is that AI services still can’t substitute for human judgment or authenticity assessments. When you are talking about content creation scenarios or consumer-facing services, you want to have oversight and safeguards in place to ensure you haven’t autonomously outsourced your brand voice and brand identity to a non-human. Just because you can use AI in certain areas of your business doesn’t mean you should.

And the last opposite-end position I would mention relates to AI security and IP protection trade-offs. Large language models (LLMs) are becoming more and more accessible, but their value is only as strong as the training data they leverage, which comes from uncontrolled sources. The value of an LLM is largely based on the application and accuracy of the predictions it provides, which are only as credible as the data that is fed into the model. Organizations want more reliable and business-specific insights, but that only happens if those same organizations relinquish access to their data, which necessitates security and IP compromises.

What do you believe are the next steps for how AI in general is deployed and used? Is it more likely that AI will solidify its place as a new human interface paradigm at the frontend of tools and workflows? Or is its future closer to what cloud infrastructure has become today – a quieter commodity that is still the foundation for the next generation of applications, but in a less obvious way than what we’ve seen over the last couple of years? Or is it both?

There is a lot to unpack in this closing question. First, I believe technology should be a behind-the-scenes enabler, but that ideal has taken a backseat to AI promotion and interface exposure in almost every digital experience we interact with these days. The reality, however, is that AI must work, and work well, inside the application where it is being used. It is critical that it is embedded in the process in a way that truly helps the user complete a task or make a decision. To accomplish this, the AI will need to be configured and/or customized to accommodate each organization’s asset base, adjacent data sets, and overall business-specific use cases. Packaged AI tools today simply can’t accommodate this variability.

Eventually, I think there will come a time when its prominence isn’t as in-your-face as it is today, but I also don’t believe it’s going to become a quieter commodity relegated to behind-the-curtain orchestration. AI is like a magic trick: it’s equal parts awe and curiosity that make it attractive. AI providers must balance its tangibility with its mystique so it will continue to be both a foundation of next-generation applications and an experiential aspect of the front-end interfaces we engage with.