This interview was originally published in The Interline’s AI Report 2025. To read other opinion pieces, exclusive editorials, and detailed profiles and interviews with key vendors, download the full AI Report 2025 completely free of charge and ungated.
Key Takeaways:
- AI adoption benefits most from a strong structural foundation. Clear governance, legal oversight, and employee training create the conditions for safe experimentation, protect intellectual property, and build internal confidence in the tools being used.
- In Perry Ellis’s experience, the most effective AI tools are the ones that integrate naturally into existing workflows. Value emerged from speeding up creative processes, streamlining content production, or reducing repetitive work, while tools without a clear purpose quickly lost traction.
- A people-first culture remains essential for long-term success with AI. Open communication, clear boundaries, and a focus on augmentation over automation help align teams, encourage responsible use, and create space for deeper integration over time.
AI is a polarising topic in society at large, but it’s also quickly becoming a pivotal part of the way the world is going to interact with software, perform work, and experience and shape culture. Fashion brands the world over are microcosms for this evolving frontier where possibility meets real people. As an early adopter of AI, Perry Ellis has learnt from experience how concrete results come from responsible innovation.
What was the first real moment AI entered the conversation at Perry Ellis?
Isaac Korn: At Perry Ellis, we strive to be at the forefront of technology. We’ve been early adopters of AI, utilizing AI tools long before they became a mainstream trend. AI has been a topic of discussion within our company for quite some time. What truly propelled AI into the spotlight was the widespread introduction of Generative AI (Gen AI) to the general public. Since the launch of ChatGPT in late November 2022, the AI conversation has broadened from just the IT team to a company-wide dialogue. Our management and executive teams have always supported our initiative to adopt new technologies early on, and they recognized the significance of AI very quickly.
When that “permission” to use AI was given, what happened next, and what didn’t?
We made a conscious decision to avoid an immediate, unrestricted rollout of AI, taking instead a careful, planned approach. As a company, we prioritize a deep understanding of emerging technologies. Our Information Security team recognized early on the potential risks, especially to our intellectual property, of freely providing AI access to all users. Therefore, we first established safeguards and collaborated with Legal to develop a company-wide AI Governance Policy. This policy makes sure AI helps us work better and serves our customers well, all while staying within our security and legal rules. To use AI technologies effectively, we provided associates with training on ethical use, data management, and risk assessment of our approved AI tools. Basically, we made sure AI was introduced into the company in a smart and safe way.
What kinds of AI use cases emerged naturally, and what did they teach you about where the value actually is?
Many use cases emerged naturally, especially those that improved creative workflows, streamlined operations, or even just helped someone get home earlier by automating tedious tasks. For example, we saw designers using AI for mood board generation and initial concepting, which really sped up the creative process. In marketing, AI helped with content generation for social media and even with initial drafts of product descriptions. These experiments taught us that the real value of AI at Perry Ellis lies in augmenting our existing processes and empowering our teams to be more efficient and creative. We also saw some things fizzle out – often tools that were overly complex or didn’t directly address a clear need.
How did employees figure out what to do with AI and what support did they need (or not need)?
Our approach to AI adoption was very deliberate. It wasn’t a free-for-all; we understood that a structured approach was essential for AI to be truly beneficial and safely integrated. The first crucial step was establishing clear boundaries through our AI Governance Policy, outlining what’s allowed and, just as importantly, what’s not.
To ensure our teams felt confident and supported, we implemented a structured training program for all employees who would be directly using AI tools or making decisions based on AI-generated data. We worked closely with them to understand their workflows and identify areas where AI could make the biggest difference, prioritizing those. Our training focused on understanding AI’s capabilities and limitations, ethical use and compliance, data handling best practices (stressing that only public and non-sensitive information should be uploaded), and identifying and managing risks. This foundational support, with clear guardrails and training, was key to minimizing missteps and fostering confident, responsible AI adoption.
What kinds of checks and balances emerged as AI use matured?
As our use of AI matured, several important checks and balances naturally came into play, all driven by our commitment to using AI responsibly. Our AI Governance Policy is the bedrock of this; any AI tool needs explicit approval from our dedicated AI Governance Team, which helps us avoid security and compliance issues. The policy also clearly states that AI can’t make critical decisions autonomously, especially when there are significant ethical, legal, or personal impacts. Our AI Governance Team, which includes our CIO and CISO, is continuously reviewing and supporting approved tools, ensuring they remain secure and up-to-date. These measures ensure our AI journey is not only innovative but also responsible, secure, and aligned with our company values.
How do you think about the moving cultural frontier?
AI is absolutely as much of a cultural transformation as it is a technological one, and we’ve experienced both aspects firsthand. We’re constantly tuned into this “moving cultural frontier” of AI, knowing that both the technology and societal norms are changing rapidly. Our approach emphasizes being adaptable and continuously learning. We also cultivate a culture of responsible innovation, meaning we encourage exploring AI’s potential while always prioritizing ethical considerations and compliance.
On the consumer side, we’re very aware of how our customers feel about AI. We’re extremely careful about transparency and authenticity in anything facing the consumer. Internally, with our creative, technical, and commercial teams, it’s about navigating the perception of automation versus augmentation. We’ve focused on communicating that AI is here to enhance their abilities, not replace them, and to free them up for more strategic and creative work.
In 2025, how do you measure what “successful” AI adoption looks like at Perry Ellis?
This year, there’s definitely been a shift from pure experimentation to focusing on concrete results. For us, successful AI adoption looks like a combination of boosted productivity, enhanced creativity, and significant time savings. While we always consider cost efficiency, our main goal has been to empower our teams. We’re tracking things like how much time is saved on specific tasks, the sheer volume of creative ideas generated, and positive feedback from employees about how AI is improving their daily work. These feedback loops are vital for sharing our success stories internally and demonstrating the real value AI brings. Ultimately, successful AI adoption for us means measurable business improvements, a confident and engaged workforce, and a strong, adaptable governance framework.
What have been some of the hard lessons or growing pains of adopting AI in a people-first way?
One of the hard lessons has been managing expectations. The AI landscape changes daily, and we’re also learning to identify potential dead ends. What didn’t work as expected was assuming that a new tool would be immediately intuitive, or that every team would embrace it at the same speed. We learned that even with a people-first approach, consistent communication, ongoing support, and showcasing successful internal use cases are absolutely crucial. It also taught us the importance of truly understanding our teams’ unique needs rather than trying to force a one-size-fits-all solution.
What does the next phase of AI look like for Perry Ellis, and what are you watching closely?
Now in 2025, the next chapter for AI at Perry Ellis involves exploring more structured integrations and potentially using intelligent agents to automate more complex workflows. We’re still fully committed to our people-first adoption and experimentation approach, as that’s where we’ve seen the most organic success. The goal we’re working towards is a seamlessly integrated AI ecosystem that truly amplifies human potential across all areas of our fashion business—from design and production to marketing and retail. To get there, we need continued technological advancements, but more importantly, a sustained cultural embrace of AI as a partner in innovation. We’re closely watching how AI model capabilities evolve, how the wider industry addresses ethical considerations, and, most excitingly, how our own teams continue to discover new ways to leverage AI to push the boundaries of creativity and efficiency in fashion.