Key Takeaways:

  • Brands are using generative AI in various ways to create designs, personalise customer communications, and accelerate business procedures. Utilising AI tools carries legal implications and risks that brands must consider with regard to copyright and data privacy.
  • Copyright protection for clothing design is limited to the non-functional elements of the garment, and authorship has been consistently construed to be limited to humans.
  • AI platform terms and conditions may govern ownership of the output and indemnification in the event of a third-party claim, and brands should weigh these risks when determining whether to utilise AI in the creative process.
  • The use of AI to process and utilise consumer data raises privacy concerns, and brands must abide by the General Data Protection Regulation (GDPR) if they wish to do business within a covered country.

It is no secret that generative artificial intelligence (AI) is taking the world by storm. Brands are using AI in a variety of ways: to create designs that will sell better, to decrease expenses related to marketing, to personalise customer communications, and to accelerate more mundane business procedures.

While AI has opened immeasurable opportunities for brands to create and connect with consumers, generative AI system providers frequently offer limited protections and assurances to users of the technology. As a result, there are legal implications and risks that brands must consider with regard to copyright and data privacy when determining whether and how to utilise AI tools.

AI enables designers to swiftly and creatively generate new collections, products, and visuals by inputting sketches, prompts, and additional details like colours, fabrics, and shapes into their chosen AI platform. However, there are copyright issues that should be considered when doing so. Creative asset ownership (whether fashion designs or campaign materials) is dependent on whether the item is protected by copyright. Copyright protection for clothing design is limited to the non-functional elements of the garment.

However, the purely creative aspects (such as the patterns or decorative elements) may be protected if there is sufficient human involvement. Authorship has been consistently construed to be limited to humans, and the U.S. Copyright Office has advised that materials generated solely by AI cannot receive copyright protection because they do not meet the human authorship requirement. In March 2023, the Copyright Office clarified that there may be instances in which “a work containing AI-generated material will contain sufficient human authorship to support a copyright claim.” For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” While the Copyright Office has issued guidance, the Copyright Act itself has not been updated since 1976 and does not account for the technological changes of the intervening nearly 50 years.

Earlier this summer, fast-fashion clothing brand Shein was sued for allegedly using an AI-based algorithm to find art and other creative content online in order to produce, distribute, and sell exact copies of artists’ creative works. While it is unclear how Shein utilised AI in its design process, the lawsuit highlights Shein’s use of AI to discover nascent fashion trends and generate exact copies of the trendy results it finds. The plaintiffs claimed that “Shein’s design algorithm could not work without generating the kinds of exact copies that can greatly damage an independent designer’s career—especially because Shein’s artificial intelligence is smart enough to misappropriate the pieces with the greatest commercial potential.” The plaintiffs are suing for intellectual property infringement, amongst other claims.

As a result, there are substantial implications for a brand that uses AI in the creative process, because the output may be considered public domain due to the lack of human authorship. Suppose, for example, that a brand prompts an AI tool to create a clothing design incorporating a unique AI-produced print. If someone then takes the output and modifies the design elements but keeps the print as-is, there would be no protection for the print on its own. However, there could be protection for elements of the design that received sufficient human contribution after the AI generated them.

It is difficult to estimate when legal frameworks will catch up to advancements in technology. Litigation can take years to resolve, and regulators and lawmakers are often slow to enact change or issue guidance on new technology. Analogously, the Federal Trade Commission (FTC) only recently released updated guidance for its endorsement and testimonial guides (Guides), which includes many references to updated social media usage and societal modernisation. The Guides were last updated in 2009, long before many modern social media platforms existed (or at least were widely used).

In addition to ownership concerns with AI-generated designs, brands should be aware of how an AI tool has been trained in order to gauge the risk of its use. For instance, it is important to understand what inputs were used to ‘train’ the AI tool and whether they included copyrightable materials. Several ongoing lawsuits target AI companies that used copyrighted content, without consent, to train AI models, potentially infringing third-party rights.

AI platform terms and conditions, whether a directly negotiated contract with a private AI tool or a click-through agreement with a public tool, will govern the protections a brand has (or doesn’t have) with respect to third-party claims of infringement. Such terms and conditions may also govern ownership of the output. For instance, AI vendor indemnification is likely not available if prompts used to generate images might cause the AI tool to violate third-party rights (e.g., where the user specifically requests images and inspirations that evoke another brand’s aesthetic). In the event of a third-party claim, it may be difficult to determine whether the violation arose from the prompt or from the algorithm and input data. Brands should weigh these risks when determining whether to utilise AI in the creative process.

Data privacy and AI risks

The use of AI can also improve consumers’ shopping experience. In the past, data has often been used to target consumers with product recommendations. However, in those instances, data did not usually include sensitive personal data, such as biometric information. Now, AI can use data provided by consumers – like height, weight, and skin colour – to tailor products to them, making it easier for customers to virtually try on clothes or try out beauty products. Others have used AI to adjust the kinds of models shown to consumers based on this data. Products can be customised by scanning facial geometry and adjusting based on customer style preferences.

These uses are not without privacy risks associated with processing and utilising consumer data. An increasing number of states in the United States have passed state-specific privacy laws, and in the EU and UK, data collectors and processors must abide by the General Data Protection Regulation (GDPR) if they wish to do business within a covered country. Some laws impose stringent requirements on the use of data, including informed consent and a valid reason for processing. Additionally, some laws give consumers the right to request that their data be removed or corrected.

This raises issues with AI tools because, by their nature, they often afford substantially less control over the data used – something in direct conflict with many data privacy regulations. Notably, the Biometric Information Privacy Act (BIPA) in Illinois regulates the collection, use, and storage of biometric data, requiring informed consent from individuals before biometric information is collected or used. Numerous recent cases allege that companies violated BIPA by capturing consumer facial geometry through virtual try-on tools (which allow consumers to try on cosmetics, clothes, and accessories) without informing consumers how the data is collected, used, or retained, and without a publicly available policy establishing how long such data is retained and when it is destroyed.

However, the Illinois Supreme Court has found that there is a health care exemption under BIPA for “information captured from a patient in a health care setting.” Therefore, courts have sided with defendants where the plaintiff is considered an individual awaiting medical care because the virtual try-on tool facilitates the provision of a medical device (e.g., sunglasses for protecting vision).

In February 2023, the Illinois Supreme Court found in favour of a plaintiff who alleged that White Castle unlawfully collected her biometric information and disclosed it to a third-party vendor in violation of BIPA. The court held that a new claim accrued each time the plaintiff scanned her finger and White Castle allegedly sent the biometric data to its vendor, rejecting White Castle’s argument that collection or capture of biometric information can occur only once.

The European Union often moves faster than the United States in adopting consumer protection legislation. For instance, the GDPR was enacted long before U.S. states began developing state-level privacy acts, and notably, the United States has yet to adopt a federal privacy act. Any updates in U.S. legislation or regulation regarding AI may therefore be slow compared to the European Union. In fact, in April 2021, the European Commission proposed the first EU regulatory framework for AI, classifying AI uses by level of risk and imposing obligations on users accordingly. For example, AI ‘social scores’, which classify people based on behaviour, socio-economic status, or personal characteristics, are considered unacceptable. Similarly, real-time remote biometric identification systems (e.g., facial recognition) are considered unacceptable, with an exception where identification occurs after a significant delay in order to prosecute serious crimes, subject to court approval.

Contracting with private AI system vendors may help reduce some of these risks by contractually limiting what can be processed; however, the risks of potentially violating data privacy laws should be weighed against the benefits of such systems. Brands should consider the financial viability of such vendors to ensure they are in a position to insulate the brand from risk via indemnification. Further, even with indemnification, brands must weigh the PR risks of a data privacy violation against the rewards of using AI.

Final thoughts

As generative AI is still nascent, regulation surrounding it is continually developing. Brands should consider the risks of utilising AI that potentially incorporates other copyrighted works in its system, as well as the viability of copyright protection surrounding AI creations. The risks associated with the use of consumers’ personal data in light of the varying data privacy regulations should also be carefully weighed.