Tailoring Compliance Before AI Walks The Runway

Foley & Lardner LLP

This article was originally published by Law360 on June 24, 2024 and is republished here with permission.

Generative AI tools such as DALL-E 2, Midjourney and Stable Diffusion are increasingly being used to produce static 2D images, and technology from companies like Runway AI Inc., which recently hosted its second annual AI Film Festival, is capable of video output.

These advancements promise various benefits to the fashion industry, including reduced costs, optimized inventory management and pricing strategy, improved analysis of customer preferences, and enhanced design and creativity.

Additionally, there have been various proposed use cases involving generative AI across the fashion value chain, such as in the fields of merchandising and product management, supply chain and logistics, marketing, digital commerce and consumer experience, store operations, and organization and support functions.[1]

Will AI Replace Human Fashion Models?

Whether AI will replace human fashion models is a question that has been buzzing around the modeling industry. Some fashion models and consumers are concerned with the overall effect AI could have on the modeling world.

For example, what would you do if you were a fashion model who walked a runway, but later found out your face was replaced with an AI-generated face? This is what happened to Shereen Wu.

An incident from October 2023 involving Taiwanese-American runway model Shereen Wu has sparked conversations about the ethical use of AI in the fashion industry. Wu alleged that a well-known fashion designer replaced her face with an AI-generated one in an image from a Los Angeles fashion show, making her appear more Caucasian.

Specifically, Wu alleged that fashion designer and “Project Runway” alum Michael Costello posted an Instagram photo after a Los Angeles fashion show in which her face was edited out and replaced with an AI-generated one, changing her appearance from Asian to White.

In response, Wu posted a TikTok video to share her thoughts and concerns, questioning Costello’s motives; the video has since generated over 1 million views.

Costello received backlash and negative headlines due to these allegations. He disputed Wu’s claims, arguing that he reached out to her on numerous occasions with no success, and that neither he nor his editing team edited any of the videos or images. The dispute has raised both misappropriation concerns and moral dilemmas.

Use of AI in e-commerce channels and commercial modeling is on the rise. Approximately 73% of fashion executives expect to make generative AI a priority this year as a means of assisting in the creative process, such as design and product development. However, many businesses remain hesitant.

Certain companies like Levi Strauss & Co. have begun using generative AI to display more diversity on e-commerce channels. In March 2023, Levi Strauss announced in a press release its partnership with digital fashion studio Lalaland.ai to generate diverse AI models. The company stated this partnership would assist Levi Strauss by “supplementing models” and “creating a more personal and inclusive shopping experience” for consumers.[2]

After this press release, Levi Strauss received backlash from the fashion industry, due in part to the idea that the use of AI would replace humans and blur the line between “true” and “manufactured” representation.

Additionally, critics argued Levi Strauss was not authentically addressing the issue of representation, or lack thereof, in the fashion industry. Rather, they argued, Levi Strauss was taking a shortcut to save on costs by avoiding the need to hire human models or to secure stylists and makeup artists.

In response to the backlash, Levi Strauss released a clarifying statement that its use of AI was not its entire approach toward diversity, equity and inclusion goals.

When Levi Strauss made its initial comments, models of color expressed concerns that they already do not book as many castings or other opportunities as their counterparts.

Commentators assert that the use of AI is essentially perpetuating inequality because it is only creating an illusion of diversity, when in reality, the fashion industry as a whole has been dealing with the issue of diversity for many years.

Following the social unrest in the summer of 2020, fashion companies and brands developed initiatives to improve their DEI efforts, including more diverse hiring practices, and marketing and advertising campaigns. However, there has been skepticism about whether anything has really changed.

This skepticism, coupled with the growing unease about AI replacing human models altogether, has increased concern from models of color who are not comfortable with fashion companies and brands that claim to use AI to increase diversity.

With AI-generated models, no time is spent finding a location, and there is no need to pay models to try on clothes or to pay photographers at all.

However, industry experts believe brands and companies should strike a balance: using generative AI to promote efficiency and reduce costs while also enhancing consumer satisfaction, making AI more inclusive for all and tackling the issue of diversity.

As an example, The Diigitals Ltd., an AI and 3D modeling company, collaborated with Down Syndrome International and the creative agency Forsman & Bodenfors, to create Kami, an AI-generated, virtual influencer who displays physical features associated with Down syndrome.

According to Diigitals, “the purpose of creating Kami was to celebrate and promote diversity within the metaverse, showcasing the incredible talents and capabilities of individuals with Down syndrome.”[3]

Various young women with Down syndrome from different countries volunteered to assist in the creation of Kami, allowing Diigitals to create an authentic AI-generated individual with Down syndrome. For many, the creation of Kami has been seen as empowering individuals with Down syndrome, allowing them to feel acknowledged in a world that often casts these individuals as outliers.

Addressing Bias

Given the criticism over the past decades that the modeling industry does not accurately represent and include members from underrepresented groups, the use of AI has raised additional concerns, particularly of algorithmic bias.

Algorithmic bias occurs when systematic and replicable errors in a computer system lead to inequality and discrimination based on legally protected characteristics, such as race and gender.[4]

Karima Ginena, a social scientist, recently conducted a test and found that when prompting certain AI image generators, “the programs almost always delivered a picture of a white person.”[5]

AI algorithms are trained on the information provided to them. If information is not entered in or is excluded, the results may not be representative. Underrepresented models who have historically been excluded from castings, runway shows or photo shoots would continue to be omitted.
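
To illustrate this dynamic in simplified form, the hypothetical Python sketch below (not drawn from any real fashion dataset; the group labels and counts are invented for illustration) shows how a generative system that merely reproduces the demographic frequencies of its training data will replicate existing underrepresentation in what it outputs.

```python
import random
from collections import Counter

# Hypothetical training-set composition: counts are illustrative only,
# meant to mimic a catalog in which some groups were rarely cast.
training_images = (
    ["group_a"] * 800 +   # heavily represented in past castings
    ["group_b"] * 150 +   # moderately represented
    ["group_c"] * 50      # historically excluded
)

def naive_generator(training_data, n_outputs=1000):
    """A stand-in for a generative model that simply reproduces
    the demographic distribution of the data it was trained on."""
    return [random.choice(training_data) for _ in range(n_outputs)]

outputs = naive_generator(training_images)
print(Counter(outputs))
# Typical result: roughly 80% group_a, 15% group_b, 5% group_c.
# The generated "models" mirror the skew of the training data,
# so groups excluded from past castings remain underrepresented.
```

Real image generators are far more complex than this sketch, but the underlying point holds: outputs tend to reflect the composition of the data the system was trained on.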

Broderick Turner, a Virginia Tech marketing professor, recommends greater representation not only in AI training data, but also among the individuals coding and curating that data. This would allow for input data that more closely reflects current demographics.

Disclosure

Although not a legal requirement, disclosing which models are AI-generated on e-commerce channels would be one way for brands and companies to be transparent with consumers and models.

On Feb. 2, 2024, the European Union unanimously approved the EU Artificial Intelligence Act, which requires, among other things, transparency through the disclosure of AI-generated content.

These disclosure requirements apply both to the use of AI and to the people using it.

The EU AI Act is seen as revolutionary, as it is the first-ever legal framework addressing the risks of AI systems. This regulatory framework categorizes certain levels of risk for AI systems, placing them into four categories: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk.

For instance, providers of high-risk AI systems will have to undergo a conformity assessment and comply with AI-specific requirements, including registration in an EU database. The systems will then bear a formal “European conformity” marking before being placed on the market, and any substantial change during the lifetime of the system will require a new assessment.

Penalties will be imposed for noncompliant AI systems.

Although the EU is the first to require such measures, other markets may soon follow this trend. The AI Act could affect the U.S., serving as a blueprint, or at a minimum as guidance, for U.S. federal agencies assessing AI systems deployed in areas that affect people’s livelihoods, such as hiring practices, health care and transportation.

Federal agencies in the U.S. have noted that they seek to police AI systems, ensure responsible innovation, and provide enforcement efforts against discrimination and bias.

Right of Publicity

Under most laws, fashion models are considered independent contractors and are therefore unable to unionize to combat name, image and likeness issues, unlike actors or performers. Normally, models sign model releases granting certain rights, under specified conditions, to the commercial use of their individual NIL.

Given AI’s increased use and the way it incorporates the data it receives into its output, AI could essentially generate the faces of many models without their consent.

However, the Model Alliance’s New York State Fashion Workers Act, which recently passed both the New York State Senate and Assembly, seeks to address this issue by prohibiting model management companies from, among other things, “creating, altering, or manipulating a model’s digital replica using artificial intelligence without clear, conspicuous[,] and separate written consent from the model.”[6]

This legislation appears to be headed in the right direction, toward providing models with labor protections, in addition to protection against AI.

Copyright Considerations

Copyright concerns are rising as fashion designers and creatives begin to experiment with using AI to generate designs and details, such as fabrics, colors and patterns. But under U.S. law, copyright only protects nonfunctional creative elements.

In 2023, the U.S. Copyright Office noted there are circumstances in which works containing AI-generated material will contain “sufficient human authorship” to qualify for copyright protection. That said, it is unclear whether AI-generated fashion designs are protected.

The “Compendium of U.S. Copyright Office Practices” provides guidance on who is deemed an author: “The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being.”

The compendium further provides that “works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author” may not be registered.[7]

Although the compendium does not fully account for current and future uses of AI-generated fashion designs or patterns, there have been various cases involving AI-generated art and copyright.

For example, in 2023 the U.S. Copyright Office ruled an award-winning AI-generated piece of art, Théâtre D’opéra Spatial, did not qualify for copyright protection.

Jason Allen, the artist behind the AI-generated art, used Midjourney, a generative AI program, to create his work. The Copyright Office, noting that copyright protection does not extend to AI, ultimately concluded that the parts of the work Allen modified with Adobe Photoshop amounted to original work that could be protected, but that the parts of the art generated solely by AI could not be copyrighted.

Meanwhile, there have been lawsuits that focused on AI art generators, original copyright holders and fair use.

For example, in a February 2023 claim filed in the U.S. District Court for the District of Delaware, in Getty Images (US) Inc. v. Stability AI Ltd., Getty alleged that Stability AI, the maker of an AI art generator, copied its images to train its AI model without permission.[8]

There has yet to be a ruling. However, the court will likely analyze the U.S. fair use doctrine to determine whether Stability AI’s use was fair.

What Does This All Mean?

Given the prevalent concerns and questions about AI and its effects on the fashion industry, fashion leaders, companies, agencies and executives should balance AI’s perceived positives, such as reduced costs, enhanced efficiency and personalized consumer experiences, against its perceived negatives, such as algorithmic bias and potential job displacement.

Fashion companies and brands should look at AI as an asset and a means for unlocking future opportunities to propel business forward.

Meanwhile, models, designers and creatives should look at AI through the lens of taking a collaborative approach with brands, knowing they are an essential component of the industry, bringing things such as authenticity, genuine emotion, personal experience and character to the table.

Ivory Djahouri, a former member of Foley & Lardner LLP’s Trademark, Copyright & Advertising Practice Group, was also a contributing author.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Foley & Lardner LLP | Attorney Advertising
