Artificial intelligence (AI) has gone from buzzword to backstage workhorse in the fashion, apparel, and beauty (FAB) industries. When Foley previewed this space in 2024, AI was already powering recommendation engines, search, and personalization. Since then, it has expanded to drive design tools, virtual try-on experiences, synthetic models, and even the digital “resurrection” of fashion and beauty icons. Lawmakers have noticed.
Across the industry, AI can swap outfits in lookbooks, generate virtual fabrics and prints, and build campaigns around synthetic models. Virtual try-on tools help consumers test lipstick shades, match foundation, and visualize outfits, while beauty apps offer skin diagnostics and product recommendations. Behind the scenes, these tools rely on vast training datasets: fashion photography, runway imagery, influencer content, scraped social media, and e-commerce catalogs. Increasingly, they also depend on full digital capture of models, including 3D body scans, face mapping, voice cloning, and digital replicas.
The Business of Beauty Bots: Why Brands Should Care
For brands, the appeal is obvious. AI cuts costs, speeds creative cycles, allows assets to be generated and localized quickly, and promises highly personalized shopping experiences. Generative systems can produce new imagery in hours rather than days, while AI styling engines curate outfits and beauty routines tailored to individual consumers.
The catch is that these systems often rely on real people’s images, bodies, voices, and performances, or on creative works never meant to be used as training material. The same tools that let brands “do more with less” also make it easy to reuse a model’s likeness in ways they never agreed to.
Rights on the Runway: Publicity, Digital Doubles, and Deepfakes
Because there is no comprehensive federal AI statute, FAB brands operate within a patchwork of legacy doctrines and emerging AI-specific laws. Right of publicity and name, image, and likeness (NIL) laws in states such as California, New York, Tennessee, Florida, and Arkansas have long protected commercial use of a person’s identity, often even after death. A brand that “brings back” a beloved fashion or beauty icon in an AI campaign without the estate’s permission may be inviting litigation.
New York has become the main runway for AI-and-model legislation. The New York Fashion Workers Act,1 effective June 19, 2025, directly addresses AI use involving fashion models: consent is required before a model’s digital replica may be created or used, and that consent must specify the scope, purpose, duration, and compensation for the use. Routine retouching is excluded. The New York Digital Replica Law,2 effective January 1, 2025, goes after “rights-grab” contracts. A digital replica provision is void if it (i) allows a replica to be used in place of work the person would otherwise have performed in person, (ii) fails to give a reasonably specific description of the intended use, and (iii) was negotiated when the individual did not have legal counsel or a union to consult. Brands and agencies can no longer rely on vague language to lock up a model’s digital double indefinitely.
New York’s AI Transparency in Advertising and Synthetic Performer Disclosure Law,3 effective June 9, 2026, moves to the consumer side. If a commercial ad uses an AI-generated synthetic performer, or a realistic digital person created by AI, that fact must be clearly disclosed, with civil penalties for failing to label the ad. A separate New York posthumous right of publicity law4 requires consent from heirs or executors before using a deceased person’s name, image, voice, or likeness in campaigns, which directly affects launches or collections built around “resurrected” icons.
Other states are moving in parallel. Tennessee’s ELVIS Act5 and Arkansas’s HB 1071,6 explicitly extend publicity rights to AI-generated likenesses and voices, banning unauthorized commercial use and imposing civil (and in Tennessee, some criminal) penalties. At the federal level, the proposed NO FAKES Act7 would create a nationwide right against unauthorized digital replicas of a person’s likeness, voice, or performance. The Deepfake Liability Act, the Take It Down Act, and PADRA aim to make platforms and creators more accountable for harmful deepfakes and to streamline takedowns, both for influencers and models whose images are misused and for brands that distribute AI-heavy content.8
Advertising, Privacy, and Biometrics
Traditional advertising and consumer protection rules apply even when content is created by AI. A synthetic model or AI-generated endorsement that misleads consumers can trigger the same false advertising and unfair or deceptive practices claims as any other campaign.
Privacy and biometric laws add further constraints. Statutes like Illinois’s Biometric Information Privacy Act (BIPA) and California’s CCPA/CPRA regulate how brands collect and use face scans for virtual try-on services, body scans for size and fit tools, and voiceprints for voice-based experiences. Failing to obtain consent or mishandling this data can lead to statutory damages and class actions.
New transparency and detection proposals point to where law is heading. California’s proposed AI Transparency Act9 would require detection tools for AI-modified media. New York’s proposed synthetic content provenance bill10 would push AI systems to embed cryptographic provenance data, creating a verifiable trail of how an asset was generated. These proposed laws signal a future in which watermarking and provable origin information for AI content are expected rather than optional.
Copyright Catwalk: Training Data and AI-Made Designs
In Thaler v. Perlmutter,11 the D.C. Circuit confirmed that purely AI-generated works without human authorship are not copyrightable under current U.S. law. Brands using AI to generate prints, patterns, or imagery need meaningful human creative input to claim protection.
At the same time, AI training data is under increasing scrutiny. The proposed Generative AI Copyright Disclosure Act12 would require AI developers to disclose their use of copyrighted works in training data. California’s AB 412,13 which has stalled, would require AI developers to obtain permission and pay licensing fees to use copyrighted works for training.
Pennsylvania’s HR 8114 urges Congress to exclude predominantly AI-generated works from copyright protection and to clarify how copyright and fair use apply to the use of copyrighted works in AI training.
Outside the United States, the European Union’s Artificial Intelligence Act15 introduces a risk-based framework with transparency obligations for AI-generated and deepfake content. Multinational FAB brands using virtual try-on tools, recommendation engines, and synthetic models in Europe will need to align with those standards, even if U.S. law continues to evolve more slowly.
Contract Couture
Talent and vendor contracts need to match how AI is actually being used. Agreements should spell out whether the brand will create digital replicas, scans, or 3D models; where and how those assets may be used, including specific campaign names, channels, territories, and time periods; and whether talent imagery may be used to train internal or third-party AI systems. Compensation for AI-related uses should be addressed separately from standard day rates or flat fees, rather than assuming unlimited synthetic reuse is included.
Legal drafting is only one part of the solution. FAB brands also need internal guardrails. Many are forming cross-functional AI committees that bring together legal, marketing, HR, and IT to inventory tools; map how those tools touch talent images, third-party content, consumer data, and creative IP; and flag higher-risk uses before launch. Vendor management is becoming more rigorous, with brands asking business partners how tools are trained and whether they support watermarking and provenance, and demanding indemnification for claims arising from AI use. Workflows for obtaining informed consent, especially for biometric scans, and for labeling AI-generated or heavily altered content where required by law or expected by consumers are becoming standard operating procedure.
Done thoughtfully, AI can deliver meaningful efficiencies: fewer returns, better fit and shade matching, faster campaign production, smarter inventory decisions, and more relevant product recommendations. The difference between a competitive advantage and a public relations crisis often lies in how a company treats the humans behind the data, including having clear and transparent contractual provisions.
The Rise of AI and the Expanding Risk to Talent Likeness
While brands adopt AI to streamline campaigns, reduce costs, and extend creative output, these technologies pose unique and often underappreciated risks to models and other creative talent. Chief among them is the unauthorized or overextended use of a model’s likeness beyond the scope of the original engagement.
When AI tools are used to replicate or manipulate a model’s face, body, voice, or other identifying attributes without clear consent, models may lose control over how, where, and for how long their likeness appears.
A Common Scenario: From Single Campaign to Synthetic Expansion
A model is hired to appear in a single campaign or to model a specific garment. The agreement contemplates traditional photography and a defined scope of use. After the campaign concludes, the brand uses AI tools to digitally replicate the model’s likeness and place it into additional outfits, campaigns, or promotional formats that were never discussed or approved.
From the brand’s perspective, this may feel like an efficient extension of licensed content. From the model’s perspective, it represents a fundamental expansion of use, one that may affect future bookings, dilute exclusivity, or associate the model with products or messaging they did not choose to endorse.
This disconnect often arises because AI-generated content blurs the line between “use of existing images” and the creation of entirely new representations. While the original photographs may have been authorized, the resulting AI-generated images may depict scenarios, garments, or branding that never existed at the time of the shoot.
Contractual Gaps in Traditional Modeling Agreements
There is a growing gap between legacy modeling contracts and modern AI capabilities. Many agreements were drafted at a time when campaigns were limited to physical photographs and clearly defined deliverables. As a result, contracts frequently grant rights to use “images,” “photographs,” or “recordings,” but do not contemplate synthetic media, digital replicas, or AI-generated derivatives.
In the absence of express language addressing AI, brands may rely on broadly drafted usage clauses to justify expanded use, while models may reasonably assume that consent to appear in a single campaign does not authorize the creation of unlimited digital versions of their likeness.
This ambiguity creates legal and commercial uncertainty for both sides. For talent, it raises concerns about consent, compensation, and control; for brands, it creates exposure to claims and reputational risk.
Potential Legal Claims Available to Talent
Depending on the jurisdiction and the specific contractual language at issue, models may have several potential legal avenues if their likeness is used beyond the agreed scope. These may include right of publicity claims, particularly where AI-generated content exploits a model’s identity for commercial gain without consent.
Models may also assert breach of contract claims where AI use exceeds the defined scope of permitted use, or where the contract implicitly limits usage to specified campaigns or formats. In certain circumstances, claims based on false endorsement or unfair competition may arise if AI-generated imagery implies an ongoing relationship or approval that does not exist.
While these claims may provide leverage, litigation is often an imperfect solution. AI-generated content can be deployed quickly and at scale, making it difficult to contain once released.
Proactive Contractual Protections for Models and Influencers
Given these challenges, preventative strategies are increasingly critical. Models and their representatives should focus on addressing AI-related risks at the contracting stage, before any images are captured or content is created.
Key contractual protections may include explicit prohibitions on AI-generated replicas or derivatives absent separate written consent. Separate compensation structures for synthetic or extended uses can help ensure that talent is fairly compensated for the ongoing exploitation of their likeness.
Approval rights are also an important consideration. Models may seek the right to review and approve AI-generated content before it is published, particularly where such content could affect brand alignment or public perception.
Monitoring, Enforcement, and Ongoing Oversight
Beyond initial contracting, talent should consider mechanisms for monitoring and enforcement. Audit rights, notice requirements for new uses, and contractual takedown provisions can provide meaningful leverage without immediate resort to litigation.
For higher-profile talent, additional carve-outs may be appropriate. These can include restrictions on sensitive product categories, protections against political or controversial uses, and provisions addressing moral rights or reputational harm.
Looking Ahead: Protecting Likeness in a Synthetic Marketplace
As AI continues to reshape creative industries, the imbalance of bargaining power between brands and individual models is likely to increase. The key to navigating this evolving landscape lies in proactive contract drafting, informed negotiation, and early legal review.
By addressing AI-related risks upfront, brands and talent can better protect the integrity, value, and longevity of their likeness in an increasingly synthetic marketplace, while allowing innovation to proceed in a manner that respects consent and creative ownership.
References
- N.Y. Lab. Law Art. 36 (Fashion Workers Act); see N.Y. S.B. S9832, 2023-2024 Leg., Reg. Sess. (N.Y. 2024), https://www.nysenate.gov/legislation/bills/2023/S9832.
- N.Y. Gen. Oblig. Law § 5-302 (Digital Replica Law), 2024 N.Y. S.B. 7676-B.
- 2024 N.Y. S.8420-A/A.8887-B (AI Transparency in Advertising and Synthetic Performer Disclosure) (signed Dec. 11, 2025; effective June 9, 2026).
- 2024 N.Y. S.8391/A.8882 (posthumous right of publicity).
- Tenn. Code Ann. § 47-25-1101 (2024) (Ensuring Likeness, Voice, and Image Security Act, “ELVIS Act”).
- Ark. H.B. 1071, 94th Gen. Assemb., Reg. Sess. (2023) (amending Arkansas publicity-rights law to cover AI-generated likenesses and voices).
- See, e.g., Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act, discussion draft, 118th Cong. (2023).
- Deepfake Liability Act, H.R. 6334, 118th Cong. (2023); Take It Down Act, S.4569, 118th Cong. (2024); Protecting All Digital Realities Act (PADRA), H.R. 10550, 118th Cong. (2024).
- Cal. Assemb. B. 853, 2025-26 Leg., Reg. Sess. (Cal. AI Transparency Act).
- 2023-24 N.Y. S.6954 (synthetic content provenance and watermarking).
- Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025).
- Generative AI Copyright Disclosure Act, H.R. 7913, 118th Cong. (2024) (proposed by Rep. Adam Schiff).
- Cal. Assemb. B. 412, 2025-26 Leg., Reg. Sess. (Generative AI: Training Data).
- Pa. H.R. 81, 2025 Gen. Assemb., Reg. Sess. (clarifying AI and copyright).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).