Current Laws Leave Gaps in Addressing Deceptive Advertising in AI User-Generated Content
Companies have long used user-generated content (“UGC”) to market and promote products and services, often by recruiting social media influencers to create video reviews. Now, many brands are turning to AI influencers to avoid the human error, slower turnaround, and higher costs of contracting with human social media influencers. Because these tools blur the boundary between authentic and fabricated endorsement, they raise ethical and legal concerns. The law, however, offers no easy solution or straightforward application.
Section 5 of the Federal Trade Commission (“FTC”) Act prohibits “unfair or deceptive acts or practices in or affecting commerce,” which means the FTC can regulate companies engaged in AI UGC. The FTC’s authority is most relevant where AI influencers are used in commerce and inauthentic promotional content or reviews mislead consumers by simulating the endorsement of real people. Such use risks violating Section 5 if it misrepresents the nature or source of the content in ways likely to materially influence consumer decision-making. The focus here would be on the harm to consumers, not necessarily on the models whose likenesses were used without permission.
A proposed bill in Congress may address unauthorized digital replicas, which could assist human influencers who have discovered unauthorized AI-generated videos of themselves promoting products, as has already happened. The proposed NO FAKES Act would create a quasi-intellectual property right in an individual’s voice or likeness and establish civil and criminal liability for unauthorized digital replicas. The Act’s application becomes more complex when individuals voluntarily license their likeness to AI UGC platforms without fully understanding what they are agreeing to. In those cases, consent may make the use of a likeness lawful, but it may not protect against that likeness being placed in wholly fabricated narratives. For example, a model may authorize the use of their image but not anticipate it being attached to false hooks such as “I’m a doctor and here’s what I recommend,” or other statements that imply personal experience. This poses risks for all parties: the individual whose likeness is misused, the company using such content in commerce, and consumers who are misled by what appears to be a genuine testimonial.
With respect to state laws, multiple states have introduced or enacted legislation requiring companies to disclose to consumers when they are interacting with automated software, or bots. A reading of these laws shows that they mainly apply to chatbots, not AI influencers. California became the first state with a bot disclosure law when its Bolstering Online Transparency (B.O.T.) Act took effect in 2019. Utah followed suit but has since amended its law to require disclosure only in response to a clear and unambiguous request from a consumer. Colorado’s law will go into effect on February 1, 2026. Several other states, including New York and Illinois, have proposed similar bills.
The language in these bills varies, which affects how likely each is to reach AI influencers. For example, a promotional video featuring an AI actor likely would not fall under the New York bill’s (A00222) definition of a “chatbot” (“an artificial intelligence system…that simulates human-like conversation and interaction through text messages, audio, or a combination thereof”), because that bill focuses on conversational exchanges; nor would it likely fall under Alabama’s bill (HB516), which applies to AI systems that “interact with consumers” (though “interact” is not defined). Likewise, although the Illinois and Alabama bills extend to AI “agents” and “avatars” alongside chatbots, both require a “textual or aural conversation” to take place before liability is triggered.
Although these laws may not apply to a consumer merely viewing AI UGC before making a purchase, comments and responses on these promotional videos may cross the line into interactive or communicative territory. The added factor of mimicking authentic endorsements (e.g., “I used to be a model and these were my beauty secrets”) increases the risk of misleading consumers. Brands and companies using AI-powered UGC in ecommerce should err on the side of caution and label such content clearly. When using AI agents, it may be prudent to avoid scripts that fabricate human experience altogether.
As the use of AI UGC in ecommerce proliferates, current legal and regulatory frameworks fall short of addressing the emerging risks. Social media platforms such as TikTok have implemented “AI-generated” labels for disclosure, but application and enforcement remain inconsistent: the label often appears on authentic content or is missing from AI-generated content. While frameworks like Section 5 of the FTC Act, the proposed NO FAKES Act, and state bot disclosure laws lay important foundations, many legal and ethical questions remain unresolved.
Written by Susanna Khachatryan, intern at AMBART LAW PLLC, under the supervision of attorney Yelena Ambartsumian.