Delaware Court Denies Meta Insurance Coverage for Social Media Ad


A Delaware court, applying California law, recently denied Meta insurance coverage for thousands of lawsuits alleging that Meta designed its Facebook and Instagram platforms to maximize engagement by exploiting psychological vulnerabilities, embedding addictive features, and intentionally targeting minors. The lawsuits were brought by two classes of plaintiffs: (i) individuals who seek recovery for the harms children exposed to the platforms allegedly experienced, including addiction, depression, and self-harm; and (ii) school districts and state actors who seek recovery for the resources they expended to respond to the youth mental health crisis that allegedly arose out of children’s exposure to Meta’s platforms.

The Delaware opinion has limited precedential value — it is subject to appeal, applies another state’s law, and reflects only one court’s view of California law. But it highlights issues related to insurance coverage for harms allegedly caused by our interactions with increasingly sophisticated and personalized software. As AI agents replace human agents and as our relationships with them deepen, questions necessarily arise about insurance coverage for harms allegedly caused by those interactions. Traditional insurance programs aren’t necessarily matched to this new world.

The Bodily Injury Exclusion in E&O Policies

Most software companies rely on Technology Errors and Omissions (E&O) policies to protect against financial losses caused by software glitches or service failures. Standard E&O policies often contain a Bodily Injury and Property Damage (BI/PD) exclusion, and the definition of “bodily injury” in these policies typically includes emotional distress, mental anguish, and humiliation. In the litigation against Meta, the underlying plaintiffs seek recovery for addiction, depression, and self-harm — harms that may fall squarely within such an exclusion, leaving E&O coverage out of reach.

Media Liability and Section 230 Paradox

Publishers and media companies carry Media Liability insurance to protect against claims like defamation or privacy invasion. These policies also may contain a BI/PD exclusion, but carveouts are generally available for mental and emotional harm. One might expect social media platforms to find refuge here. However, the current wave of litigation creates a “Catch-22” due to Section 230 of the Communications Decency Act.

Under Section 230, online platforms are generally not treated as “publishers” of third-party content. To bypass this legal immunity, plaintiffs have pivoted their strategy: they are not suing Meta for what users posted (a publisher’s act), but for how the platform was designed — specifically, addictive features like infinite scroll and algorithmic content delivery. Software companies can argue that coverage is determined by the facts, not by the plaintiff’s theory of liability, but insurance companies generally deny coverage for these claims on the ground that the claims do not allege the requisite “Wrongful Act” — generally some type of publishing activity. The result is a paradox: the very design-based framing that defeats Section 230 immunity also takes the claims outside the scope of Media Liability coverage.

The Meta Court’s Narrow Definition of an “Occurrence”

Commercial General Liability (CGL) insurance policies are designed for product liability and other bodily injury claims, and these were the policies at issue in the Meta coverage lawsuit. But these policies require that the injuries be caused by an “occurrence,” which is typically defined as an “accident.” Meta argued that the harms the underlying plaintiffs alleged were unintentional and, hence, an “occurrence,” but the court disagreed.

Most courts have ruled that a policyholder’s intentional act can nevertheless constitute an “occurrence” if the result of the act is unexpected and unintended. The Meta court acknowledged this rule but applied a narrow interpretation of the term. The court ruled that because Meta intended the design choices (maximizing engagement), the resulting harm was not an accident.

It is unclear why the intentional design of a software product should be treated differently from the intentional design of any other product. While software companies like Meta may argue that they should not be subject to the same product liability laws as makers of other products, the extent of their liability remains to be seen. The Meta court’s ruling involved the duty to defend, which attaches when a lawsuit is filed and is based on any possibility of coverage. It did not involve the insurance companies’ duty to indemnify Meta for any adverse judgments in the underlying cases, which is generally a narrower duty based on the outcome of those lawsuits.

Damages “Because of” Bodily Injury

CGL insurance policies cover damages “because of” property damage or bodily injury. The insurance companies in the Meta lawsuit also argued that coverage was barred because the response costs asserted by the school districts and state actors did not allege damages “because of” bodily injury. The Meta court did not address this argument because it found no coverage based on the lack of an “occurrence.”

Some courts have ruled that damages like those sought by the school districts and state actors are not damages “because of” bodily injury. A majority of courts, however, have ruled that “because of” broadly incorporates consequential damages that flow from bodily injury, such as the cost of treating those injuries.

Insurance for AI Agents

As we move from a human-centered workforce to one that increasingly includes AI agents, companies are beginning to grapple with whether they will be liable for actions taken by those agents and whether their insurance will cover those liabilities. Given the uncertainty over whether liabilities created by AI agents will be covered by standard liability insurance policies, specialty policies and coverages are starting to appear.

ElevenLabs, which creates and supplies AI voice agents, recently announced that it had purchased a first-of-its-kind insurance policy for liabilities allegedly caused by its agents. The policy covers risks inherent in AI interactions, including hallucinations, unauthorized actions, and data privacy issues. The insurance is backed by the Artificial Intelligence Usage Certification-Level 1 (AIUC-1) from the Artificial Intelligence Underwriting Company, which verifies agent safety, reliability, and security according to industry-leading standards. We should expect to see more policies like this one, as well as changes to definitions in the standard policies discussed above to address AI-related liabilities.

Conclusion

The Meta decision is only an early skirmish in the battle over insurance coverage for our increasing and deepening interactions with social media and AI products. It remains to be seen how other courts in other states will address this issue, and how insurance markets will respond. But given the trajectory of AI, this issue goes far beyond the claims made against Meta. Companies need to decide not only how to implement AI in their organizations, but also whether and how the potential liabilities created by those decisions will be covered by their insurance.
