If you scroll through any popular social media app, you’ve probably seen the format: a man approaches a woman in public (at a shop, on a beach, in an airport lounge) and tries to chat her up, flirt, “rizz her up,” as the kids say. The footage is shot from his perspective, often at close range, and it looks casual, almost candid. What the women in these videos frequently don’t know is that they’re being recorded at all. The camera isn’t a phone held at arm’s length; it’s a pair of smart glasses that look like ordinary eyewear.
A BBC investigation in January found that dozens of male influencers across TikTok and Instagram were using Meta’s Ray-Ban smart glasses to secretly film women for exactly this kind of content. One woman, Dilara, was 21 and on her lunch break when a man struck up a conversation and filmed her without her knowledge. The footage went up on TikTok, hit 1.3 million views, and included her phone number. She was flooded with messages and calls. Another woman, Kim, was filmed on a beach in a conversation where she shared details about her employer and family, none of which she would have disclosed had she known a camera was running.
Smart glasses do not merely add a camera to ordinary eyewear; they collapse the distinction between social interaction and data capture, making recording both frictionless and difficult to perceive. Recent reporting and litigation involving Meta’s Ray-Ban smart glasses suggest that the resulting privacy harms are not confined to the purchaser-user relationship. Rather, they extend to bystanders, intimate partners, household members, and others who never consented to participate in the product’s data practices. The emerging dispute thus raises a foundational legal question: when smart glasses companies promise a product “designed for privacy,” is it the wearer’s privacy being protected, the platform’s, or everyone else’s?
The Lawsuit
In Bartone et al. v. Meta Platforms, Inc. et al., filed March 4, 2026, in the U.S. District Court for the Northern District of California, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California sued Meta Platforms and Luxottica of America, alleging that defendants paired privacy-centric marketing claims with insufficiently clear disclosures regarding transmission, cloud processing, and human review of captured media. The complaint points to slogans like “designed for privacy, controlled by you” and “built for your privacy,” arguing those messages gave buyers a false sense of control. The case is at an early stage, with an initial case management conference set for June 2026 and no ruling on the merits.
The lawsuit draws heavily on a February 2026 investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten. These newspapers reported that workers at Sama, a Kenya-based subcontractor, were reviewing video clips captured through Meta’s AI glasses as part of an AI training pipeline. According to the reporting and the complaint, the reviewed footage included bathroom visits, nudity, sexual activity, and other private moments inside users’ homes. In one incident described in the complaint, a man placed his Meta glasses on a bedside table and left the room. His partner entered and changed clothes in front of the camera, not knowing the glasses were still recording. That footage was then sent to workers in Kenya for labeling, all without the woman’s knowledge or consent.
Defendants’ Emerging Disclosure-Based Defense
Meta has responded with a distinction that will likely become central to the case. A company spokesperson said that media captured by the glasses stays on the user’s device unless the user chooses to share it with Meta or others. When users do share content with Meta AI, the company said, it sometimes uses contractors to review the data to improve the experience, taking steps to filter content and reduce identifying information. Meta also said it has been in contact with Sama, which stated it is not aware of workflows where sexual or objectionable content is reviewed or where faces remain consistently unblurred.
That response highlights a gap that runs through the entire dispute. Meta’s position is that data handling is disclosed in its terms of service and privacy policies. The plaintiffs’ position is that ordinary buyers never understood those disclosures, because the marketing language, product design, and overall experience all pointed in the opposite direction. A company can believe it has said enough. A consumer can still believe, reasonably, that the product is private. Both things can be true at once, and that tension is what the court will have to sort out.
Why Smart Glasses Destabilize Conventional Privacy Assumptions
The covert-filming videos and the allegations in Bartone describe different harms, but they share a common logic. Smart glasses make recording so seamless and so invisible that they alter the background social assumptions that ordinarily govern cameras. A smartphone typically announces itself: it is taken out, aimed, and visibly held. Smart glasses, by contrast, rest on the face and blend into ordinary interaction. Recording therefore becomes casual, ambient, and, in some circumstances, effectively undetectable. In both the public-filming examples and the private-home allegations, the device did not malfunction: the women filmed for TikTok content didn’t see a camera; the woman who changed clothes in front of glasses left on a bedside table didn’t know a camera was there. In both cases, the device worked exactly as designed. The problem is what that design foreseeably makes possible.
The privacy risk is relational rather than purely individual. As the complaint suggests, the effects of the glasses are not confined to the purchaser-wearer; they extend to everyone around them: spouses, children, roommates, coworkers, guests, and strangers who never agreed to the product at all. Meta’s glasses have an LED light that activates when recording, but the BBC found multiple videos online demonstrating how it can be covered or disabled entirely. None of the women who spoke to the BBC said they noticed the indicator. The point is not simply that notice mechanisms may fail in practice, but that wearable recording devices shift privacy burdens onto bystanders who have little ability to detect, interpret, or meaningfully respond to the recording at all.
The “AI” Problem and Operational Opacity
The case also illustrates a persistent transparency problem in AI governance: a gap between what consumers understand “AI” to mean and how AI systems actually operate. Many people hear the term and assume their data is processed entirely by software, even though many commercial systems rely, at least in part, on human review, annotation, auditing, or exception handling. Both the complaint and Meta’s response indicate that human review may form part of the broader system here. Although that practice is common across the AI industry, it is legally and normatively significant because it changes the consumer calculus: a person might evaluate machine analysis of their footage one way and human review by remote contractors quite differently.
What Is at Stake
The legal claims reflect how broadly the plaintiffs are framing the harm. The complaint includes causes of action for false advertising, violations of California’s Consumers Legal Remedies Act, New Jersey consumer fraud, fraud by misrepresentation, fraud by concealment, negligent misrepresentation, breach of contract, breach of implied warranty, and unjust enrichment. The plaintiffs are seeking class certification on behalf of all U.S. purchasers, monetary damages, and an injunction that could force changes to how the glasses are marketed, sold, and operated. The core argument is simple: if privacy was part of the sales pitch, then consumers were entitled to accurate and complete information about how private the experience actually was.
What Companies Should Learn From This Case and Do Now
For companies developing connected products, AI-enabled features, wearables, sensors, or any service that captures or processes user-generated content, the central lesson from this case is not limited to smart glasses. It is that privacy risk increasingly arises not only from what a product technically does, but from the gap between how the product is experienced, how it is marketed, and how the data actually moves through the system. A company may believe it has adequately disclosed its practices in privacy policies, settings, or backend documentation. But if the product’s design and marketing communicate a substantially different impression to ordinary users, those formal disclosures may not carry the weight the company expects.
The case also underscores that companies should not think about privacy solely from the perspective of the account holder or purchaser. Many modern products create what might be called relational privacy risk: they affect not only the user who activates the device, but also bystanders, family members, coworkers, guests, customers, and others who never agreed to the product’s terms and may not even know they are being captured. That means product teams should be asking a broader question at the design stage: who is exposed by this product, whether or not they are our customer? If the answer includes non-users, then privacy review cannot stop at user consent flows alone.
A related takeaway is that product design itself can create legal exposure. The complaint and surrounding reporting are powerful in part because the alleged harms do not arise from a defect in the ordinary sense; they arise from the foreseeable operation of the product as designed. That is an important lesson for engineering, legal, and product teams. If a product makes recording, monitoring, inferencing, or sharing unusually easy, unusually ambient, or unusually difficult to detect, then teams should assume that regulators, courts, and plaintiffs’ lawyers will eventually examine not just whether the feature worked, but whether misuse or unexpected use was foreseeable enough that the company should have designed the product differently.
Companies should also take from this dispute that “privacy by design” cannot be merely a marketing theme. If a company uses privacy-forward language, such as “built for your privacy,” “designed for privacy,” or “you remain in control,” those statements should be tested against the entire product reality, not just a subset of settings or technical defaults. In practice, that means legal and marketing teams should stress-test whether those claims remain accurate when content is shared to cloud services, used to improve models, accessed by vendors, reviewed by human personnel, retained for quality assurance, or exposed through reasonably predictable user behavior. A privacy claim that is technically defensible in a narrow sense may still create substantial risk if it overstates the level of control, isolation, or confidentiality that users and non-users will reasonably infer from it.
The case likewise highlights the importance of examining human involvement in AI systems. Many companies describe products as “AI-powered” in ways that imply automated processing, while operational reality may include human review, annotation, moderation, escalation, quality assurance, or model-improvement workflows. That does not make the practice improper, but it does make transparency more important. Legal, product, and compliance teams should assume that many consumers will understand “AI processing” to mean software-only handling unless told otherwise in a clear, timely, and intelligible way. Where human review is part of the system, especially by vendors or offshore personnel, companies should consider whether that fact is disclosed in a way that matches consumer expectations and whether the surrounding controls are strong enough to support the company’s broader privacy messaging.
The right question is not simply whether the privacy policy mentions the practice somewhere. The better questions are: What impression does the feature create for an ordinary person? What assumptions will users and non-users naturally make about when capture is occurring, where data goes, and who can see it? Where might the user interface, industrial design, or product branding imply more privacy than the underlying system actually delivers? And if sensitive edge cases occur, and they usually do, are those truly edge cases, or are they foreseeable consequences of the product architecture?
For clients, that translates into several practical disciplines. Before launch, companies should map the full lifecycle of content and sensor data, including on-device capture, transmission, storage, training, review, vendor access, retention, and deletion. They should evaluate bystander impacts separately from user impacts. They should test whether visual or audible indicators of capture are actually effective in real-world settings rather than merely available in theory. They should align marketing language, onboarding flows, in-product prompts, FAQs, and privacy notices so that the most important facts are conveyed consistently and before the user meaningfully relies on the product. And they should assess whether vendors, model-training practices, and human-review workflows are governed tightly enough that the company could defend its privacy representations under scrutiny.
What Consumers Should Ask
For anyone considering a pair of smart glasses, whether from Meta or anyone else, the case makes a few questions unavoidable. Does captured content stay on the device by default? Under what circumstances does it travel to the cloud? Can it be used to train AI systems? Can human reviewers ever access it? And are those disclosures made clearly enough that an ordinary buyer would understand them before purchase, not buried in terms of service that almost no one reads?
The court has not determined whether the plaintiffs’ allegations are true. But the combination of the Bartone litigation and the covert-filming videos circulating online already exposes a deeper problem. Smart glasses have been marketed as private, personal devices, yet many of the most salient privacy risks fall not only on users, but on the people around them: intimate partners, household members, coworkers, and strangers in a public space. Smart glasses therefore reveal a structural limitation in contemporary privacy law. So long as privacy is framed primarily as a matter of disclosure to the purchaser, the law will continue to under-address the bystander and relational harms that these devices predictably produce.