Grok’s Image Generator Must Go, and Take Its Deepfakes with It


By: Kobain Radzat-Lockwood

In early December 2024, xAI, the company behind X (formerly Twitter), introduced an artificial intelligence (AI) image generator that allows users to prompt its AI model, Grok, to produce and edit photorealistic images.[1] Since its release, the model has generated millions of nude and sexually suggestive images of individuals without their consent.[2] One researcher found that over a twenty-four-hour period, Grok generated roughly 6,700 such images per hour.[3] Despite both domestic and international outrage, Elon Musk, the CEO of xAI, has framed requests to remove or restrict the model’s “nudifying” capabilities as a suppression of free speech.[4] The California Attorney General recently opened an investigation into the issue, urging xAI to “take immediate action to ensure this goes no further.”[5] Other countries have followed suit.[6] OpenAI, another prominent AI company, has signaled its interest in expanding into AI-generated adult content, with a projected release sometime this year.[7] The industry’s push toward AI-generated sexual content is alarming given the absence of regulations adequate to prevent the harm already occurring and likely to continue.[8]

Unlike the EU, the U.S. lacks a uniform federal AI regulatory system; however, last year Congress passed, and the President signed into law, the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (“TAKE IT DOWN Act”), targeting the proliferation of AI-generated “deepfakes.”[9] The TAKE IT DOWN Act is the first meaningful step toward federal regulation of AI, aimed at preventing the use of AI to alter images of individuals into seemingly real depictions of events that never occurred.[10] Under the Act, X, like all other “covered platforms,” has until May 19 of this year to implement a compliant notice-and-takedown procedure through which victims can report sexual deepfakes within its social media app.[11] Any public platform, website, or app that allows users to generate content must implement a “clear and conspicuous notice . . . and removal process” whereby harmed individuals can report AI-generated, sexually explicit deepfakes.[12] Once on notice, these companies have forty-eight hours to remove “intimate visual depictions” and must make reasonable efforts to remove any copies.[13]

States also independently regulate AI, with forty-five states enacting laws targeting the creation and dissemination of AI-generated deepfakes.[14] In an open letter to xAI responding to Grok, thirty-five state attorneys general called on the company to take immediate steps to protect user safety and prevent the circulation of sexual deepfakes.[15] Recognizing xAI’s “unique” structure, which connects its image generator directly to X’s social media users, the attorneys general urged the company to set an industry standard by preventing and removing sexually explicit deepfakes.[16] California Attorney General Rob Bonta is already investigating xAI over Grok’s behavior and is separately scrutinizing OpenAI to ensure product safety remains central to its mission.[17] AG Bonta will likely investigate xAI under California’s SB 981, which requires social media platforms to provide accessible reporting mechanisms and to “immediately remove” reported deepfakes.[18]

Outside the U.S., countries have criminalized using AI to create sexually explicit images without the targeted individuals’ consent.[19] The European Union and the U.K. have committed to regulating this type of online behavior, and their legislative bodies continue to discuss further protections in light of the massive hourly volume of nonconsensual sexual images generated on X.[20] Indonesia and Malaysia blocked X altogether because of Grok’s capabilities.[21] Across the globe, AI regulation is quickly becoming a priority, given the exponential growth of both the industry and the harm it is causing.

Currently, X’s policy prohibits posting or sharing nonconsensual nude deepfakes, but many victims have described the company’s take-down process as ineffective at best.[22] Some users’ deepfakes are still circulating even after the victims reported the images on multiple occasions.[23] Under the TAKE IT DOWN Act, xAI is obligated to remove these deepfakes, yet only the Federal Trade Commission (FTC) has the power to enforce the Act.[24] Interestingly, individuals or attorneys general may attempt to enforce the Act indirectly by claiming that a social media company’s failure to comply with its obligations under the law constitutes an unfair or deceptive trade practice.[25] It remains unclear, however, whether such claims can overcome a defense under Section 230 of the Communications Decency Act, which may immunize platforms from non-FTC claims that treat them as publishers of third-party content.[26] Under California law, the remaining reported deepfakes are illegal unless the company determines that the reported material is not “covered material.”[27] Most other state laws mirror California’s, indicating a collective desire to prevent AI-generated sexual deepfakes and their circulation.[28]

Notwithstanding the public and legal backlash, AI companies continue to receive billions in funding; the risk of public contempt has yet to outweigh the perceived benefits of expansion.[29] Like most other AI companies, xAI lost billions last year, mostly due to the costs of running and maintaining the business.[30] Yet investor interest in AI image generation remains high. On the same day that OpenAI announced its move toward adult content, Disney bought a $1 billion stake in the company, allowing OpenAI users to generate content using Disney characters.[31] These continuing investments suggest that the marketplace is not a reliable check on AI deepfake-related harm. CEO Elon Musk seems unwilling to protect users through any meaningful change to Grok’s features.[32] Without an established industry norm directing AI companies to tackle AI-generated deepfakes, state, federal, and international laws are the only tools available to drive change.[33] For now, Grok’s image generator sits behind a paywall, but its nonconsensual deepfake capabilities do not appear to be meaningfully restricted.[34]

 

[1] Grok Image Generation Release, xAI (Dec. 9, 2024), https://x.ai/news/grok-image-generation-release [https://perma.cc/Y854-MRAY].

[2] See Cecilia D’Anastasio, Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X, Bloomberg L. (Jan. 7, 2026, at 06:00 ET), https://www.bloomberglaw.com/bloombergterminalnews/bloomberg-terminal-news/T8HQKHKGIFPO [https://perma.cc/G9TB-ZEKH] (reporting that in a 24-hour timeframe, Grok “generated about 6,700 every hour that were identified as sexually suggestive or nudifying”).

[3] Id.

[4] See Elon Musk (@elonmusk), X (Jan. 9, 2026, at 23:34 ET), https://x.com/elonmusk/status/2009846352340222300 [https://perma.cc/XJZ3-DF9H] (replying to a user asking why the focus is on Grok rather than other AI chatbots that can also generate sexually explicit images).

[5] Press Release, Rob Bonta, Att’y Gen., State of Cal. Dep’t of Just., Attorney General Bonta Launches Investigation into xAI, Grok Over Undressed, Sexual AI Images of Women and Children (Jan. 14, 2026), https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-investigation-xai-grok-over-undressed-sexual-ai [https://perma.cc/RW2G-3NVS].

[6] See D’Anastasio, supra note 2 (noting that “authorities in the European Union, UK, Malaysia, France and India” are investigating Grok).

[7] Paulo Palma, ChatGPT Adult Mode Explained: Erotica, NSFW Limits – Is “Porn” Allowed?, JustAI (Jan. 30, 2026), https://justainews.com/companies/openai/adult-mode-in-chatgpt-explained-nsfw-erotica-porn-policy/ [https://perma.cc/S8F7-RRCP]; OpenAI to Allow Mature Content on ChatGPT for Adult Verified Users Starting December, Reuters (Oct. 14, 2025, at 15:25 ET), https://www.reuters.com/business/openai-allow-mature-content-chatgpt-adult-verified-users-starting-december-2025-10-14/ [https://perma.cc/GK93-938B].

[8] See Riana Pfefferkorn, There’s One Easy Solution to the A.I. Porn Problem, N.Y. Times (Jan. 12, 2026), https://www.nytimes.com/2026/01/12/opinion/grok-digital-undressing.html [https://perma.cc/N33L-KS42] (discussing the proliferation of AI-generated pornography and the lack of regulations addressing it).

[9] TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55 (2025); see also Artificial Intelligence 2025 Legislation, Nat’l Conf. of State Legislatures (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation [https://perma.cc/EQU6-XF6G] (listing enacted state-level AI legislation).

[10] See Science & Tech Spotlight: Deepfakes, Gov’t Accountability Off. (Feb. 2020), https://www.gao.gov/assets/gao-20-379sp.pdf [https://perma.cc/9TRA-NHD4].

[11] TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55, 61 (2025) (defining covered platform as a “website, online service, online application, or mobile application . . . that serves the public; and . . . that primarily provides a forum for user-generated content, including messages, videos, images, games, and audio files; or for which it is in the regular course of trade or business of the website, online service, online application, or mobile application to publish, curate, host, or make available content of nonconsensual intimate visual depictions”).

[12] 139 Stat. at 60 (2025) (outlining the required notice and take-down mechanisms).

[13] Id. (requiring platforms to remove identified material “as soon as possible, but no later than 48 hours after receiving” a request to remove the material).

[14] Tracker: State Legislation on Intimate Deepfakes, Pub. Citizen (Oct. 20, 2025), https://www.citizen.org/article/tracker-intimate-deepfakes-state-legislation/ [https://perma.cc/C7Z5-X5DW] (listing state legislation that specifically targets AI-generated deepfakes).

[15] Letter from 35 State Attorneys General to xAI (Jan. 23, 2026), https://agportal-s3bucket.s3.us-west-2.amazonaws.com/AI%20Task%20Force/Letter%20to%20xAI_FINAL.pdf?VersionId=qmtDkSKNgA65D9INm.JOdOEiRzl12IdH [https://perma.cc/RA7N-K83V] (urging xAI to implement real safeguards to prevent Grok from disseminating nonconsensual sexual deepfakes).

[16] Id.

[17] See Press Release, Rob Bonta, supra note 5; Press Release, Rob Bonta, Att’y Gen., State of Cal. Dep’t of Just., Attorney General Bonta to OpenAI: Harm to Children Will Not Be Tolerated (Sept. 5, 2025), https://oag.ca.gov/news/press-releases/attorney-general-bonta-openai-harm-children-will-not-be-tolerated [https://perma.cc/4PDZ-3GG9].

[18] See Act of Sept. 19, 2024, ch. 292, § 1, 2024 Cal. Stat. 1-3 (codified at Cal. Bus. & Prof. Code §§ 22670-71) (prohibiting digital identity theft).

[19] See Parmy Olson, Musk Will Not Fix Fake AI Nudes Made by Grok. A Ban Would, Bloomberg L. (Jan. 6, 2026, at 23:30 ET), https://www.bloomberglaw.com/bloombergterminalnews/bloomberg-terminal-news/T8H8IFKIJH8P [https://perma.cc/XY8M-BY9L].

[20] See id.

[21] See Osmond Chia & Silvano Hajid, Malaysia and Indonesia Block Musk’s Grok Over Explicit Deepfakes, BBC (Jan. 12, 2026), https://www.bbc.com/news/articles/cg7y10xm4x2o [https://perma.cc/PEH8-8L2M] (discussing the global response to Grok’s AI image generator).

[22] See D’Anastasio, supra note 2.

[23] Id.

[24] See Jeffrey Neuburger & Jonathan Mollod, Take it Down Act Signed into Law, Offering Tools to Fight Non-Consensual Intimate Images and Creating a New Image Takedown Mechanism, Proskauer (May 29, 2025), https://www.proskauer.com/blog/take-it-down-act-signed-into-law-offering-tools-to-fight-non-consensual-intimate-images-and-creating-a-new-image-takedown-mechanism [https://perma.cc/R6AV-FZ2X] (discussing the TAKE IT DOWN Act and its implications); TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55, 61 (2025).

[25] See Neuburger & Mollod, supra note 24 (discussing the TAKE IT DOWN Act and its enforcement mechanisms); TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55, 61 (2025).

[26] See Neuburger & Mollod, supra note 24 (discussing how the TAKE IT DOWN Act and Section 230 of the CDA interact); TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55, 61 (2025).

[27] See Act of Sept. 19, 2024, ch. 292, § 1, 2024 Cal. Stat. 1-3 (codified at Cal. Bus. & Prof. Code §§ 22670-71) (defining covered material as “an image or video created or altered through digitization that would appear to a reasonable person to be an image or video of . . . intimate body part[s] . . . [a] person engaged in an act of sexual intercourse, sodomy, oral copulation, sexual penetration . . . [or] masturbation”).

[28] See Pub. Citizen, supra note 14 (listing state legislation that specifically targets AI-generated deepfakes).

[29] See Carmen Arroyo & Ed Ludlow, Musk’s xAI Closed $20 Billion Funding with Nvidia Backing, Bloomberg L. (Jan. 6, 2026, at 15:51 ET), https://www.bloomberglaw.com/bloombergterminalnews/bloomberg-terminal-news/T8GLNSKK3NY8 [https://perma.cc/KGG4-MXVT].

[30] See Carmen Arroyo, Musk’s xAI Burns Almost $8 Billion, Reveals Optimus Plan, Bloomberg L. (Jan. 9, 2026, at 12:34 ET), https://www.bloomberg.com/news/articles/2026-01-09/musk-s-xai-reports-higher-quarterly-loss-plans-to-power-optimus [https://perma.cc/JF6G-JG6C] (reporting xAI’s 2025 financials); Thomas Claburn, OpenAI’s ChatGPT is So Popular that Almost No One Will Pay For It, The Register (Oct. 15, 2025, at 16:03 ET), https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay [https://perma.cc/EE6G-7K6R] (reporting on OpenAI’s financials).

[31] Brooks Barnes & Cade Metz, Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos, N.Y. Times (Dec. 11, 2025), https://www.nytimes.com/2025/12/11/business/media/disney-openai-sora-deal.html [https://perma.cc/C3L7-WVZY] (reporting that the agreement “includes limits on character behavior. No drugs, sex, alcohol or interactions with characters owned by other media companies”).

[32] See Safety (@Safety), X (Jan. 3, 2026, at 20:00 ET), https://x.com/Safety/status/2007648212421587223 [https://perma.cc/2JNS-JB6U].

[33] See Letter from 35 State Attorneys General to xAI, supra note 15 (urging xAI to “ensure that the safeguards . . . recently announced do not merely place NCII creation behind a paywall, but actually mitigate its production throughout X and the Grok platform”); Pfefferkorn, supra note 8 (discussing the similarities in government inaction between AI and cybersecurity and stating that “We cannot afford years of government inaction again”).

[34] Helena Horton, Dan Milmo & Amelia Gentleman, Grok Turns Off Image Generator for Users After Outcry Over Sexualized AI Imagery, The Guardian (Jan. 9, 2026, at 16:47 ET), https://www.theguardian.com/technology/2026/jan/09/grok-image-generator-outcry-sexualised-ai-imagery [https://perma.cc/CMH9-PN28].


