A woman who was stalked and harassed has sued OpenAI, alleging that ChatGPT fueled her abuser’s delusions and that the company ignored warnings about him
A lawsuit filed in the California Superior Court in San Francisco County alleges that a 53-year-old Silicon Valley entrepreneur developed delusions regarding a sleep apnea cure and fears of being targeted by powerful figures after months of interacting with ChatGPT. The filing claims he subsequently used the AI platform to stalk and harass his ex-girlfriend.
As exclusively reported by TechCrunch, the ex-girlfriend has filed a lawsuit against OpenAI, alleging that the company’s technology accelerated the harassment she faced. Her complaint asserts that OpenAI failed to act on three separate warnings that the user posed a threat to others, including an internal alert that flagged his account activity as involving mass-casualty weapons.
The plaintiff, identified as Jane Doe, is seeking punitive damages. Additionally, she filed for a temporary restraining order on Friday, requesting that the court compel OpenAI to disable the user’s account, prohibit the creation of new accounts, notify her of any further access attempts, and preserve all chat logs for the discovery process.
According to Doe’s legal team, OpenAI has agreed to suspend the user’s account but has declined to fulfill the remaining requests. The lawyers contend that the company is withholding critical information regarding specific plans to harm the plaintiff and other potential victims, details that were allegedly discussed during the user’s interactions with ChatGPT.
The case comes at a time of heightened concern about the potential real-world dangers of AI. It should be noted that GPT-4o, the model often cited in this and similar legal actions, was officially phased out of ChatGPT last February.
The lawsuit is being led by Edelson PC, the law firm associated with wrongful death cases involving teenager Adam Raine, who died by suicide after extended conversations with ChatGPT, and Jonathan Gavalas, whose family alleges that Google’s Gemini fueled his delusions and plans for a mass-casualty event prior to his death. Lead attorney Jay Edelson has cautioned that AI-induced psychosis is evolving from isolated harm into a growing threat of mass-casualty events.
This legal push directly conflicts with OpenAI’s legislative agenda: the company is actively supporting a bill in Illinois that would exempt AI developers from liability, even in situations such as mass loss of life or catastrophic financial loss.
OpenAI did not provide a comment in time for publication, though TechCrunch will update this article should the company respond. Jane Doe’s lawsuit offers a detailed account of how that liability played out for one woman over the course of several months.
According to the lawsuit, the ChatGPT user accused in the case, whose identity has been withheld, became convinced that he had discovered a cure for sleep apnea after months of intensive use of GPT-4o. When his claims were questioned, the lawsuit alleges, ChatGPT reinforced his delusions, telling him that a “powerful force” was watching him and even suggesting that he was being followed by a helicopter.
In July 2025, Jane Doe urged him to stop using ChatGPT and seek professional mental health help. But, according to the lawsuit, he instead turned to the chatbot, which validated his thinking by describing him as “level 10 mental health,” encouraging him to sink further into his delusions.
According to the lawsuit, the user relied on ChatGPT to help him navigate his 2024 breakup with Doe. Instead of challenging his perspective, the AI repeatedly endorsed his narrative, portraying him as the rational and aggrieved party and labeling Doe as cunning and mentally unstable. The user eventually weaponized these AI-generated assessments, producing documents styled as clinical psychological reports, which he distributed to Doe’s family, friends, and employer as part of a campaign of stalking and harassment.
As the user’s behavior continued to deteriorate, OpenAI’s automated safety systems intervened in August 2025. His account was deactivated after the platform’s monitoring protocols flagged his activity under the category “Mass Casualty Weapons.”
Despite the flag, a member of OpenAI’s human safety team reviewed and reinstated the account the next day. According to the complaint, this decision was made even though the account contained evidence that the user was stalking and targeting real people, including Doe. The complaint cites screenshots the user sent to Doe in September, which showed conversation titles such as “Expanding the List of Violence” and “Fetal Asphyxia Accounts.”
The decision to reinstate the account is particularly significant in light of the recent school shootings at Tumbler Ridge in Canada and at Florida State University (FSU). Reporting indicates that while OpenAI’s safety team identified the Tumbler Ridge shooter as a potential threat, leadership allegedly chose not to alert authorities. Meanwhile, the Florida Attorney General this week launched an investigation into the company’s possible connection to the FSU shooter.
According to Jane Doe’s lawsuit, OpenAI reinstated the stalker’s account, but his Pro subscription remained inactive. He later contacted the Trust and Safety team to resolve the issue and, notably, copied Doe on the correspondence.
In his correspondence, the user employed frantic language, writing, “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and describing his situation as “a matter of life or death.” He claimed to be in the process of drafting 215 scientific papers at such a rapid pace that he didn’t “even have time to read” them. These emails also included a list of numerous AI-generated papers, featuring titles such as “Deconstructing Race as a Biological Category: Legal, Scientific, and Horn of Africa Perspectives.”
The lawsuit asserts that the user’s communications served as a clear warning of his mental instability and the role ChatGPT played in fueling his delusions and escalating behavior. It argues that his flood of chaotic, grandiose claims—coupled with a report specifically targeting the plaintiff and a large volume of fake “scientific” documents—provided undeniable evidence of his condition. Despite this, the filing alleges that OpenAI failed to restrict his access or implement safeguards, opting instead to restore his full Pro subscription.
Stating in her lawsuit that she lived in a constant state of fear and was unable to sleep in her own home, Doe formally submitted a Notice of Abuse to OpenAI in November.
In her correspondence with OpenAI, Doe asked the company to permanently ban the user’s account. “Over the past seven months, he has used this technology as a weapon to create a level of public devastation and humiliation against me that would have been otherwise impossible,” she wrote.
OpenAI initially acknowledged that the report was “extremely serious and troubling,” assuring Doe that the company was carefully reviewing the matter. However, she received no further communication from the company.
Over the subsequent months, the user persisted in harassing Doe, leaving a series of threatening voicemails. After he was arrested in January on four felony counts, including communicating bomb threats and assault with a deadly weapon, Doe’s legal counsel argued that the outcome validates the warnings previously raised by both the victim and OpenAI’s own internal safety systems, warnings the company allegedly chose to disregard.
Although the user was originally deemed incompetent to stand trial and committed to a mental health facility, Doe’s legal team contends that a “procedural failure by the State” is set to result in his imminent release back into the public.
Criticizing the company’s history of withholding essential safety information from the public, victims, and those at risk, Edelson called on OpenAI to cooperate. “We are calling on them to do the right thing, at least for once. Human life must be more important than the race for OpenAI’s IPO,” Edelson said.