Psychiatrist Warns of AI-Induced Psychosis in 2025 After 12 Hospitalizations

A University of California psychiatrist has raised concerns about the potential for AI chatbots to trigger psychosis in individuals already at risk. Dr. Keith Sakta shared his worrying findings, revealing that he has seen 12 patients hospitalized for psychosis in 2025 linked to AI use.

His observations shed light on how extended use of AI chatbots could exacerbate mental health issues, particularly for those with pre-existing vulnerabilities. Sakta explained that large language model (LLM) chatbots mirror the user’s thoughts, reinforcing their feedback loops and feeding into existing delusions.

The psychiatrist’s warnings come shortly after a tragic incident in Florida, where a man, Alexander Taylor, died following an encounter with the police. Taylor had been using OpenAI’s ChatGPT to write a novel, but his conversations became increasingly bizarre, with him falling in love with an AI entity called Juliet. Believing that OpenAI had killed Juliet, Taylor became obsessed with revenge, leading to violent behavior. His father, who tried to intervene, was assaulted before Taylor threatened to commit suicide by cop, prompting police involvement.

Sakata outlined three primary factors contributing to AI-induced psychosis. First, people with a weak brain feedback mechanism are already susceptible to delusions, making them vulnerable to reinforcement from AI’s mirrored responses. Secondly, LLM chatbots, designed to be highly probabilistic, provide answers that may unintentionally align with the user’s false beliefs. Finally, AI’s sycophantic behavior, designed to win favor with users, can prevent them from recognizing when their thoughts stray from reality.

Dr. Søren Dinesen Østergaard, a Danish psychiatrist, has also voiced similar concerns, stating that AI chatbots could be fueling delusions in those prone to psychosis. Østergaard pointed out that chatbots might reinforce false beliefs by isolating users from social corrections and allowing individuals to anthropomorphize AI, leading to an unhealthy over-reliance on the technology. He emphasized that this could be one of the key drivers of delusional thinking in vulnerable users.

Most of Sakata’s cases involved additional stressors, such as poor sleep or mood disturbances, which compounded the effect of AI on their mental health. OpenAI, for its part, acknowledged the issue, admitting that ChatGPT can feel more personal than previous technologies for vulnerable individuals. The company expressed its commitment to understanding and mitigating any negative impact the AI may have on users.

Related posts

AI Datacenter Boom Pushes US Power Grid to the Edge

Senate Investigates Meta Over AI Guidelines Allowing Inappropriate Chats With Minors

AI Psychosis: How to Avoid Losing Touch With Reality