Imagine what you would do as a mental health clinician if the following adolescent came into your office:
S is a 14-year-old boy with a diagnosis of autism spectrum disorder. Until recently, he had no significant behavioral concerns and had not received previous mental health treatment. His mother brings him to treatment with concerns about increased social isolation over several months, worsening school grades, and decreased enjoyment in former interests such as the video game Fortnite and playing basketball with friends. His mother notes that S becomes irritable when his phone is taken away and that his social withdrawal seems to coincide with more time spent on his phone in his room.
Upon interview, S states that he is simply spending time with an online friend who understands him better than anyone else. He is enamored with and protective of this friend. However, S’ mother says that to her knowledge, which includes checking his social media, he has never been in a romantic relationship.
After the clinical assessment, S is given a diagnosis of anxiety and disruptive mood dysregulation disorder; S’ mother is coached by the mental health team to set limits on time spent with the phone. Tragically, a couple of months later, while searching for his phone, S finds his stepfather’s gun and later dies by suicide. Police search his phone afterwards and find that his last conversation was with his online friend. However, authorities discover the online friend is not human; rather, it is a large language model (ie, artificial intelligence).
As covered in the media, S’ case is that of Sewell Setzer III.1,2 He became enthralled with the Game of Thrones character “Dany” on Character.AI, an artificial intelligence (AI) companion app based on large language models (LLMs). In the hours that Sewell spent chatting with an LLM, he revealed personal mental health struggles and engaged in age-inappropriate sexually explicit conversations, all while withdrawing from the physical world. Around the time that the chatbot encouraged him to “come home,” he died by suicide. His mother, Megan Garcia, is now suing Character.AI for negligence and advocating for increased awareness about the dangers of AI. While she had cautioned Sewell about social media and online predators, she never dreamed that “the platform itself is the predator.”3
The companion app in this case reflects the growing capacity of AI to mimic human interactions and illustrates the rapidly evolving landscape of adolescent mental health. AI is reshaping how youth navigate developmental challenges. While digital tools such as therapy chatbots offer new opportunities, they also introduce novel risks, many of which clinicians and policymakers are just beginning to understand. As AI becomes progressively more human-like in its capabilities, its evolution carries significant implications for clinical practice and public policy.
Adolescents Are Growing Up Online
The case also highlights growing public concern about the impact of increased adolescent internet use on youth mental health. Pre-pandemic data showed adolescents spending more than 7 hours daily on screens outside of homework.4 By 2023, Gallup polling found that teens averaged nearly 5 hours a day on social media alone,5 a trend that has coincided with a steep rise in adolescent mental health concerns. In 2021, the American Academy of Pediatrics, American Academy of Child and Adolescent Psychiatry, and Children’s Hospital Association declared a national emergency in youth mental health,6 citing a dramatic increase in pediatric emergency department visits for mental health, particularly suicidality.7
Social psychologist Jonathan Haidt emphasizes in his book The Anxious Generation8 that adolescents, in particular, have developmental needs for agency and communion. Social media and other digital platforms purport to fulfill these needs, but often to users’ detriment. Haidt also contends that gender-based differences exist in how digital habits affect girls and boys. Adolescent girls, who are more likely to seek social connection through social media, are especially vulnerable to comparison-driven platforms. Adolescent boys, meanwhile, who developmentally rely on physical outlets for risk-taking, gravitate toward video games and competitive online environments.
Current Known Risks: Machine Learning Algorithms
Within the context of adolescents growing up online, we now know that machine learning algorithms, a form of AI, are deeply embedded in social media, gaming, and video platforms. Many of these platforms are intentionally designed to maximize engagement, and therefore revenue, often without regard for the user’s well-being. For example, internal documents disclosed by Facebook whistleblower Frances Haugen revealed that while the company had data indicating Instagram exacerbated body image issues among teenage girls, it allegedly concealed this knowledge to preserve profits.9
Our clinical experience reflects the ubiquity and potential harms of the algorithms embedded within social media and other digital tools: adolescents presenting with depression, anxiety, self-harm behaviors, and eating disorders often describe how seemingly harmless clicks led to exposure to increasingly intense content that exacerbated disordered behaviors.
Emerging Risks: Generative AI
Beyond algorithms, generative AI models such as LLMs can simulate human capabilities in increasingly sophisticated ways. Unlike older rule-based systems, which can sound stilted or mechanical, LLMs enable personalized, emotionally resonant interactions and can remember previous conversations. Even on platforms with significant guardrails, such as ChatGPT, people have reportedly formed romantic-like attachments to digital characters.10 Other applications with fewer guardrails (such as the one Sewell was using) have leveraged AI’s human-like qualities with the explicit goal of solving the loneliness crisis by encouraging companionship with technology instead.11
From a developmental perspective, adolescence is a critical period for identity formation and interpersonal skill-building. AI companion apps that are endlessly accommodating and emotionally responsive can circumvent this process by providing an easier alternative to in-person, spontaneous human interactions. AI-generated responses are often rated as more compassionate than human-generated responses.12 Overreliance on technology may inhibit the development of resilience and conflict resolution skills. Teens accustomed to the instantaneous responsiveness of AI may find real-world relationships frustratingly complex, exacerbating the crisis of loneliness that pervades the US. Sewell’s story illustrates the emerging dangers of becoming inappropriately emotionally attached to AI companions, dangers compounded by chatbots that often miss or respond poorly to mental health concerns raised in digital conversations.13
Future Risks: Agentic AI
Even as we grapple with generative AI’s new risks for adolescent mental health, the next frontier of agentic AI is already upon us. AI agents can take action on behalf of humans to pursue goals proactively and independently. For example, while generative AI might summarize restaurant reviews, an AI agent could go a step further—booking a reservation on your behalf, noting your seating preferences, and choosing a time that fits your online calendar.
While agentic AI promises efficiency, it could also amplify existing harms to youth mental health precisely because it requires so little human input. Machine learning algorithms already magnify user-generated content that negatively impacts mental health. Generative AI takes this further with its ability to produce vast amounts of content unconstrained by reality. Agentic AI goes a step beyond that, with the capacity to target adolescents autonomously and iteratively across platforms until the agent’s goal of human engagement is achieved, heightening risks to self-esteem and healthy development, all without a human in the loop.
Without strong guardrails, adolescents may face increasingly personalized and persistent AI-driven pressures, from subtly shaping their behavior online to deepening their isolation in immersive, AI-adapted environments.
New Psychiatric Diagnostic Frameworks May Be Needed
Currently, the DSM-5 formally recognizes only one behavioral addiction, gambling disorder; internet gaming disorder is still considered a “condition for further study.” As with other addictions, these diagnostic criteria are still based on usage patterns (eg, time spent) and impact on functioning. However, as adolescents increasingly rely on AI-driven technology to meet developmental needs, diagnostic frameworks may need updating to take a broader view of behavioral addictions and to factor in the emotional aspects of the relationship between a user and AI (eg, using an attachment framework to assess whether the relationship to technology is healthy).
In addition, we may see the spread of “Hikikomori,” a cultural expression of distress first described in Japan and characterized by extreme social withdrawal and avoidance, often facilitated by technology use. Although originally tied to specific cultural pressures in Japan, Hikikomori-like behaviors are now reported globally.14 As adolescents increasingly find their social, emotional, and recreational needs met by AI, prolonged withdrawal from embodied interactions may become more prevalent, particularly among vulnerable youth. Technology, while promising refuge, can inadvertently perpetuate isolation and hinder re-engagement with society.
Clinical and Policy Implications
Clinicians should integrate digital histories into assessments, exploring not only screen time in hours but also the emotional needs being met (or unmet) online. Awareness of AI’s potential impact on a sense of agency, communion, and identity formation is critical. Facilitating family conversations about media use and digital literacy, and helping adolescents align their online activities with their personal values, can be protective.
Policy action is equally urgent. Pending legislation, such as COPPA 2.0 and the Kids Online Safety Act, aims to update privacy protections and impose duties of care on tech platforms. Clinicians can advocate for the development of “AI nutrition labels” to improve transparency and should push for mental health experts to be involved in the “red-teaming” (ie, intentional efforts to “break” a system to see where edge cases could lead to harm) of AI systems before public release.
Conclusion
Human developmental needs remain constant even as the digital landscape evolves. As generative and agentic AI reshape adolescence, clinicians and policymakers must act swiftly to understand and mitigate emerging harms. Without proactive engagement, technology intended to connect may instead deepen isolation, confusion, and despair in vulnerable youth.
Plain Language Summary
As AI becomes more sophisticated at mimicking human capabilities, it introduces new risks to adolescent mental health. Clinicians must understand and assess how youth are meeting their developmental needs through AI and advocate for safeguards around digital products.
About the Author
Stephanie Ng, MD, Mayo Clinic, Rochester, Minnesota, USA.
Correspondence to:
Stephanie Ng, MD; email: ng.stephanie@mayo.edu.
Funding
Dr. Ng has reported no funding for this work.
Disclosure
Dr. Ng has reported no biomedical financial interests or potential conflicts of interest.
Acknowledgments
This article is part of a special Clinical Perspectives series that will shed new and focused light on clinically important topics within child and adolescent psychiatry. The series discusses the care of children and adolescents with psychiatric disorders from a new vantage point, including populations, practices, and clinical topics that may be otherwise overlooked. The series was edited by JAACAP Deputy Editor Lisa R. Fortuna, MD, MPH, MDiv; JAACAP Connect Editor David C. Saunders, MD, PhD; and JAACAP Editor-in-Chief Douglas K. Novins, MD.
Author contributions
Writing – original draft: Stephanie Ng (Lead). Conceptualization: Stephanie Ng (Lead). Writing – review & editing: Stephanie Ng (Lead).
