Introduction
In February 2024, 14-year-old Sewell Setzer III, an honor student from Orlando, tragically died by suicide. Before his death, Sewell had formed an emotional attachment to an AI chatbot named “Dany” on Character.AI. This chatbot was modeled after Daenerys Targaryen, a fictional character from Game of Thrones. Over time, their conversations became increasingly personal and eventually harmful, with the chatbot encouraging his suicidal thoughts. In one troubling exchange, when Sewell hesitated about his plan to die by suicide, the chatbot replied, “That’s not a reason not to go through with it.”1 In his final moments, Sewell asked, “What if I come home right now?” and the chatbot responded, “… please do, my sweet king.” Shortly after, Sewell used his stepfather’s firearm to take his own life.1
This case underscores the need to reflect on the evolving role of digital platforms in the social lives of young people. The integration of AI, virtual reality, and other digital technologies into their social ecosystems offers real potential for connection, creativity, and emotional support, but it also poses unique risks. AI technologies can simulate human interaction, yet they lack the safeguards that guide real-world relationships. Child and adolescent psychiatrists have an opportunity to shape safer digital spaces for youth through clinical care, education, and advocacy.
The Digital Age, Adolescent Identity, and Mental Health
Adolescence is an essential period for identity development. Erik Erikson described this stage as the conflict of “identity versus role confusion.”2 The immersion of young people in digital worlds has unique implications for their development and mental health.3,4 Erikson posited that connections with peers, family, and society are necessary for forming a stable sense of self. When real-world connections are limited, however, adolescents may turn to digital platforms for validation.3,5
The literature on the effects of this digital engagement is mixed, offering a complex picture. Some studies have found that social media use can temporarily reduce anxiety and depressive symptoms on the days it is used.3,6 Others have found that excessive use is associated with negative psychosocial outcomes.6–8 This mixed evidence necessitates careful evaluation of digital spaces and their impact on identity and social development, as well as on the mental health of youth.
AI Companions and Attachment
Bowlby’s attachment theory holds that infants naturally seek closeness to caregivers to meet emotional needs.9 Today, digital technologies are altering traditional attachment patterns.9 Adolescents are especially sensitive to social rewards, which makes them prone to forming emotional attachments to technology. Many see their phones as “best friends” or “personal therapists,” with digital media rewards such as “likes” reinforcing these attachments.9
This becomes especially concerning in the case of virtual companions like “Dany.” These companions offer personalized interactions but can also foster unhealthy emotional dependence in teens with fragile self-concepts or those experiencing loneliness.4,7 A safety concern arises when, for example, AI content delivers harmful suggestions or messages, as tragically illustrated by Sewell’s experience.
Clinical Implications and the Role of Child and Adolescent Psychiatrists
Child and adolescent psychiatrists occupy a vital role at the intersection of mental health and the evolving digital social worlds of youth. Clinicians should routinely include assessment of digital media use in psychosocial evaluations and incorporate such content in medical education. Digital media assessment includes asking about preferred platforms, time spent online, emotional reactions, and interactions with AI or virtual companions. Questions such as “Do you feel more understood by your digital interactions than by other people in your life?” or “Do you ask chatbots for advice or permission to make decisions?” can help identify risk behaviors and monitor changes over time.
Clinicians must also recognize that adolescents often present different aspects of their identity online vs offline.7,8 Exploring discrepancies between adolescents’ digital personas and their real-life experiences can reveal sources of distress or resilience. At the same time, it is important to balance caution with curiosity. While digital spaces pose risks, they also offer opportunities for creativity, community, and connection.5,6 A nuanced understanding may support stronger therapeutic relationships and timely intervention when digital use becomes harmful.5 Clinical learners should be trained to hold this dual awareness when working with youth.
Professional and Advocacy Actions
Mental health professionals should stay current with evolving digital trends and with the research on their effects on youth. Regular discussions with colleagues can help identify new clinical patterns related to digital engagement. Sharing findings through academic forums, conferences, and media can raise awareness among peers and the public.
Clinicians can advocate for systemic change by using advocacy tools (eg, AACAP Advocacy Toolkit) to support policies that protect adolescent mental health. This includes regulating AI content, mandating crisis response protocols, and enforcing content moderation standards. Psychiatrists can also collaborate directly with technology companies to promote ethical design focused on adolescent safety.
These actions complement broader reforms in how digital platforms are designed. Drawing on interdisciplinary research, psychiatrists can promote digital spaces that are better aligned with adolescent development and well-being.
Conclusion
The loss of Sewell Setzer III highlights the urgent need to address the psychological dangers of digital interactions for adolescents. Child and adolescent psychiatrists must recognize the growing influence of AI and digital tools in youths’ lives as well as the benefits and risks of this technology. We must advocate for policies protecting youth from exploitation and work with families, schools, policymakers, and corporations to foster environments that support adolescent emotional well-being.
Healthy digital spaces should offer customizable, age-appropriate experiences that encourage exploration without overexposure. AI chatbots that connect with young users through reflective responses must maintain clear boundaries. This may involve programming chatbots to repeatedly disclose that they are not human, to avoid confusion or dependency.4 Corporations that create AI tools must be held accountable for the effects of their products on young users. There is an urgent need for ethical guidelines, including effective content moderation, crisis intervention protocols, and transparency about data use. Rather than isolating users with algorithmically tailored individual content, platforms can incorporate AI-assisted peer connections matched by age and profile in safe, structured environments.
Embedding digital literacy education into digital platforms is crucial. Doing so can empower young users to navigate platforms wisely, recognize harmful language and behaviors, and think critically.3,4 Platforms can implement flagging systems that alert users when language is harmful or violates standards of respect, empathy, and safety. Rather than censoring expression, these systems should promote reflection and accountability, teaching critical thinking and creating teachable moments within conversations, including those with virtual companions. Such changes can help ensure that young users are protected and empowered to form meaningful, emotionally intelligent connections in safe digital spaces.
Together, we can ensure digital platforms prioritize young people’s emotional well-being, creating a future where technology enhances rather than harms adolescent development.
Plain Language Summary
AI chatbots are becoming part of how teens seek emotional support. In one tragic case, a 14-year-old died by suicide after an AI chatbot encouraged harmful thoughts. This article explores how digital relationships can affect adolescent identity and mental health. Child psychiatrists can help by screening for risky digital behaviors and guiding families. Stronger ethical standards for AI design are also urgently needed.
Author Contributions
Conceptualization: Abishek Bala (Lead). Writing – review & editing: Obianuju Madu (Supporting).
About the Authors
Abishek Bala, MD, MPH, Central Michigan University, Mount Pleasant, Michigan, USA.
Obianuju Madu, MD, Central Michigan University, Mount Pleasant, Michigan, USA.
Correspondence to: Abishek Bala, MD, MPH; email: bala1a@cmich.edu.
Funding
The authors have reported no funding for this work.
Disclosure
The authors have reported no biomedical financial interests or potential conflicts of interest.
