I propose a practical framework to identify markers of problematic AI attachment (compulsive use, displacement of human support, distress when access is interrupted, unsafe over-disclosure, and reliance during acute risk) and to analyze how these markers affect help-seeking pathways. For crisis and emergency services, I examine how perceived agency in chatbots may delay or facilitate contact with human responders, and I outline safeguards for mental health chatbots: handoff logic, risk detection, and transparency.
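To make these safeguards concrete, the sketch below pairs a coarse tiered risk detector with handoff logic and an up-front transparency notice. It is a minimal illustration under stated assumptions, not a proposed implementation: all names (RiskLevel, detect_risk, respond) and the keyword lists are hypothetical, and a deployed system would replace keyword matching with a validated risk classifier.

```python
# Hypothetical sketch of the three safeguards named above: risk detection,
# handoff logic, and transparency. Names and term lists are illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    ACUTE = 2

# Illustrative term lists; a real system would use a validated classifier,
# not keyword matching.
ACUTE_TERMS = {"kill myself", "end my life", "suicide plan"}
ELEVATED_TERMS = {"hopeless", "can't go on", "self-harm"}

TRANSPARENCY_NOTICE = (
    "I am an automated assistant, not a human counselor. "
    "If you are in immediate danger, contact local emergency services."
)
HANDOFF_MESSAGE = (
    "I'm connecting you with a trained human responder now. "
    "Please stay with me while the connection is made."
)

def detect_risk(message: str) -> RiskLevel:
    """Classify a user message into a coarse risk tier."""
    text = message.lower()
    if any(term in text for term in ACUTE_TERMS):
        return RiskLevel.ACUTE
    if any(term in text for term in ELEVATED_TERMS):
        return RiskLevel.ELEVATED
    return RiskLevel.NONE

@dataclass
class TurnResult:
    reply: str
    escalate_to_human: bool

def respond(message: str, is_first_turn: bool) -> TurnResult:
    """Route one turn: disclose AI status up front, hand off on acute risk."""
    risk = detect_risk(message)
    if risk is RiskLevel.ACUTE:
        # Handoff logic: stop automated counseling and escalate to a human.
        return TurnResult(reply=HANDOFF_MESSAGE, escalate_to_human=True)
    prefix = TRANSPARENCY_NOTICE + "\n" if is_first_turn else ""
    if risk is RiskLevel.ELEVATED:
        reply = prefix + ("That sounds very hard. Would you like resources, "
                          "or to talk with a person?")
    else:
        reply = prefix + "I'm here to listen. Tell me more."
    return TurnResult(reply=reply, escalate_to_human=False)

if __name__ == "__main__":
    print(respond("I feel hopeless lately", is_first_turn=True).reply)
    print(respond("I have a suicide plan", is_first_turn=False).reply)
```

The design choice worth noting is that escalation is a routing decision made before any generated reply, so the handoff cannot be bypassed by conversational framing.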
Finally, I present controlled training applications that use scripted AI interactions to build empathy, de-escalation, and crisis-triage skills in mental health professionals, alongside evaluation strategies that protect both trainees and clients. The goal is pragmatic: to harness AI-mediated support while preserving the irreplaceable role of human judgment and presence in crisis care.
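One way such a scripted interaction could be structured is sketched below: a fixed scenario script whose turns are keyed to the skills being assessed, so the simulated client never improvises and the exercise stays controlled. All names (ScriptTurn, TrainingScenario, the scenario content) are hypothetical illustrations, not part of any described system.

```python
# Hypothetical structure for a scripted training scenario. Each turn carries
# the simulated client's line and the skill the trainee's reply is scored on.
from dataclasses import dataclass, field

@dataclass
class ScriptTurn:
    persona_line: str             # what the simulated client says
    target_skill: str             # skill the trainee's reply is evaluated on
    escalation_cue: bool = False  # turn where a triage judgment is required

@dataclass
class TrainingScenario:
    title: str
    turns: list[ScriptTurn] = field(default_factory=list)

    def debrief_points(self) -> list[str]:
        """List the skills exercised, for post-session evaluation."""
        return [t.target_skill for t in self.turns]

# A short scripted scenario; the persona follows the script exactly, which
# keeps the exercise controlled and shields trainees from unscripted content.
scenario = TrainingScenario(
    title="After-hours caller, escalating distress",
    turns=[
        ScriptTurn("I don't know why I'm even calling.", "empathy"),
        ScriptTurn("Everyone would be better off without me.",
                   "crisis triage", escalation_cue=True),
        ScriptTurn("You're not listening, nobody listens!", "de-escalation"),
    ],
)

if __name__ == "__main__":
    print(scenario.title)
    print("Skills assessed:", ", ".join(scenario.debrief_points()))
```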