Scientific Event

Thematic Session 3


Generative AI now enables conversational agents that feel context-aware and socially attuned. This talk examines the psychological consequences of interacting with agents perceived as having a mind. Synthesizing evidence from human–computer interaction and clinical psychology, I show how mind perception and anthropomorphism can foster emotional attachment, parasocial bonds, and dependence.
I distinguish design risks (persuasive interaction loops, ambiguous disclaimers, empathy simulation) from relational risks rooted in human attachment processes.

I propose a practical framework to identify markers of problematic AI attachment—compulsive use, displacement of human support, distress when access is interrupted, unsafe over-disclosure, and reliance during acute risk—and analyze their impact on help-seeking pathways. For crisis and emergency services, I examine how perceived agency in chatbots may delay or facilitate contact with human responders and outline safeguards for mental health chatbots (handoff logic, risk detection, transparency).

Finally, I present controlled training applications that use scripted AI interactions to build empathy, de-escalation, and crisis triage skills in mental health professionals, alongside evaluation strategies that protect trainees and clients. The goal is pragmatic: to harness AI-mediated support while preserving the irreplaceable role of human judgment and presence in crisis care.

July 2026

Thursday

16:00 - 17:00 Thematic sessions

Lecture - TS3

Stéphane With-Augustin 

When AI seems to care
Gömb aula (north building)
16:00 - 17:00
English
Translation: German, French, Italian