Tim Vanhove

Speaker

AI chatbots and helplines: opportunity or threat?

Pre-congress conference speech

What is conversational AI and how can chatbots like ChatGPT impact counselling helplines?
What are the legal and ethical implications of their possible use in helplines?

Video-recording of the speech (Pre-congress conference 14.11.2025)

Speech
As conversational AI tools become increasingly popular, chat counselling helplines face pressing questions about whether these technologies can be used responsibly. There is a clear need to consider the ethical foundations guiding such use, particularly the level of autonomy granted to AI chatbots. Little is known about the degree to which AI systems should interact with users independently, without human intervention, in chat counselling. Defining these ethical boundaries is crucial, particularly in sensitive contexts (such as abuse or suicide) involving vulnerable populations. In this session, we will discuss these ethical choices and their possible effects on users. We will show how ethical training can shape the behaviour of AI chatbots. The discussion on AI chatbot autonomy will prove to be a very human one.

Tim Vanhove

Tim Vanhove is a sociologist with more than 20 years of experience in research and development at the Social Work Department of Artevelde University of Applied Sciences in Ghent, Belgium.

His expertise lies in practice-based participatory research and the real-world piloting of digital technologies in the domains of well-being and care.

In recent years, he has focused on the potential role and impact of conversational AI in mental health and social work. In his current project on the (im)possibilities of integrating AI into chat-based counselling helplines, he explores the practical, ethical and legal boundaries of using conversational AI based on trained large language models.