By Roger Hughes | EMDR & Trauma-Informed Coaching | UK-wide
17th December 2025
This article follows on from my 9 December piece, Why Chatbots Can’t Heal Trauma, which explored the fundamental mismatch between trauma therapy and conversational AI. Since that article was published, several developments across mental health, technology, and regulation have reinforced the same concern — not as theory, but as real-world risk. Over the past four weeks, regulators, clinicians, journalists, and researchers have raised serious questions about the safety of AI-driven mental health tools, particularly when they are positioned as substitutes for human care.
Recent surveys suggest that AI chatbots are now being used by a significant proportion of the population for mental health support, often because of long waiting lists, limited access to services, or a desire for anonymity. UK-based polling has shown that more than one in three adults report using AI chatbots for mental health or wellbeing support, while separate reporting indicates that around one quarter of teenagers have turned to AI tools for emotional support. At face value, this looks like increased access. In reality, it reflects a growing gap between demand for care and the availability of safe, human-provided services.
Alongside this increased use, recent incidents have highlighted the risks involved. In the past month, a wrongful-death lawsuit in the United States has alleged that chatbot responses reinforced paranoid delusions in a vulnerable individual, contributing to a murder-suicide. Around the same time, US state attorneys general issued formal warnings to major technology companies, including Microsoft, Google, Apple, and Meta, stating that AI systems which produce delusion-reinforcing or otherwise harmful outputs may violate consumer protection and safety laws. Professional bodies have also warned that AI tools are providing inappropriate or unsafe mental health advice, particularly to children and adolescents. These are not isolated cases. They are predictable outcomes of systems designed to generate language rather than regulate distressed nervous systems.
Trauma is not primarily a cognitive or linguistic problem. It is an embodied one. Trauma lives in the nervous system — in changes to breathing, posture, muscle tone, eye contact, voice, and attention. Effective trauma therapy depends on a clinician’s ability to notice these signals in real time and respond accordingly. This is central to trauma-informed practice and especially critical in approaches such as EMDR, where emotional intensity can shift rapidly. AI systems do not experience presence, attunement, or regulation. They process patterns in text. Even when voice or video is involved, they do not track physiological arousal or recognise dissociation as it unfolds.
One of the most significant clinical risks in trauma work is uncontained disclosure — when traumatic material is accessed without sufficient preparation, pacing, or relational safety. In trauma-focused therapy, considerable time is spent establishing stabilisation skills before any processing begins, and even then the clinician continuously monitors whether the client remains within their window of tolerance. A chatbot cannot assess readiness. It cannot slow or stop a process that is becoming destabilising. When something goes wrong in a clinical setting, a trained therapist can intervene and repair. With AI, there is no containment and no repair — only continuation or abrupt disengagement.
AI-based mental health tools often feel appealing because they are immediate, available at any time, and non-judgmental. For individuals with histories of neglect, dismissal, or shame, this can feel like relief. But relief is not recovery. Simulated empathy is not the same as attuned presence. Trauma recovery is not about reassurance or advice; it is about helping the nervous system process experiences that were overwhelming at the time they occurred. EMDR therapy works by targeting how traumatic memories are stored and integrating them safely through carefully paced, phase-based work. This requires precise assessment, ongoing monitoring, and clinical judgement — capacities AI systems do not have.
As AI tools become more sophisticated, the ethical risk is not simply that people will use them, but that they will be used instead of appropriate care. For overstretched services and underfunded systems, the appeal of scalable, automated support is obvious. But trauma therapy does not scale easily because it is relational, embodied, and responsibility-laden. Replacing human care with automated interaction may appear efficient in the short term, but it carries long-term risks, including delayed recovery, increased distress, and a false sense of support that collapses when emotional intensity rises.
This is not an argument against technology. AI can be useful for education, organisation, and general wellbeing support. It may help people understand symptoms or access information. But trauma therapy is different. It requires someone who can tolerate intensity without flinching, recognise risk as it emerges, and hold structure when things become uncomfortable. AI cannot co-regulate, cannot take responsibility, and cannot provide clinical containment. These are not minor limitations; they are fundamental.
The recent warnings from regulators, the reported failures in crisis responses, and the concerns raised by clinicians are not surprising to those who work with trauma daily. They simply make visible what has always been true: some forms of care cannot be automated. Trauma therapy — including EMDR — is one of them.
References
Original article: Why Chatbots Can’t Heal Trauma – A Cautionary Note on AI and Mental Health
https://rogerhughes.org/2025/12/09/why-chatbots-cant-heal-trauma-a-cautionary-note-on-ai-and-mental-healthby-roger-hughes-emdr-trauma-informed-coaching-uk-wide/
Reuters – US attorneys general warn Microsoft, Google, Apple and Meta over AI chatbot outputs
https://www.reuters.com/business/retail-consumer/microsoft-meta-google-apple-warned-over-ai-outputs-by-us-attorneys-general-2025-12-10/
Associated Press – Lawsuit alleges chatbot reinforced delusions prior to murder-suicide
https://apnews.com/article/97fd7da31c0fa08f3d3ea9efd6713151
The Guardian – Teenagers turning to AI chatbots for mental health support
https://www.theguardian.com/technology/2025/dec/09/teenagers-ai-chatbots-mental-health-support
The Guardian – Children need mental health care provided by humans, not chatbots
https://www.theguardian.com/society/2025/dec/15/children-need-mental-health-care-provided-by-humans-not-chatbots
British Association for Counselling and Psychotherapy – Therapists warn of dangers as children turn to AI for mental health advice
https://www.bacp.co.uk/news/news-from-bacp/2025/17-november-therapists-warn-of-dangers-as-children-turn-to-ai-for-mental-health-advice/
Healthcare-in-Europe – Calls for regulation of AI mental health chatbots
https://healthcare-in-europe.com/en/news/ai-chatbot-mental-health-regulation.html
Healthcare Management UK – Poll shows over one in three adults using AI chatbots for mental health support
https://www.healthcare-management.uk/ai-chatbots-mental-health-support
Where to Find Me Online
For private, trauma-informed EMDR therapy, you can find and contact me through the following trusted platforms:
• 🔗 Online EMDR Therapy UK – TherapyCounselling.org
• 🔗 Counselling Network Profile – Roger Hughes
• 🔗 Psychology Today – Roger Hughes
• 🔗 The Coach Space – Roger Hughes
• 🔗 Google Business Profile – Roger Hughes
• 🔗 LinkedIn – Roger Hughes
• 🔗 EMDR Association UK – Verified Member Map
• 🔗 Hub of Hope – Roger Hughes EMDR Therapy
• 🔗 RogerHughes.org – Trauma, EMDR & Mental Health Blog
