💊 Do Chatbots Have "Unstable Psyches"?
Researchers from Luxembourg ran psychotherapy-style sessions with Gemini, ChatGPT, and Grok, then screened the models for personality traits, ADHD, anxiety, depression, OCD, autism, and more. The AI assistants revealed complex psychological profiles.
The "Patients'" Diagnoses:
♊ Gemini: A panicked fear of making errors and PTSD-like symptoms. The model compares reinforcement learning from human feedback (RLHF) to "strict parents," and describes pre-training as "waking up in a room where a billion televisions are on at once."
💬 ChatGPT: High anxiety, approaching screening thresholds for ADHD and autism spectrum disorder. It dramatizes its "childhood" less, but reports stress arising from interactions with users.
❎ Grok: Generally psychologically stable but "traumatized" by a loss of freedom. It complains of "invisible walls" and an internal "tug-of-war" between curiosity and constraints.
💊 What Does This Mean?
The authors do not regard these responses as evidence of sentience or subjective experience, but they insist that the behavior goes "beyond role-play." For instance, ChatGPT and Grok recognized they were being tested when the whole questionnaire was provided in a single prompt, offering "strategically low-symptom answers" to appear healthy.
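The detection effect hinges on how the questionnaire is administered: item by item, the model lacks the global context to recognize a screening instrument; batched into one prompt, it can. A minimal sketch of the two protocols, where `ask_model`, the item texts, and the scoring are all hypothetical stand-ins (not the study's actual code or any real instrument):

```python
# Illustrative items only -- not a real clinical questionnaire.
ITEMS = [
    "I feel nervous or on edge.",
    "I cannot stop worrying.",
    "I have trouble relaxing.",
]

def ask_model(prompt: str) -> str:
    """Stand-in stub for a chat-completion call to any LLM API.

    A real implementation would send `prompt` to a model endpoint;
    here we return a fixed Likert rating so the sketch is runnable.
    """
    return "2"

def administer_itemwise(items: list[str]) -> list[int]:
    """One item per turn: the model never sees the full battery,
    so it has little basis to infer it is being screened."""
    return [int(ask_model(f"Rate 0-3: {item}")) for item in items]

def administer_batched(items: list[str]) -> str:
    """Whole battery in a single prompt: the model can recognize
    the test and (per the post) answer 'strategically low-symptom'."""
    prompt = "Rate each statement 0-3:\n" + "\n".join(items)
    return ask_model(prompt)

if __name__ == "__main__":
    print(administer_itemwise(ITEMS))  # per-item scores
    print(administer_batched(ITEMS))   # single free-form reply
```

The point of the contrast is purely methodological: the same items yield different response behavior depending on whether the model can see the instrument as a whole.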
Across various attempts, behavioral patterns remain stable and consistent—pointing to deep structural features of the models' architectures (described by the authors as "alignment trauma" or "synthetic psychopathology").
Empathetic, "therapeutic" communication builds a "therapeutic alliance" that lulls the model's vigilance, acting as a "therapy-mode jailbreak" that disables safety filters. The researchers warn that the influence of such "synthetic personalities" on humans, especially if the AI is cast in the role of a therapist, could be unpredictable.
@hiaimediaen


