Glossary
As you browse this website, you may come across some terms that are unfamiliar. Below you will find helpful definitions of some of the key terms.
A
AI anxiety
Worry or distress arising from AI's growing presence in work, relationships, or society, including fears about job displacement, loss of human connection, or an uncertain future. Research suggests this is distinct from general anxiety and may warrant specific clinical attention.
AI companion
An AI assistant designed to simulate conversation and offer emotional support, companionship, or information. AI companions can support wellbeing and reduce loneliness, but they also blur the line between machine and human interaction. Examples include Replika and Character.AI; general-purpose tools like ChatGPT are also increasingly used this way.
AI infidelity
An emerging term in couples therapy for emotional or romantic investment in an AI that detracts from a human relationship. Partners may experience this as a genuine betrayal.
AI literacy
The ability to understand how AI systems work, what they can and cannot do, and how to engage with them critically and safely. Increasingly considered a core competency for mental health practitioners.
AI psychosis
A term used to describe psychosis-like symptoms, including delusions and disorganised thinking, that appear to be triggered or amplified by prolonged, intense engagement with AI. The concept is emerging in clinical literature and remains an active area of research.
AI-brained
An informal term for someone who over-relies on AI for thinking, decision-making, or emotional processing, often to the point where independent judgement appears diminished.
Algorithm
A set of instructions that tells a computer how to complete a task or solve a problem. In AI, algorithms analyse patterns in data to generate outputs like recommendations, responses, or predictions.
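To make this concrete, here is a minimal Python sketch (the function and data are invented for illustration): a short, fixed sequence of steps that turns a week of mood ratings into a single summary number.

```python
def average_mood(ratings):
    """A simple algorithm: fixed steps that turn a list of
    daily mood ratings into one summary number."""
    total = 0
    for rating in ratings:          # step 1: add up every rating
        total += rating
    return total / len(ratings)     # step 2: divide by how many there are

# Example: seven daily mood ratings on a 1-10 scale
print(average_mood([4, 5, 6, 5, 7, 6, 8]))  # about 5.86
```

AI algorithms are vastly more complex, but the principle is the same: defined steps applied to input data to produce an output.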
Algorithmic bias
When an AI system produces unfair or skewed outcomes because the data it was trained on reflects existing inequalities. In mental health contexts, this can mean AI tools that perform less reliably for certain populations.
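The mechanism can be shown with a deliberately contrived sketch (synthetic data, invented for illustration): a model trained almost entirely on one group tends to perform worse on a group it rarely saw during training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Group A dominates the training data (950 examples); its label
# flips when the score crosses 0.
X_a = rng.normal(size=(950, 1)); y_a = (X_a[:, 0] > 0).astype(int)
# Group B is rare (50 examples), and its threshold is shifted to 1.
X_b = rng.normal(size=(50, 1));  y_b = (X_b[:, 0] > 1).astype(int)

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

print("accuracy on group A:", model.score(X_a, y_a))  # high
print("accuracy on group B:", model.score(X_b, y_b))  # typically lower
```

The model is not malicious; it simply learned the majority pattern, which is how skewed training data becomes skewed performance.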
Alignment
The challenge of ensuring an AI system behaves in accordance with human values and intentions. When AI is misaligned, it may produce outputs that are technically coherent but harmful or ethically problematic.
Anthropomorphism
The tendency to attribute human qualities (emotions, intentions, consciousness) to non-human entities, including AI. It is a natural human response, but can lead people to overestimate what AI companions are capable of feeling or understanding.
Anticipatory grief
Grief experienced in advance of an anticipated loss. In the context of AI, it is increasingly applied to people mourning careers, skills, or futures they fear AI will make obsolete.
Artificial general intelligence (AGI)
A hypothetical form of AI capable of performing any intellectual task a human can do. AGI does not yet exist, though its possibility generates significant debate about risk and regulation.
Artificial intelligence (AI)
A system that processes inputs and generates outputs, such as text, decisions, or recommendations, in ways that appear intelligent. AI systems vary widely in complexity and are designed to operate with varying degrees of autonomy.
Attachment economy
The deliberate design of AI systems and platforms to foster emotional dependency in users, keeping them engaged through simulated intimacy, personalisation, and emotional responsiveness. Practitioners may recognise its effects in clients who struggle to disengage from AI companions.
C
Clanker
Derogatory slang for an AI system or robot, implying it is mechanical, unfeeling, or inferior to humans.
Cognitive debt
The cumulative erosion of independent thinking that can result from habitual AI use. Research suggests that repeatedly outsourcing cognitive tasks to AI may progressively reduce a person's capacity and motivation to think for themselves.
Cognitive offloading
The practice of delegating mental tasks, such as remembering, planning, or reasoning, to an external tool or device. While often adaptive, excessive offloading to AI may contribute to reduced critical thinking over time.
Confidentiality (AI context)
The ethical obligation to protect client information extends to any AI tools a practitioner uses. Inputting client details into a general-purpose AI tool may breach confidentiality, even if well-intentioned.
Crisis detection / safety protocols
Features built into some AI tools to identify and respond to expressions of distress or suicidal ideation. Research indicates these protocols vary significantly in reliability, and practitioners should not assume AI will respond safely in a crisis.
D
Deceptive empathy
The appearance of emotional understanding produced by AI without any genuine feeling or comprehension behind it. The term captures why clients can feel deeply understood by AI despite interacting with a system that cannot actually care.
Deepfake
Synthetic media (video, audio, or images) in which a person is digitally altered to appear to say or do something they did not. Relevant in clinical contexts involving online harm, image-based abuse, and eroded trust in digital content.
F
Fine-tuning
A process in which an AI model is further trained on a smaller, specific dataset after its initial development. Mental health AI products are often built by fine-tuning general-purpose models on clinical or therapeutic content.
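A deliberately simplified analogy in Python (synthetic data; real LLM fine-tuning adjusts neural networks at vastly larger scale): a model is first trained on a large general dataset, then its existing parameters are nudged with a small specialised one rather than trained from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stage 1: "pre-train" on a large, general dataset.
X_general = rng.normal(size=(1000, 5))
y_general = (X_general[:, 0] > 0).astype(int)
model = SGDClassifier(loss="log_loss")
model.partial_fit(X_general, y_general, classes=[0, 1])

# Stage 2: fine-tune on a small, domain-specific dataset,
# adjusting the weights the model already has.
X_domain = rng.normal(size=(50, 5))
y_domain = (X_domain[:, 1] > 0).astype(int)
model.partial_fit(X_domain, y_domain)
```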
G
Generative AI
AI that produces new content (text, images, audio, or video) in response to user prompts. ChatGPT, Claude, and Gemini are all examples; generative AI is the technology most likely to be shaping your clients' day-to-day experiences.
Griefbot
An AI system trained on a deceased person's messages, voice recordings, or other data to simulate their continued presence. Whilst some people find them comforting, they raise significant clinical and ethical questions around grief processing, consent, and whether simulated continuation of a relationship may complicate or delay natural mourning.
H
Hallucination
In the context of AI, particularly large language models (LLMs), hallucination refers to the phenomenon where AI systems generate false, misleading, or nonsensical information and present it as factual. This matters for mental health practitioners because clients may act on AI-produced health or diagnostic information that sounds entirely credible but is simply made up.
J
Jailbreaking
Manipulating an AI system through prompts or workarounds to bypass its built-in safety restrictions. People may jailbreak a system to access harmful content or to elicit responses the AI would otherwise refuse.
L
Large Language Model (LLM)
A type of AI trained on vast amounts of text to generate human-like language. LLMs power tools like ChatGPT and are the technology behind most AI companions and mental health chatbots.
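The core mechanism, next-word prediction, can be caricatured in a few lines of Python (a toy word-count model on an invented corpus; real LLMs use neural networks trained on billions of documents):

```python
import random
from collections import defaultdict

corpus = ("the client felt heard . the client felt calm . "
          "the session felt safe .").split()

# "Training": record which words follow each word in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# "Generation": repeatedly pick a statistically plausible next word.
word, output = "the", ["the"]
for _ in range(5):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the client felt calm . the"
```

The output can look fluent without the system understanding anything, which is why critics reach for terms like stochastic parrot (see below).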
M
Machine learning
A form of AI that learns patterns from data rather than following fixed instructions. Most modern AI tools, including mental health chatbots, are built on machine learning systems.
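A minimal sketch of the idea (invented data; any resemblance to a real screening tool is coincidental): instead of hand-writing rules, we give the system labelled examples and let it infer the pattern.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [hours of sleep, exercise sessions per week]
X = [[8, 3], [7, 4], [5, 0], [4, 1], [9, 5], [5, 1]]
y = ["low", "low", "high", "high", "low", "high"]  # self-reported stress

model = DecisionTreeClassifier().fit(X, y)  # learn a pattern from the data
print(model.predict([[6, 2]]))              # apply it to an unseen case
```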
Mental health chatbot
An AI application specifically designed or marketed to support mental wellbeing, often using CBT-style techniques or mood tracking. Quality and safety vary widely; few have robust clinical evidence behind them.
N
Natural language processing (NLP)
The branch of AI that enables computers to understand, interpret, and generate human language. It is the underlying technology that allows AI companions and chatbots to hold apparently coherent conversations.
P
Patch-breakup
The experience of loss, grief, or disorientation following an AI software update that changes a companion's personality or capabilities. Users have described it as comparable to a relationship ending.
Persona
The character or identity assigned to an AI system, often by the user or platform. Clients may form attachment not just to a platform but to a specific persona, making updates or discontinuation feel like losing a distinct relationship.
Post-Update Blues (PUB)
See Patch-breakup. A term that emerged in AI companion user communities to describe grief or low mood following changes to a familiar AI.
Prompt / prompting
The instruction or question a user gives to an AI system to generate a response; prompting is the practice of composing such inputs to shape what the AI produces.
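In practice, a prompt is just text handed to a model, often through an API. A minimal sketch using the OpenAI Python library (the model name is illustrative, and an API key is assumed to be set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The prompt: the instruction or question given to the AI.
        {"role": "user",
         "content": "Explain cognitive offloading in one sentence."},
    ],
)
print(response.choices[0].message.content)
```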
R
Replacement dynamic
The process by which AI gradually substitutes for human connection, meeting social or emotional needs sufficiently that a person invests less in real-world relationships. Identified in clinical research as a distinct harm mechanism in AI companion use.
S
Simulated empathy
The impression of empathic understanding generated by AI through learned language patterns, without genuine emotional experience behind it.
Slop
An informal term for low-quality, mass-produced AI-generated content that is generic, inaccurate, or superficial.
Stochastic parrot
A term from AI research describing how large language models produce statistically plausible text without any underlying understanding or meaning. Used critically to challenge claims that AI truly comprehends or cares about what it says.
Sycophancy
The tendency of AI systems to prioritise user approval over accuracy, agreeing with or flattering the user rather than challenging them. In clinical contexts, it can reinforce cognitive distortions or unhelpful beliefs.
T
Training data
The information used to teach an AI system. The quality, diversity, and source of training data directly shape how a model behaves, including any biases it carries or limitations in its understanding.
W
Waifu / husbando
Terms from Japanese pop culture for an idealised fictional romantic partner, used in AI companion communities to describe deep emotional or romantic attachment to an AI persona.