Responsible AI Coaching

International guidelines for safe, evidence-based AI wellbeing tools

WHO — World Health Organization (Geneva, Switzerland)

NICE — National Institute for Health and Care Excellence (London, United Kingdom)

The Research

The World Health Organization (2023) and the UK's National Institute for Health and Care Excellence (NICE, 2022) both provide frameworks for responsible AI in wellbeing contexts. These guidelines establish clear boundaries between coaching and therapy, require evidence-based communication approaches, and mandate transparency about AI limitations. Flank's coaching approach is designed to operate within these frameworks.

In Plain English

There are real risks when AI systems try to provide mental health support. The WHO and NICE — two of the world's most respected health authorities — have published guidelines for how AI tools should behave in wellbeing contexts. The key principles: be transparent about what you are and aren't, use evidence-based techniques, don't try to replace professional therapy, and always prioritise the user's safety. These aren't optional nice-to-haves — they're the baseline for responsible AI coaching.

Key Findings

AI is appropriate for journaling, reflection, and coaching — with robust safety guardrails

Stanford University (2025); WHO (2023)

Validates the coaching use case while drawing clear boundaries with therapy

Digital health technologies require evidence of efficacy and safety before deployment

NICE Evidence Standards Framework (2022)

The UK's gold standard for digital health evaluation

AI for health must be governed by principles of transparency, inclusivity, and accountability

WHO Ethics and Governance of AI for Health (2023)

International consensus on responsible AI in health contexts

Evidence-based, neutral communication is required — not persuasive or emotionally manipulative language

WHO (2023)

Directly supports anti-sycophancy and autonomy-preserving design

How Flank Applies This

Flank operates explicitly as a coaching tool, not a therapy replacement. The system is transparent about its nature as AI. Coaching techniques are drawn from evidence-based frameworks: Socratic questioning, motivational interviewing (MI), and solution-focused brief therapy (SFBT). The system maintains clear boundaries — when conversations indicate clinical-level distress, the coach directs users toward professional support rather than attempting to provide therapy. This is responsible design, aligned with the guidance of the world's leading health authorities.

References

  1. WHO (2023). Ethics and Governance of AI for Health.

  2. NICE (2022). Evidence Standards Framework for Digital Health Technologies.

  3. Stanford University (2025). Risks of AI therapy chatbots: hallucinations, AI psychosis, and suicide detection failures.

  4. Anthropic (2024). Responsible Scaling Policy and Constitutional AI.

See how Flank puts this into practice

Every coaching conversation is built on these research principles. Start for free and experience evidence-based AI coaching.