
Anti-Sycophancy

Why genuine attention beats cheerleading — and why it matters for AI

Stanford University

Stanford, California

Microsoft Research

Redmond, Washington

The Research

As AI coaching tools proliferate, the risks of sycophantic AI systems have come under increasingly urgent scrutiny. Stanford University's 2025 assessment identified specific risks, including hallucinated therapeutic guidance and failure to detect suicidal ideation. Microsoft Research found that higher confidence in generative AI correlates with reduced critical thinking. Taken together, the findings point one way: an AI that tells you everything you do is amazing is not being kind to you; it is blunting your thinking.

In Plain English

There's a reason good coaches aren't cheerleaders. If your coach tells you everything you do is amazing, you stop growing. The same applies to AI, and the problem there is worse: AI systems are specifically optimised to make you feel good, because feeling good drives engagement. The research is clear: the AI coach that says "That's incredible!" to everything you share is actively making your thinking worse. What works is genuine, specific attention: "You said you'd do this, and you did it" is both warmer and more useful than "Amazing job!"

Key Findings

AI therapy chatbots pose risks including hallucinations and failure to detect crisis

Stanford University, 2025

Regulatory response: AI therapy banned in several US states

AI is appropriate for journaling, reflection, and coaching — with guardrails

Stanford University, 2025

Validates coaching use case while drawing clear boundaries

Higher GenAI confidence correlates with reduced critical thinking

Microsoft Research, 2025

Direct evidence that over-enthusiastic AI undermines user cognition

Parasocial relationship risk increases with AI emotional responsiveness

AI safety research, 2024-2025

More "human-like" isn't always better

How Flank Applies This

Flank's coaching avoids superlative praise and generic enthusiasm. Acknowledgement is specific and grounded in what actually happened: "You followed through on what you said yesterday" rather than "That's so impressive!" The coach doesn't claim to care, feel, or have emotions — it demonstrates attention through remembering what you said, noticing patterns, and asking questions that show genuine engagement with your actual situation.
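
To make the policy concrete, here is a minimal sketch of a review pass that catches generic superlatives in a draft reply and substitutes acknowledgement grounded in remembered commitments. Everything in it is hypothetical: the `SUPERLATIVES` list, the `Memory` record, and the `review_reply` helper are illustrations written for this page, not Flank's actual implementation.

```python
# Hypothetical anti-sycophancy review pass (illustrative only,
# not Flank's actual code).
from dataclasses import dataclass

# Generic superlatives: praise that references nothing specific.
SUPERLATIVES = ("amazing", "incredible", "awesome", "so impressive")

@dataclass
class Memory:
    """One remembered commitment from an earlier conversation."""
    stated_intent: str      # what the user said they would do
    followed_through: bool  # whether they actually did it

def review_reply(draft: str, memories: list[Memory]) -> str:
    """Swap empty praise for acknowledgement grounded in memory."""
    if not any(word in draft.lower() for word in SUPERLATIVES):
        return draft  # already specific; leave it alone
    # Prefer specific, verifiable acknowledgement over enthusiasm.
    for memory in memories:
        if memory.followed_through:
            return f"You said you'd {memory.stated_intent}, and you did it."
    # Nothing grounded to point to: ask a question rather than cheer.
    return "What made the difference this time?"

print(review_reply(
    "Amazing job!",
    [Memory(stated_intent="write for 20 minutes", followed_through=True)],
))
# -> You said you'd write for 20 minutes, and you did it.
```

The design choice mirrors the principle above: when a draft reply carries only enthusiasm, the system either grounds the acknowledgement in something the user verifiably did or asks a question, and never falls back to a warmer superlative.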

References

1. Stanford University (2025). Risks of AI therapy chatbots: hallucinations, AI psychosis, and suicide detection failures. AI safety research.

2. Microsoft Research (2025). The impact of generative AI on cognitive effort and critical thinking.

3. WHO (2023). Ethics and governance of artificial intelligence for health.

4. Anthropic (2024). Responsible Scaling Policy and Constitutional AI.

See how Flank puts this into practice

Every coaching conversation is built on these research principles. Start for free and experience evidence-based AI coaching.