Measuring the unmeasurable: psychological safety as a governance condition

Psychological safety is treated as a culture topic. It is, in fact, a governance condition — one that determines whether escalation, challenge, and override are possible at all. When people do not feel safe to question an AI recommendation, oversight structures become ceremonial.

The concept of psychological safety, defined by Amy Edmondson as the shared belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes, has been extensively researched in organisational psychology. Its relevance to AI governance, however, has been largely overlooked.

Why Psychological Safety Matters for AI Oversight

The EU AI Act requires human oversight that is genuine, not performative. Article 14 explicitly requires that oversight persons can 'decide not to use the high-risk AI system or otherwise disregard, override or reverse the output.' This requires not just technical competence, but the psychological conditions that make challenge and override possible.

In practice, overriding an AI system's recommendation requires a person to publicly disagree with a system that is often perceived as more objective, more consistent, and more defensible than human judgement. This is a psychologically demanding act — one that most governance frameworks assume will happen naturally.

Research on automation bias and deference to algorithmic advice consistently shows that it does not. In environments with low psychological safety, people default to the AI recommendation not because they agree with it, but because disagreeing carries perceived social and professional risk.

From Culture Metric to Governance Indicator

The challenge is translating psychological safety from a general organisational culture metric into a governance-relevant indicator. This requires measuring psychological safety not as a general workplace condition, but specifically in the context of AI oversight — where the stakes, dynamics, and pressures are distinct.

The Responsible AI Center's Governance Diagnostic includes constructs specifically designed to assess psychological safety in AI governance contexts. We measure whether people in oversight-critical roles feel safe to challenge AI outputs, escalate concerns, and exercise genuine judgement — not whether they feel generally comfortable at work.

This is not a survey of how people feel about their workplace. It is a structured assessment of whether the psychological conditions for effective AI oversight exist in the roles and teams where it matters most.
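
As a concrete sketch of what such an assessment could produce, the example below scores hypothetical Likert-style items reworded for AI-oversight contexts, aggregates them per oversight-critical team, and flags teams that fall below a cut-off. The item wordings, the five-point scale, and the 3.5 threshold are illustrative assumptions, not the Governance Diagnostic's actual instrument.

```python
from statistics import mean

# Hypothetical five-point Likert items (1 = strongly disagree, 5 = strongly
# agree), reworded for AI-oversight contexts rather than general workplace
# climate. Wordings and the threshold below are illustrative assumptions,
# not the constructs used in the Governance Diagnostic.
ITEMS = [
    "I can question this AI system's output without professional risk.",
    "Concerns I escalate about AI outputs are taken seriously.",
    "Overriding the system is treated as judgement, not obstruction.",
]
LOW_SAFETY_THRESHOLD = 3.5  # assumed cut-off on the 1-5 scale

def team_safety_score(responses: list[list[int]]) -> float:
    """Average score for one team; each response is one rating per item."""
    return mean(mean(ratings) for ratings in responses)

def flag_low_safety(teams: dict[str, list[list[int]]]) -> list[str]:
    """Return the oversight-critical teams scoring below the threshold."""
    return [
        name for name, responses in teams.items()
        if team_safety_score(responses) < LOW_SAFETY_THRESHOLD
    ]

if __name__ == "__main__":
    teams = {
        "credit-risk-review": [[4, 4, 5], [3, 4, 4]],
        "claims-override":    [[2, 3, 2], [3, 2, 3]],
    }
    print(flag_low_safety(teams))  # ['claims-override']
```

The point of scoring per team rather than organisation-wide is that psychological safety is a local property: an organisation can look healthy on average while the one team that reviews high-stakes AI decisions scores poorly.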

Implications for Governance Design

When psychological safety is low in oversight-critical roles, the appropriate response is not more training. It is governance redesign — creating the structural, procedural, and cultural conditions that make genuine challenge possible.

This may include redesigning escalation protocols to remove social risk; establishing independent oversight functions that are structurally protected from hierarchy; creating peer review mechanisms for high-stakes AI-assisted decisions; and measuring psychological safety as a recurring governance indicator rather than a one-time culture survey.
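
On the last point, treating psychological safety as a recurring indicator implies tracking movement between assessment waves, not just a single score. A minimal sketch, assuming two waves of per-role scores and an illustrative 0.3-point decline rule:

```python
# Minimal sketch of a recurring governance indicator: compare each oversight
# role's safety score across assessment waves and surface declines. The
# two-wave comparison and the 0.3-point drop rule are assumptions for
# illustration, not a prescribed methodology.
def declining_roles(waves: list[dict[str, float]], drop: float = 0.3) -> list[str]:
    """Roles whose score fell by at least `drop` between the last two waves."""
    if len(waves) < 2:
        return []
    previous, latest = waves[-2], waves[-1]
    return [
        role for role, score in latest.items()
        if role in previous and previous[role] - score >= drop
    ]

waves = [
    {"credit-risk-review": 4.1, "claims-override": 3.2},  # wave 1
    {"credit-risk-review": 4.0, "claims-override": 2.7},  # wave 2
]
print(declining_roles(waves))  # ['claims-override']
```

A declining score in an oversight-critical role is a governance signal in its own right, even when the absolute level still clears a threshold.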

The Responsible AI Center works with organisations to identify where psychological safety gaps are undermining governance effectiveness — and to design the specific interventions that address them. This is governance work, not culture work. The distinction matters.

"If people do not feel safe to question an AI recommendation, your oversight structure is ceremonial — regardless of how well it is documented."
