August 2026 deadline approaching. Is your organisation ready to demonstrate human oversight capacity? Book a Discovery Conversation
EU AI Act — August 2026 Deadline

Your organisation is deploying AI. But can your people actually govern it?

The EU AI Act requires demonstrable human oversight of AI systems. Most organisations have no way to measure whether their people are psychologically ready to provide it. We change that.

Psychometrically validated
EU AI Act aligned
Grounded in psychological science
Amsterdam & Brussels
The Problem

Compliance programmes build processes. They don't build the people who run them.

Organisations across Europe are investing heavily in AI governance frameworks, policies, and training — yet remain fundamentally blind to whether their leaders and teams feel safe to challenge AI outputs, escalate concerns, or override systems when it matters.

Traditional maturity assessments tell you where you stand on a generic five-stage model. They don't tell you whether your people can exercise the judgement that the EU AI Act explicitly demands. The gap is not awareness. The gap is measurable readiness.

The problem is not that psychological and cultural factors are immeasurable. The problem is that they have not been translated into decision-relevant, governance-usable proxies.

Aug 2026
EU AI Act deadline for high-risk AI systems compliance — the clock is running
Art. 4, 14, 26
EU AI Act articles explicitly requiring human oversight competence and AI literacy
0
Validated tools on the market that measure psychological readiness for AI oversight
Why Training Alone Fails

Your teams completed the AI ethics training. Can they actually override a system when it matters?

Organisations routinely conflate training delivered with capability embedded. A completed e-learning module does not mean a manager can challenge an automated lending decision under time pressure, or escalate an anomalous output from a clinical decision-support system.

Knowledge of AI principles is necessary but insufficient. What determines governance effectiveness is whether people possess the psychological conditions — safety, ownership, critical engagement — to act on that knowledge when it counts.

See how ALMA measures this
1

Why do clients come to us?

Because they have governance structures on paper but no confidence that their people can operate them under pressure. Frameworks exist. Human readiness is assumed, not assessed.

2

What are they trying to achieve?

Demonstrable compliance with the EU AI Act — not just documentation, but provable human oversight capacity that survives regulatory scrutiny.

3

What hurdles do they face?

No measurement of psychological readiness. Cultural barriers to challenge and escalation. Leadership that delegates governance to compliance functions rather than embedding it operationally.

4

What options do they have?

Internal self-assessments (limited rigour), generic maturity scans (low value, no action), or The Responsible AI Center's diagnostic-to-intervention methodology that connects measurement directly to value.

The August 2026 Deadline

High-risk AI systems must comply by August 2026. The clock is running.

"Compliance" under the EU AI Act is not a documentation exercise. Articles 4, 14, and 26 require organisations to demonstrate that the people operating and overseeing AI systems possess genuine competence — not just completed training records.

Moving from current state to demonstrable compliance takes 12–18 months when done properly: assessment, targeted intervention, behavioural embedding, and re-measurement. Organisations that start in 2025 will be ready. Those that wait will not.

Talk to us about your timeline
February 2025

AI Literacy Requirements — Article 4 in force

All providers and deployers of AI systems must ensure appropriate AI literacy across their workforce. No validated measurement standard yet exists.

August 2025

GPAI Model Obligations

General-purpose AI model providers must comply with transparency and copyright requirements. Human oversight of GPAI outputs becomes a governance priority.

August 2026

High-Risk AI Systems — Full Compliance Required

Organisations deploying high-risk AI must demonstrate human oversight capacity under Articles 14 and 26. This is the critical deadline. Assessment-to-embedding takes 12–18 months.

How We Help

From diagnosis to embedded capability. A clear path — not a maturity curve.

We don't place you on a five-stage model and wish you luck. Every engagement follows a structured discovery-to-sustainment methodology that links measurement directly to action and value.

1
Identify

Find the real gaps

Where exactly are the gaps between your people's current mindset and what the EU AI Act requires? We map concrete readiness shortfalls across five governance-critical dimensions — not generic maturity labels.

2
Quantify

Measure the exposure

What is the organisational cost of those gaps? We build conservative models that translate oversight deficits into compliance risk and operational exposure — giving decision-makers the numbers they need. An illustrative calculation appears after this four-step list.

3
Realise

Close the gaps

Which specific interventions will close the gaps — and in what sequence? We deliver a phased adoption roadmap focused on behavioural change and measurable outcomes, not slide decks.

4
Sustain

Embed the capability

How will you embed oversight capacity into governance structures that outlast any single programme? We design for permanence — with governance mechanisms, re-assessment cycles, and clear accountability.
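
To make the Quantify step concrete, here is a purely illustrative sketch of how an oversight deficit might be translated into a euro figure with a simple sensitivity loop. The function shape, probabilities, and severity factor are hypothetical placeholders, not The Responsible AI Center's methodology; only the Article 99(4) fine caps come from the Act itself.

    # Illustrative only: a deliberately simple exposure model with a
    # sensitivity loop. All probabilities and the severity factor are
    # hypothetical placeholders, not The Responsible AI Center's model.

    def compliance_exposure(annual_turnover_eur, p_oversight_failure,
                            p_regulatory_action, severity_factor=0.1):
        """Expected exposure from a human-oversight failure.

        EU AI Act Article 99(4) caps fines for breaches of deployer
        obligations (Article 26) at EUR 15M or 3% of total worldwide
        annual turnover, whichever is higher.
        """
        fine_cap = max(15_000_000, 0.03 * annual_turnover_eur)
        return (p_oversight_failure * p_regulatory_action
                * severity_factor * fine_cap)

    # Sensitivity analysis: vary the least certain assumption and watch
    # how the estimate moves.
    for p_fail in (0.05, 0.15, 0.30):
        exposure = compliance_exposure(2_000_000_000, p_fail, 0.25)
        print(f"p(failure)={p_fail:.0%} -> expected exposure EUR {exposure:,.0f}")

The point is not the specific numbers but the discipline: every assumption is explicit, conservative, and stress-tested, so the exposure figure survives scrutiny by a board or audit committee.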

Four-Layer Governance Architecture

Most organisations govern three layers. The fourth is the one that matters most.

Effective AI governance requires more than technical controls and compliance checklists. It requires that the humans in the system can actually exercise oversight — under pressure, in ambiguous situations, against the grain of automation bias.

Layer 4 — Human Oversight Capacity — is where most organisations have a blind spot. It is also the layer that Articles 4, 14, and 26 of the EU AI Act explicitly address. And it is the layer that The Responsible AI Center uniquely measures.

Discover how ALMA measures Layer 4

Layer 1 — Technical Controls

Model monitoring, bias detection, data governance, algorithmic auditing

Layer 2 — Compliance Mechanisms

Policies, documentation, regulatory reporting, risk registers

Layer 3 — Authority Allocation

Decision rights, escalation protocols, accountability structures

Layer 4 — Human Oversight Capacity

Psychological readiness, critical engagement, conscious ownership

Our unique focus

Most organisations invest heavily in Layers 1–3. Layer 4 is assumed, not assessed.

Our Services

Structured engagement. Measurable outcomes.

Every service is anchored in the Value-Focused Discovery methodology and designed to produce outcomes that can be demonstrated to regulators, boards, and audit committees.

The Entry Point

Value-Focused Discovery

Before we propose anything, we diagnose the real situation. ALMA provides the measurement foundation. The discovery output is a prioritised readiness diagnostic with concrete intervention recommendations — not a traffic-light slide deck.

Learn about ALMA
Capability Building

Leadership & Team Development

Closing the gaps ALMA identifies. Targeted workshops, coaching, and development programmes designed around specific ALMA findings — not generic AI ethics training. Always tied back to ALMA measurement with pre/post assessment.

Enquire about programmes
Value Sustainment

Ongoing Advisory

Governance capability erodes without sustained attention. Retainer-based advisory for organisations navigating rapid AI deployment — including periodic ALMA re-assessment, governance design reviews, and regulatory update briefings.

Discuss advisory options
ALMA — AI Literacy Mindset Assessment

Before we propose anything, we diagnose the real situation.

ALMA is not a maturity scan. It is a governance diagnostic that measures whether your people possess the psychological conditions required for effective AI oversight — as mandated by Articles 4, 14, and 26 of the EU AI Act.

Five dimensions. One governance picture.

ALMA measures five dimensions of AI oversight readiness, each mapped directly to observable governance behaviours and EU AI Act compliance requirements. Together, they provide a complete picture of your organisation's human oversight capacity.

🛡️
D1

Psychological Safety

Can your people question AI outputs, report errors, and challenge decisions without fear of consequences?

Art. 14 — Human Oversight
📈
D2

Growth Orientation

Do they believe AI competence can be developed — or do they see it as fixed and outside their control?

Art. 4 — AI Literacy
🔄
D3

Adaptive Flexibility

Can they tolerate ambiguity and revise their approach as AI technology and regulation evolve?

Art. 4 — AI Literacy
⚖️
D4

Conscious Ownership

Do they take personal accountability for AI decisions — or delegate responsibility to the system?

Art. 26 — Deployer Obligations
🔍
D5

Critical Engagement

Do they actively verify, question, and challenge AI outputs — or passively accept them?

Art. 14 — Human Oversight
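
For reporting purposes, the dimension-to-article mapping above is easy to carry as a small lookup structure. A minimal sketch in Python (the structure and field names are illustrative; the dimensions and article mappings are exactly as listed above):

    # The five ALMA dimensions and the EU AI Act articles they map to,
    # exactly as listed above. The structure and names are illustrative.

    from collections import defaultdict

    ALMA_DIMENSIONS = {
        "D1": ("Psychological Safety", "Art. 14 - Human Oversight"),
        "D2": ("Growth Orientation", "Art. 4 - AI Literacy"),
        "D3": ("Adaptive Flexibility", "Art. 4 - AI Literacy"),
        "D4": ("Conscious Ownership", "Art. 26 - Deployer Obligations"),
        "D5": ("Critical Engagement", "Art. 14 - Human Oversight"),
    }

    # Group dimensions by article for a compliance-facing report.
    by_article = defaultdict(list)
    for code, (name, article) in ALMA_DIMENSIONS.items():
        by_article[article].append(f"{code} {name}")

    for article, dims in sorted(by_article.items()):
        print(f"{article}: {', '.join(dims)}")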

The Key Differentiator

A maturity scan tells you that you're at "Level 2". ALMA tells you that 40% of your managers cannot psychologically challenge AI-driven decisions — and shows you exactly which interventions will change that.

How ALMA works

1

Assessment administration

50 validated items, 15–20 minutes per participant. Available for individuals, teams, or organisation-wide deployment. Five-point Likert scale with reflective questions.

2

Diagnostic analysis

Psychometrically validated scoring across five dimensions. Individual profiles, team aggregates, and organisational heat maps identifying governance risk clusters. A minimal scoring sketch appears after these four steps.

3

Intervention roadmap

A prioritised opportunity map linking specific gaps to value at risk, with a phased intervention roadmap and conservative quantification of compliance exposure.

4

Re-measurement & tracking

ALMA is a baseline, not a one-time snapshot. Periodic re-assessment tracks progress and validates that interventions are producing measurable change in oversight capacity.
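
The steps above describe ordinary, auditable mechanics: 50 five-point Likert items scored into five dimension profiles, then rolled up into team aggregates. A minimal sketch of that arithmetic, assuming an even ten-items-per-dimension split and simple means (both are illustrative assumptions; ALMA's actual psychometric scoring is not published here):

    # A minimal sketch of steps 1-2: score 50 five-point Likert responses
    # into five dimension means, then aggregate individuals into a team
    # profile. The even 10-items-per-dimension split, the reverse-keyed
    # item handling, and all names are illustrative assumptions, not
    # ALMA's actual scoring model.

    from statistics import mean

    DIMENSIONS = ["Psychological Safety", "Growth Orientation",
                  "Adaptive Flexibility", "Conscious Ownership",
                  "Critical Engagement"]

    def score_individual(responses, reverse_keyed=frozenset()):
        """50 Likert responses (1-5) -> mean score per dimension."""
        assert len(responses) == 50 and all(1 <= r <= 5 for r in responses)
        # Reverse-keyed items guard against acquiescence bias: 5 becomes 1.
        adjusted = [6 - r if i in reverse_keyed else r
                    for i, r in enumerate(responses)]
        return {dim: mean(adjusted[d * 10:(d + 1) * 10])
                for d, dim in enumerate(DIMENSIONS)}

    def team_profile(profiles):
        """Average individual profiles into a team-level profile."""
        return {dim: round(mean(p[dim] for p in profiles), 2)
                for dim in DIMENSIONS}

    # Example: three respondents; dimensions scoring below a threshold
    # form the "governance risk clusters" a heat map would surface.
    team = team_profile([score_individual([4] * 50),
                         score_individual([3] * 50),
                         score_individual([2] * 50)])
    risk_clusters = [dim for dim, score in team.items() if score < 3.5]
    print(team)
    print("Risk clusters:", risk_clusters)

A real instrument would use validated item weights rather than uniform means; the point is only that dimension profiles and team aggregates are transparent computations that can be re-run at every re-assessment cycle.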

What you receive

  • Individual and team readiness profiles across all five dimensions
  • Organisational heat map identifying governance risk clusters by role, function, and level
  • Prioritised opportunity map linking gaps to specific value at risk
  • Conservative quantification of compliance exposure under the EU AI Act
  • Phased intervention roadmap with sequenced, adoption-focused recommendations
  • Sensitivity analysis: what happens if key assumptions change?
  • Executive summary for board and audit committee reporting
  • Baseline measurement for tracking progress through re-assessment

How ALMA differs from what you've seen before

Generic maturity scans produce labels. ALMA produces decisions. Here's the difference:

Generic Maturity Scan vs. ALMA Governance Diagnostic

  • Generic scan: places you on a 5-stage model with a generic score.
    ALMA: identifies specific behavioural gaps across five governance-critical dimensions.
  • Generic scan: generic recommendations that apply to every organisation.
    ALMA: a prioritised intervention roadmap linked to your actual readiness profile.
  • Generic scan: self-reported process maturity — what people say they do.
    ALMA: psychometrically validated indicators of actual oversight capacity.
  • Generic scan: a one-time snapshot with no measurement baseline.
    ALMA: baseline measurement that tracks progress through periodic re-assessment.
  • Generic scan: tells you what you already know — and nothing about what to do.
    ALMA: reveals the human factors governance frameworks cannot capture.
  • Generic scan: no connection to EU AI Act articles or regulatory requirements.
    ALMA: directly mapped to Articles 4, 14, and 26 of the EU AI Act.
Grounded in Research

Built on psychological science. Designed for governance practitioners.

ALMA's approach isn't opinion-based. It is built on three decades of peer-reviewed research in organisational psychology, translated into governance-usable measurement instruments.

🧠

Psychological Safety

Amy Edmondson's foundational research on team psychological safety — reframed as a property of AI governance decision systems, not merely an HR aspiration. When people feel unsafe to challenge AI outputs, governance fails silently.

🌱

Growth Mindset & Adaptive Capacity

Carol Dweck's growth mindset research applied to AI governance contexts. Organisations where people believe AI competence is fixed — not developable — systematically underinvest in the human layer of oversight.

⚠️

Automation Bias

Decades of research on automation bias — the tendency to over-rely on automated systems — translated into measurable governance indicators. ALMA's Critical Engagement and Conscious Ownership dimensions directly address this failure mode.

The Responsible AI Center's academic foundation is established through strategic research partnerships, with papers targeting leading governance and public policy journals. This dual academic-practitioner approach ensures ALMA's practical applications are built on rigorous scholarly validation.

Academic validation in progress · Research collaborations active · Papers forthcoming 2025–2026
Latest Insights

Thinking that shapes the field

Research-backed perspectives on AI governance, human oversight, and the gap between compliance and capability.

EU AI Act · March 2026

The EU AI Act requires human oversight. No one is measuring it.

Articles 4, 14, and 26 demand demonstrable oversight competence. Yet the market offers no validated tool to assess whether people are psychologically equipped to provide it.

Read article →
Leadership · February 2026

Why AI ethics training doesn't create AI-ready leaders

Knowledge of principles is necessary but insufficient. What determines governance effectiveness is whether people possess the psychological conditions to act on that knowledge when it counts.

Read article →
About The Responsible AI Center

Why The Responsible AI Center exists

We founded The Responsible AI Center because we saw the same pattern everywhere: organisations investing millions in AI governance frameworks, yet unable to answer a simple question — can our people actually govern AI?

Mulya van Roon

Founder & Principal Advisor — The Responsible AI Center

Mulya van Roon helps organisations move beyond checkbox AI compliance towards governance that actually works. Based in Amsterdam and active across Brussels and the wider EU policy landscape, he translates the EU AI Act and broader trustworthy AI frameworks into actionable governance structures, risk processes, and operational playbooks.

His career spans nearly two decades across Microsoft, KPMG, and IBM/Kyndryl — predominantly in highly regulated industries where governance is foundational, not optional. He is a Member of the European Commission's Apply AI Alliance, contributing to how AI regulation is operationalised across Europe.

Mulya is the architect of ALMA — the AI Literacy Mindset Assessment — a governance diagnostic that measures whether leaders and teams are genuinely equipped to oversee AI-enabled decisions, not merely trained to do so.

His conviction: the next frontier of AI governance is not about what AI systems do. It is about whether the people governing them are fit for purpose.

Academic Foundation

Research collaborations & academic validation

Our approach is not opinion-based. ALMA's measurement framework is built on peer-reviewed research in organisational psychology, with ongoing academic validation through strategic research partnerships.

The scholarly foundations underpinning ALMA are being developed through papers co-authored with academic collaborators, targeting leading governance and public policy journals. This dual academic-practitioner foundation ensures that ALMA's practical applications are built on rigorous scholarly validation — not consulting intuition.

Edmondson — Psychological Safety · Dweck — Growth Mindset · Automation Bias Research
The Origin

The insight behind The Responsible AI Center

The insight behind The Responsible AI Center was deceptively simple: psychological and cultural factors in AI governance are not immeasurable. They simply haven't been translated into decision-relevant, governance-usable proxies.

Every existing tool measures what organisations have built — frameworks, policies, technical controls. None measures whether the people operating those structures are psychologically equipped to do so. The Responsible AI Center was founded to close that gap — and ALMA is the instrument that makes it possible.

ALMA doesn't tell you where you sit on a maturity curve. It tells you whether your managers can challenge an AI-driven decision under pressure, whether your teams feel safe to escalate concerns, and whether oversight is a lived capability or a documented aspiration. It makes the invisible visible — and gives you a clear, evidence-based answer to the question regulators will eventually ask: are your people actually in control?

Collaborations & Partnerships

Working with organisations that share our conviction

The Responsible AI Center works with academic institutions, policy bodies, and implementation partners committed to making AI governance a lived capability. Three types of partnership — academic, policy, and organisational — each serve a distinct purpose in building the evidence base and practical reach of ALMA.

View Collaborations & Partnerships →
Get Started

Let's start with the real question — can your people govern AI?

A 30-minute discovery conversation to understand your specific governance challenge. No generic pitch. No obligation. Just a focused conversation about your situation.

What to expect

🌐
Website www.theraicenter.org
📍
Locations Amsterdam, The Netherlands & Brussels, Belgium
The Discovery Conversation

30 minutes. Your specific context. A clear view of where human oversight gaps are most likely — and whether ALMA would add value to your governance programme.


Speaking & Events

Mulya is available for keynotes, panels, and workshops on AI governance, human oversight capacity, and the EU AI Act. Topics include the Four-Layer Governance Architecture, ALMA methodology, and psychological readiness for AI oversight.

Enquire about speaking →