Measuring compliance is easy. Understanding it is harder.
Every compliance programme generates numbers. Training completion rates. Number of reports to the whistleblowing channel. Number of cases investigated. Time to close. Policies reviewed and updated. Certifications held. Assessments completed.
These numbers are not without value. They tell you whether the programme is functioning — whether people are receiving training, whether the reporting infrastructure is being used, whether cases are being resolved. They are the operational metrics that a compliance officer rightly tracks, that an audit committee rightly requests, and that an external assessor rightly reviews.
But they do not tell you whether your ethics programme is working. And confusing these two questions is one of the more consequential mistakes a compliance function can make.
What the programme cannot easily count.
The harder — and more meaningful — question is whether the programme is changing how people think and behave in the situations where integrity is tested. That question does not have a tidy metric. But it has indicators, if you know what to look for.
One is the quality of concerns raised. An organisation whose reporting channel receives only clear-cut violations does not necessarily have a well-functioning compliance environment. It may simply be an organisation where people do not feel safe raising the ambiguous situations — the ones where a genuine dilemma is present but the right answer is not obvious. A channel that receives a proportion of genuinely uncertain situations, alongside clear violations, is a channel that has earned enough trust to be used honestly.
Another is the pattern of escalation. When people face difficult situations, do they ask for help — from their manager, from the compliance function, from a peer? Or do they navigate those situations alone, in silence? An organisation that sees regular use of its escalation paths — not just its reporting channels — has built something that functions as an ethical support system, not merely a detection mechanism.
A third is what happens after a concern is raised. Whether the person who raised it is seen to have been protected. Whether the response was visible enough to communicate to others that the system works. Whether the feedback loop — closing the loop with the person who raised the concern, communicating outcomes where appropriate — is actually functioning. None of this appears in a metric. But its presence or absence shapes, over time, whether anyone uses the system at all.
One of the most revealing questions a compliance function can ask is this: of the people who knew about a problem before it became a formal case, how many considered raising it — and didn't? Answering this question requires conversation, not data. But it is often the most accurate diagnostic available.
Honest diagnostics over comfortable reports.
The audit tells you what you already knew — or what you were prepared for it to find. The honest diagnosis happens in the months before, in conversations that are harder to initiate and easier to avoid.
Is the compliance training actually changing behaviour? The only way to know is to ask — not in a post-training survey designed to confirm that people found it useful, but in a follow-up conversation, weeks or months later, about whether anything was different. Whether anyone used something from the training in a real situation. Whether the training created a reference point that people actually drew on.
Are managers equipped for the compliance decisions they actually face? Not the theoretical scenarios in the training module, but the real dilemmas — the ones involving pressure, relationships, and ambiguity. The answer requires asking managers directly, which most programmes do not do.
Is the culture around speaking up genuinely safe? The answer is not in the policy. It is in what happened the last time someone spoke up. What actually occurred — not what the procedure says should have occurred — is what the rest of the organisation is watching, and what they will act on the next time they face a choice about whether to raise a concern.
An ethics programme that is working looks like this: people in the organisation understand what integrity requires of them in specific, realistic situations. They have somewhere to go when those situations arise. They believe, based on evidence, that the organisation will support them when they make the harder choice. And they have seen, often enough, that the programme responds to concerns in ways that make speaking up worth the risk.
That description is not a metric. It is a condition. And building it is the work.