Most compliance training KPIs measure activity. Almost none measure impact.

The compliance training metrics that most organisations track are not wrong. Completion rates matter — an employee who has not received training has not been reached by the control. Average assessment scores matter — they tell you something about aggregate comprehension. Time-to-completion matters — it tells you whether employees are engaging with the content or clicking through it.

But these metrics share a common limitation: they measure inputs and outputs, not outcomes. They tell you what the programme did. They tell you nothing about what the programme changed. And a compliance training programme whose impact cannot be distinguished from its activity has not established itself as a functioning control — it has established itself as a functioning process.

The KPIs that actually tell you whether your compliance training is working are the ones that connect training activity to the integrity risk environment it was designed to address. They require more analytical work to produce. They require a compliance function that has thought carefully about what 'working' actually means in the context of a specific risk and a specific population. And they produce information that is genuinely useful — to the compliance function itself, to senior management, and to the governance bodies that need to satisfy themselves that internal controls are effective.

"The compliance function that can only tell you its completion rate has not yet begun to measure whether its training programme is functioning as a control. The completion rate is the start of the measurement, not the end of it."

What assessment results reveal when you read them properly.

Average assessment scores are a blunt instrument. They aggregate performance across all questions, all risk categories, and all employee populations — and in doing so, they obscure the information that is most useful. A module-level average score of 82 per cent tells you that the average employee answered more than four out of five questions correctly. It does not tell you which questions generated errors, which risk scenarios were consistently misunderstood, or whether the employees in the highest-risk roles performed differently from those in lower-risk roles.

The KPIs that produce useful intelligence are question-level and scenario-level. Error rates by risk category identify which specific compliance concepts the training has not successfully communicated. Performance variance by role and function shows whether the employees with the greatest risk exposure demonstrate greater comprehension than those with lower exposure — or, as is sometimes the case, whether the reverse is true. Repeat error rates across training cycles expose persistent knowledge gaps that a single training intervention has not resolved.
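As a minimal sketch of what this analysis involves: given assessment answers tagged with a risk category and an employee role, the error-rate KPIs above reduce to a grouped share of incorrect answers. The record schema and field names here are illustrative assumptions, not a real LMS export format.

```python
from collections import defaultdict

# Hypothetical assessment records: each answer is tagged with the risk
# category the question tests and the role of the respondent.
answers = [
    {"question": "q1", "risk_category": "conflicts", "role": "sales", "correct": True},
    {"question": "q1", "risk_category": "conflicts", "role": "procurement", "correct": False},
    {"question": "q2", "risk_category": "third_party", "role": "sales", "correct": False},
    {"question": "q2", "risk_category": "third_party", "role": "procurement", "correct": True},
    {"question": "q3", "risk_category": "conflicts", "role": "sales", "correct": True},
]

def error_rates(records, key):
    """Share of incorrect answers, grouped by the given field."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        if not r["correct"]:
            errors[r[key]] += 1
    return {k: errors[k] / totals[k] for k in totals}

by_category = error_rates(answers, "risk_category")  # e.g. error rate per risk category
by_role = error_rates(answers, "role")               # e.g. error rate per role
```

The same grouping applied across training cycles yields the repeat-error-rate view: a category whose error rate stays high from one cycle to the next marks a persistent gap.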

These metrics require the training to have been designed with sufficient granularity to support this level of analysis — which is itself an argument for building compliance training around specific, role-calibrated scenarios rather than general knowledge assessments. A scenario-based question that tests a specific decision in a specific context produces more diagnostic information than a knowledge-recall question that tests whether an employee can identify the definition of a conflict of interest.

If your compliance training assessment results cannot tell you which risk categories your employees are most likely to get wrong — specifically, by role, by function, and by geography — then your assessment is not generating the intelligence that a control-oriented compliance programme needs. The question to ask of your assessment design is not 'does it test compliance knowledge?' but 'does it tell us where our control environment is weakest?'

Connecting training performance to integrity risk.

The most meaningful compliance training KPIs are the ones that connect training performance to observable indicators in the integrity risk environment. Speak-up channel usage trends following a training intervention on reporting culture. The proportion of third-party due diligence requests that are submitted correctly and completely following a training programme on third-party risk. The volume and quality of conflict of interest disclosures in the period following a training module on that topic.

These connections are not always clean, and establishing causality requires care. But the absence of any attempt to connect training activity to integrity risk indicators is the clearest sign that a compliance function has not yet conceptualised its training programme as a control. Controls are evaluated against the risks they are designed to address. A training programme evaluated only against its own internal metrics — completion rates, average scores — is being evaluated in isolation from the purpose it is supposed to serve.

Benchmarking adds the dimension of direction: not just where the organisation's training performance stands, but whether it is improving, stagnating, or declining over time, and how it compares to the organisation's own historical performance and, where data is available, to comparable organisations. A compliance training programme that produces the same KPI results year after year is not a programme that is continuously improving. It is a programme that has reached a plateau — and the compliance function that is satisfied with that plateau has stopped asking whether it could do better.
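The directional reading described above can be sketched as a trend classification over a KPI series across training cycles. The tolerance threshold and the sample series are illustrative assumptions, not recognised standards.

```python
def trend(series, tolerance=0.02):
    """Classify the most recent move in a KPI series as a direction."""
    if len(series) < 2:
        return "insufficient data"
    delta = series[-1] - series[-2]
    if delta > tolerance:
        return "improving"
    if delta < -tolerance:
        return "declining"
    return "stagnating"  # flat within tolerance: the plateau case

# Hypothetical scenario-based pass rate over three annual cycles.
scenario_pass_rate = [0.78, 0.81, 0.81]
print(trend(scenario_pass_rate))  # prints stagnating
```

A dashboard that flags "stagnating" against the organisation's own history is what turns a static KPI into the decision prompt the next paragraph describes.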

"The KPIs that are worth tracking are the ones that would change a decision. If a metric goes up or down and nothing in the compliance programme changes as a result, that metric is not providing intelligence — it is providing decoration. Every KPI in a compliance training dashboard should be connected to a decision that the compliance function would make differently depending on what it shows."
