A compliance training strategy that does not improve is not a strategy. It is a routine.
The compliance training programmes that organisations build in response to a regulatory requirement, a new certification, or a significant enforcement event tend to be designed well for the moment of their creation. They address the risks that are most visible at that moment. They are calibrated to the population and the delivery format that made the most sense at the time. They represent a genuine investment in building the control environment.
And then, in a significant number of organisations, they stop developing. The same modules are delivered on the same cycle to the same populations. The assessment results are reviewed at the aggregate level. The completion rates are reported to the governance committee. The programme is described as functioning — because, by the metrics being tracked, it is.
What is not visible in this description is the drift. The gap that opens, gradually, between the training programme as it was designed and the integrity risk environment as it actually exists. New risks emerge that the programme does not address. The regulatory environment evolves in ways that make the content less current. The employee population changes in ways that make the audience calibration less precise. The assessment results, had they been analysed carefully, would have identified persistent comprehension gaps that the programme has not closed.
What systematic monitoring of training performance looks like.
Continuous monitoring of compliance training performance requires a defined cycle that connects measurement to analysis to decision to action. The measurement layer is the KPI set: comprehension rates by risk category, error patterns by role and function, population coverage gaps, delivery consistency across geographies and business units. These are the inputs to the analysis.
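For organisations that capture assessment results in structured form, the measurement layer above can be computed rather than compiled by hand. The sketch below is illustrative only: the record fields, role names, and risk categories are hypothetical, and any real implementation would depend on the organisation's learning-management data model.

```python
from collections import defaultdict

# Hypothetical assessment records: one entry per completed module attempt.
records = [
    {"employee": "e1", "role": "sales",   "risk_category": "anti-bribery", "correct": 8, "questions": 10},
    {"employee": "e2", "role": "sales",   "risk_category": "anti-bribery", "correct": 5, "questions": 10},
    {"employee": "e3", "role": "finance", "risk_category": "sanctions",    "correct": 9, "questions": 10},
]

def comprehension_by_category(records):
    """Aggregate correct answers per risk category into a comprehension rate."""
    totals = defaultdict(lambda: [0, 0])  # category -> [correct, questions]
    for r in records:
        totals[r["risk_category"]][0] += r["correct"]
        totals[r["risk_category"]][1] += r["questions"]
    return {cat: correct / questions for cat, (correct, questions) in totals.items()}

rates = comprehension_by_category(records)
print(rates)  # {'anti-bribery': 0.65, 'sanctions': 0.9}
```

The same aggregation keyed on `role` rather than `risk_category` would yield the error-pattern view by function; the point is that each KPI in the measurement layer is a simple aggregation over the same underlying records.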
The analysis layer is where monitoring becomes useful. It asks what the measurement data reveals: which risk areas show persistent comprehension gaps, which populations are under-trained relative to their risk exposure, which training formats are producing stronger results than others, and how this period's performance compares to the previous period and to the benchmarks the organisation has established for itself.
The decision layer translates analysis into programme choices: which modules need to be redesigned or updated, which populations need additional or more targeted intervention, which delivery formats should be expanded or reduced, which risk areas need to be elevated in training priority based on changes in the risk environment. These decisions should be documented — they are evidence that the compliance function is managing its training as a control and responding to what the measurement tells it.
The action layer is implementation. Updated content is deployed. New populations are reached. Delivery format changes are made. And the monitoring cycle begins again — because the purpose of monitoring is not to produce a report. It is to produce a better programme.
The question that distinguishes a compliance training programme that is continuously improving from one that is merely running is this: what is different about the programme this year compared to last year, and why? If the honest answer is 'not much — we delivered the same modules to the same populations and got similar results,' the programme is not being managed as a strategy. If the answer involves specific changes driven by specific findings from the measurement cycle, the programme is developing. That development is what continuous improvement actually means.
Comparison is only useful when it drives change.
Benchmarking compliance training performance serves two purposes that are sometimes conflated but are genuinely distinct. The first is calibration: understanding whether the organisation's training performance is strong, adequate, or weak relative to a meaningful reference point. The second is aspiration: identifying what better performance looks like and using that picture to set improvement targets.
Internal benchmarking — comparing current performance to historical performance within the organisation — is the most actionable form, because it is the most directly connected to decisions the compliance function can make. A comprehension rate that has declined over two consecutive training cycles in a specific risk category is a finding that demands a response. A comprehension rate that has improved following a redesign of the scenario-based content in that category is evidence that the redesign worked.
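The two-consecutive-cycle decline described above is a rule precise enough to automate. A minimal sketch, assuming per-cycle comprehension rates are held per risk category (all names and figures below are illustrative, not real benchmarks):

```python
# Hypothetical per-cycle comprehension rates by risk category, oldest first.
history = {
    "anti-bribery": [0.82, 0.76, 0.71],  # declining two cycles running
    "sanctions":    [0.78, 0.84, 0.88],  # improving after a content redesign
    "data-privacy": [0.80, 0.83, 0.79],  # one-cycle dip, not yet a trend
}

def flag_persistent_declines(history, cycles=2):
    """Return categories whose rate fell in each of the last `cycles` transitions."""
    flagged = []
    for category, rates in history.items():
        recent = rates[-(cycles + 1):]
        if len(recent) == cycles + 1 and all(a > b for a, b in zip(recent, recent[1:])):
            flagged.append(category)
    return flagged

print(flag_persistent_declines(history))  # ['anti-bribery']
```

A single-cycle dip, as in the data-privacy example, is deliberately not flagged: the rule is looking for a trend that demands a response, not normal period-to-period variation.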
External benchmarking — where data is available — provides the calibration dimension: a sense of where the organisation stands relative to comparable peers in terms of programme design, delivery, and measured outcomes. This is harder to access with precision, but industry working groups, professional associations, and regulatory publications increasingly provide sufficient data to allow meaningful comparison.
The purpose of both forms of benchmarking is the same: to create a standard against which the compliance function evaluates its own performance, and to generate the dissatisfaction with current performance that is the necessary condition for improvement. A compliance training programme that benchmarks well — that scores above the internal targets it has set for itself and above the external reference points available to it — should treat that performance as a starting point, not a destination.
This article reflects the compliance advisory perspective of Compliance House and is intended for informational purposes. It does not constitute legal advice. Organisations seeking specific guidance should consult qualified counsel in the relevant jurisdiction.