Most compliance training programmes leave behind records. Very few leave behind evidence.
There is a distinction that most compliance functions do not make explicitly but that matters enormously in practice: the distinction between a record and evidence. A record tells you that something happened. Evidence tells you what it meant, whether it achieved its purpose, and what should happen next as a result.
Compliance training programmes generate records in abundance. Completion rates by module, by department, by geography. Time spent on each section. Scores on end-of-module quizzes. These are records. They confirm that employees opened a training module, moved through its screens, and answered some questions. They confirm very little about whether the training achieved the compliance objective it was designed to serve.
Evidence is different. Evidence connects the training to the risk it was designed to address. It captures not just what score an employee achieved, but what that score reveals about their understanding of the specific risk scenario the question was testing. It records which question types — which risk categories, which decision scenarios — generated the most errors, and therefore where the programme's impact on the organisation's control environment is weakest. It documents the analysis that the compliance function applied to those results and the programme decisions that followed.
The four layers that transform records into evidence.
The first layer is the risk foundation. Every compliance training module should be traceable to a specific risk identified in the organisation's compliance risk assessment. The documentation should record which risk the module addresses, why the risk assessment identified this as a priority, and what the training is designed to accomplish in terms of reducing the probability of the risk materialising. Without this layer, training documentation is detached from the compliance programme it is supposed to support.
The second layer is the audience rationale. Documentation should record which employee populations were trained and why — specifically, which risks those populations are exposed to and how the training content was calibrated to their role, their function, and their specific exposure. A generic audience record — 'all employees' or 'management level and above' — is not adequate documentation for a risk-based compliance programme.
The third layer is the assessment analysis. Assessment results should be documented not only at the aggregate level but at the question level — which scenarios generated correct responses, which generated errors, which risk categories are consistently misunderstood across the population. This analysis is the compliance function's primary source of intelligence about where its control environment is weakest and where investment is most needed.
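The question-level analysis described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed tool: the question IDs, risk-category names, and response data are all hypothetical, and a real programme would draw these from its learning platform's export.

```python
from collections import defaultdict

# Hypothetical question-level results. Each tuple is one employee response:
# (question_id, risk_category, answered_correctly). All data is illustrative.
responses = [
    ("Q1", "gifts_and_hospitality", True),
    ("Q1", "gifts_and_hospitality", False),
    ("Q2", "third_party_due_diligence", False),
    ("Q2", "third_party_due_diligence", False),
    ("Q3", "conflicts_of_interest", True),
    ("Q3", "conflicts_of_interest", True),
]

def error_rates_by_category(responses):
    """Aggregate error rates per risk category, not just an overall score."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for _, category, correct in responses:
        totals[category] += 1
        if not correct:
            errors[category] += 1
    return {cat: errors[cat] / totals[cat] for cat in totals}

# Sorted from weakest to strongest area of comprehension.
for category, rate in sorted(error_rates_by_category(responses).items(),
                             key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {rate:.0%} error rate")
```

The point of the sort is the documentation layer itself: the output is a ranked list of where comprehension — and therefore the control environment — is weakest, which is exactly the intelligence the aggregate completion rate cannot provide.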
The fourth layer is the response record. What did the compliance function decide in response to the assessment findings? Where comprehension gaps were identified, what was done about them — remedial training, management intervention, policy clarification, increased monitoring? The documentation of this response is what transforms training from a periodic activity into a functioning control loop.
The test of adequate documentation is whether it would support the following statement to a regulator or auditor: 'Our compliance training programme is a component of our internal controls. Here is the risk it was designed to address, here is the population it was delivered to and why, here is what the assessment results revealed about comprehension, here is what we did in response, and here is how we know the control is functioning.' If the documentation cannot support that statement, it is not adequate for compliance purposes.
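The four layers, and the adequacy test above, can be expressed as a simple record schema with a completeness check. This is a sketch under stated assumptions: the field names, the generic-audience list, and the example values are all illustrative inventions, not a standard format.

```python
from dataclasses import dataclass

# Audience descriptions the article flags as inadequate for a risk-based programme.
GENERIC_AUDIENCES = {"all employees", "management level and above"}

@dataclass
class TrainingEvidence:
    # Layer 1: risk foundation
    risk_addressed: str            # risk from the compliance risk assessment
    risk_priority_rationale: str   # why the assessment made this a priority
    intended_effect: str           # how training reduces probability of the risk
    # Layer 2: audience rationale
    populations_trained: list[str]
    audience_rationale: str        # exposure and calibration by role
    # Layer 3: assessment analysis
    assessment_analysis: str       # question-level findings, gaps by risk category
    # Layer 4: response record
    responses_taken: list[str]     # remedial training, policy clarification, ...

    def missing_layers(self) -> list[str]:
        """Return the layers not yet documented; a bare completion
        record fails every check except, at best, the first."""
        checks = {
            "risk foundation": bool(self.risk_addressed
                                    and self.risk_priority_rationale
                                    and self.intended_effect),
            "audience rationale": bool(self.populations_trained
                                       and self.audience_rationale
                                       and not set(self.populations_trained)
                                           <= GENERIC_AUDIENCES),
            "assessment analysis": bool(self.assessment_analysis),
            "response record": bool(self.responses_taken),
        }
        return [layer for layer, done in checks.items() if not done]

# A record with only a risk statement and a generic audience:
record = TrainingEvidence(
    risk_addressed="improper payments through third-party intermediaries",
    risk_priority_rationale="identified as a priority in the risk assessment",
    intended_effect="reduce the probability of the risk materialising",
    populations_trained=["all employees"],
    audience_rationale="",
    assessment_analysis="",
    responses_taken=[],
)
print(record.missing_layers())
# → ['audience rationale', 'assessment analysis', 'response record']
```

The design choice mirrors the regulator statement: adequacy is not a score but the absence of missing layers, so the check returns the gaps rather than a pass/fail verdict.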
The audience for compliance training reports is not HR. It is governance.
Compliance training reports that are produced for HR committees measure learning activity. Compliance training reports that are produced for audit committees, risk committees, and boards measure control effectiveness. The difference is not presentational — it is substantive.
A governance-level compliance training report addresses the questions that governance bodies should be asking: Which integrity risks does our training programme cover, and which does it not yet reach? Where have assessment results identified comprehension gaps, and what does that tell us about the robustness of our controls in those risk areas? What changes to the programme has the compliance function made in response to those findings, and what is the evidence that those changes have been effective?
Producing this report requires the compliance function to have done the underlying analysis. The report is not the goal — the analysis is. But the discipline of producing a governance-level report creates the discipline of doing the analysis that would support it, and that analysis is what allows the compliance function to manage its training programme as the control it is designed to be.
This article reflects the compliance advisory perspective of Compliance House and is intended for informational purposes. It does not constitute legal advice. Organisations seeking specific guidance should consult qualified counsel in the relevant jurisdiction.