How do you measure change against something that's never happened?
And hopefully, never will...
Whenever we want to measure the impact of a learning intervention, the first step is usually to establish a baseline for current performance or behavior.
If we’re designing a sales-enablement program, we might start by looking at performance against target over the last twelve months.
If we’re developing customer-service training, we might consider CSAT ratings, Net Promoter Score, or complaint-resolution times.
From this baseline, we can then examine how behavior changes post-intervention, ideally using a control group to isolate the effects of our program.
But in a compliance context, in addition to checking a box, we’re often trying to prevent some kind of catastrophic event that may never have happened — enabling modern slavery, allowing a fatal accident to occur, or falling victim to a ransomware attack.
In such cases, our baseline is most likely zero. And if it remains at zero after we roll out our intervention, we can’t conclude that it was our efforts that made the difference.
Even if the worst has never happened, that doesn’t mean it couldn’t happen, and organizations should reasonably expect L&D to demonstrate that it’s doing its part to reduce that risk.
This was the challenge we faced a few years ago when we worked on an information-security project for a prominent pensions and insurance provider.
Following an external audit, the organization had identified various high-risk roles they wished to target, with the goal of instilling a ‘stop and think’ mindset.
Without a clear baseline for info-sec breaches, we needed another way to determine whether the learning intervention we’d designed had moved the needle.
To solve this problem, our Insights team of behavioral scientists completed a literature review, which uncovered a correlation between security compliance and an individual’s tendency to favor long-term decision making.
Based on this finding, our team developed a valid and reliable ‘pre and post’ survey that measured not only respondents’ attitudes to security but also the extent to which they accounted for the future impact of their decisions.
We surveyed a demographically representative sample of the total population, including people who didn’t complete the training, giving us a natural control.
The results showed a statistically significant improvement across decision-making measures and security-based statements for the test group, while the control group remained closer to the pre-intervention baseline.
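For the statistically curious, here’s a minimal sketch of how that kind of pre/post comparison can be tested, assuming each respondent’s answers are rolled up into a single score per survey wave. The numbers (and the analysis itself) are purely illustrative, not the actual study code:

```python
# Minimal sketch: comparing pre/post change for a trained (test) group
# against an untrained (control) group. All data below are hypothetical.
import numpy as np
from scipy import stats

# Change score per respondent: post-intervention score minus pre-intervention score
test_change = np.array([0.8, 1.1, 0.5, 0.9, 1.3, 0.7, 1.0, 0.6])       # completed training
control_change = np.array([0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.1, 0.0])  # didn't complete training

# Difference-in-differences: how much more did the trained group shift?
did = test_change.mean() - control_change.mean()

# Welch's t-test on the change scores (doesn't assume equal variances)
t_stat, p_value = stats.ttest_ind(test_change, control_change, equal_var=False)

print(f"Difference-in-differences: {did:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Working with change scores, rather than raw post-intervention scores, is what lets untrained respondents act as a natural control: anything that shifted everyone’s attitudes at once (news coverage, an unrelated phishing scare) largely washes out of the comparison.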
Although this data didn’t allow us to claim that we’d prevented an info-sec catastrophe, identifying and measuring academically correlated metrics did allow us to demonstrate that we’d shifted behavior in the right direction.
Need help with a measurement and evaluation challenge? Get in touch by emailing custom@mindtools.com, or just reply to this newsletter from your inbox.
🎧 On the podcast
Induction programmes play a crucial role in shaping the way a new starter thinks and feels about an organization.
In last week’s episode of The Mindtools L&D Podcast, Cammy and Claire joined me to discuss:
what makes induction programmes different from other L&D initiatives;
how an ‘accomplishment framework’ can help new starters (and learning designers!) break induction down into manageable chunks;
how to measure the impact of induction programmes.
Check out the episode below. 👇
You can subscribe to the podcast on iTunes, Spotify, or the podcast page of our website.
📖 Deep dive
How much time do you think AI saves you each week?
According to new data from the AI consulting firm Section, the way you respond to that question likely depends on your position in your organization.
In Section’s survey of 5,000 white-collar workers, two-thirds of non-management staffers say AI saves them less than two hours each week, or no time at all.
In contrast, over 40% of C-suite executives report that AI saves them more than eight hours per week.
On top of this, when asked how they feel about AI, almost 70% of workers say they feel anxious or overwhelmed, while over 70% of executives say they primarily feel excited about the technology.
So, what’s going on here?
One possible explanation is that executives and frontline employees are experiencing different kinds of AI benefits.
Senior leaders tend to use AI for summarizing information, drafting communications, or supporting decision-making: tasks where even small accelerations feel highly visible and valuable.
For individual contributors, however, AI might sit inside complex workflows, where any time saved is offset by checking and correcting AI output.
And, of course, it’s also true that feeling productive isn’t the same thing as being productive.
Ellis, L. (2026, January 21). ‘CEOs say AI is making work more efficient. Employees tell a different story.’ The Wall Street Journal.
👹 Missing links
Last week, the World Economic Forum gathered the great and the good for a frosty reception in Davos (“frosty” in every sense of the word). Here are three things your Dispatch correspondents learned from the event.
🌟 The Enterprise fails to hit warp speed
In a wide-ranging interview, Anthropic CEO Dario Amodei spoke to Bloomberg about Artificial General Intelligence (AGI), the economic impact of AI, and the different incentives that exist when you target enterprise rather than consumer customers. What struck us most, though, is his argument that AI’s potential is capped by the inherent structures of enterprise organizations. We say: put a rocket on the moon. IT says: we need a countersigned software request form. L&D says: we’ll develop training on this next year.
🤖 AI adoption looks like a J-Curve
Those companies that do adopt AI tools should expect to go through a J-Curve, reports Nicholas Thompson, CEO of The Atlantic. If you have existing processes and procedures in place (and you probably do), then adopting AI is going to cause a dip in productivity, followed by a dramatic increase as AI starts to deliver value. If you work in the information economy, you can probably afford a few mistakes while you experiment and learn. If you run a hospital, that’s more problematic.
📖 The biggest barrier to AI adoption: Us!
When organizations do adopt AI, there are two key factors that unlock value. The first, of course, is the technology. The second is the ability of employees to use that tech. According to Omar Abbosh, chief executive of Pearson:
‘The biggest obstacle to AI adoption is the lack of human skills to work alongside these technologies.’
In a new report, released at the World Economic Forum, the Pearson team argue that AI productivity gains will depend on learning.
🤓 A brief note…
We spent much of the last year helping companies large and small adopt AI tools, from policy roll-outs to crowdsourcing good practice and supporting live events.
We even built an AI tutor that helps users adopt AI, via our AI Skills Practice tool!
Get in touch if you’d like to find out more.
👋 And finally…
Ross G took umbrage when I sent this video to him last week. Make of that what you will. 😉
👍 Thanks!
Thanks for reading The L&D Dispatch from Mindtools Kineo! If you’d like to speak to us, work with us, or make a suggestion, you can email custom@mindtools.com.
Or just hit reply to this email!
Hey, here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?