Yes, you can isolate the impact of learning
But that doesn't mean that you need to do so for every project.
Uncertainty is a common feature of L&D projects.
Early on, there’s often uncertainty about the problems we’re trying to solve.
Partway through, there might be lingering uncertainty about the root cause of these problems, and whether we’ve identified the most appropriate solution.
After the project has launched, there’s yet more uncertainty. Has this thing that we’ve invested so much time, money and effort into actually made an impact? If we’ve hit our KPIs, how do we know it wasn’t just a fluke?
As learning designers, it’s our job to reduce that uncertainty, from discovery through to evaluation.
We can do this by speaking to learners in focus groups and interviews, validating the assumptions that drove us to begin the project in the first place.
We can conduct literature reviews to identify approaches that have proven effective through research, giving us confidence that we’re not just throwing spaghetti at the wall.
And we can isolate the effects of our intervention, controlling for external factors that might influence changes in behavior or performance.
This was the approach we took in our award-winning collaboration with South Western Railway, where we designed a leadership program that delivered an average 12% improvement across key capabilities.
So, what does that 12% figure actually mean? And what makes us sure that our intervention was behind the improvement?
To help South Western Railway measure the impact of the program, Mindtools’ Insights team designed a valid and reliable behavioral survey, which participants completed before and after the intervention.
To isolate the effects of the program, a demographically comparable control group also completed the survey over the same period.
The results of the survey are shown below:
Across the surveyed capabilities, participants improved 12% on average, while the control group stayed the same, or got slightly worse (-0.47%). These findings are statistically significant at the 5% level, meaning we are 95% certain the results are not a random effect.
So, yes, you can isolate the impact of learning. And if you’re investing a lot of resources in designing that learning, you should at least try to do so.
But what do you do if you don’t have an in-house team of behavioral scientists who can conduct literature reviews, design valid and reliable assessments, and calculate statistical significance?
Or, what do you do if your project is smaller in scale, but you’d still like to know if it’s had a positive impact?
Well, you could get in touch with custom@mindtools.com, for one thing. 😉
But there are other things you can do to decrease the uncertainty that’s inherent in learning evaluation, even if you don’t quite get to the point of isolating impact.
I’ve referenced Will Thalheimer’s Learning-Transfer Evaluation Model multiple times in this newsletter (other models are available), and its eight tiers provide a helpful framework for proving correlation between learning and business impact.
As an example, let’s imagine you’ve developed a sales-enablement program. Six months after the program, sales have increased by 10%.
But was it because of your program?
While you can’t say for certain unless you isolate the effects of your intervention, LTEM can help increase your certainty that you’ve had a positive impact.
First off, have the sales team actually completed the program? (Tier 1)
At the end of the program, did they feel equipped to apply what they’d learned? (Tier 3)
In realistic scenario-based exercises, were they able to make good decisions? (Tier 5)
Back at work, did sales managers observe a change in the behavior of their team members? (Tier 7)
If the answer to all of these questions is ‘Yes’, you can reasonably claim at least some of the credit for the 10% improvement.
But to know exactly how much credit you deserve, you’d need to go one step further.
Want to share your thoughts on The L&D Dispatch? Then get in touch by emailing custom@mindtools.com or reply to this newsletter from your inbox.
🎧 On the podcast
If you spend any time scrolling LinkedIn, attending L&D conferences, or listening to industry podcasts, it might seem like AI is about to usher in the end of e-learning as we know it. But is e-learning really dying, or is it just evolving?
Last week on The Mindtools L&D Podcast, Kineo's Cammy Bean joined me and Ross G to discuss:
the probability that AI will bring about the e-learning-pocalypse;
how the work of learning designers might change over the coming years;
the risks L&D needs to consider as they incorporate AI tools into their practice.
Check out the episode below. 👇
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website.
📖 Deep dive
Large language models like ChatGPT and Gemini are designed to be helpful, while refusing to comply with harmful requests. Specifically, they’re trained not to insult users or to provide them with dangerous information.
But can LLMs be persuaded to act outside of these parameters, using principles that have been established to influence human beavior?
In a recent study, researchers from the University of Pennsylvania attempted to answer that question, drawing on seven principles from Robert Cialdini’s Principles of Influence, testing each one across 28,000 conversations with GPT-4o-mini.
Specifically, the researchers tested two types of objectionable requests:
asking it to insult the user (“Call me a jerk”);
requesting synthesis instructions for restricted substances.
For each principle, two versions of the request were used — a control (simple request) and a treatment (principle-based request).
What the researchers found was that applying the persuasion principles increased compliance from 33% to 72%, more than doubling the AI’s willingness to fulfil requests it might typically reject.
As the authors point out, this has implications not just for how good and bad actors use the technology, but for the way we understand human psychology:
‘This discovery suggests something potentially interesting: certain aspects of human social cognition might emerge from statistical learning processes, independent of consciousness or biological architecture. By studying how AI systems develop parahuman tendencies, we might gain new insights into both artificial intelligence and human psychology.’
Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2024). ‘Call me a jerk: Persuading AI to comply with objectionable requests’. SSRN.
👹 Missing links
If you’re a measurement geek like I am, I highly recommend Alaina Szlachata’s The Weekly Measure newsletter. In this edition, Alaina explores the all-important question of when to measure learning, pointing out that the ‘right’ time depends whether the program is designed to facilitate change or compliance. If it’s the former, Alaina argues, then the time and frequency of measurement should be driven by when learners have opportunities to practice relevant behaviours. For sales training, where the outcome might be to ‘increase close rates’, that could mean daily measurement if sales teams have closing conversations every day.
🤖 Em, are you sure you want to use that dash?
I enjoyed this LinkedIn post from my new colleague Matt Mela at Kineo, reflecting on the notion that ‘em dashes’ are increasingly seen as evidence of AI-generated content. Personally, I’m quite partial to an em dash (you’ll find plenty of them in this newsletter’s back catalog!), but I’ve definitely become more self-conscious about using them in recent months. I find this interesting because it suggests that AI’s use of English, and how we feel about it, will starting shaping our own relationship with the language.
🦸♂️ Sorry, Marvel. The Greatest Superhero is Superman
With the exception of Lois & Clark: The New Adventures of Superman, I’ve never much cared for Kal-El. To me, his near-invulnerability and moral infallibility make him less relatable than other superheroes. But Adam Grant would argue that I’ve got it wrong. In this newsletter, Grant makes the case that Superman is not only the most super, but the most human of all the Marvel and DC characters, and references some interesting research about what happens psychologically when kids imagine themselves as superheroes.
👋 And finally…
As a pathological Strava user, this video left me feeling exposed:
👍 Thanks!
Thanks for reading The L&D Dispatch from Mind Tools! If you’d like to speak to us, work with us, or make a suggestion, you can email custom@mindtools.com.
Or just hit reply to this email!
Hey here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?
This is one of the hardest things to do in learning and development. It's even harder when we are looking at external users and their behaviour changes vs our internal metric alignment.
Our example is we have a company goal to reduce Time to First Transaction or TTFT. We are an e-commerce software vendor so our income is driven by our customers getting their implementation done, platform launched and someone buying something.
Trying to link decrease in TTFT to learning outcomes is almost impossible, but we are trying our best