Long-term readers will know that we’re obsessed with evaluation here at Mindtools Towers. Whenever we launch a learning intervention, we want to know if it made a difference. But what question are we trying to answer?
🔬 Option 1: Did our intervention make a difference? (Evaluation)
😹 Option 2: How amazing are we at designing learning experiences? (Justification)
It’s a great provocation, put to us by our friend (and occasional client) Carl Akintola. So we thought we’d have some fun with it.
Carl wrote the scenario below to get you thinking, and we’ll be discussing our own reflections in an upcoming edition of The Mindtools L&D Podcast.
So have a read and please do share your own reflections either by replying to this newsletter from your inbox or by emailing custom@mindtools.com. We’ll feature responses in the podcast episode.
Now over to Carl!
Evaluation? Or validation? A tale of two L&D interventions
By Carl Akintola
You’re an L&D professional in a large, nationwide sales organization. Recently, you’ve been in conversation with a couple of senior leaders about how to improve the impact of their middle managers.
Sales are down, and there's broad agreement that weak line management is a major contributor. Your initial diagnosis reveals inconsistent application of essential practices: regular, meaningful 1:1s; clear goal-setting; timely feedback; and coaching.
You’ve got some ideas. You want to focus on embedding those behaviors — with targeted, cost-effective interventions designed around what the evidence tells us works in driving line manager performance and, ultimately, sales.
But there’s a wrinkle.
One of the senior leaders, Mike, is pushing for a different solution from a high-end supplier he’s used before: the Svengali Centre for Advanced Management. Their model? A three-day retreat in a luxurious countryside estate, with lectures from leadership gurus, breakout discussions, and lots of downtime to build connections. “It’s expensive”, says Mike. “But people loved it. They still talk about it to this day!”
You review the offer. And while you see value in peer connection, everything your experience and the research tell you suggests this won’t change behaviors back on the job. It’s not that it’s bad — it just doesn’t solve the problem.
So, rather than push back hard, you take another route often suggested by L&D gurus: you propose putting both interventions to the test.
🧪 The Trial: Going head-to-head with Svengali
After some negotiation, you agree to a trial. Two groups of 15 managers will be randomly assigned to the two different interventions. Half will attend the Svengali retreat. And half will take part in your carefully designed alternative: a lower-cost program based on the best available evidence about instructional design, behavior change, and learning transfer. Minimal time off the job. Maximum impact.
You agree in advance to track a shared set of pre- and post-intervention metrics for the teams these managers lead:
Sales performance
Scores on relevant items from the employee engagement pulse survey
Observable management behaviors (where feasible).
The interventions are run. The results are in.
And at this point, the story forks into two possible futures...
🥰 Outcome 1: You Win
Sales and engagement scores improve more in the teams of those who took part in your intervention. The difference isn’t massive, but it’s consistent. Mike is gracious: “Fair’s fair. The numbers don’t lie.” The rollout goes your way.
😭 Outcome 2: Svengali Wins
Surprisingly, the retreat group edges ahead — slightly better sales, slightly stronger engagement scores. Again, not a huge difference, but in the wrong direction as far as your intervention is concerned.
You believed your approach was more robust, more evidence-based, and more sustainable. But the results point the other way. So reluctantly you admit defeat. “Fair’s fair. The numbers don’t lie. We’ll roll out Svengali for everyone…”
🪞 Let’s reflect…
Two scenarios. Two decisions. Each based fairly and squarely on the data.
But here’s the challenge: how did you feel about each of those outcomes?
Were you happy to accept the result when you won, but uncomfortable or skeptical about the result when you lost? If so, might that reveal a tendency to use data to confirm what you already believe, rather than to uncover what is true?
Were you equally content to go with the data in either case? If so, are you overlooking the reality that trials like this can easily mislead if they are not rigorously designed and handled with extreme statistical care?
Or perhaps you questioned the trial itself: the sample size, the context, the design, the analysis. In which case, what kind of evaluation would you trust enough to act on, even if the results challenged your convictions?
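That sample-size worry is worth making concrete. Here’s a rough simulation sketch from us (not part of Carl’s scenario, and every number in it is an illustrative assumption): even if one intervention is genuinely better, a head-to-head comparison with only 15 managers per group can easily crown the wrong winner.

```python
# A minimal simulation sketch: how often would a trial with 15 managers per
# group declare the "wrong" winner, even when one intervention is genuinely
# better? All numbers below are illustrative assumptions, not real data.
import numpy as np

rng = np.random.default_rng(42)

N_PER_GROUP = 15      # managers per intervention, as in the scenario
TRUE_UPLIFT = 0.2     # assumed true advantage of the better intervention,
                      # in standard deviations of team sales performance
N_TRIALS = 10_000     # number of simulated head-to-head trials

# Simulate team sales outcomes for both groups across many repeated trials.
better = rng.normal(loc=TRUE_UPLIFT, scale=1.0, size=(N_TRIALS, N_PER_GROUP))
other = rng.normal(loc=0.0, scale=1.0, size=(N_TRIALS, N_PER_GROUP))

# In how many trials does the genuinely better intervention still lose on
# the simple comparison of group averages?
wrong_winner = (better.mean(axis=1) < other.mean(axis=1)).mean()
print(f"The 'wrong' group wins roughly {wrong_winner:.0%} of the time")
```

With these assumed numbers, the genuinely better program loses the head-to-head in roughly three trials out of ten. The point isn’t the exact percentage; it’s that a small, noisy trial tells you much less than a clean-looking “winner” suggests.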
The acronym formed from the Svengali Centre for Advanced Management will tell you everything you need to know about my biases. But what about yours? What might they tell you about how you approach your L&D role?
Questions to think about
How would you feel about each of these outcomes?
As an L&D professional, are you looking to evaluate, or just validate?
Is ‘some evaluation better than no evaluation’?
What’s the right balance between evidence-based decision making and evaluation?
Let us know your thoughts! Hit reply to this email or drop us a line at custom@mindtools.com. We’ll be discussing your answers (and our own reflections) in an upcoming edition of The Mindtools L&D Podcast.
🎧 On the podcast
If we want to motivate learners to engage with an experience, it stands to reason that we need to present that experience in a way that feels authentic and relatable to them. But what are the limits to authenticity in the workplace? And are some of the conventions of learning design at odds with authentic communication?
Last week on The Mindtools L&D Podcast, Kineo's Matt Mella joined Ross D and Claire to discuss:
What 'authenticity' looks like in the context of learning design
Why it's important, and how to design with authenticity in mind
How Mindtools and Kineo have achieved this in practice.
Check out the episode below. 👇
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website.
📖 Deep dive
This week’s deep dive looks at the human obstacles to AI adoption, identified in ‘The Endeavor Report’ from Dr Markus Bernhardt. (Full disclosure: Markus sits on our Product Advisory Board and presents our ‘Mastering AI for Managers’ course at mindtools.com.)
In the report, Markus explores eight case studies from organizations that have used AI to drive meaningful workplace outcomes.
But it’s the blockers to adoption that I found most interesting, and which I think offer the most valuable insight for readers.
Here are just a few examples:
Chartered Accountants Ireland leveraged adaptive learning, similar to our own Manager Skill Builder, to personalize learner experiences and reduce seat time by 50%.
The blocker: Stakeholders struggled to accept the shift from face-to-face learning to an algorithm-led experience.
Gaylor Electric used AI for translation and faster course creation.
The blocker: Concerns about the accuracy of translated materials.
EPAM Systems used AI to grade learner assessments.
The blocker: Convincing learners that an AI could grade assessments as reliably as a human.
I won’t spoil the report by sharing all of the case studies, but the pattern is clear: Whether they’re end users or project stakeholders, people want reassurance that leveraging AI is going to work for them in their context.
What this report does nicely is bring to light those elusive use cases that show the impact we can have, now, with a bit of experimentation and an open mind.
Bernhardt, M. 2025. 'The Endeavor Report: State of Applied Workforce Solutions'.
👹 Missing links
Our pal Andy Lancaster did a good job this week of pulling together the arguments for and against digital badging. On the one hand, badges are great for motivation, verifying achievements and highlighting skills that traditional qualifications miss. But, as the old saying goes, beauty is in the eye of the badge-holder. The plethora of meaningless badges available online diminishes the value of badging as a concept.
Another nice curation came from Marc Zao-Sanders, pulling together comments from TikTok on the role that ChatGPT is playing as an always-available-and-ever-supportive companion. The attraction is obvious: ChatGPT is upbeat and doesn’t judge, no matter what you say. Which could be a problem. As one user said, it ‘gaslights me into thinking I'm a great human no matter what’.
🚰 What’s the cost of all that chatter?
At some point, in any discussion of AI, someone will bring up the environmental impact that it has. The actual scale of that impact though is… unclear. In this podcast from More or Less, the team investigated the claim that every AI query uses a bottle of water to return an answer. And while it’s definitely true that some water is used (mostly in electricity generation for data centres, but also in cooling for AI servers), the actual amount used depends on a variety of factors. It’s an interesting listen, even if the answer is uncertainty.
👋 And finally…
I was surprised this week to discover that someone had filmed me arriving at work with my Dispatch friend and co-author Ross Dickie.
👍 Thanks!
Thanks for reading The L&D Dispatch from Mindtools! If you’d like to speak to us, work with us, or make a suggestion, you can email custom@mindtools.com.
Or just hit reply to this email!
Hey, here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?