Let's pretend we're going to evaluate your project
(Even though we may never get around to it)
Last week, I dropped my iPhone in a bathroom and smashed the screen protector. For the purposes of this illustration, no further details are necessary. Suffice it to say that the smashed protector was a problem, the new one I ordered was a solution, and applying the new protector to my phone resolved the issue.
If only workplace learning were so easy. Usually we’re not certain what the problem is, our solution is based on a hunch, and we have no idea whether the issue has been solved afterwards.
Across our industry, talented professionals are trying to address this: Will Thalheimer’s LTEM model is great; Kevin M Yates offers his L&D Detective Kit for free; and last year’s Learning Technologies Conference offered 10 exhibition sessions on measurement and evaluation.
But let’s be honest: measuring impact is hard. Metrics often don’t exist; the time it takes could be better spent elsewhere; and many teams do not have the skills to measure effectively.
Here’s some internal Mind Tools data to back up this claim:
For organizations in Central and Local Government, evaluating impact is the least common capability L&D teams have in-house (14%), and yet most L&D leaders (77%) report needing these skills immediately.
— Mind Tools for Business internal insights, based on responses to our Learning Performance Benchmark.
So let’s zig when others zag: rather than stress about how to measure the success of an L&D intervention, let’s just pretend we’re going to do so.
Setting project outcomes that align with business goals helps us focus on what’s important - even if we never actually measure the impact.
Contrast these two potential project outcomes:
Option 1: Create an onboarding programme for my organization
Option 2: Improve new hire time-to-competence by 20%
Option 1 is relatively easy to achieve, but there’s no reason to exclude anything from that programme. Every stakeholder with enough clout will be able to have a say in what’s included.
Option 2 is more difficult, and we may have no idea how to measure ‘time-to-competence’. But framing it this way means we can now start speaking to recent hires and asking them useful questions like: ‘What would have helped you get up to speed more quickly? What blockers did you encounter?’
Our ‘pretend’ metric helps shape our intervention, gives us a reason to reject extraneous content, and one day - maybe - we might even measure if it worked.
If you want help defining outcomes - or measuring the impact of your projects - get in touch.
🎧 On the podcast
On the topic of measurement, Don Taylor’s Global Sentiment Survey has been measuring what L&D practitioners think will be ‘hot’ in workplace learning for the last ten years. And while certain trends have come and gone, the need to show value and demonstrate impact has remained more or less constant. As Don says:
All options trend downwards over the course of the survey. That is true, except for […] ‘consulting more deeply with the business’ and ‘showing value’, [which] have stayed resolutely in the middle of the table. […]
To hear Ross D’s full interview with Don, and discover what folks in L&D are excited about in 2023, listen here:
Don’s write-up of his survey results is available on his website.
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website. Want to share your thoughts? Get in touch @RossDickieMT, @RossGarnerMT or #MindToolsPodcast
📖 Deep dive
Patti Shank, very much a friend-of-the-show, has been working on a series of articles on how to create effective videos for learning. In part 3 - and this is very reductive of me - she outlines why we should personalize our videos, chunk them, and avoid elements that distract from learning.
Patti’s post is well researched, packed with advice, and does an excellent job of translating evidence-based insights into actions that we as learning experience designers can take when producing video.
Check out the article at eLearningIndustry.com.
👹 Missing links
👩🏫 Does ‘flipped learning’ work?
Predictably, the answer seems to be ‘it depends’. The concept is alluring: learners study materials alone so that time together can be better spent on collaborative activities. But that only works if learners actually do the pre-work. If the teacher or trainer has to repeat material for learners who didn’t do the pre-work, they’re actually increasing the amount of passive content delivery. If you’re designing flipped learning sessions, ask yourself: why would anyone do the pre-work?
✏️ “Everybody’s good at something. Being bad at something reminds us of how we ever got good at anything”
You’re never too old to learn, but at a certain point it’s tempting to feel like it’s not worth the bother. In this article, New Yorker writer and critic Adam Gopnik shares a meditative look at what it’s like to learn new skills later in life, and the effect that has on him and those around him.
💻 How tech has evolved - and what comes next
This one’s kind of gimmicky, but it’s a fun visualization of how everything we used to have on our desks has now been absorbed by our phone - with a tongue-in-cheek look into the future.
Am I a ‘business podcast bro’? I can’t quite decide, but after six years hosting The Mind Tools L&D Podcast I found this skit from Michael Spicer a timely reminder to keep my own B.S. in check.
Thanks for reading The L&D Dispatch from Mind Tools! If you’d like to speak to us, work with us, or make a suggestion, you can get in touch @RossDickieMT, @RossGarnerMT or email firstname.lastname@example.org.
Or just hit reply to this email!