Like AI, L&D has an alignment problem
There's a reason most compliance training isn't very good.
Imagine you’re a super-intelligent AI, tasked with manufacturing paperclips.
To achieve your objective, you begin gathering resources and building factories. Over time, these resources become increasingly scarce, and the humans who designed you start to worry about your impact on the environment.
You realize there’s a possibility these humans might decide to switch you off, preventing you from fulfilling your objective. You also realize all that iron sloshing around in their bloodstreams might just solve the scarcity problem you’ve been having…
This thought experiment is a version of the ‘paperclip maximizer’, conceived by philosopher Nick Bostrom. It’s often cited as an example of the ‘alignment problem’, which describes the challenge of aligning an AI’s objectives with human goals, ethics, and values.
But misalignment isn’t just a problem for AI. It’s also a problem for L&D.
To illustrate this, let’s consider another thought experiment.
Suppose you’re a super-intelligent L&D professional (which, of course, you are), tasked with developing health-and-safety training for your organization.
To achieve your objective, you start by carrying out a thorough needs analysis. You take an outcomes-based approach, focusing on the behaviors colleagues will have to develop to stay safe at work. You design a short, tightly scoped learning experience, including simulations that allow colleagues to practice these behaviors in a controlled environment.
But when legal reviews your intervention, they express concern. For compliance purposes, they believe the training should be more extensive, and include several lengthy excerpts from the organization’s policy.
By the time your intervention launches, it’s no longer a short, performance-focused experience, but a bloated, text-heavy course, emphasizing ‘understanding’ and ‘awareness’ over behavior change.
To be clear, the legal team isn’t the villain in this story. But neither is L&D. The issue here is simply that their objectives are misaligned.
To me, this is a fundamental tension in compliance training, whose purpose is both to prevent (or, at least, reduce the risk of) non-compliance occurring, and to protect the organization when non-compliance does occur. And I think this is why a lot of compliance training is, to put it mildly… not very good.
So, what’s the solution?
Obviously, one option is for L&D and legal to find a middle ground that is acceptable to both parties. If legal insists that policy wording needs to be included in the intervention, then it’s L&D’s job to bring the policy to life for learners by connecting it to practical, real-world situations.
But another option would be to separate these two objectives entirely. Instead of weighing your course down with legalese and increasing your learners’ cognitive load (see Ross G’s deep dive on cognitive load for more on this), simply ask them to read and acknowledge the policy. Instead of testing learners’ understanding of the policy, test their decision-making in relevant practice scenarios. Just don’t try to do both things at once.
Need help creating compliance training that does more than just tick a box? Then email firstname.lastname@example.org or reply to this newsletter if you’re reading it in your inbox.
🎧 On the podcast
Does L&D’s obsession with proving its worth distract it from solving real business problems? Debbie Carter thinks it does. As she points out in this week’s episode of the Mind Tools L&D Podcast, this obsession doesn’t appear to be shared by our colleagues in other functions:
Why are we doing this? Surely, if we are making a difference to the business, and if the business is achieving its targets, why does L&D have to prove itself? HR doesn't.
Debbie has worked at Training Journal for over twenty years. In this time, she’s seen many things change in our field, while others have stayed the same. To hear her reflections on the state of the profession in 2023, check out the full episode below:
📖 Deep dive
I stumbled across this week’s ‘deep dive’ while reading The Voltage Effect by John A. List (look out for a podcast episode on the book soon).
In one chapter, List references the effect of WEIRDness on behavioral science research, which often makes broad claims about human psychology based on samples drawn exclusively from Western, Educated, Industrialized, Rich, and Democratic societies. And as it turns out, people from these societies are not all that representative of humanity as a whole.
In a 2010 paper published in Behavioral and Brain Sciences, researchers from the University of British Columbia found that ‘WEIRD’ subjects are, in fact, particularly unusual compared with the rest of our species when it comes to issues like visual perception, cooperation, and moral reasoning.
So what’s the takeaway for L&D?
Well, if you work for a global organization, perhaps one headquartered in Europe or North America, it suggests you should test L&D programs with a global audience. This might sound obvious, but it’s all too easy to pilot new initiatives with people who are on hand, who work the same hours as you do, and who speak the same language. You can’t assume that what works for this audience will also work for your colleagues around the globe.
Henrich, J., Heine, S., & Norenzayan, A. (2010). ‘The weirdest people in the world?’ Behavioral and Brain Sciences, 33(2-3), 61-83.
👹 Missing links
With conference season in full swing, I wanted to recommend this lovely article by ‘friend of the pod’ and fellow newsletter-writer, JD Dillon. In this piece, JD explores how the conferences he’s attended over the years have helped him build a community, find work, and even overcome personal challenges.
We Brits have long attributed our poor performance in the Eurovision Song Contest to bias and bloc voting. Terry Wogan famously resigned his commentary role over the issue, claiming Eurovision was ‘no longer a music contest’. But is there anything to these accusations? In this episode of the FiveThirtyEight Politics podcast, Galen Druke interviews statistics and economics professor Gianluca Baio, who has created a statistical model to answer that question.
If you’ve ever wondered what ‘The Lord of the Rings’ trilogy would have looked like had it been directed by Wes Anderson, then you’re in luck! Using Midjourney and other AI tools, Reddit user u/Curious_Refuge has created a trailer for a movie to rule them all. Naturally, Bill Murray plays the role of Gandalf.
On the topic of Wes Anderson, my wife recently drew my attention to a new social-media trend, which sees people document their vacations in the iconic style of the eccentric director. Here’s one of the finest examples.
Hey, here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?