The Great Training Robbery
Everybody's busy at work: Don't let pointless training rob them of what time they have.
I had to complete some e-learning this week. On the surface, it had a defined business need, a short duration, and a clear call to action. But, to be honest, it kinda sucked.
The supplier in question would probably consider this crushingly unfair. Because we work in this space, we tend to be hypercritical. But the low standard of so much e-learning isn’t an isolated issue.
When we produce ineffective training, we’re robbing individuals of their most precious commodity: time. We’re costing our organizations money, both in terms of time spent on procurement and production, and in terms of employee time. There’s an opportunity cost, where other activities aren’t happening because time is being wasted.
And we’re producing the kind of content that leads to the belief that ‘e-learning is rubbish’.
So how can we do better?
I’ll use this week’s e-learning as an example.
First, the background: ‘Phishing’ is a technique used by cyber criminals to steal sensitive information or install malicious software on your device. If you’ve ever received an email from a ‘Nigerian Prince’ or other wealthy individual asking for help, in return for a substantial reward, then you’ve received a phishing email.
The criminal’s hope is that you’ll be motivated by kindness or the offer of a financial reward - and hand over your bank details.
The Anti-Phishing Working Group (APWG), a global consortium whose members include McAfee, VISA and MasterCard, reported that there were 4.7 million phishing attacks in 2022. That doesn’t include unreported attacks.
So phishing is a real problem.
The e-learning I completed this week was designed to help me identify phishing emails and report them. That’s a real business need, and training is a good solution.
The course didn’t meander into other areas: It focused solely on helping me do this one task, which was great.
It also took about seven minutes, end-to-end, to complete. So I felt like my time was respected. To an extent.
That’s the good stuff.
Now to the bad.
It began with a video that positioned me as a hero, able to protect my organization by using the ‘Phish Alert Button’. A second video introduced a ‘mentor’ character who repeated this message. A third video told me that my IT department was concerned about this, and more-or-less repeated the message of the first two videos.
The fourth video showed an example of ‘spam’ and an example of ‘phishing’, but it was difficult to compare the two. They only appeared briefly on screen and the voiceover didn’t stop to give me time to read them. This is an example of extraneous cognitive load, which we’ve discussed in a previous edition.
The fifth video introduced the ‘Phish Alert Button’, and underlined how easy it is to report a phishing email. I knew this, of course, because I’d already been told in the first video.
The sixth video re-visited the ‘spam vs phishing’ concept from video four, and warned me not to waste my IT department’s time by reporting spam as malicious.
The seventh video showed me, in an abstract animated sense, how to use the ‘Phish Alert Button’. You click it.
The eighth video showed me how to do this in the Outlook desktop app, which I don’t use. I tend to use the web version, so I went to the web app and had a hunt around. The button was in a different place, and looked a bit different.
The final video summarized the above and introduced a convoluted process: ‘Stop. Look. Think. Report.’ To return to the concept of ‘extraneous cognitive load’, this treats one job (“Report suspicious emails by clicking a button”) as four separate stages.
I had intended to revisit some of the questions from the end-of-course assessment but, having completed the course, it seems I can’t view them again. I remember them being very easy.
So, on the plus side, it only took about seven minutes to complete the training.
But even this felt like my time was being taken from me. I learned nothing, and at no point did I actually have to practice the task I was being told to do: identify and report suspicious emails.
Here’s an alternative approach to this design:
Tell the learner that their organization is at risk from phishing emails, and that all they need to do is click a button to report these.
Ask the learner which email tool they use and show them where the button is located for that tool.
Present the learner with a series of emails and ask them to decide whether to report them or not. If they can do this four or five times in a row, correctly, then mark the course as complete. If they make mistakes, keep providing feedback and fresh examples until such a time as they pass.
This approach would take just a couple of minutes for learners who have experience in this area, while building the capability of those who need more practice.
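For readers who like to see the mechanics, here’s a minimal sketch of that mastery loop in Python. The email examples, function names, and passing streak of five are all hypothetical illustrations, not part of any real course platform:

```python
import random

# Hypothetical pool of example emails: (subject line, is_phishing)
EXAMPLES = [
    ("Your parcel is held at customs - pay $2 to release it", True),
    ("Q3 all-hands meeting moved to Thursday", False),
    ("URGENT: verify your password or lose account access", True),
    ("Weekly cafeteria menu", False),
    ("A wealthy benefactor needs your bank details", True),
    ("IT maintenance window this Saturday", False),
]

REQUIRED_STREAK = 5  # consecutive correct answers needed to pass


def run_mastery_loop(answer_fn):
    """Present fresh examples until the learner answers
    REQUIRED_STREAK in a row correctly, giving feedback on
    mistakes. Returns the total number of attempts taken."""
    streak = 0
    attempts = 0
    while streak < REQUIRED_STREAK:
        subject, is_phishing = random.choice(EXAMPLES)
        attempts += 1
        if answer_fn(subject) == is_phishing:
            streak += 1
        else:
            # A mistake resets the streak and triggers feedback
            streak = 0
            verdict = "phishing" if is_phishing else "legitimate"
            print(f"Feedback: '{subject}' was {verdict}.")
    return attempts
```

Notice that an experienced learner sails through in five quick answers, while anyone who makes a mistake keeps getting feedback and fresh examples — exactly the behavior described above.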
More great training than great training robbery.
Does your organization’s e-learning build capability? Or does it rob learners of their time? Contact email@example.com or reply to this newsletter if you’d like us to help.
🎧 On the podcast
We recorded a bit of a strange episode this week. Earlier this year, friend-of-the-show Will Thalheimer visited Ross G at the ATD Conference in San Diego. They got talking about what Ross was working on, and it quickly became apparent that Will had no idea what Mind Tools did.
This is no fault of Will’s. We frequently meet friends in the L&D industry, and people who have been on our show multiple times, who cannot say even vaguely how Mind Tools actually makes money.
To address this, Will offered to come on our show to interview Ross G and Owen about what the heck Mind Tools does. If you’ve ever wondered, now’s your chance to find out. 👇🏾
📖 Deep dive
Since the launch of ChatGPT, there’s been a lot of debate around how AI will impact knowledge work, and how organizations can leverage the technology to improve performance.
In a new study from Harvard Business School, a team of social scientists partnered with consultants at Boston Consulting Group to examine the effect of AI on a series of realistic, knowledge-intensive tasks. After establishing a performance baseline for the study’s participants, the researchers assigned subjects to one of three groups: no AI access, GPT-4 access, or GPT-4 access with a prompt-engineering overview.
They found that the consultants in the AI groups finished 12.2% more tasks on average, completed them 25.1% more quickly, and produced results of 40% higher quality than those without AI access.
Moreover, the group who received the prompt-engineering overview (i.e. training) performed better than those who were simply given access to AI without any additional support.
This has a few clear implications for organizations and for those of us in L&D:
under the right conditions, AI can meaningfully improve the performance of knowledge workers;
those who don’t embrace the potential of AI tools risk being left behind;
L&D has a role to play in helping colleagues maximize the value of these new tools.
But there are also risks we need to bear in mind. Another interesting finding from the study was that when faced with a task the AI couldn’t perform correctly, the consultants without AI access were more successful than those with access. One possible interpretation of this is that the groups with AI access had too much faith in the confident hallucinations of GPT-4.
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013.
👹 Missing links
"Hola y bienvenidos al Mind Tools L&D Podcast, un programa semanal sobre trabajo, rendimiento y aprendizaje!" ("Hello and welcome to the Mind Tools L&D Podcast, a weekly show about work, performance and learning!"). If you’ve ever wondered what I’d sound like if I could speak Spanish, you might soon be able to find out. In partnership with OpenAI, Spotify is piloting a new machine-translation service, which promises to replicate podcasters’ voices in a variety of languages, including Spanish, French and German. For now, the company is testing out the new tool with big names like Lex Fridman, Monica Padman and Dax Shepard. We assume our invitation got lost in the mail.
Another story that caught my attention in The Verge was the news that Google’s forthcoming ‘Duet’ AI will not only capture notes in Google Meet — it will actually be able to attend meetings on your behalf. If you’re running late or double-booked, Duet will generate text about what you might have wanted to discuss, and share these notes with participants during the call. The feature isn’t available yet, but The Verge reports it is likely to launch at some point next year.
Since his departure from FiveThirtyEight, journalist and statistician Nate Silver has been posting regularly on his Silver Bulletin Substack. Broader in scope than his work at FiveThirtyEight, the Silver Bulletin combines political analysis with contrarian takes on everything from Elon Musk to Taylor Swift. In this article, Silver asks whether America has better food than France and, in his trademark style, offers five nerdy ways of answering that question.
👋 And finally…
As we bid farewell to the great Sir Michael Gambon (forever Dumbledore in our hearts), this playlist has been the soundtrack to our work week.
Thanks for reading The L&D Dispatch from Mind Tools! If you’d like to speak to us, work with us, or make a suggestion, you can email firstname.lastname@example.org.
Or just hit reply to this email!
Hey here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?