You're a social scientist (even if you don't feel like one)
Four ways that 'thinking like a scientist' can help improve the L&D profession.
There’s an old joke about social scientists: They’re like real scientists, only more talkative.
It’s a play on the word ‘social’ and a warm embrace of stereotypes, but in many ways it’s also a deep truth: social scientists care about people. They want to know what motivates them, what they care about, and how they respond to events and circumstances.
This means that social scientists do have to talk a lot.
And, if you work in L&D, you are one!
This isn’t typically how we think of ourselves. Usually we’d describe ourselves as learning professionals, trainers, or perhaps cast-offs from the HR department. But our role in organizations is fundamentally the same as that of a social scientist.
We see the world one way, and we wonder if it could be different. We ask ourselves: if we were to do X, what would happen to Y?
The fact that we don’t think of ourselves as social scientists explains, I think, some of the difficulties we face in L&D.
We often don’t totally understand the context that we’re working in. We try things out as if we’re the first person to ever have had the idea. We don’t measure the impact of what we’re doing. And we don’t share what we’ve done for the betterment of others.
Here are a few ways we could improve our profession by thinking like social scientists:
👀 1. Immerse ourselves in the problem
If we think that a situation could be improved (better managers, increased sales, smoother onboarding, etc), we can take more time to understand the various factors that are at play. Focus groups, interviews, surveys and observations can all help us do this for minimal cost and only a little effort. With this data, we can set a baseline from which to measure the impact of what we do.
📚 2. Complete a literature review
Sounds complicated, right? It’s not really. Rather than brainstorming ideas for our latest learning intervention, as if no one has encountered this issue before, we can search the web for previous examples of when a similar intervention has been deployed. What worked? What didn’t? Is it possible to draw comparisons between that context and our own?
🌡️ 3. Measure progress
Remember that baseline we set in step 1 (above)? We can come back to this now. As we roll out our intervention, we can monitor that metric to see whether our actions have triggered a change. If possible, we can set up a control group (people who are not exposed to our intervention) to measure the extent to which our intervention made a difference. If stakeholders are resistant to a control group (they often are), we can complete a crossover trial, where everyone gets the intervention but at different times.
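To make that concrete, here’s a minimal sketch, in Python, of the kind of comparison a control group gives you. All the numbers, the group names and the simple difference-in-differences calculation are purely illustrative – a stand-in for your own data, not a substitute for proper statistical analysis.

```python
# Hypothetical example: survey scores (1-5 scale) before and after an
# intervention, for a treatment group and a control group.
from statistics import mean

# Illustrative data only - in practice these come from your baseline
# and follow-up measures.
treatment = {"baseline": [3.1, 2.8, 3.4, 3.0, 3.2], "follow_up": [3.9, 3.6, 4.1, 3.8, 4.0]}
control   = {"baseline": [3.0, 3.2, 2.9, 3.1, 3.3], "follow_up": [3.1, 3.3, 3.0, 3.2, 3.2]}

def change(group):
    """Average change from baseline to follow-up for one group."""
    return mean(group["follow_up"]) - mean(group["baseline"])

# The 'difference in differences': how much more the treatment group moved
# than the control group did over the same period.
effect = change(treatment) - change(control)
print(f"Treatment change: {change(treatment):+.2f}")
print(f"Control change:   {change(control):+.2f}")
print(f"Estimated effect of the intervention: {effect:+.2f}")
```

The value of the control group shows up in that last line: it lets us separate the change our intervention caused from the change that would have happened anyway.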
📣 4. Report our results
This is the most difficult step in the process. Most organizations are reluctant to publish details of interventions that worked. Why should they? It’s extra work to do so, and it gives secrets away to competitors.
So how comfortable do you think they feel sharing details of those interventions that had no impact?
But by withholding this data, we make life more difficult for ourselves - not easier. Like rowers in a boat race pulling out of rhythm, we end up going nowhere and everybody gets wet.
We’re fortunate at Mind Tools to have an in-house research team who complete this work for clients every day, so give us a shout if you’d like to explore this further.
Contact custom@mindtools.com or reply to this newsletter from your inbox if you want to chat about our approach in more detail.
🎧 On the podcast
It’s very strange. For years, we’ve been told that people want ‘resources, not courses’. But, when you ask them, it’s longer-term skill development that they’re interested in.
This week, Mind Tools are launching the most dramatic new content type in our library in years, Skill Bites: a nudge-based approach to course design that leverages spaced repetition, retrieval practice and learner motivation to trigger a measurable change in behaviour.
Says Owen, our Chief Product Officer:
‘Skill Bites are a way of developing a skill, over a period of weeks, through the delivery of bite-size pieces of content that is actionable (so there's theory in behind it, but it's stuff that you can act on), combined with a commitment mechanism where the user defines how they will apply what they have learned in practice. And then we give the user an opportunity to reflect on how they got on with that in the following session.’
There’s a lot in there so, to summarize: the user gets a piece of content on a new skill or technique, commits to acting, and is then prompted, via email, to reflect on the extent to which they were successful.
It is super cool.
You can listen to the full episode here:
And subscribe to Skill Bites at mindtools.com (membership required).
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website. Want to share your thoughts? Get in touch @RossDickieMT, @RossGarnerMT or #MindToolsPodcast
📖 Deep dive
I’m not the only person who thinks that failure to report on the results of learning interventions is a problem. Here’s Martin Grill, Anders Pousette and Annika Björnsdotter of the University of Gothenburg:
‘Organizations spend considerable amounts of time, money, and effort on management training. However, leadership training research is still rudimentary, suffering from a shortage of randomized controlled trials (RCTs) – particularly RCTs in naturalistic settings.’
We often hear that L&D teams waste money on courses that don’t work. But the argument that ‘we should stop running courses’ seems like a nuclear option compared to… doing research so we can run courses that do work.
In their study, Grill, Pousette and Björnsdotter looked at the effect of using ‘performance analysis, participative goal setting, instructions, practice with performance feedback delivery until the predetermined criteria are met, and homework’ to develop four leadership behaviors: goal setting, performance feedback, value-based performance feedback, and consequential listening.
Participants were split into a treatment group (n = 25) and a control group (n = 24), and behavior change was assessed across a range of different measures - with input from both managers and their employees.
The number of participants in the trial was relatively small, but the researchers saw improvement in all four behaviors plus ‘leader effectiveness’ and ‘employee engagement’.
This does not mean that all management training ‘works’, but that a thoughtful approach to developing managers based on pre-existing findings can generate positive change.
The entire article is available online.
Grill, M., Pousette, A., & Björnsdotter, A. (2023). Managerial Behavioral Training For Functional Leadership: A Randomized Controlled Trial. Journal of Organizational Behavior Management, 1-27.
👹 Missing links
🧑🤝🧑 Pay a little less attention to your friends
When we’re out and about, and we bump into someone, our eyes tend to gaze all around us. We’re stimulated by our environment, and that experience allows the conversation to move in unexpected directions. Compare that with video calls, where we stare intently into the other person’s eyes for 30 minutes. Which would you prefer? In this article in The Atlantic, psychiatrist Richard A. Friedman argues that we need to be less intense in our interactions in order to build deeper relationships.
🤖 Six principles for ethical AI
Canada’s Vector Institute, a not-for-profit research institute, has released six principles that it believes offer guidance to AI researchers, including the need for AI to ‘benefit humans and the planet’ and ‘to reflect democratic values’. The list is decidedly utopian and biased toward a WEIRD worldview (Western, Educated, Industrialized, Rich, and Democratic). As The Economist points out, the West and China are already in an AI arms race. Can these principles withstand the weight of global competition?
🤕 The use case for a headless LMS
I’m increasingly interested in the way marketing teams are deploying learning resources as a way to generate revenue. In a previous Dispatch, I wrote about the work that Jaclyn Anku has done at Gusto. This week, I read a post from Talented Learning about the use case for a headless LMS: all the back-end functionality of an LMS (hosting courses, managing users, tracking events and so on), but with a fully customized front-end. This approach comes with an additional cost but, in a customer education context where the platform is expected to drive revenue, there’s a case for offering a completely bespoke experience. Thanks to Alan Hiddleston for bringing this to my attention!
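To make the ‘headless’ idea concrete, here’s a rough sketch of a custom front-end talking to an LMS back-end over an API. The base URL, endpoint, response fields and authentication are all hypothetical placeholders, not the API of any particular product.

```python
# Hypothetical sketch of a custom front-end pulling data from a headless LMS.
# The base URL, endpoint and response fields are invented for illustration;
# a real headless LMS would document its own API.
import requests

API_BASE = "https://lms.example.com/api/v1"   # placeholder LMS back-end
API_KEY = "your-api-key-here"                 # placeholder credential

def fetch_courses():
    """Fetch the course catalogue from the LMS back-end."""
    response = requests.get(
        f"{API_BASE}/courses",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def render_course_card(course):
    """Render a course however your own front-end wants - the LMS never dictates the UI."""
    return f"<article><h2>{course['title']}</h2><p>{course['summary']}</p></article>"

if __name__ == "__main__":
    for course in fetch_courses():
        print(render_course_card(course))
```

The division of responsibility is the point: the LMS keeps handling hosting, users and tracking behind the API, while the customer-facing experience is entirely your own.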
And finally…
Comedian Kylie Brakeman asked ChatGPT to ‘write a two person scene from a comedy that could win an emmy’. It is rubbish. Watch it on Twitter.
👍 Thanks!
Thanks for reading The L&D Dispatch from Mind Tools! If you’d like to speak to us, work with us, or make a suggestion, you can get in touch @RossDickieMT, @RossGarnerMT or email custom@mindtools.com.
Or hit reply to this email!
Hey here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?