Where have all the answers gone?
Advanced analytics turns the refresher course into a business intelligence tool.
In the middle of last year, Mindtools acquired Kineo, one of the world’s best-known custom development houses.
For our team of learning designers, it was a big moment. Not least because we could immediately see a clear opportunity to combine Kineo’s advanced analytics with the work that our behavioural science team were doing.
For years, we’d been asking questions of our learners: pre-tests, formative activities, summative assessments. But limitations in standards, authoring tools and LMS reporting meant most responses disappeared into a black box.
Over the past few months, all of that has been changing.
📜 A brief history of SCORM
Why is it that the traditional e-learning course is such a black box?
In the early 2000s, the SCORM (Sharable Content Object Reference Model) standard was developed as a way of ensuring that an e-learning course developed by one vendor would function in a learning management system created by another.
For customers, this approach offered choice and freedom. They could work with multiple e-learning vendors or, as rapid authoring tools emerged, they could build their own courses.
But the data available on user activity within a SCORM course was limited and, in practice, most LMS reporting reduced this to completion status and a final score. Though SCORM 2004 expanded the interaction model, allowing question-level data to be captured more easily, many authoring tools and LMSs never fully supported it.
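To make the contrast concrete, here’s a rough sketch (in Python, purely for illustration; real courses set these values via the JavaScript runtime API, and the specific values are invented) of the kind of key/value data a SCORM package reports to an LMS, and what SCORM 2004’s interactions model added:

```python
# Illustrative sketch only: the key/value data a SCORM course reports to
# an LMS. Element names follow the CMI data model; the values are made up.

# SCORM 1.2-era reporting often boiled down to this:
scorm_12_data = {
    "cmi.core.lesson_status": "passed",
    "cmi.core.score.raw": "80",
}

# SCORM 2004 added a richer interactions model for question-level capture:
scorm_2004_interaction = {
    "cmi.interactions.0.id": "q01-fire-safety",
    "cmi.interactions.0.type": "choice",
    "cmi.interactions.0.learner_response": "b",
    "cmi.interactions.0.correct_responses.0.pattern": "c",
    "cmi.interactions.0.result": "incorrect",
}

def summarise(data: dict) -> str:
    """Mimic the typical LMS report: completion status plus a score."""
    status = data.get("cmi.core.lesson_status", "unknown")
    score = data.get("cmi.core.score.raw", "n/a")
    return f"{status} ({score})"

print(summarise(scorm_12_data))  # passed (80)
```

Even when a course dutifully records every interaction, most reporting only ever surfaces that one-line summary.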
The other problem with any technical standard is that it’s difficult to change. The QWERTY keyboard layout dates back to the 1870s. The gauge of railway tracks dates back to a similar era.
Changing either is difficult because switching costs are enormous.
So it is with SCORM.
🥫 Enter xAPI
In the 2010s, an alternative standard known as xAPI (or Tin Can) was launched as a way of tracking learning events from multiple systems. Your LMS, sure, but also your mobile app, game or simulation.
This data would be captured by a Learning Record Store (LRS), and herein lies the problem: customers needed to buy another platform.
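For the curious, an xAPI “statement” is just a small JSON document of the form actor–verb–object, sent to the LRS over HTTP. A minimal sketch (the learner, activity ID and names below are invented; the verb URI is one of the standard ADL verbs):

```python
import json

# A minimal xAPI statement: "actor verb object", expressed as JSON.
# Real statements are POSTed to an LRS's statements endpoint with
# authentication headers; this just shows the shape of the payload.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/fire-safety-module",
        "definition": {"name": {"en-US": "Fire Safety Module"}},
    },
}

print(json.dumps(statement, indent=2))
```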
Combine this with patchy adoption, and xAPI became a square peg in a world of round holes: a neat idea that quickly got displaced by enterprise business intelligence platforms.
📈 Advanced analytics
Which takes us back to Kineo’s advanced analytics capabilities.
Rather than replace SCORM, the Kineo team accepted that the standard was here to stay, at least for now, and began tracking anonymised data via an external database.
This approach means that learner responses can be surfaced wherever we want: for example, in our own dashboards or in a tool like Power BI.
What value does this add?
🔁 Iterative improvement - When you can see how learners are responding to questions, you can improve your content. You can review problem areas, and assess whether a question is justifiably difficult or just badly designed.
⚠️ Spotting business risk - We often use ‘test outs’ in our compliance courses. If people can demonstrate competence upfront, we don’t force them to complete the corresponding content. Now, we can use that data to identify areas where learners often struggle and flag that to our clients.
💸 Demonstrating cost savings - Another benefit of using ‘test outs’ to reduce seat time is that it saves the business money on mandatory learning. Ten minutes saved, multiplied by your entire employee population, is a lot of time that could be spent elsewhere. While many LMSs will report on time in course, our dashboards compare average seat time (after test out) to total seat time (the whole course), making this information easily accessible.
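The arithmetic behind that claim is worth spelling out. A back-of-the-envelope sketch (all figures invented for the example):

```python
# Back-of-the-envelope illustration of the seat-time saving described
# above. Every number here is made up for the example.
minutes_saved_per_learner = 10
employees = 12_000

total_minutes = minutes_saved_per_learner * employees
total_hours = total_minutes / 60          # 2,000 hours
working_days = total_hours / 7.5          # assuming a 7.5-hour working day

print(f"{total_hours:,.0f} hours ≈ {working_days:,.0f} working days saved")
```

Ten minutes barely registers for an individual, but across a workforce it adds up to hundreds of working days.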
💡 Pre- and post-confidence - Many courses use confidence sliders to give learners an insight into how they have changed. The learner rates their confidence at the start, and again at the end, and is asked to reflect on the difference. This both supports metacognition (“thinking about thinking”), and helps surface the value of the learning experience. With advanced analytics, aggregated results can then be surfaced to demonstrate the impact of learning on learner confidence.
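The aggregation itself is simple. A minimal sketch, with invented ratings on a 1–5 scale:

```python
# Illustrative aggregation of pre/post confidence ratings (1-5 scale).
# The ratings are invented for the example.
pre = [2, 3, 2, 4, 3]
post = [4, 4, 3, 5, 4]

avg_pre = sum(pre) / len(pre)     # 2.8
avg_post = sum(post) / len(post)  # 4.0
shift = avg_post - avg_pre

print(f"Average confidence rose from {avg_pre:.1f} to {avg_post:.1f} "
      f"(+{shift:.1f})")
```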
🗳️ Social questioning - Capturing question-level responses in an external database means that those responses can then also be surfaced back inside the course. For example, a learner could be asked to identify the factor that has the biggest impact on sales performance. On submission, they then see how their response compares to all other responses.
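Behind the scenes, that comparison is just a share-of-responses calculation over the stored answers. A sketch, with invented response data:

```python
from collections import Counter

# Illustrative: aggregate stored responses so a learner can see how their
# answer compares with everyone else's. The data is invented.
responses = ["coaching", "incentives", "coaching", "training",
             "coaching", "incentives"]

def response_shares(responses: list[str]) -> dict[str, int]:
    """Return each option's share of all responses, as a rounded percentage."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * n / total) for option, n in counts.items()}

shares = response_shares(responses)
print(shares)  # {'coaching': 50, 'incentives': 33, 'training': 17}
```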
🧠 Behavioral insights - Our most exciting development. The Mindtools behavioral scientists have used various data collection methods for years, but building these into a SCORM course creates a more streamlined experience for users. And it transforms your annual compliance course into a business intelligence tool, surfacing both behavioral risks and competency gaps that you can then action.
Some of these capabilities are already possible with SCORM 2004 or xAPI, where authoring tools or LMSs support it. But, with advanced analytics, we can now provide this level of insight regardless of the host platform.
For us, it’s been fun to experiment with these capabilities, which offer a nice bridge from the world of SCORM to the integrated platforms of the future.
Want to share your thoughts on this week’s newsletter? Need help with your next project? Get in touch by emailing custom@mindtools.com or reply to this newsletter from your inbox.
🎧 On the podcast
In the introduction to his latest ‘Global Sentiment Survey’ report, Donald Taylor describes this year’s results as ‘the most significant’ in the survey’s history.
In this week’s episode of The Mindtools L&D Podcast, Don joins Ross D and Ross G to discuss:
whether interest in AI has peaked;
why Don believes the results reflect a breaking of old norms;
the steady climb of ‘showing value’ and the relative decline of ‘learning analytics’;
the key challenges L&D professionals are facing;
the things these professionals are doing now that they weren’t doing a year ago.
Check out the episode below. 👇
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website.
📖 Deep dive
This year’s Global Human Capital Trends report from Deloitte just dropped, based on a survey of 9,000 business and HR leaders across sectors and in 89 countries.
The report looks at how these leaders intend to grow their businesses, and it will come as no surprise that the majority are focused on speed. Economic uncertainty, political fallouts, and spreading conflict make it essential that organizations are able to pivot quickly but effectively.
That means adapting, not panicking.
One way to do this is to adopt new technologies that make decision-making faster, better, and more informed. Like, AI?
But the authors sound a warning note:
‘Those taking a tech-focused approach are 1.6x more likely to not realize returns on AI investments that exceed expectations compared to those that take a human-centric approach.’
The alternative is a human-centric focus, where culture, workflows and accountability mechanisms all have to evolve.
In essence, when organizations focus on technology the gains are limited by existing behaviors and norms. To truly make a difference, colleagues need to see the potential of these technologies, embrace them, and build new working practices that maximize their potential.
Poynton, S. et al. (2026). 2026 Global Human Capital Trends.
👹 Missing links
Training on AI skills development has focused too much on writing prompts. That’s the view of researchers from Stanford University, who argue that when training is kept at this level it tends to funnel employees toward low-impact activities like re-writing emails. The real benefit comes when they start to think like product managers: identifying meaningful problems, experimenting with solutions, and integrating those solutions with the wider ecosystem. The driving force behind this shift? How managers role model this behavior. (Hat tip Alyn Kinney for this one).
In other AI news this week, Anthropic have fallen out with the US Government over the use of their AI technology in military operations. According to the BBC, Anthropic CEO Dario Amodei has said that he does not want his technology used for “autonomous kinetic operations” (a rather chilling phrase for having an AI make military targeting decisions). In response, the US Government is phasing out existing work with Anthropic and has indicated that other companies with military contracts should do the same. This might not seem that relevant to those of us working in L&D, but it’s a reminder that we haven’t yet reconciled the ethics of AI, and that the tools we choose to adopt come with both moral and commercial challenges.
⏱️ OpenAI dives into learning measurement
One of the big problems with AI (aside from autonomous kinetic operations) is the issue of ‘cognitive offloading’: where humans get dumber because the AI is doing the hard work for them. In this post, the “third Ross” (Ross Stevenson) breaks down OpenAI’s new ‘Learning Outcomes Measurement Suite’ (LOMS). The suite uses system instructions that tell the model how to behave as a teacher, then analyzes learner responses against indicators of learning, the quality of those responses, changes over time, and standardised assessments. Ideal, provided the learner is motivated enough to engage in the first place!
👋 And finally…
One of my direct reports (OK, it was Ross Dickie) sent me this clip on Instagram the other day. The boy can dream.
👍 Thanks!
Thanks for reading The L&D Dispatch from Mindtools Kineo! If you’d like to speak to us, work with us, or make a suggestion, you can email custom@mindtools.com.
Or just hit reply to this email!
Hey here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?