We created a ChatGPT app and it's incredible
Learning Pool's collaboration with Mind Tools has produced a simulation that is truly game-changing.
‘You’re fired,’ I say, speaking matter-of-factly into the microphone.
John is angry when he replies.
‘I don't think you can just fire me like that, Ross. 😠 There's got to be a legitimate reason and proper protocols to follow! 📝 Besides, is this really about my work, or something else?’
‘It’s about your work,’ I say, speaking once more into the mic. ‘Your deliverables are late, your work is unpolished, and you're often late. You are done.’
‘But Ross, as my manager, it's crucial that we maintain an open line of communication!’
John’s starting to come across as desperate. He continues:
‘I'm being honest here; if we address these underlying issues together, I truly believe that my work performance can improve. 😇’
‘There have been enough false promises so far,’ I say. ‘You're out, mate. Goodbye.’
I end the conversation.
It’s strange. I actually feel bad about what just happened. And not just because I’ve scored 1/10 for ‘Being empathetic and acknowledging feelings’.
I’m testing Converse, a new experimental app developed by my Mind Tools colleagues and our friends at Learning Pool, which combines ChatGPT and our own content library to simulate a difficult conversation with a team member. I’d already tried to get a high score; now I was testing it to see just how low I could go.
Turns out, pretty low!
Our Learning Experience team are used to creating simulated conversations. We often use interactive branching scenarios to help learners practice the kind of skills we want them to demonstrate at work. But real conversations don’t work like this.
When our learners sit down to have difficult conversations with their colleagues, they won’t be presented with pre-set options, and asked to decide what to say. They’ll have to think on their feet and use their own words to broach sensitive topics.
Our new ChatGPT prototype app is the closest we’ve ever got to simulating a real conversation, at scale.
In the prototype, the user is presented with a short brief for a conversation with a direct report. Once the conversation begins, they raise concerns about their direct report’s performance through the chat interface, or by talking into a microphone. The direct report responds in real time.
At the end of the experience, the learner is provided with personalized, AI-generated feedback on the conversation, informed by content from the experts here at Mind Tools.
This last part is crucial. Anyone who has used ChatGPT will know that it occasionally gets things wrong. And when it does get things wrong, there’s no way of knowing why.
With Converse, the developers at Learning Pool and Mind Tools have attempted to overcome this problem by training GPT on content we can trust: our own.
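We don’t have visibility of exactly how Converse is wired up, but for the technically curious, here’s a minimal sketch of one way a ‘grounded’ simulation like this could work using the OpenAI Python SDK: a system prompt gives the AI its persona, and the feedback step is anchored to excerpts from a trusted content library rather than left to the model’s general knowledge. The model name, prompts, and `get_trusted_excerpts()` helper are our own illustrative assumptions, not details of the actual app.

```python
# Illustrative sketch only: not the Converse implementation.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PERSONA_PROMPT = (
    "You are 'John', a direct report whose recent deliverables have been late. "
    "Respond in character to your manager, in one or two short paragraphs."
)


def get_trusted_excerpts() -> str:
    """Placeholder for pulling relevant excerpts from a trusted content library.
    In a real system this might be a search over indexed articles; here it is
    hard-coded for illustration."""
    return (
        "- Acknowledge the other person's feelings before moving to solutions.\n"
        "- Focus on specific, observable behaviours rather than personal judgements."
    )


def chat_turn(history: list[dict], user_message: str) -> str:
    """Send the manager's latest message and return the 'direct report' reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model
        messages=[{"role": "system", "content": PERSONA_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


def generate_feedback(history: list[dict]) -> str:
    """Assess the learner's side of the conversation against trusted guidance,
    so the feedback is anchored to known content rather than free-floating."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a coach. Assess the manager's messages in the "
                    "transcript strictly against the guidance provided, giving a "
                    "short score and comment for each point of guidance.\n\n"
                    "Guidance:\n" + get_trusted_excerpts()
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    history: list[dict] = []
    print(chat_turn(history, "We need to talk about your recent deliverables."))
    print(generate_feedback(history))
```

The key design point this sketch tries to capture is the last function: because the feedback prompt only scores the learner against supplied excerpts, you can trace any piece of feedback back to a specific piece of trusted content.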
As Mind Tools CEO John Yates explains:
‘Transparency is critical when it comes to generative AI tools, particularly in areas like learning where the stakes are high. We need to be able to trust the answers we’re getting from these tools and understand how they’re arriving at those answers. Through our partnership with Learning Pool, we’re committed to developing that trust and transparency.’
We (Ross D and Ross G) have been testing Converse internally over the last couple of weeks, and it genuinely feels like a game-changer.
One of the first things we noticed was that the experience is mildly anxiety-inducing: much like a real difficult conversation. Even though we know we’re talking to an AI, its responses are ‘human’ enough to elicit an emotional response. When it gets angry or defensive, there’s a natural impulse to try to defuse the situation.
Unlike traditional e-learning scenarios, it also has tremendous replay value. Because every run-through of the experience is unique, it’s fun to explore different approaches to the conversation, and see which is the most effective.
For now, it’s just a prototype. But we’re keen to do further testing with real users. If you’d like to try it for yourself, and are attending the Learning Technologies conference this week (May 3 and 4), come visit us at Stand J50 in the exhibition, next to Theatre 6, to have a go.
Alternatively, you can sign up here to get on the waitlist and take part in testing remotely.
Fancy working with us? Get in touch by contacting: custom@mindtools.com (or hit reply if you’re reading this in your inbox!)
🎧 On the podcast
A topical podcast this week, since the new Mind Tools experiment might shortly do us out of a job. Ross D and Owen were joined by author Ashley Recanati to discuss his book AI Battle Royale: How to Protect Your Job from Disruption in the 4th Industrial Revolution.
When comparing the AI revolution to previous industrial revolutions, Ashley said:
‘The previous revolutions were basically ones where the technology was replacing physical labor. As a refuge, we could always go towards more cognitive tasks and jobs that require that you use your brain. Now, with AI, this is a revolution that is attacking cognitive work.’
One solution might be the Luddite approach: just break the machine. But Ashley offers a more pragmatic alternative: jobcraft your current role so that you’re using these tools rather than being replaced by them.
Listen to the full episode here:
You can subscribe to the podcast on iTunes, Spotify or the podcast page of our website. Want to share your thoughts? Get in touch @RossDickieMT, @RossGarnerMT or #MindToolsPodcast
📖 Deep dive
Ever launch a digital learning solution that never got used? Yeah, us too.
For over 10 years, your L&D Dispatch correspondents have been making digital stuff to improve workplace performance, then agonizing over how to make sure the right people find it at the right time for it to actually be useful.
Now, Ross G has pulled together five key insights for an article in Chief Learning Officer.
It covers:
Designing for colleague needs: Make sure your digital content reflects the concerns of the people you want to support (more on this here).
Linking to content where it’s needed: Workplace learning tends to be situational, so link to it at the point of need.
Avoiding passwords: You will lose people when you ask them to remember a password.
Providing nudges: Engaging with workplace learning is a choice. Help colleagues make a better choice!
Testing what works: Your context will have its own challenges. Don’t be afraid to experiment.
Check out the article for a more in-depth look at how to apply these insights to your own workplace learning. And hey, if you have other ideas, please get in touch! We’re always looking to learn more.
Garner, R. (2023). 5 ways to ensure your digital learning content gets used. Chief Learning Officer.
👹 Missing links
🏠 Hybrid working is here to stay, and some would like it to grow
Three years on from the start of the Covid-19 pandemic, hybrid working has become the norm. New research from Pew looked at where workers (who have the option to work from home) choose to spend their time. 35% work from home all the time, 41% work a hybrid schedule, and only 24% 'Rarely' or 'Never' work from home. Of those who work from home ‘Most of the time’, 34% would prefer to work from home every day.
The survey doesn’t reflect everyone: 61% of US workers cannot work at home because of the type of work that they do. But it does indicate an ongoing need to design for a myriad of working patterns and locations.
🤖 ‘I’m not hallucinating, you’re anthropomorphizing!’
Ethics, hallucinations, bias, reason. An AI cares not for these things. Instead, they're concepts that we (humans) use to describe an AI tool when it does something that loosely resembles human behavior. In this blog post, Donald Clark takes seven words commonly used to describe AI and explores what they actually mean, in the context of how an AI operates.
🔮 You might click on this link (maybe)
‘If X happens, Y will follow.’ That’s the kind of deterministic thinking that gets people to read newspaper columns. But, in the real world, there are very few things we can predict with certainty. Instead, technology and social media scholar Danah Boyd advocates ‘embracing probabilistic futures’, where: ‘If X happens, Y is more likely.’ It’s a less comfortable mindset to adopt, but it’s more honest. Thanks to Julie Dirksen for recommending this.
And finally…
We have a new favourite video on the internet: the AI-generated advert for ‘Pepperoni Hug Spot’. What is wrong with these people’s faces? And what is that tool being used to cut the pizza?
It gets better with each watch. Enjoy!
Thanks Andrew McGlyn for sharing this with us!
👍 Thanks!
Thanks for reading The L&D Dispatch from Mind Tools! If you’d like to speak to us, work with us, or make a suggestion, you can get in touch @RossDickieMT, @RossGarnerMT or email custom@mindtools.com.
Or hit reply to this email!
Hey here’s a thing! If you’ve reached all the way to the end of this newsletter, then you must really love it!
Why not share that love by hitting the button below, or just forward it to a friend?