My mission is to spark 100 million leaders in the next 10 years.
That’s a lot of people.
How can I possibly reach so many people in so little time? More specifically, how can we train that many people in that amount of time?
AI For Training And Development
The answer: artificial intelligence. Recently we’ve seen breakthroughs in AI-powered chatbots designed to change thought patterns and behaviors. These include Woebot and Wysa for mental health and ParentSpark for parenting skills.
So I naively set off on my hero’s quest to create and train “Coach Amanda,” the world’s first AI-powered executive coach.
There are many chatbot platforms available, including Dialogflow from Google, the Azure Bot Service from Microsoft and, of course, IBM Watson Assistant. I ultimately chose IBM Watson, not just for the technology, but because its R&D team, led by Dr. Robert Moore, has created the Natural Conversation Framework. The theoretical aspects of the framework quickly advanced my understanding of conversational interfaces, and in the last year much of that theory has turned into functionality in Watson Assistant.
My initial goal is to have “Coach Amanda” be able to answer a wide range of questions related to management and leadership. A content analysis of typical first-time manager curricula, new manager questions, and common employee problems enabled me to create an initial content base of about 300 different topics.
Lesson 1: You Don’t Just Dump Your Content In
Experienced AI professionals will laugh at my naiveté, but when I set out to create the world’s first AI executive coach, I thought I would just feed it a lot of content (e.g., articles, books) and the AI would magically process it so I could ask a question and it would spit back the answer.
Without getting technical, there is a way to have AI process a raw content base, but the time and expense to format the content are much higher than I thought. Also, my vision was to have a chatty human-like chatbot, and not just a really good search engine.
Lesson 2: Three Critical Chatbot Terms (Intent, Utterance, Entity)
So I quickly discovered the three most important words in chatbot design and training: intents, utterances, and entities.
An intent is basically what the user wants to do (i.e., what is their intent?). Maybe the user wants a definition of emotional intelligence, wants to know how to handle an employee who keeps coming in late, or wants tips for time management.
An utterance is anything the user says. This is important because this is the magic of AI for natural language understanding, but it doesn’t happen by itself. As the chatbot designer, you need to try to anticipate many different utterances for each intent. For example:
INTENT: How to deal with late employees
- How do I manage an employee who never comes in on time?
- What do I say to someone who comes in late all the time?
- Give me advice for a chronically late employee.
- I have a good team member who keeps coming in late. What should I do?
Finally, an entity is a category of something. A common entity would be “day of the week.” That’s the entity, and Monday, Tuesday, Wednesday, and so on would be its values.
For my leadership and management chatbot, it quickly became important to have special entities like “Role,” which includes the typical hierarchy levels: Manager, Direct Report, Peer, etc.
And synonyms are also important. Have you ever stopped to think of all the different ways people refer to their boss? Boss, manager, superior, higher up, etc.
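To make the three terms concrete, here is a minimal sketch in plain Python of how intents, their anticipated utterances, and an entity with synonyms might be organized. This is not the Watson Assistant API; all the intent names, entity values, and synonyms are illustrative:

```python
# Each intent maps to the example utterances you anticipate for it.
INTENTS = {
    "handle_late_employee": [
        "How do I manage an employee who never comes in on time?",
        "What do I say to someone who comes in late all the time?",
        "Give me advice for a chronically late employee.",
    ],
    "define_emotional_intelligence": [
        "What is emotional intelligence?",
        "Define EQ for me.",
    ],
}

# An entity is a category; synonyms normalize the many ways users say it.
ENTITIES = {
    "role": {
        "manager": ["boss", "manager", "superior", "higher up"],
        "direct_report": ["direct report", "employee", "team member"],
        "peer": ["peer", "coworker", "colleague"],
    },
}

def extract_entities(utterance):
    """Return any entity values whose synonyms appear in the utterance."""
    found = {}
    lower = utterance.lower()
    for entity, values in ENTITIES.items():
        for value, synonyms in values.items():
            if any(s in lower for s in synonyms):
                found[entity] = value
    return found

print(extract_entities("My boss keeps criticizing me"))
# -> {'role': 'manager'}
```

Real platforms do far more sophisticated matching than substring lookup, of course, but the data model (intents with example utterances, entities with synonym lists) is essentially what you end up filling in.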
Lesson 3: You Can’t “Proof” Your Own Chatbot
I had put hundreds of hours into the development of Coach Amanda, and I was pretty proud. She wasn’t ready to release to the world, but I was eager for a friend to try her out.
His very first question:
“How do I motivate an employee?”
That’s a good, straightforward question. The answer back was:
“To fire an employee you should…”
Ouch. The chatbot failed miserably. In those early days when I thought we had covered “all” the content, she was only matching about 10% of the questions (utterances).
It was a good lesson: begin testing your chatbot early, and with only a handful of people. You won’t need many to quickly see the holes in your design.
Lesson 4: It’s The Mismatches That Get You
The early tests also taught me a very simple idea I hadn’t thought hard about. When you ask a chatbot a question (any chatbot, including Alexa, Siri or Google Home), there are only three things that can happen:
- The chatbot will match and give a correct (good) answer
- The chatbot won’t match the intent and will say it doesn’t know about that topic
- The chatbot will mismatch your utterance with the answer to a wrong intent (i.e., you get an answer for a different question)
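These three outcomes can be sketched with a simple confidence threshold, assuming the underlying NLU model returns ranked (intent, confidence) pairs. The threshold value, intent names, and answers here are hypothetical, not Watson’s:

```python
# Below this confidence, admit ignorance instead of guessing.
CONFIDENCE_THRESHOLD = 0.7

ANSWERS = {
    "motivate_employee": "To motivate an employee, start by...",
    "fire_employee": "To fire an employee, you should...",
}

FALLBACK = "Sorry, I don't know about that yet. I'm still young and learning."

def respond(ranked_intents):
    """Pick an answer only when the top intent clears the threshold.

    A low threshold risks mismatches (wrong answer, angry user);
    a high threshold risks too many "I don't know" replies.
    """
    intent, confidence = ranked_intents[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return ANSWERS[intent]
    return FALLBACK

# A confident match returns the answer; a weak match falls back.
print(respond([("motivate_employee", 0.91), ("fire_employee", 0.40)]))
print(respond([("fire_employee", 0.35), ("motivate_employee", 0.30)]))
```

Tuning that threshold is how you trade the second outcome (no match) against the third (mismatch): raise it and the bot says “I don’t know” more often, but it guesses wrong less often.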
The mismatch issue is important for two reasons.
First, people get annoyed. When someone asks Coach Amanda a question and she says, “Sorry, I don’t know about that. I’m still young and learning, please be patient,” people are okay with that. But when she gives an answer that’s unrelated to the question asked, users tend to get really frustrated and angry.
Also, depending on what’s at stake, mismatches are risky.
I recently heard a pharmaceutical company was building a chatbot to talk to patients about their drug.
Imagine someone asks the chatbot, “How many times a day can I take Widgetol?”
The chatbot could correctly say, “You can take one pill three times a day with meals.”
Or it can say, “I’m sorry I can’t answer that question,” which would be okay.
But what if it matched “Widgetol” with another drug called “Widgetux” and incorrectly said “You can take three pills every four hours”?
It’s not the unmatched intents you need to look out for; it’s the mismatched intents.
Lesson 5: Think Hard About The Scope and Purpose
What kind of questions should your chatbot answer? What should it know about?
There is no universal knowledge graph that all chatbots can access. There is no easy or cheap way for it to know about things. So realistically you have to set limits on what it knows about. But here’s the dynamic:
A narrow purpose will make your chatbot very accurate, but not very useful.
A broad purpose will make it more useful, but not very accurate.
For example, if you want to create a chatbot that tells random jokes, that’s pretty easy to do. It’s similarly easy to create a chatbot that answers questions about the weather, or even one that can tell you what flights are available to different cities.
But if you want to develop a chatbot that can tell you all about vacations and travel options it suddenly becomes a lot harder.
Many experts told me to launch Coach Amanda with only a single topic, like: Giving Effective Feedback.
She would be really accurate, but not very useful.
Instead, I’m committed to having Coach Amanda know about Feedback, Delegation, Employee Engagement and about 50 other topics. It’s just that it will take many, many months and over a million dollars to get there. I’m okay with that.
Lesson 6: Humans Have To Learn To Speak Robot
It’s frustrating but true: humans don’t do what you want them to do. This relates to the scope issue above.
When we first opened up Coach Amanda to the public we clearly said she was a digital coach to answer questions related to management and leadership.
What was the number one question people asked her?
How can I get a raise?
Is that a leadership question? I don’t think so.
What was the second most popular question?
How can I get my boss to like me more?
Now my emotional knee-jerk reaction was to wonder: Are people really that shallow and self-interested? I’m trying to create leaders, and they want to know how to suck up to their boss? But it would be foolish to deny user data. And an argument could be made that “managing up” is part of being a successful manager.
Another common problem is that people don’t yet know how to speak robot. For example, people commonly ask long, multi-sentence, multi-context questions that are so specific that no AI anytime soon will have a chance to answer them. One user asked:
I got a promotion and will be moving from USA to Germany in the summer. My wife will be coming with me. What do we need to do to find her a good job? She’s really good at gardening and worked as a librarian.
Because so many people expect the chatbot experience to be like talking to HAL from 2001 or Data from Star Trek, we’ve begun to spend a lot more time thinking about giving cues.
When Coach Amanda greets someone for the first time, she gives examples of utterances she would understand. She also asks you if you want more examples.
Lesson 7: This is Day One
As someone who has seen firsthand what it takes to create, design and train an AI chatbot with a fairly broad scope I am actually impressed and amazed with how good the technology is.
But I’ve learned that most users have one of two reactions when they talk to Coach Amanda.
- “I’m unimpressed. She didn’t understand anything I asked her.”
- “I’m unimpressed. She answered everything I asked her…but it’s like talking to my Alexa. So what?”
We are now in an age where people have seen Watson win Jeopardy! and Alexa tell them what time the Sixers are playing tonight. They don’t understand just how hard it is to do that.
So whether the chatbot provides answers or not, they are unimpressed. Not that different from our reaction to tech in general. WTF?! This plane doesn’t have WiFi?
But that’s okay.
Our goal is not to impress people. Our goal is to help people, to train people, to help them to become a better boss.
To use an overused phrase, this is Day One when it comes to AI in general and chatbots in particular.
In the last six months Coach Amanda’s accuracy at matching utterances from strangers has gone from 10% to 55%. Our training is ongoing, and by the end of the year, we’ll get to 75%. Next year 85%.
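Those accuracy numbers come from measuring the match rate against real user utterances that have been hand-labeled with their correct intents. A toy sketch of the idea, where the classify() stand-in and the test set are invented for illustration:

```python
def classify(utterance):
    """Stand-in for the real NLU classifier."""
    return "handle_late_employee" if "late" in utterance.lower() else "unknown"

# Held-out utterances from real users, each labeled with its correct intent.
TEST_SET = [
    ("My team member keeps coming in late. What should I do?",
     "handle_late_employee"),
    ("How do I motivate an employee?",
     "motivate_employee"),
]

def match_rate(test_set):
    """Fraction of test utterances matched to their labeled intent."""
    correct = sum(1 for utt, intent in test_set if classify(utt) == intent)
    return correct / len(test_set)

print(f"{match_rate(TEST_SET):.0%}")
# -> 50%
```

Re-running a fixed test set like this after each round of training is what lets you say the match rate moved from 10% to 55% rather than just feeling like the bot got better.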
Today Coach Amanda can answer these kinds of utterances:
- How do I fire someone nicely?
- How do I tell an employee she has body odor?
- Give me tips for speaking with an INTJ
- What are tips for active listening?
- How can I coach employees?
In the months ahead her knowledge graph will include information on over 50 different leadership competencies, personality types, and strengths at work.
The iPhone has been out for only ten years. Look how quickly it became a supercomputer in the palm of our hand, and how quickly the Star Trek tricorder became a reality.
Technically this is year one, not “day one.” But you get the idea. Year by year, AI technology will get better, and knowledge graphs will get wider.
Imagine what chatbots will do in a decade.
“Amanda, what will you be able to do in 10 years?”