From the beginning of this project, we were interested in imagining products beyond mobile apps, and over the past couple of months we started prototyping a chatbot designed to help young cancer patients. Throughout this time, we were frequently asked: "So you're building a chatbot… what is that exactly?"
A Google search gives us the following definition:
chat·bot: /ˈCHatbät/ (noun) – a computer program designed to simulate conversation with human users, especially over the Internet.
“chatbots often treat conversations like they’re a game of tennis: talk, reply, talk, reply”
So to be clear…it’s not a real person, right?
No, this isn't a real person. While the content is written by people, it is a computer program that actually delivers the content, prompts users to respond, and delivers more content in response, much like an actual conversation.
So then how does it know what to say?
We've created a chatbot that serves up responses based on a decision tree. A simple example: the bot may ask the user, "What are you in the mood for?" and the user may select from a few options, such as "I'm ready for a challenge." Based on that input, the chatbot has a number of pre-programmed challenges to offer. Not all chatbots are built this way; some use artificial intelligence, where the program learns from user input and generates new responses. However, at this point, we don't know that we need that complexity to be successful.
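To make the decision-tree idea concrete, here is a minimal sketch of how such a bot could be structured. All names, prompts, and options here are illustrative assumptions, not our actual implementation: each node holds a prompt and maps a user's choice to the next node.

```python
# Illustrative decision-tree chatbot sketch (hypothetical nodes and prompts,
# not our production content). Each node has a prompt and a mapping from
# user choices to follow-up node ids.

tree = {
    "start": {
        "prompt": "What are you in the mood for?",
        "options": {
            "I'm ready for a challenge": "challenge",
            "I just want to chat": "chatter",
        },
    },
    "challenge": {
        "prompt": "Great! Try this: text one friend you haven't talked to this week.",
        "options": {},
    },
    "chatter": {
        "prompt": "Happy to keep you company. How was your day?",
        "options": {},
    },
}

def respond(node_id, choice=None):
    """Given the current node and the user's choice, return the next
    node id and the prompt the bot should deliver."""
    node = tree[node_id]
    if choice in node["options"]:
        next_id = node["options"][choice]
        return next_id, tree[next_id]["prompt"]
    # Unrecognized or missing choice: stay on the current node and re-prompt.
    return node_id, node["prompt"]
```

Because every response is pre-written, the content team keeps full control over what the bot says, which is part of why we chose this approach over a generative one.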
What exactly is it talking to people about?
The chatbot is designed to introduce users to behavioral interventions that drive better psychosocial and health outcomes. At the moment, we're working with a team of therapists, behavioral psychologists, and care providers to build in these proven interventions. While the interventions make up the majority of the responses, we also want to make sure that our chatbot delivers them in a way that resonates with young survivors, almost as if they were getting this advice from a friend. So we're working with a group of young cancer survivors to dial in the tone of the chatbot and create what we call the "chatter" – the softer parts of talking that we don't even think about but that make a conversation dynamic, engaging, and human.
Is this supposed to replace human relationships? Or real therapy?
No, and in fact, the opposite is true. One of the primary goals of this chatbot is to better facilitate connections between cancer patients, their community, and supporters, so the chatbot suggests ways they can reach out to people around them and tap into their available resources.
What this chatbot is intended for – and what we’ve heard is important from our user testers – is to help in the space between therapy visits, between support group sessions, between conversations with friends or family. We heard it’s important for them to have a place to have a conversation during those late nights when they feel anxious but don’t want to wake anyone, or when they’re sick in bed and everyone else is busy.
What happens when someone is in trouble?
The chatbot will redirect the user to resources that are designed to handle emergency crisis situations.
Why even use a chatbot and not a real person?
There are several reasons. For one, the chatbot may be better able to propose interventions and gauge which ones work for different individuals, by helping them track what works and what doesn't. Another reason, which we've heard from several testers, is that it's sometimes easier to be honest with a chatbot and to take its advice, since they know it is truly objective. Finally, in terms of scale, a chatbot can speak to more patients than one person can, and it meets people where they already are.
Still curious to learn more? Stay tuned for a deeper dive into the lessons we’re learning as we prototype our chatbot.