An illustration of Penang Hill, known as the first colonial hill station developed in Peninsular Malaysia

How to improve your AI Chatbot response accuracy

Helping your chatbot to understand users


To improve the response accuracy of our AI chatbot, we should use an NLP service such as Microsoft LUIS. What is NLP, you might ask? According to Wikipedia:

NLP (Natural language processing) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.

With an NLP service, we do not need to map sentences or keywords manually to understand what a user wants; instead, we create intents and entities in Microsoft LUIS. We have shared about Microsoft LUIS in a previous article. In this article, we will share some tips on using Microsoft LUIS and the Azure Bot Framework to improve our chatbot's response accuracy. Some of the tips may apply to other bot frameworks as well, provided they support similar features.
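To give a rough idea of what LUIS gives back, here is a minimal sketch (in TypeScript, using Node 18's built-in fetch) of calling the LUIS v3 prediction endpoint directly. The endpoint, app ID and key are placeholders for your own resource, and error handling is left out:

```typescript
// Minimal sketch: query a published LUIS app and read back the predicted
// intents and entities. Replace the placeholders with your own resource values.
const LUIS_ENDPOINT = 'https://<your-resource>.cognitiveservices.azure.com';
const LUIS_APP_ID = '<your-luis-app-id>';
const LUIS_KEY = '<your-luis-prediction-key>';

async function predictIntent(utterance: string) {
  const url =
    `${LUIS_ENDPOINT}/luis/prediction/v3.0/apps/${LUIS_APP_ID}` +
    '/slots/production/predict' +
    `?subscription-key=${LUIS_KEY}&query=${encodeURIComponent(utterance)}`;

  const response = await fetch(url);
  const result = await response.json();

  // The prediction contains the top-scoring intent plus any entities found.
  console.log(result.prediction.topIntent);
  console.log(result.prediction.entities);
  return result.prediction;
}
```

The intent and entity names in the response are the ones you define in the LUIS portal, which is what the rest of these tips are about.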

The first tip is about the None intent. According to the Microsoft LUIS documentation:

The None intent is the fallback intent, important in every app, and should have 10% of the total utterances. It is used to teach LUIS utterances that are not important in the app domain (subject area). If you do not add any utterances for the None intent, LUIS forces an utterance that is outside the domain into one of the domain intents. This will skew the prediction scores by teaching LUIS the wrong intent for the utterance.

For example, our AI chatbot, Xandra, is a sales chatbot for our website, so we added the None intent with user utterances such as the ones below:

None intent user utterances in Microsoft LUIS Portal

Here in the Microsoft LUIS portal, we are teaching LUIS that our chatbot does not sell pizza, nor is it an educational chatbot about penguins and mountains.
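In the bot itself, it also helps to treat a None prediction (or a very low confidence score) as "not understood" and fall back to a friendly default reply instead of guessing. Below is a minimal sketch using the LuisRecognizer from the botbuilder-ai package; the credentials and the 0.5 threshold are illustrative placeholders, not values from our production bot:

```typescript
import { TurnContext } from 'botbuilder';
import { LuisRecognizer } from 'botbuilder-ai';

// Illustrative placeholders: point these at your own LUIS app and prediction resource.
const recognizer = new LuisRecognizer(
  {
    applicationId: '<your-luis-app-id>',
    endpointKey: '<your-luis-prediction-key>',
    endpoint: 'https://<your-resource>.cognitiveservices.azure.com',
  },
  { apiVersion: 'v3' }
);

async function routeMessage(context: TurnContext): Promise<void> {
  const result = await recognizer.recognize(context);
  const topIntent = LuisRecognizer.topIntent(result);
  const score = result.intents[topIntent]?.score ?? 0;

  // Treat None (or anything below an illustrative 0.5 threshold) as "not understood".
  if (topIntent === 'None' || score < 0.5) {
    await context.sendActivity(
      "Sorry, I'm not sure I understood that. I can help with our products and services."
    );
    return;
  }

  // Otherwise dispatch to the handler for the recognised intent.
  await context.sendActivity(`You asked about: ${topIntent}`);
}
```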

The next tip is about the user utterances for your intents. First, you need to understand your user base. What are the questions they normally ask? A collection of emails from customer service or salespeople is a good start. Do not overtrain your bot with unnecessary user utterances that are not related to your company.

However, some small talk is recommended to make your chatbot friendlier and encourage users to keep chatting with your virtual assistant.

Smalltalk on XiMnet’s Xandra

Here are recommendations from the Microsoft LUIS documentation on user utterances (a small example of such variations follows the list):

Collect utterances that you think users will enter. Include utterances, which mean the same thing but are constructed in a variety of different ways:
- Utterance length — short, medium, and long for your client-application
- Word and phrase length
- Word placement — entity at beginning, middle, and end of utterance
- Grammar
- Pluralization
- Stemming
- Noun and verb choice
- Punctuation — a good variety using correct, incorrect, and no punctuation
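As an illustration, the utterances for a single intent can be varied along those lines before being added to LUIS. The FindLubricant intent and the sentences below are made up for this example:

```typescript
// Illustrative only: example utterances for a hypothetical FindLubricant intent,
// varying length, word placement, pluralization and punctuation.
const findLubricantUtterances: string[] = [
  'lubricant',                                                    // short
  'I need a lubricant for my car',                                // medium
  'can you recommend engine oils for an older petrol car please', // long, plural
  'engine oil recommendation for my Proton',                      // entity near the end
  'For my Proton, which engine oil should I use?',                // entity at the start, punctuation
  'looking for lubricants',                                       // stemming / plural form
];
```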

You can refer to the full documentation on what makes good user utterances for your chatbot's intents.

If you use Microsoft LUIS, it has a very good interface for testing your user utterances. From the portal, you can test utterances and improve your chatbot's accuracy.

Microsoft LUIS’ Testing Portal

Dialogs

Besides the user utterances, we can also use dialogs to help the chatbot respond to users accurately. In the Azure Bot Framework, we can use dialog components. For example, in the lubricant finder below, once the bot knows that the user wants to find a lubricant for their car, the bot triggers the lubricant recommender dialog.

Lubricant recommender dialog in MeVA chatbot

In this dialog, there are multiple dialog steps that ask and guide the user to provide the correct information to find a car lubricant. For example, the dialog asks for the car brand, followed by the engine type, the age of the car and other related questions. In these steps you can also provide options for the user to choose from, such as car brands, so that the user can give correct responses. After that, all the collected entities can be passed to an API to get the result. We talked about entities in the previous article about Microsoft LUIS here.
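A rough sketch of such a component dialog with the Azure Bot Framework Node.js SDK (botbuilder-dialogs) is shown below. The dialog ids, prompts and car brands are illustrative placeholders rather than the actual MeVA implementation, and the last step is where you would call your own recommendation API with the collected values:

```typescript
import {
  ChoicePrompt,
  ComponentDialog,
  DialogTurnResult,
  TextPrompt,
  WaterfallDialog,
  WaterfallStepContext,
} from 'botbuilder-dialogs';

const CHOICE_PROMPT = 'choicePrompt';
const TEXT_PROMPT = 'textPrompt';

export class LubricantRecommenderDialog extends ComponentDialog {
  constructor() {
    super('lubricantRecommenderDialog');
    this.addDialog(new ChoicePrompt(CHOICE_PROMPT));
    this.addDialog(new TextPrompt(TEXT_PROMPT));
    this.addDialog(
      new WaterfallDialog('lubricantSteps', [
        this.askBrand.bind(this),
        this.askEngineType.bind(this),
        this.askCarAge.bind(this),
        this.recommend.bind(this),
      ])
    );
    this.initialDialogId = 'lubricantSteps';
  }

  private async askBrand(step: WaterfallStepContext): Promise<DialogTurnResult> {
    // Offering choices guides the user towards answers the bot can handle.
    return step.prompt(CHOICE_PROMPT, {
      prompt: 'Which car brand do you drive?',
      choices: ['Toyota', 'Honda', 'Proton', 'Perodua'],
    });
  }

  private async askEngineType(step: WaterfallStepContext): Promise<DialogTurnResult> {
    (step.values as Record<string, string>).brand = step.result.value;
    return step.prompt(CHOICE_PROMPT, {
      prompt: 'What is the engine type?',
      choices: ['Petrol', 'Diesel', 'Hybrid'],
    });
  }

  private async askCarAge(step: WaterfallStepContext): Promise<DialogTurnResult> {
    (step.values as Record<string, string>).engineType = step.result.value;
    return step.prompt(TEXT_PROMPT, { prompt: 'How old is the car (in years)?' });
  }

  private async recommend(step: WaterfallStepContext): Promise<DialogTurnResult> {
    const values = step.values as Record<string, string>;
    values.carAge = step.result;
    // Pass the collected values to your own recommendation API here.
    await step.context.sendActivity(
      `Looking up lubricants for a ${values.carAge}-year-old ${values.brand} (${values.engineType})...`
    );
    return step.endDialog(values);
  }
}
```

The dialog would then be added to the bot's dialog set and started when LUIS returns the matching intent.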

Language Translation

Since our chatbot's users are based in Malaysia, we expect a large number of users to speak to the bot in Malay. Unfortunately, Microsoft LUIS does not support Malay by default, so we need to use the Microsoft Translator Text API to translate the user's input into English before sending it to Microsoft LUIS. You can do the same if your chatbot's user base speaks a language that is not supported by Microsoft LUIS or the NLP service of your choice.
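Here is a minimal sketch of that translation step using the Translator Text API v3 (again with Node 18's built-in fetch). The subscription key and region come from your own Translator resource, and the English text it returns is what you would then send to LUIS:

```typescript
// Minimal sketch: translate the user's message into English before sending it to LUIS.
// TRANSLATOR_KEY and TRANSLATOR_REGION are placeholders for your own Translator resource.
const TRANSLATOR_ENDPOINT = 'https://api.cognitive.microsofttranslator.com';

async function translateToEnglish(text: string): Promise<string> {
  const response = await fetch(`${TRANSLATOR_ENDPOINT}/translate?api-version=3.0&to=en`, {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': process.env.TRANSLATOR_KEY ?? '',
      'Ocp-Apim-Subscription-Region': process.env.TRANSLATOR_REGION ?? '',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify([{ Text: text }]),
  });
  const result = await response.json();

  // The API returns one item per input text, each with a list of translations.
  return result[0]?.translations?.[0]?.text ?? text;
}

// Usage (hypothetical Malay input): the English output is then passed to LUIS.
// const english = await translateToEnglish('Saya mahu cari minyak pelincir untuk kereta saya');
```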

Quick Menu

A quick menu is also important to let users know what your chatbot can do. This helps to manage users' expectations and helps them find what they need quickly. At the end of the day, most companies want a chatbot that helps customers find and do what they need, without going through customer service live chat, email or phone calls. An example of a quick menu is shown in our chatbot below:

Quick menu from Xandra, XIMNET AI Chatbot
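One simple way to build a quick menu with the Azure Bot Framework is to send suggested actions when the conversation starts or when the bot does not understand the user. The menu labels below are illustrative, not Xandra's actual menu:

```typescript
import { MessageFactory, TurnContext } from 'botbuilder';

// Minimal sketch: a quick menu sent as suggested actions
// (rendered as tappable buttons in most channels).
async function sendQuickMenu(context: TurnContext): Promise<void> {
  const menu = MessageFactory.suggestedActions(
    ['Our Services', 'Request a Quote', 'Talk to a Human'],
    'Here are a few things I can help you with:'
  );
  await context.sendActivity(menu);
}
```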

Log Monitoring & Training

After launching a chatbot successfully, continuous monitoring and training are important to keep the chatbot responding accurately. You need to review the chatbot's logs for unknown or wrong responses. If you integrate your Azure chatbot with XTOPIA, you can view the list of the most popular intents and unknown intents via our Chatbot Analytic Dashboard. From the list, you can provide more user utterances to improve the chatbot or correct the wrong responses.

An overview of the Chatbot Analytic Dashboard in XTOPIA.IO
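Even without XTOPIA, a simple starting point is to log every prediction so that None and low-confidence utterances can be reviewed later and fed back into LUIS as new labelled examples. This is only a sketch; console.log stands in for whatever logging pipeline or analytics store you actually use:

```typescript
import { TurnContext } from 'botbuilder';

// Minimal sketch: record each prediction so unknown or low-confidence utterances
// can be reviewed and used to retrain the LUIS model.
interface IntentLogEntry {
  timestamp: string;
  utterance: string;
  topIntent: string;
  score: number;
}

function logPrediction(context: TurnContext, topIntent: string, score: number): void {
  const entry: IntentLogEntry = {
    timestamp: new Date().toISOString(),
    utterance: context.activity.text ?? '',
    topIntent,
    score,
  };
  // Replace with a call to your own logging pipeline, database or dashboard.
  console.log(JSON.stringify(entry));
}
```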

These are just some of the ways to improve your AI chatbot's response accuracy. We hope this inspires you to build your own AI chatbot. Stay tuned for more articles about AI and chatbots.

XIMNET is a digital solutions provider with two decades of track record, specialising in web application development, AI Chatbot and system integration.

XIMNET is launching a brand new way of building AI Chatbot with XYAN. Get in touch with us to find out more.
