
A Chatbot on the Test Bench

Image: a robot, a group of people in discussion, and a chatbot chat window

valantic puts an external customer’s chatbot through its paces

Who isn’t familiar with this problem? After working on a topic or project long and intensively, you lose your objective view of it by the time it needs testing. Errors and potential improvements become hard to spot; you’re professionally blinkered. Outside input offers a way out: before supposedly finished work goes public or into production, a fresh pair of eyes should examine it. In this case, those eyes belonged to Felix Kopf and Samuel Hirsch, both Software Engineers and Consultants at valantic Integrated Business Solutions. Their task was to test a customer’s completed chatbot ahead of go-live with regard to its conversational logic, the possible incorporation of external data sources, and voice quality. In addition, they were asked to assess how the chatbot might be enhanced.

In this blog post, Felix reports on the project and shows what kinds of questions accompany chatbot development, and how varied they are.

A chat and voicebot system for global use

The focus of our review project is a chatbot that advises users on our customer’s product portfolio: it is supposed to suggest suitable products and answer any questions relating to them.

The heart of this chatbot is the conversational AI platform Cognigy.AI. Cognigy is one of the leading platforms for conversation automation, enabling advanced, integrated conversation solutions with cognitive bots. It offers a broad portfolio of pre-built sequences that can be combined freely, making complex flow development possible. That makes Cognigy an especially good fit for our customer’s application: the relevant product offerings are extremely varied, so the possible customer dialogues branch widely – and that is exactly Cognigy’s area of specialization.
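Conceptually, such a flow is a tree of dialogue steps with branches keyed to what the user wants. Cognigy.AI builds these flows in its visual editor, so the TypeScript sketch below is only a hypothetical analogy (the node structure, intents, and utterances are invented for illustration, not Cognigy’s actual API):

```typescript
// Illustrative model of a branching product-advice flow.
// This is NOT Cognigy's API; it only mirrors the idea of combinable
// dialogue nodes that branch on a recognized user intent.

type Intent = "product_advice" | "pricing" | "fallback";

interface FlowNode {
  say: string;                               // bot utterance at this step
  next?: Partial<Record<Intent, FlowNode>>;  // branches keyed by intent
}

// A tiny flow: greet the user, then branch on what they ask for.
const adviceFlow: FlowNode = {
  say: "Hi! What are you looking for today?",
  next: {
    product_advice: { say: "Tell me about your use case and I'll suggest a product." },
    pricing: { say: "Here are our current prices and typical delivery times." },
    fallback: { say: "Sorry, I didn't catch that. Could you rephrase?" },
  },
};

// Advance one conversation turn: follow the branch for the intent,
// falling back when no branch matches.
function step(node: FlowNode, intent: Intent): FlowNode {
  return node.next?.[intent] ?? node.next?.fallback ?? node;
}

console.log(step(adviceFlow, "pricing").say);
```

The appeal of a platform like Cognigy is that such nodes come pre-built and can be recombined, so even widely branching dialogues stay manageable.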

The first version of the chatbot was created in English. The plan is to roll the bot out gradually to additional countries and language regions as part of an expanding chatbot system.

The chatbot is good. But is it good enough?

The chatbot system under test was developed and implemented by our customer as an in-house project. Once it was complete, my colleague Samuel and I were asked to conduct an extensive logic test and put the system through its paces as part of quality control. In addition, we documented in an expert report which improvements and adjustments, based on our best practices and experience, would keep the chatbot system easier to maintain and allow it to be localized for additional language regions with as little effort as possible.

Localization involves more than language: another region frequently offers a different product portfolio, and the sales paths and approaches differ as well. Customers in the USA tend to want quick access to prices and delivery times, whereas European customers are more likely to want products tailored as closely as possible to their needs. We also considered how the chatbot system can address these differing expectations and formulated appropriate solutions.
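One common way to keep such regional differences maintainable is to isolate them in per-region configuration while the flow logic stays shared. The following sketch is a minimal illustration of that idea; the region keys, field names, and products are all assumptions, not the customer’s actual setup:

```typescript
// Hypothetical per-region configuration: the shared flow reads from
// this object instead of hard-coding products or sales behavior.

interface RegionConfig {
  products: string[];   // regional product portfolio
  salesStyle: "quick_facts" | "guided_configuration";
  locale: string;
}

const regions: Record<string, RegionConfig> = {
  US: {
    products: ["Model A", "Model B"],
    salesStyle: "quick_facts",          // lead with prices and delivery times
    locale: "en-US",
  },
  DE: {
    products: ["Model A", "Model C"],
    salesStyle: "guided_configuration", // walk through tailoring options first
    locale: "de-DE",
  },
};

// The shared flow asks one opening question and branches on the
// configured sales style rather than on hard-coded regional logic.
function openingQuestion(region: RegionConfig): string {
  return region.salesStyle === "quick_facts"
    ? "Which product would you like prices and delivery times for?"
    : "Let's configure a product for you. What will you use it for?";
}

console.log(openingQuestion(regions.DE));
```

With a split like this, adding a new language region means adding one configuration entry instead of duplicating and editing the whole flow.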

What exactly was to be examined, and what project structure was chosen?

In the course of the review, we examined, evaluated, and documented the following questions and tasks:

  • How can the system be configured sensibly for global use including roles and responsibilities?
  • What tasks belong to specialized departments and which to central IT?
  • How can maintenance work be reduced overall?
  • Which maintenance tasks must or should be handled centrally, and which decentrally?

Our approach

To test the chatbot properly, we first received a test installation from Cognigy. We imported the flows (the branching dialogue paths), the code for the individual adjustments, and the data from the productive system into the test system, creating an exact copy of the original chatbot system. Then we were off to the races and could test the bot’s logic and flows. After completing the test, we sent a version with comments and sample solutions back to the customer.
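For the logic test itself, conversations can be replayed programmatically against the bot and each reply compared to the expected flow. The sketch below assumes a generic REST chat endpoint that accepts a user message and returns the bot’s reply; the URL and JSON shape are placeholders, not the customer’s actual interface:

```typescript
// Minimal regression check: send scripted user turns to a chat endpoint
// and verify each reply contains an expected phrase.
// Endpoint URL and payload shape are hypothetical.

const ENDPOINT = "https://example.com/chatbot/rest"; // placeholder

interface Turn { userText: string; expect: string }

const script: Turn[] = [
  { userText: "Hello", expect: "What are you looking for" },
  { userText: "I need pricing", expect: "price" },
];

async function runScript(sessionId: string): Promise<void> {
  for (const turn of script) {
    const res = await fetch(ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessionId, text: turn.userText }),
    });
    const reply = (await res.json()) as { text?: string };
    const ok = reply.text?.includes(turn.expect) ?? false;
    console.log(`${ok ? "PASS" : "FAIL"}: "${turn.userText}" -> "${reply.text}"`);
  }
}

runScript(`test-${Date.now()}`).catch(console.error);
```

Scripted checks like this make it easy to re-run the same conversations after every adjustment and spot regressions in the flow logic immediately.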

Test bench results

The conversational AI platform had already been configured carefully, and the chatbot was well developed; we couldn’t find any weak points. The chatbot runs stably and performs well online, so it can be deployed in good conscience. At the same time, the test showed that a few adjustments are advisable ahead of the upcoming localization for other language regions. We documented our recommendations and handed them over in a concept paper.

Lessons from the project

Now nothing stands in the way of launching our customer’s chatbot, and with a little reworking it will be ready for quick, easy adaptation to additional language regions. The bot stood up to valantic’s testing and proved its functionality.

Nevertheless, external testing of such a widely branching conversational AI system is advisable in any case, to safeguard both the logic of the dialogue flows and easy maintenance. Especially before rolling out a bot to different language regions, sources of error can be minimized this way.
