Bringing chatbots to the drive-thru with AI-infused testing of speech-to-text ordering systems

To improve customers' drive-thru ordering experience, Centific built an AI-enabled automated system to test the speech recognition capabilities of a range of candidate chatbots.

The Challenge 

Our client – a multinational fast food retailer serving millions of customers daily – wanted to enhance its drive-thru ordering system. It planned to launch an AI-enabled Automatic Speech Recognition (ASR) application to convert spoken customer orders into Point of Sale orders quickly and accurately. The goal: improve operations and boost customer satisfaction. 

Building an AI-enabled chatbot to accomplish these tasks was feasible. However, the client had no objective way to evaluate the candidate chatbots for speech recognition ability and order accuracy. Centific was brought in to scale, standardize, and automate the testing process to evaluate chatbots efficiently and without bias. 

Key Successes 

  • Developed a web application to record natural speech and test ordering scenarios
  • Created a scalable, iterative, and objective testing process to ensure speech recognition accuracy
  • Designed automated processes for evaluating order fulfillment 

AI-Enabled App Development

With thousands of possible food combinations and customizations, ensuring the new system could capture all orders accurately was a top priority. To eliminate bias, Centific created an AI-powered web application that populated order scripts automatically. 
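
The case study does not describe the application's internals, but as a rough illustration of how automatic script population might work, the sketch below builds randomized order scripts from a structured menu so testers read varied, unbiased prompts. The menu items, customizations, and function names are all hypothetical.

```python
import random

# Hypothetical menu with a handful of customizations; a real menu yields
# thousands of possible combinations.
MENU = {
    "cheeseburger": ["no pickles", "extra cheese", "no onions"],
    "chicken sandwich": ["spicy", "grilled", "extra mayo"],
    "fries": ["small", "medium", "large"],
    "soft drink": ["small", "medium", "large"],
}

def generate_order_script(max_items: int = 3, seed: int | None = None) -> str:
    """Build a randomized, spoken-style order script for a tester to read aloud."""
    rng = random.Random(seed)
    items = rng.sample(list(MENU), k=rng.randint(1, max_items))
    phrases = []
    for item in items:
        customization = rng.choice(MENU[item])
        if customization in ("small", "medium", "large"):
            phrases.append(f"a {customization} {item}")    # e.g. "a large fries"
        else:
            phrases.append(f"a {item}, {customization}")    # e.g. "a cheeseburger, no pickles"
    return "I'd like " + " and ".join(phrases) + ", please."

if __name__ == "__main__":
    for i in range(3):
        print(generate_order_script(seed=i))
```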

When testers recorded the ordering scripts, the application transcribed the spoken orders through AI speech recognition, enabling testers to verify transcript accuracy in near real time.
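
For illustration only, a minimal transcribe-and-verify loop might look like the sketch below. It uses the open-source SpeechRecognition package purely as a stand-in ASR engine; the client's actual engine, file paths, and scoring method are not specified in the case study.

```python
import difflib

import speech_recognition as sr  # open-source package, standing in for the app's ASR engine

def transcribe(wav_path: str) -> str:
    """Convert a recorded order into text with a generic speech recognizer."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # recognize_google() calls a free web API; any ASR backend could be swapped in here.
    return recognizer.recognize_google(audio)

def quick_accuracy(expected_script: str, transcript: str) -> float:
    """Rough 0-1 similarity score so a tester can spot problems in near real time."""
    return difflib.SequenceMatcher(
        None, expected_script.lower(), transcript.lower()
    ).ratio()

if __name__ == "__main__":
    script = "I'd like a cheeseburger, no pickles and a large fries, please."
    transcript = transcribe("order.wav")  # hypothetical recording path
    print(transcript)
    print(f"similarity to script: {quick_accuracy(script, transcript):.2f}")
```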

Chatbot Testing 

Recordings were played to the chatbots through a SoundBoard application to assess speech recognition accuracy. As part of the testing process, generic responses were added to ensure the chatbots adapted as orders changed or became more complicated.

Chatbot test sessions then underwent a manual evaluation process in which linguistic analysts created and tagged Gold transcriptions and evaluated them against the chatbots' digital logs.

 

Gold Selection and Evaluation  

Using our OneForma AI framework to compare large-scale data, we automated the testing of Gold transcripts against chatbot data, auto-calculating order accuracy and error rates. This improved the efficiency of chatbot evaluation by 90% and ensured consistent evaluation criteria.
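
The case study does not describe OneForma's internals, but the kind of comparison being automated can be sketched as follows: a word error rate between a Gold transcript and the chatbot's logged transcript, plus a simple order-item accuracy check. All function names and sample data below are illustrative assumptions, not the actual framework.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def order_accuracy(gold_items: set[str], chatbot_items: set[str]) -> float:
    """Fraction of Gold order items the chatbot captured correctly."""
    return len(gold_items & chatbot_items) / max(len(gold_items), 1)

if __name__ == "__main__":
    gold = "one cheeseburger no pickles and a large fries"
    logged = "one cheeseburger no pickles and large fries"
    print(f"WER: {word_error_rate(gold, logged):.2%}")
    print(f"Order accuracy: {order_accuracy({'cheeseburger', 'fries'}, {'cheeseburger', 'fries'}):.0%}")
```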

 

Results 

By implementing a scalable, repeatable testing process and applying AI-driven technology, we helped the client create a framework for speeding up the ordering process, improving order accuracy, and providing a high level of customer service at every stage.