How Can Contact Centers Use AI-Powered Chatbots Responsibly?

Chatbots have been maturing steadily over the years. In 2022, however, they showed that they are ready to take a big step forward.

When ChatGPT was unveiled a few weeks ago, the tech world was buzzing about it. The New York Times technology columnist Kevin Roose called it “simply the best artificial intelligence chatbot ever released to the general public,” and social media was flooded with examples of its ability to render convincingly human prose in response to user prompts.[1] Some venture capitalists went so far as to say that its launch could be as disruptive as the introduction of the iPhone in 2007.[2]

ChatGPT is indeed a major step forward for artificial intelligence (AI) technology. But, as many users were quick to discover, it is still marked by many flaws – some of them serious. Its arrival is not only a sign of how far AI development has come, but also an urgent call to reckon with a future that is arriving faster than expected.

In essence, ChatGPT brings a new sense of urgency to the question: How can we develop and use this technology responsibly? Contact centers cannot answer this question alone, but they have a specific role to play.

ChatGPT: what’s all the hype about?

To answer that question, it is first necessary to understand what ChatGPT is and what it represents. The technology is the brainchild of OpenAI, the San Francisco-based AI company that also released the innovative image generator DALL-E 2 earlier in 2022. ChatGPT was released to the public on November 30, 2022, and quickly gained steam, reaching 1 million users within five days.

Elon Musk, who originally co-founded OpenAI with Sam Altman, was surprised by the bot’s capabilities. He echoed the sentiments of many when he called ChatGPT’s language processing “pretty scary”.[3]

So why all the hype? Is ChatGPT that much better than any chatbot that came before? In many ways, the answer seems to be yes.

The bot’s knowledge base and language processing capabilities are far superior to other technology on the market. It can produce quick, essay-length answers to seemingly any question, covering a wide range of topics, and can even answer in different prose styles based on user input. You can ask it to write a resignation letter in a formal style or to compose a quick poem about your pet. It delivers academic essays with ease, and its prose is persuasive and, in many cases, accurate. In the weeks following its launch, Twitter was flooded with examples of ChatGPT answering all kinds of questions users could think of.

The technology is, as Roose says, “Smarter. Weirder. More flexible.” It could usher in a real sea change in conversational AI.[1]

A wolf in sheep’s clothing: the dangers of misinformation

For all its impressive features, however, ChatGPT exhibits many of the same flaws found in earlier AI technology. In such a powerful package, those flaws seem all the more ominous.

Early users reported many worrisome issues with the technology. For example, like other chatbots, it quickly learned the biases of its users. Before long, ChatGPT was making offensive suggestions – that women in lab coats were probably just janitors, or that only Asian or white men make good scientists. Despite the system’s reported guardrails, users were able to coax these kinds of biased responses out of it fairly quickly.[4]

More concerning, however, are ChatGPT’s human qualities, which make its responses all the more convincing. Samantha Delouya, a journalist with Business Insider, asked it to write a story she had already written – and the results surprised her.

On the one hand, the resulting piece of “journalism” was remarkably on point and accurate, if a little predictable. In less than 10 seconds, ChatGPT produced a 200-word article that sounded a lot like something Delouya might have written – so much so that she called it “frighteningly convincing.” The catch, however, was that ChatGPT had fabricated the quotes in the article. Delouya spotted them easily, but an unsuspecting reader might not have.[3]

Therein lies the rub with this type of technology. Its mission is to produce content and conversation that sounds decidedly human – not necessarily content that is true. And that opens up scary new possibilities for misinformation and, in the hands of unscrupulous users, more effective disinformation campaigns.

What are the implications, political and otherwise, of this powerful chatbot? It’s hard to say – and that’s what’s scary. In recent years, we have already seen how easily misinformation can spread, not to mention the damage it can cause. What happens when a chatbot can spread it in a more efficient and determined way?

AI cannot be left to its own devices: the testing solution

Like many who read the headlines about ChatGPT, contact center executives may be wary of the possibilities of deploying this advanced level of AI in their chatbot solutions. But they must first address the questions above and formulate a plan for using this technology responsibly.

The careful use of ChatGPT – or whatever technology comes after it – is not a one-dimensional problem. It cannot be solved by any single actor alone, and it ultimately involves a range of issues related not only to developers and users but also to public policy and governance. However, all players should be looking to do their part, and for contact centers, that means focusing on testing.

The surest path to chaos is to leave chatbots alone to work out every user question on their own, without any human guidance. As we’ve already seen with even the most advanced form of this technology, that doesn’t always end well.

Instead, contact centers using more advanced chatbot solutions need to commit to regular automated testing to uncover defects as they arise – before they develop into major problems. Whether they are simple customer experience (CX) flaws or more dramatic informational errors, you need to catch them early to correct the problem and retrain your bot.
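To make that concrete, here is a minimal sketch of what a scheduled regression check might look like. Everything in it – the endpoint URL, the response shape, and the test phrases – is a hypothetical assumption for illustration only; a dedicated testing platform would layer NLP scoring, conversation-flow coverage, and security checks on top of simple assertions like these.

```python
"""Minimal sketch of an automated chatbot regression check.

Assumptions (illustrative, not from the article): the bot is reachable
at a hypothetical HTTP endpoint and answers JSON like {"reply": "..."}.
"""
import requests

BOT_URL = "https://chatbot.example.com/api/chat"  # hypothetical endpoint

# Each case pairs a user utterance with phrases the reply must contain
# (a crude stand-in for intent assertions) and phrases it must never
# contain (a crude stand-in for content guardrails).
TEST_CASES = [
    {"send": "What are your opening hours?",
     "expect": ["9am", "5pm"], "forbid": []},
    {"send": "Can I return a damaged item?",
     "expect": ["return"], "forbid": ["guaranteed refund"]},
]


def run_regression() -> list[str]:
    """Replay every scripted exchange and collect failure descriptions."""
    failures = []
    for case in TEST_CASES:
        resp = requests.post(BOT_URL, json={"message": case["send"]}, timeout=10)
        reply = resp.json()["reply"].lower()
        for phrase in case["expect"]:
            if phrase.lower() not in reply:
                failures.append(f"{case['send']!r}: expected {phrase!r} in reply")
        for phrase in case["forbid"]:
            if phrase.lower() in reply:
                failures.append(f"{case['send']!r}: forbidden {phrase!r} in reply")
    return failures


if __name__ == "__main__":
    problems = run_regression()
    print("\n".join(problems) if problems else "All conversation checks passed.")
```

Run on a schedule – nightly, or after every bot update – even a simple harness like this surfaces regressions before customers do, and the failure list can feed directly into retraining.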

Cyara Botium is designed to help contact centers keep chatbots under control. As a comprehensive chatbot testing solution, Botium can perform automated tests for natural language processing (NLP) scores, conversation flows, security issues, and overall performance. It’s not the only component in an overall plan for responsible chatbot use, but it’s a critical element that a contact center can’t afford to ignore.

Learn more about how Botium’s powerful chatbot testing solutions can help you keep your chatbots under control and get in touch today to set up a demo.

[1] Kevin Roose, “The Brilliance and Weirdness of ChatGPT,” The New York Times, 12/5/2022.

[2] CNBC, “Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays.”

[3] Business Insider, “I asked ChatGPT to do my work and write an Insider article for me. It quickly produced an alarmingly convincing article full of misinformation.”

[4] Bloomberg, “OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails.”
