
How to integrate responsible AI: 5 strategies experts recommend for businesses


How can businesses integrate AI bots into their customer communication, while also making sure they're not going off the rails? According to industry experts, these are the five key responsible AI principles companies should follow. 

AI bots that go rogue can make for entertaining headlines, like the chatbot that wrote a derogatory poem about its own company, or the hacker who tricked an AI bot into selling him a car for one US dollar.

While funny to read, from a business perspective this is, of course, exactly the kind of result you're trying to avoid when integrating AI bots into customer communication.

The good news is, you can tap into AI benefits while ensuring important safeguards are in place at your business. We've gathered recommendations from industry experts and put together a list of five key responsible AI principles for companies. 

And the best part is, they work for larger and smaller companies alike! 

Responsible AI principle number one: Have a clear purpose

This might seem obvious, but the truth is responsible AI starts with the principle of really understanding the technology and its purpose for your business. 

Map out what exactly you want to achieve with AI in your customer communication. These goals could range from an FAQ bot to gathering customer data to automating an entire process. After all, you can only start to think about safety measures once you know the full scope of your AI bot, says Alexis Safarikas. 

Safarikas is the CEO of Campfire AI, an agency that helps companies around the world build voice bots and chatbots. In his experience, which he recently shared at Mobile World Congress in Barcelona, responsible AI starts with a clear vision.


How can this look in practice? Let's take a look at Argenta, one of Belgium's largest banks. They implemented an AI bot to automate their customer service, including increasing daily credit limits. The goal was to help customers faster and save the service team time. 

Argenta knew what they wanted to use the bot for, so it was clear they needed to protect customers' personal data before using it. Knowing this helped them put the right safety measures in place to restrict access to the bot's data. 


How Argenta banked a 95% CSAT score with a chatbot

Read how Belgium's fifth-largest banking institution used a chatbot to improve customer satisfaction.


Responsible AI principle number two: Work in a confined environment 

"Bots are inherently people pleasers," says Alexis Safarikas. This means that if it makes the customer happy to buy a car for one dollar, a bot might offer that price unless you set very clear limits.

Businesses can stop their AI bots from going rogue by controlling what information they can access and ensuring they only use the information the company wants to share with customers.
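One way to make this concrete is to validate any bot-proposed action against hard business rules before it takes effect, rather than trusting the model's output. Here's a minimal sketch of that idea in Python; the function name and price floor are purely illustrative, not from any specific framework:

```python
# Hypothetical guard: the bot may *suggest* a deal, but business rules decide.
MIN_SALE_PRICE = 15_000  # hard floor set by the business, not by the bot

def approve_bot_offer(item: str, proposed_price: float) -> bool:
    """Reject any bot-proposed offer below the business-defined floor."""
    return proposed_price >= MIN_SALE_PRICE

# The "one-dollar car" scenario is blocked regardless of what the bot said:
print(approve_bot_offer("car", 1.0))       # False: offer rejected
print(approve_bot_offer("car", 18_500.0))  # True: within policy
```

The key design choice is that the limit lives in ordinary application code the bot cannot talk its way around, not in the prompt.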

Geertina Hamstra, who specializes in conversational AI bots for the Dutch healthcare provider MINND, therefore recommends following the responsible AI principle of a confined environment. For instance, you can limit the bot's pool of information by feeding it only very specific data, like your FAQ docs, product pages, or company blog, instead of giving it access to the open web.
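In code, a confined environment often boils down to the bot answering strictly from an approved document pool and falling back safely on everything else. A rough sketch of that pattern, with deliberately naive keyword matching and made-up documents for illustration:

```python
# Illustrative only: answer strictly from an approved document pool,
# never from the open web or the model's general knowledge.
APPROVED_DOCS = {
    "opening hours": "We are open Monday to Friday, 9:00-17:00.",
    "returns": "Items can be returned within 30 days with a receipt.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, text in APPROVED_DOCS.items():
        if topic in q:
            return text
    # Anything outside the confined pool gets a safe fallback.
    return "I can't help with that, but I can connect you to a colleague."

print(answer("What are your opening hours?"))
print(answer("Write me a poem about the company"))  # falls back safely
```

A production bot would use proper retrieval instead of substring matching, but the principle is the same: the fallback branch, not the model, decides what happens outside the approved scope.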

This doesn't necessarily require IT resources or bot builders in your own company, but you should work with a strong tech partner that can help you set up a restricted environment, says Joachim Jonkers, Director of Product AI at Sinch. 


These tools also exist for small and medium-sized businesses. At Sinch Engage, for instance, you get access to pre-designed bot templates and powerful AI tools with guardrails in place from the get-go, allowing companies of any size and budget to set up a safe AI bot in minutes.

Responsible AI principle number three: Be transparent

Transparency is a key responsible AI principle. This entails informing users that they're interacting with an AI bot, as well as setting the correct expectations. Offering responsible AI in customer communication means that users should know in advance that they're interacting with a bot, and even more importantly, what the bot can and can't do, says Céline Lemonne, Conversation Designer at Sinch. 
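In practice, that disclosure can be baked into the bot's very first message: say what it is and what it can do before the conversation starts. A hypothetical greeting template (the capability list is an example, not from any real deployment):

```python
# Hypothetical first-contact message: disclose bot identity and scope up front.
BOT_CAPABILITIES = ["answer FAQ questions", "check an order status"]

def greeting() -> str:
    skills = " or ".join(BOT_CAPABILITIES)
    return (
        "Hi! I'm an AI assistant, not a human agent. "
        f"I can {skills}. For anything else, I'll hand you over to a colleague."
    )

print(greeting())
```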


Beyond this, users should also be informed about what happens with the data they provide to an AI bot, recommends Geertina Hamstra. Understanding the full AI interaction not only makes it easier for users to engage with AI bots; transparency also leads to more openness toward using the new technology.

Hamstra has seen how this understanding facilitates the customer-bot interaction at the Dutch healthcare service provider Moet Ik Naar De Dokter ("Should I go to the doctor?").


This shows that once customers understand the bot interaction, they're willing to try it. Making transparency one of your key responsible AI principles will lower resistance to the new technology and increase its usage.

Responsible AI principle number four: Ensure the best customer experience

You can only have accessible, fair, and ethical AI if you put the customer experience first. Is your AI bot accessible to users of all ages and backgrounds? Is it guaranteeing fair treatment to everyone? Is it using diverse language and viewpoints?

The more you think about the end user of your AI solutions, the easier it'll be to integrate a responsible AI solution. When India's leading private sector bank, HDFC Bank, wanted to launch a smart WhatsApp bot for their Indian customers, it came down to this question: "Can customers in rural areas who don't speak English and aren't tech-savvy use this bot?"

HDFC knew that they had set up an inclusive AI bot once they were able to answer this question with a clear "yes", says Gautam Anand, Senior Vice President at HDFC Bank. 


Their approach proved successful, as HDFC was able to grow their engagement rate by 30% with the WhatsApp automation.

Responsible AI principle number five: Be the first to think of the "what ifs"

Lawmakers around the world are looking at AI tools to set up new laws that'll ensure safe and ethical use of the technology, like the AI Act in the European Union.

However, as with many new developments, the law is typically slower than the technology. That's why businesses and tech providers are in a leadership position, says Joachim Jonkers. 


That's why companies and AI providers have to be the first to think of all the possible "what ifs" when it comes to setting up a responsible AI solution. 

  • "What if personal data is involved?" 
  • "What if the bot accidentally provides incorrect information?"
  • "What if the bot violates privacy or copyright laws?" 
  • "What if customers want to sue the bot?" 
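Some of these "what ifs" can even be turned into an explicit pre-launch checklist in code. A toy sketch of the idea; the check functions are placeholders you would back with real tests and audits for your own bot:

```python
# Toy "what if" checklist: each check is a placeholder predicate that a
# real deployment would back with actual tests or audits.
def handles_personal_data_safely() -> bool:
    return True  # e.g. verified that personal data is masked in logs

def flags_uncertain_answers() -> bool:
    return True  # e.g. bot says "I'm not sure" instead of guessing

def has_escalation_path() -> bool:
    return True  # e.g. human handover is always available

CHECKS = {
    "personal data protected": handles_personal_data_safely,
    "incorrect answers flagged": flags_uncertain_answers,
    "human escalation available": has_escalation_path,
}

def release_ready() -> bool:
    """The bot only ships once every 'what if' has an answer."""
    return all(check() for check in CHECKS.values())

print(release_ready())
```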

The more "what if" scenarios and solutions you incorporate from the very beginning into your AI bot integration, the safer your AI will be, both for you and your customers. And you'll ensure that your AI bot is a sustainable solution that'll stand the test of time. 

Keep in mind that this doesn't all fall on your business. There are many smart bot solutions out there that were built with these responsible AI principles in mind. These can be readily integrated into your chatbot solution to make it as easy as possible for you to get started with responsible AI.


Take your chatbot to the next level

Upgrade your chatbot experience with advanced AI technology. Our team of experts is happy to answer all your questions.

Written by: Marinela Potor
editor-in-chief