Chatbots – computer-generated, online consumer help programs that simulate human-like responses – can impede customers from resolving problems, especially if the systems are poorly deployed, the federal consumer financial protection agency charged Tuesday.
In a release, the Consumer Financial Protection Bureau (CFPB) said many financial institutions are integrating artificial intelligence (AI) technologies to steer people toward chatbots to reduce costs. The agency asserted that “a poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”
The agency said it has received many complaints from frustrated customers trying to “receive timely, straightforward answers from their financial institutions or raise a concern or dispute. Working with customers to resolve a problem or answer a question is an essential function for financial institutions – and is the basis of relationship banking.”
The CFPB noted it is actively monitoring the market and expects banks using the systems to do so in a manner consistent with their customer and legal obligations. The bureau also said it encourages people who are experiencing issues getting answers to their questions, due to a lack of human interaction, to submit a consumer complaint to the CFPB.
According to the bureau, chatbots have been deployed widely among financial institutions. The bureau said 37% of the U.S. population is estimated to have interacted with a bank’s chatbot last year. Among the top ten commercial banks in the country, the agency said, all use chatbots of varying complexity to engage with customers.
“Financial institutions advertise that their chatbots offer a variety of features to consumers like retrieving account balances, looking up recent transactions, and paying bills,” the bureau said. “Much of the industry uses simple rule-based chatbots with either decision tree logic or databases of keywords or emojis that trigger preset, limited responses or route customers to Frequently Asked Questions (FAQs). Other institutions have built their own chatbots by training algorithms with real customer conversations and chat logs, like Capital One’s Eno and Bank of America’s Erica. More recently, the banking industry has begun adopting advanced technologies, such as generative chatbots, to support customer service needs.”
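The “simple rule-based chatbots” the bureau describes – keyword databases that trigger preset, limited responses or route customers to FAQs – can be illustrated with a minimal sketch. The keywords, responses, and FAQ link below are hypothetical, not drawn from any bank’s actual system:

```python
# Minimal sketch of a rule-based chatbot: keyword triggers map to preset
# responses, with a fallback that routes the customer to an FAQ page.
# All keywords, responses, and the URL are hypothetical examples.

PRESET_RESPONSES = {
    "balance": "Your checking balance is available under Accounts > Checking.",
    "transactions": "Recent transactions are listed under Accounts > Activity.",
    "bill": "To pay a bill, open the Payments tab and choose a payee.",
}

FAQ_FALLBACK = "Sorry, I couldn't match your question. Please see our FAQ page."

def reply(message: str) -> str:
    """Return the first preset response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in PRESET_RESPONSES.items():
        if keyword in text:
            return response
    return FAQ_FALLBACK

print(reply("What's my balance?"))
print(reply("How do I dispute a charge?"))
```

As the sketch shows, any question outside the preset keyword list – such as a dispute – falls through to the generic FAQ fallback, which is precisely the kind of dead end the bureau says frustrates customers trying to raise a concern.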
However, the agency contended, the use of chatbots introduces several risks, including noncompliance with federal consumer financial protection laws, diminished customer service and trust, and harm to consumers (such as providing inaccurate information about a consumer financial product or service).