Current ‘guardrails’ can aid AI risk mitigation, Brainard tells conference

The growing use of artificial intelligence (AI) applications poses its own challenges for financial system regulators – for example, how to mitigate risk without blocking expanded access to services – but current regulatory and supervisory “guardrails” offer a good place to start, Federal Reserve Board Gov. Lael Brainard said in remarks delivered Tuesday in Philadelphia, Pa.

“Although it is still early days, it is already evident that the application of artificial intelligence (AI) in financial services is potentially quite important and merits our attention,” Brainard said during a conference Tuesday at the Federal Reserve Bank of Philadelphia. “Through our Fintech working group, we are working across the Federal Reserve System to take a deliberate approach to understanding the potential implications of AI for financial services, particularly as they relate to our responsibilities.”

Brainard said financial services firms investing in AI are interested in at least five capabilities: better pattern recognition, cost efficiencies, more accurate processing, better predictive power, and “better than conventional approaches in accommodating very large and less-structured data sets and processing those data more efficiently and effectively.”

She said the potential breadth and power of new AI applications raise questions about potential risks to institutions, consumers and the financial system. She pointed to existing regulatory and supervisory “guardrails” as a “good place to start as we assess the appropriate approach for AI processes.”

Brainard noted a National Science and Technology Council study that concluded that an AI-related risk falling within the bounds of an existing regulatory regime can be addressed by evaluating whether current rules are adequate or should be adapted to AI. Treasury reached a similar conclusion in its own fintech report this summer.

“With respect to banking services, a few generally applicable laws, regulations, guidance, and supervisory approaches appear particularly relevant to the use of AI tools,” she said. She pointed, for example, to the Fed’s guidance on model risk management (SR Letter 11-7) and guidance on vendor risk management (SR 13-19/CA 13-21), along with the prudential regulators’ guidance on technology service providers.

“For its part, AI is likely to present some challenges in the areas of opacity and explainability,” she added. “Recognizing there are likely to be circumstances when using an AI tool is beneficial, even though it may be unexplainable or opaque, the AI tool should be subject to appropriate controls, as with any other tool or process, including how the AI tool is used in practice and not just how it is built.”

Brainard’s speech covered a broad range of issues arising from the growth of AI tools: potential uses for delivering services, facilitating back-office functions, protecting against AI-driven attacks, and facilitating access to services (for example, as a potential way to help consumers with little credit history obtain credit). She also discussed the machine learning branch of AI and areas where human judgment may be needed to ensure services are provided in a manner that is fair (and compliant with consumer protections).

Brainard said the Fed is still learning about how AI tools can be used in the banking sector, and she said it’s incumbent on regulators to facilitate an environment “in which socially beneficial, responsible innovation can progress with appropriate mitigation of risk and consistent with applicable statutes and regulations.”

What Are We Learning about Artificial Intelligence in Financial Services? – Speech by Federal Reserve Board Gov. Lael Brainard (during Fintech and the New Financial Landscape, Federal Reserve Bank of Philadelphia, Nov. 13, 2018)