The rapid advancement of Artificial Intelligence (AI) technologies has already transformed business operations across the globe. From customer service chatbots to adaptive cybersecurity, the applications of AI are nearly limitless. When properly designed, AI can help minimize paperwork, reduce costs, and drive better business decisions by predicting future outcomes more accurately and mitigating the cognitive biases inherent in human decision making.
And for the financial services industry, the potential of AI is staggering.
AI can expand access to affordable credit for consumers and small businesses, combat fraud, detect and prevent financial crimes, and increase financial inclusion. But many financial institutions remain reluctant to deploy AI to its maximum potential amid seemingly mixed messages from U.S. regulatory agencies.
As with many new technologies, the current AI landscape lacks the depth of established legal and regulatory precedent that institutions can rely on. Mistrust of AI is largely born of its novelty and a general lack of understanding, combined with fear of new technology more broadly. In fact, Gartner research found that 79% of finance executives cited "fear of the unknown" as a reason for their hesitancy to adopt AI.
To date, financial institutions have mostly employed AI to mitigate fraud, know-your-customer (KYC) risk and cybersecurity risk. In cybersecurity, AI is used to monitor mountains of data that would be impractical or impossible to review manually, and some cutting-edge systems can even “learn” from previous cyberattacks to predict and prevent future breaches. This “machine learning” is becoming an increasingly critical tool for protecting consumers’ financial data and thwarting criminal attempts to access sensitive customer information for nefarious purposes.
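To make the idea concrete, the toy sketch below (in Python, using scikit-learn's IsolationForest on invented transaction data, purely for illustration and not any institution's actual system) shows the kind of anomaly flagging that underlies many fraud- and intrusion-detection tools: the model learns what “normal” activity looks like and surfaces the records that don't fit.

```python
# Illustrative sketch only: a toy anomaly detector over synthetic transaction
# features, loosely analogous to the pattern-learning systems described above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" transactions: amount (USD) and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),   # typical amounts around $55
    rng.normal(loc=14, scale=3, size=1000),          # mostly daytime activity
])

# A few suspicious transactions: very large amounts at odd hours.
suspicious = np.array([[5000.0, 3.0], [7500.0, 2.5], [6200.0, 4.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for an outlier and 1 for an inlier.
print(model.predict(suspicious))   # expected: mostly -1 (flagged for review)
print(model.predict(normal[:5]))   # expected: mostly 1 (left alone)
```

Production systems are vastly more sophisticated, but the principle is the same: learn the normal patterns, flag the deviations.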
However, the greatest potential for AI in financial services lies in transforming consumer and small business lending. While empirical underwriting models have been deployed in lending for decades, the predictive power of traditional techniques pales in comparison to what AI and machine learning offer. Confidently deploying AI will lower borrowing costs for consumers and small businesses and bring more “underbanked” customers into the traditional banking system and, therefore, under the purview of federal regulators and consumer protections. Moreover, at a time of historic inflation and economic uncertainty, AI and machine learning have the very real potential to provide a pathway to affordable credit for scores of consumers and small businesses.
AI-driven lending is most impactful when models are trained on the broadest set of relevant data. By incorporating data beyond what is currently available in credit bureau reports, lenders can extend credit to borrowers with little to no conventional credit history, greatly expanding access to affordable credit. To that end, not only should the regulatory community get behind AI in lending, but it should also encourage the use of so-called “alternative data” – information that isn’t actually all that “alternative” – whether bank account transactions, payroll data, spending habits, or utility bill payments, all with the customer’s express consent.
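As a rough illustration of how such data can be folded in, the sketch below (with invented applicants and feature names, using a simple logistic regression rather than any lender's actual model) adds hypothetical cash-flow and utility-payment features alongside traditional bureau variables, so that even thin-file applicants can be scored.

```python
# Illustrative sketch only: combining hypothetical bureau and "alternative data"
# features in a simple underwriting model. All column names and values are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy applicant records; in practice these would come from consented data sources.
df = pd.DataFrame({
    "bureau_score":         [680, 540, 720, 0,    610, 0,    700, 590],   # 0 = thin file
    "months_on_file":       [84,  24,  120, 0,    36,  0,    96,  30],
    "avg_monthly_cashflow": [2100, 900, 3200, 1800, 1100, 2400, 2800, 950],  # bank transactions
    "utility_ontime_rate":  [0.98, 0.75, 1.0, 0.95, 0.80, 0.97, 0.99, 0.70], # bill payments
    "repaid":               [1,    0,   1,   1,    0,    1,    1,    0],
})

X = df.drop(columns="repaid")
y = df["repaid"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # estimated repayment probabilities
```

The point of the exercise is not the particular model but the inputs: applicants with no bureau history still carry signal in their cash flow and payment behavior.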
While the consumer benefits of AI are clear, the current regulatory framework did not envision these types of technology-driven underwriting models. Understandably, a major source of concern with AI lending applications is compliance with fair lending laws and a perceived lack of transparency about how decisions are made. Despite recent and significant advances in the transparency and interpretability of AI lending algorithms, policymakers remain concerned about the potential for bias and discrimination. To wit: the Consumer Financial Protection Bureau recently reiterated that companies using AI in underwriting models must ensure compliance with the Equal Credit Opportunity Act. While such a guarded approach is understandable, a clear regulatory framework that directly addresses how AI-based underwriting can comply with these important protections is desperately needed.
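One common way modelers address the transparency concern is to attribute each decision to the features that drove it, which can feed the reason codes lenders must supply in adverse action notices under the Equal Credit Opportunity Act. The minimal sketch below (a linear model over invented data, chosen for simplicity; real programs often rely on richer explanation methods) ranks each feature's contribution to a single applicant's score.

```python
# Illustrative sketch only: deriving simple "reason codes" from a linear
# underwriting model by ranking each feature's contribution to one decision.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["bureau_score", "avg_monthly_cashflow", "utility_ontime_rate"]

# Toy training data: columns match feature_names; label 1 = repaid.
X = np.array([
    [680, 2100, 0.98],
    [540,  900, 0.75],
    [720, 3200, 1.00],
    [610, 1100, 0.80],
    [700, 2800, 0.99],
    [590,  950, 0.70],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([560, 1000, 0.72])
# Contribution of each feature relative to the average applicant in training data.
contrib = model.coef_[0] * (applicant - X.mean(axis=0))
for name, c in sorted(zip(feature_names, contrib), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")   # most negative contributions = candidate reason codes
```

Whether explanations like these satisfy fair lending requirements is precisely the kind of question a clearer regulatory framework should answer.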
Thankfully, regulators are focusing on this space. Last year, the five federal banking agencies jointly requested input on how AI is currently used in financial services. This was a welcome sign that regulators clearly understand the potential of AI and share the urgency to build an AI regulatory regime that balances innovation with consumer protection. While this was a significant step, regulators must press forward to issue binding guidance and, eventually, formal regulations.
Financial institutions have been operating under a regulatory regime designed well before the development of AI, which is hindering greater adoption, innovation and consumer choices. Clear regulatory guidelines would pave the way for broader access to affordable credit, expand financial inclusion, and significantly reduce fraud and other financial crimes.