MBA Urges Federal Agencies to Clarify How AI Technologies Apply to Regs
The Mortgage Bankers Association last week asked federal regulatory agencies to clarify how existing fair lending requirements and the Equal Credit Opportunity Act's adverse action notification requirements apply to artificial intelligence technologies.
In the letter, MBA also describes AI’s benefits and uses in the mortgage space, especially in expanding credit, and offers suggestions for how the agencies can facilitate broader adoption of AI within the agencies’ regulatory framework.
The letter went to the Office of the Comptroller of the Currency, Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corp., Consumer Financial Protection Bureau and the National Credit Union Administration.
“Artificial intelligence and related technologies, including machine learning, have the potential to provide substantial benefits for participants in the mortgage lending industry and the consumers they serve,” wrote Pete Mills, MBA Senior Vice President of Residential Policy and Member Engagement. “AI has particular potential in the area of credit underwriting, where it can be combined with alternative or non-traditional data to expand access to affordable (and sustainable) mortgage credit. Despite these benefits, broad adoption of AI has been slowed by uncertainty surrounding how AI fits within a regulatory framework that was largely created before its development. MBA encourages the CFPB to clarify its expectations in a way that facilitates the responsible use of AI. Such clarity is particularly necessary with respect to fair lending and the Equal Credit Opportunity Act’s adverse action notice requirements.”
The letter notes AI allows lenders to consider far more data than is possible with conventional underwriting models. “With greater data capacity, lenders can more easily consider alternative data sources, which may include financial data (e.g., cash flow histories, payment histories from housing rentals, cell phones, utilities, etc.) and nonfinancial data,” MBA said. “Using enhanced processing power and ability to handle a broader pool of data, AI technologies can identify correlations between consumer data and credit risk that would not be captured by conventional underwriting processes. In this way, AI can be used to produce a more comprehensive underwriting assessment that has been shown to be a more accurate predictor of credit risk.”
The benefits of AI can also, if used correctly, make the financial system more inclusive, MBA said. “By using non-traditional data to evaluate the creditworthiness of applicants, AI can expand access to credit to consumers who fall outside traditional underwriting models,” the letter said. “The potential benefits of this are significant…AI can be seen as a valuable tool to narrow racial gaps in credit and home ownership.”
Additionally, MBA said, AI has the potential to lower credit costs for consumers and improve consumer experience. “In the underwriting phase, AI facilitates a more accurate credit risk assessment, which can help lenders make more efficient pricing decisions,” the letter said. “In loan production, AI technology can be found in tools that provide workflow optimization, document verification, and fraud prevention. To the extent these improvements lower operational costs or reduce loan production times, they can result in consumer cost savings. Increased adoption of AI technology, and associated consumer cost savings, could result if the industry is provided more detailed regulatory guidance on key AI topics such as data contribution, data quality, testing and explainability.”
However, despite these benefits and the growing affordability of AI tools, AI adoption has been slower than might be expected. MBA said part of this reluctance can be attributed to businesses’ uncertainty over how the current regulatory framework would apply to the use of AI. “Lenders recognize that underwriting systems that rely on AI, like traditional credit underwriting systems, must satisfy applicable fair lending requirements,” the letter said. “Mortgage lenders and servicers wish to use AI in a manner that is consistent with fair lending laws. Unfortunately, unlike conventional underwriting processes, where regulator expectations are relatively well known, there is uncertainty surrounding how regulators will apply fair lending laws to AI used in underwriting or other phases of the credit transaction. While the existing framework, including widely used practices with implicit regulator approval, is generally instructive, the unique characteristics of AI suggest more may be necessary.”
MBA urged the Agencies, in particular the CFPB, to commit to helping the industry navigate fair lending risks associated with AI. As a threshold matter, MBA recommends that the Bureau explicitly permit lenders to adopt approaches that use AI in ways that expand credit access and limit fair lending risks to the same or greater extent than existing models and techniques. “Given its role as the fuel that powers AI systems, it is equally important that the Bureau clarify its fair lending expectations regarding the use of alternative data,” MBA said.
Additionally, MBA said modernizing elements of ECOA’s notice requirements for adverse actions would facilitate wider adoption of AI technologies. The letter noted under ECOA and its implementing rule, Regulation B, creditors must notify applicants regarding adverse actions taken in connection with credit applications. Such notice must include either a statement of specific reasons for the action taken or a disclosure of the applicant’s right to request a statement of specific reasons. The statement of specific reasons must “indicate the principal reason(s) for the adverse action.”
“For lenders utilizing AI underwriting systems, providing specific reasons for an adverse action can be particularly challenging if the creditor is expected to provide reasons that could be helpful for the consumer—i.e., reasons that enhance the consumer’s ability to improve their credit profile,” MBA said. “Given AI’s capacity to accommodate large, diverse datasets and its ability to identify previously unknown credit risk indicators within those datasets, an AI-produced adverse action may be the result of factors that, at least to the consumer, appear meaningless. Further, as the AI system ‘learns,’ the logic underpinning the underwriting decision evolves and the weight assigned to various data points may change. In this way, a factor that was significant to a creditor’s adverse action—potentially even the principal reason for an adverse action—on one day may be less relevant in the future.”
MBA also noted the CFPB attempted to address concerns regarding AI and ECOA’s adverse action requirement in its 2019 Fair Lending Report. “While MBA appreciates the Bureau’s guidance, we believe additional clarity is needed,” the letter said. “Specifically, we encourage the Bureau to release additional sample notices that could be used for an AI-derived adverse action, a scenario that is not covered by the current sample notices. The sample notices should cover a broader range of factors, particularly those factors which commonly contribute to adverse actions by AI underwriting systems.”