CFPB, Federal Agencies Warn of AI Bias Risks

The Consumer Financial Protection Bureau and three other federal agencies on Tuesday issued a joint statement cautioning financial firms that the use of artificial intelligence could increase the risk of bias and civil rights violations.

The joint statement from the CFPB, the Department of Justice, the Federal Trade Commission, and the Equal Employment Opportunity Commission warned that technology marketed as “artificial intelligence” and as taking bias out of decision-making has the potential to produce outcomes that result in unlawful discrimination. The statement outlines the agencies’ commitment to enforcing laws and regulations against such potential discrimination.

“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” said CFPB Director Rohit Chopra. “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”

The joint statement follows a series of CFPB actions to ensure advanced technologies do not violate the rights of consumers. Specifically, the CFPB has taken steps to protect consumers from:

  • Black box algorithms: In a May 2022 circular, the CFPB advised that when the technology used to make credit decisions is too complex, opaque, or new to explain adverse credit decisions, companies cannot claim that same complexity or opaqueness as a defense against violations of the Equal Credit Opportunity Act.
  • Algorithmic marketing and advertising: In August 2022, the CFPB issued an interpretive rule stating that when digital marketers are involved in identifying or selecting prospective customers, or in selecting or placing content to affect consumer behavior, they are typically service providers under the Consumer Financial Protection Act. When their actions, such as using an algorithm to determine whom to market products and services to, violate federal consumer financial protection law, they can be held accountable.
  • Abusive use of AI technology: Earlier this month, the CFPB issued a policy statement to explain abusive conduct. The statement is about unlawful conduct in consumer financial markets generally, but the prohibition would cover abusive uses of AI technologies to, for instance, obscure important features of a product or service or leverage gaps in consumer understanding.
  • Digital redlining: The CFPB has prioritized digital redlining, including bias in algorithms and technologies marketed as AI. As part of this effort, the CFPB is working with federal partners to protect homebuyers and homeowners from algorithmic bias within home valuations and appraisals through rulemaking.
  • Repeat offenders’ use of AI technology: The CFPB proposed a registry to detect repeat offenders. The registry would require covered nonbanks to report certain agency and court orders connected to consumer financial products and services. The registry would allow the CFPB to track companies whose repeat offenses involved the use of automated systems.

The CFPB said it will continue to monitor the development and use of automated systems, including AI-marketed technology, and will work closely with the DOJ’s Civil Rights Division, the FTC, and the EEOC to enforce federal consumer financial protection laws and protect the rights of American consumers, regardless of whether legal violations occur through traditional means or advanced technologies.

The CFPB said it will also release a white paper this spring discussing the current chatbot market, the technology’s limitations, its integration by financial institutions, and ways the CFPB is already seeing chatbots interfere with consumers’ ability to interact with financial institutions.