Premier Member Editorial: Agentic AI–The Regulatory and Compliance Barriers to Adoption

Agentic AI, the use of autonomous systems capable of independent decision-making in support of customer service and sales efforts, is sneaking up on the mortgage industry.

Andrew Liput

The thought of replacing employees with AI-generated “bots” that are programmed to interface with consumers carries serious regulatory and compliance concerns, which may limit their usefulness in external operations.

In considering agentic AI, we must understand the difference between using automated platforms internally to collect, review and analyze internal data to expedite operations, and deploying consumer-facing bots that might conceivably take loan applications, answer borrower questions, deliver rate and term decisions, withdraw applications, and even close a loan transaction end to end.

The former, which is already here, carries few risks aside from the need for internal monitoring to verify systems are doing what they are expected to do. There is also the possibility of internal systems handling sensitive consumer data where these systems interact through APIs with third parties, thereby creating data privacy and information-sharing oversight responsibilities. However, when we start talking about agentic AI independently interacting with consumers as a replacement for or supplement to loan origination, processing, underwriting and closing, the issues are magnified significantly.

Consideration must be given to how the SAFE Act applies, as well as ECOA, FCRA, TILA, and RESPA. Regulators require transparent, objective, bias-free, and risk-managed operational controls. AI driven by programmed reasoning and adaptive machine learning creates issues for consumers and lenders alike.

AI-driven loan application bots are not licensed loan originators. If underwriters, who are not licensed originators, cannot communicate with borrowers on loan details, why should bots be an exception? In addition, AI-driven decision making might be programmable for basic information, but when a consumer starts providing detailed data, or inputs go sideways because of language barriers or even typographical errors, the output could lead to serious regulatory issues.

Let’s take, for example, bias and discrimination in lending practices. Agentic AI relies on probability-based models that could incorporate skewed data, resulting in systemic biases, such as denying a mortgage based upon ZIP code and race. Risk scoring and approvals would still require live employee oversight and monitoring to ensure there is a human in the loop to override or modify machine-made decisions.

Also, loan origination systems (LOS) are built for rule-based automation, not adaptive AI, which could lead to compliance issues and result in fines and penalties.

What about agentic AI marketing? We have already seen and heard how robo-calls have triggered state and federal legislation to protect consumers from unwanted harassment. AI technology now allows companies to place tens of thousands of calls instantly, with human-sounding agentic AI creations conducting the dialogue. Unleashing this technology in the mortgage industry invites FTC scrutiny to ensure there is proper prior authorization for communications, and that the communications square with SAFE Act restrictions on unlicensed loan origination communications. Once again, humans in the loop are a necessity, and the maintenance of call records at such high volumes can create logistical nightmares.

Finally, consumers have shown wariness and reluctance to embrace AI as a replacement for human interaction in important financial decisions. The pushback is real; just ask anyone who has been forced to navigate multiple layers of identity verification simply to sign up with an online retailer.

Recently, the Michigan Department of Insurance and Financial Services published a bulletin warning lenders about the use of “Artificial Intelligence (AI) Systems” and “using such systems to make decisions or take actions that may impact consumers.” See Bulletin 2026-03-BT/CF/CU.

As regulators begin to decide how to manage and supervise the use of AI systems in mortgage lending transactions, the limits and restrictions on its impact on consumers will develop more fully. In the meantime, move slowly and carefully, and follow all applicable state and federal regulations and guidelines for consumer protection, data privacy and security, and third-party outsourcing of any mortgage manufacturing obligations.

AI will continue to play an important supportive role in internal operations (such as data gathering and validation), but the expansion of agentic AI to take mortgage loan transactions to an end-to-end automated platform remains a long and winding road filled with speed bumps and warning signs.

(Views expressed in this article do not necessarily reflect policies of the Mortgage Bankers Association, nor do they connote an MBA endorsement of a specific company, product or service. MBA NewsLink welcomes submissions from member firms. Inquiries can be sent to Editor Michael Tucker or Editorial Manager Anneliese Mahoney.)