AI Is Already Influencing Your Loan Decisions. Is Your Governance Keeping Up?
Rafael DeLeon is senior vice president of industry engagement with Ncontracts, Nashville.
AI is already a part of your lending operations. It’s screening applications, flagging risk, and powering the platforms your team relies on every day. In many cases, it’s also inside your vendors’ systems, running quietly in the background of decisions your organization is ultimately responsible for.
You may think you have a handle on your credit risk, but do you have a handle on the AI shaping it?
That gap is where exposure lives. In 2025, a Massachusetts lender reached a $2.5 million settlement tied in part to fair lending violations in its AI-driven underwriting — and it won’t be the last. The good news is that for most lenders, building a governance program that addresses it doesn’t mean starting from scratch.
The AI Regulatory Landscape for Lenders
There’s no single AI rulebook for lenders yet, but existing frameworks already apply, even when AI isn’t mentioned by name. Fair lending laws, including ECOA and the Fair Housing Act, apply to algorithmic credit decisions the same way they apply to human ones. State-level legislation in Colorado, California, Utah, and Texas has added another layer, and if you operate across state lines, your governance program should meet the most restrictive standards you face.
For lenders with Freddie Mac relationships, the deadline is already here. Freddie Mac updated its Seller/Servicer Guide to include a formal AI and machine learning governance requirement that took effect March 3, 2026, covering the entire loan lifecycle from origination through servicing — including AI embedded in vendor tools. Under Freddie Mac’s framework, responsibility for that governance doesn’t transfer to your vendors. It stays with you.
Across all of it, the same principle applies: “the model did it” is not a viable compliance defense.
Start With What You Have
Before any governance framework makes sense, you need a clear picture of where AI lives within your organization.
Shadow AI refers to tools employees use without formal approval from IT, risk, or compliance, and it is far more common than leadership realizes. Maybe it's the loan officer using a generative AI tool to draft borrower communications, or the operations team relying on an AI-enhanced workflow that quietly automates steps no one has mapped.
Building an inventory means surveying every department, reviewing vendor contracts for AI-related language — terms like “machine learning,” “automated decisioning,” and “artificial intelligence” are good places to start — and getting compliance, risk, IT, and business lines in the same conversation. No single team has full visibility on its own.
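For teams that want to make the contract review concrete, here is a minimal sketch of that keyword scan in Python, assuming vendor agreements have been exported as plain-text files into a single folder; the folder name, term list, and output format are illustrative assumptions, not part of any regulatory standard.

from pathlib import Path

# Illustrative assumptions: the term list and the folder of plain-text contract
# exports are placeholders; adapt both to your own vendor files.
AI_TERMS = ["machine learning", "automated decisioning", "artificial intelligence"]
CONTRACTS_DIR = Path("vendor_contracts")

def flag_ai_language(contracts_dir, terms):
    """Return {contract filename: AI-related terms found} for manual follow-up."""
    hits = {}
    for contract in sorted(contracts_dir.glob("*.txt")):
        text = contract.read_text(errors="ignore").lower()
        found = [term for term in terms if term in text]
        if found:
            hits[contract.name] = found
    return hits

if __name__ == "__main__":
    for name, found in flag_ai_language(CONTRACTS_DIR, AI_TERMS).items():
        print(f"{name}: review for {', '.join(found)}")

A scan like this only surfaces candidates; any contract it flags still needs a human read by compliance and legal.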
For every tool you identify, the core questions are the same: What decisions does it influence? What borrower data does it touch? Can its outputs be explained? A solid, in-progress inventory is far more defensible than no inventory at all.
Not All AI Carries the Same Risk
Credit underwriting models, fair lending decision engines, and AML monitoring systems sit at the top of any risk tiering. These require full model risk management treatment — independent validation, ongoing performance monitoring, and thorough documentation. For lenders, these are the tools most likely to draw scrutiny in an examination, and the ones where a governance gap creates the most direct exposure.
Customer-facing chatbots, marketing analytics platforms, and operational efficiency tools occupy a different tier. Meaningful oversight is still required, but it should be calibrated to a different risk profile. For example, a grammar checker doesn’t need a model validation report.
The discipline is knowing the difference and governing each AI tool accordingly.
Vendor AI Is Your Organization’s Risk
When a vendor uses AI, that risk belongs to your organization just as much as if the AI were running in your own systems. The problem is that AI doesn't behave like conventional software, where you can review specs, documentation, or source code and understand exactly what you're working with. Models learn, drift, and get retrained on data you can't see. A vendor whose AI passed your initial due diligence may be running a materially different model six months later. Without visibility built into the relationship from the start, you'd have no way of knowing until something goes wrong.
For any vendor using AI, your due diligence should include how the model was developed and validated, what data it was trained on, how bias testing is performed, and whether the vendor will notify you before material model changes go live. That last provision is frequently missing from vendor contracts, but advance notice of significant model updates is the only way to assess impact before it reaches your borrowers and your exam file.
Build the Paper Trail
A governance policy on a shelf isn’t a governance program. When examiners assess AI governance, they want to see active oversight and the documentation to prove it, including an AI inventory with risk classifications, a board-approved risk appetite statement, vendor due diligence files with AI-specific elements, bias testing results, model validation reports for high-risk tools, and staff training records documenting who was trained on what and when.
Final Thoughts
The lenders building real programs now aren’t doing it because a rulebook told them to. They’re doing it because they understand what’s at stake when AI gets a credit decision wrong. That means knowing what AI you have, governing it by risk level, holding vendors accountable, and documenting all of it.
That’s what good governance looks like — and it’s something you can start building today.
(Views expressed in this article do not necessarily reflect policies of the Mortgage Bankers Association, nor do they connote an MBA endorsement of a specific company, product or service. MBA NewsLink welcomes submissions from member firms. Inquiries can be sent to Editor Michael Tucker or Editorial Manager Anneliese Mahoney.)
