Mark P. Dangelo: Playing Chicken with the Data Freight Train

The “mastery” of data is not about how much you capture, how large your data warehouses or lakes become, or the native cloud provisioning solutions you deploy. It is about creating, sustaining, and utilizing the kind of data supply chain that non-traditional lenders are already deploying. To think otherwise is akin to “playing chicken” with a freight train, hoping somehow it will veer from its tracks and spare you.

In 2022, when interest rates touched 7% right before the MBA Annual Conference, optimism among participants was high that these rates were an aberration, not the norm for the industry’s future.  Yet in 2023, inflation continued to grow, the Federal Reserve kept its promise to raise rates, and consumers paid greater percentages of their income for home ownership.  A year later, affordability dominates buyers’ decisions, with mortgage lenders experiencing shrinking margins and declining volumes.

Additionally, it was during the 2022 Annual Conference that I released “Adapt or Die: The Reimaging of the Mortgage Industry” at the request of the MBA.  It was a comprehensive, forward-looking analysis of the impact digitalization would have on emerging digital products and services, which, under the microscope of tradition, lacked volume and were distinct from commodity-driven profit margins.

To put a finer focus on events and discussions a year later: interest rates are now the highest since 2001, volumes projected for 2024 will contract to levels not seen since 2011, and a recession is likely in 1H 2024 (see Business Insider / Fannie Mae, September 23, 2023).  So, what is next?  What strategy, operational improvements, or competitive differentiation should bankers and lenders (BL) embrace to survive 2024-2025 market conditions?  What needs to change, especially when it comes to innovation, data, and systems?

The key to survival for many leaders will be a fundamental shift from BL process ideation and implementations (e.g., LOS, AVM, consumer) to data discovery, ingestion, and reuse that fuels efficient ML and AI solutions across processes.  Indeed, the industry is being reimaged.

What’s in Your Data?

Across financial and lending groups, cloud capabilities have exploded since the Great Recession, growing over 50% per annum.  The growth was fueled in part by extensive demands to deal with regulations, operational complexity, customer requirements, and the exponential growth of FinTech solutions, all underpinned by housing demand and a pandemic.  Today, the provisioning of off-premises computing and storage capacity is straightforward, thanks to the rise of AWS, Snowflake, and Azure.

Yet the ability to reuse data and to ensure linkages back to auditable systems-of-record (SOR) was discounted as the tools and methods to replicate, manipulate, and iterate data, all without capital budgets or complex utilities, became ubiquitous.  With data capture and storage doubling every 18-24 months, designers ignored the requirement for auditable SORs.  The result: paths of data lineage that cannot be traced back to their original sources.
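To make the concern concrete, the sketch below shows one way a derived record could carry an auditable lineage trail back to its SOR.  It is a minimal Python illustration; the names (LineageEntry, LoanRecord, derive) are assumptions for this example, not an industry or vendor API.

```python
# Minimal sketch: every derived copy of a record carries a lineage trail
# back to its system-of-record (SOR), so replication stays auditable.
# All names here are illustrative assumptions, not a vendor API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageEntry:
    source_system: str   # originating SOR, e.g., "LOS" or "AVM"
    source_key: str      # primary key of the record in that SOR
    transform: str       # what was done to produce this copy
    at: datetime         # when the derivation happened

@dataclass
class LoanRecord:
    loan_id: str
    payload: dict
    lineage: list = field(default_factory=list)

    def derive(self, transform: str, new_payload: dict) -> "LoanRecord":
        """Create a downstream copy that still points back to its SOR."""
        entry = LineageEntry(
            source_system=self.lineage[0].source_system if self.lineage else "SOR",
            source_key=self.loan_id,
            transform=transform,
            at=datetime.now(timezone.utc),
        )
        return LoanRecord(self.loan_id, new_payload, [*self.lineage, entry])
```

Any copy produced through a path like derive, however many steps removed, can still be traced to its original source; copies created outside such a path are exactly the untraceable lineage described above.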

Today, while we prize our systems and claim “mastery” over our data, the reality is that we “sample” for conformance only an exceedingly small fraction of the data sources, usually between 1% and 5% of all data utilized within and across our process-driven systems.  Moreover, 60% of the data captured within a banking and lending institution is never accessed or reused once it is captured and stored, leading to cluttered, fragmented repositories that are difficult to query (even with NLP) and complex to implement with data-driven technologies (e.g., LLMs and generative AI).
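The gap between fractional and full conformance checking is easy to see in code.  The sketch below is a hypothetical Python illustration (the 2% rate and the conforms rule are assumptions for the example, not industry practice) contrasting a sampled defect estimate with a check of every record.

```python
# Illustrative contrast: conformance checked on a small sample vs. the
# full population. The rule and the 2% rate are assumptions for this sketch.
import random

def conforms(record: dict) -> bool:
    # Placeholder rule: required fields present and loan amount positive.
    return {"loan_id", "amount"} <= record.keys() and record["amount"] > 0

def sampled_defect_rate(records: list, rate: float = 0.02) -> float:
    """Estimate the defect rate from a fractional sample (the status quo)."""
    sample = random.sample(records, max(1, int(len(records) * rate)))
    return sum(not conforms(r) for r in sample) / len(sample)

def full_defect_rate(records: list) -> float:
    """Check 100% of records, the data-supply-chain target."""
    return sum(not conforms(r) for r in records) / len(records)
```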

Meanwhile, as RPA, ML, and AI grow rapidly, data, algorithms, and decisioning logic (e.g., ensembled AI, meaning AI systems linked to other AI systems) are opaque, a black box for those using the systems to make projections, approve loans, and, increasingly important, understand cross-process risks embedded within traditional systems.  When errors find their way into work and process flows, many organizations face expensive downstream corrections that erode consumer confidence, create legal risk, and damage the brand.

Traditional vs. Data-Driven

Business Models
Traditional: Paper-to-digital automation; commoditization with scale; volume driven and operationally defined; processing fees.
Data-Driven: Native digital / digital assets; digital economy and ecosystem products; hyper-personalization.

Data Creation
Traditional: Dependent on process modeling; created as secondary to system functions; filtered within the system using “standards.”
Data-Driven: Part of an end-to-end data supply chain; data isolated and managed independently (DIM rules); reusability across systems with one SOR.

Designs and Architectures
Traditional: FinTech solutions; large, vertical designs; limited computing power; centralized.
Data-Driven: Distributed / hybrid; edge computing; active data governance; data virtualization / mesh.

Innovation and Data Monetization
Traditional: System / process efficiencies; industry standards; fractional sampling; competition.
Data-Driven: Data value assessments and priority sequences; standards as a stepping stone for DIMs; 100% data sampling.

As illustrated above, traditional operating approaches quickly lose their efficacy against declining market volumes, as the systems were designed for scale.  Yet volumes are fleeting, and the margins sought cannot be regained through process efficiencies alone; recovery requires a phase shift in approach and corporate focus.

Moreover, traditional solutions and customer interactions were automated using rules and designs that create complexities for data-driven solutions such as ML and AI.  These legacy approaches also contribute to rising regulatory costs, systemic risks, and data inefficiencies that slice margins and profitability.  How do organizations adapt their existing architectures and infrastructures to participate efficiently across market realities?

Developing an Adaptive Data Supply Chain and Organizational Mindset

Often referred to as “rewiring” the organization and its approach to data innovation, the identification and creation of a banking and lending data supply chain demands a shift of “ideation” compared to traditional system designs, which concentrated on silos of features and functionality and prioritized process efficiency over data reusability.

Transformation of operations from the current state into repeatable data approaches, methods, technologies, and entities must be designed and deployed to capitalize on emerging customer conditions.  To survive, enterprises must reassess the operational systems they have deployed against the needs of emerging digital markets, and these represent material shifts of approach and demand.

The table below is a foundational representation of the sequenced steps to deliver a data supply chain independent of siloed, process-defined data; a minimal code sketch of several of these steps follows the table.

Data Sources (systems of record, SOR): Multiple sources and uses of data across all silos of mortgage origination, servicing, and securitization, utilizing various methods of collection (e.g., LOS, AVM, credit, customer digital artifacts).  This represents a logical and physical map controlled by automation, not PDFs.
Data Onboarding and Isolation: Data alignment, quality, deduplication, and cleaning to process transactional, financial, and historical data.
Data Governance and Curation: Automated policies and procedures to actively manage and enforce rules, standards, and lifespan (i.e., cradle to grave) unique to the organization and its customers.
Data Risks, Value, Privacy, Security, and Aggregation: Leveraging governance ingestions, ring-fence the data per regulatory compliance and customer preferences, and determine the relative validity weight of data based upon the “six V’s” of data.
Data Virtualization and Domains (Mesh / Fabric): Minimize replication, point-based silos, and ETL / warehouses to reflect computing and storage advancements that improve auditability across domains of data, while reducing or eliminating fractional sampling.
Data Analytics / Fraud: To ensure context for data usage, utilize isolation modules that can monitor and alert on discrepancies across platforms and pipelines.
Lifecycle Feedback: Apply archival policies and regulations across the supply chain process (i.e., creation, retention, manipulation, and deletion) to filter out exploding digital assets whose value has expired or no longer matches organizational objectives.
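As a rough illustration of how the onboarding, governance, and ring-fencing steps above might chain together, consider the Python sketch below.  Every name and rule in it (Record, onboard, govern, ring_fence) is a hypothetical simplification for this article, not a reference implementation; note that 100% of records flow through every stage, with no fractional sampling.

```python
# Hypothetical sketch of three supply-chain stages chained as one pipeline.
# All names, rules, and structures are simplifications for illustration.
from dataclasses import dataclass, field

@dataclass
class Record:
    key: str
    source: str                 # originating SOR, e.g., "LOS"
    fields: dict = field(default_factory=dict)
    restricted: bool = False    # set when the record is ring-fenced

def onboard(raw: list) -> list:
    """Onboarding/isolation: deduplicate by (source, key), drop empty records."""
    seen, clean = set(), []
    for r in raw:
        if (r.source, r.key) not in seen and r.fields:
            seen.add((r.source, r.key))
            clean.append(r)
    return clean

def govern(records: list, policies: dict) -> list:
    """Governance/curation: enforce a field-level rule on every record."""
    for r in records:
        for fld, rule in policies.items():
            if fld in r.fields and not rule(r.fields[fld]):
                raise ValueError(f"{r.source}/{r.key}: policy failed on {fld}")
    return records

def ring_fence(records: list, pii_fields: set) -> list:
    """Risk/privacy: flag records carrying PII so domains can restrict access."""
    for r in records:
        r.restricted = bool(pii_fields & r.fields.keys())
    return records

def run_pipeline(raw: list, policies: dict, pii_fields: set) -> list:
    # Every record passes through every stage; nothing is sampled.
    return ring_fence(govern(onboard(raw), policies), pii_fields)
```

In use, policies might map a field such as “amount” to a rule like lambda v: v > 0, while pii_fields could hold names such as “ssn”; both are placeholders for whatever an institution’s governance and compliance teams actually define.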

Assessed against the shifting markets of data-driven banking and lending products and services, the data supply chain marks a practical solution that leverages existing investments while aligning with future needs and feasibility.  Rather than undertaking a conventional lift-and-shift conversion, the implementation of a data supply chain creates rapid results and a sustainable shift of value creation encapsulated within data domains.  But is that all?

In Summary, the Swift Movement Away from Traditional System Processes

The creation of data domains serves as the foundation for layered data-driven solutions, including analytics, process automation, and AI, complete with auditability.  Additionally, domain-centered data supply chains can meet ethical and privacy demands along with legal considerations and discovery.  With audit and legal costs rising and the impacts on operations exploding, prioritizing data supply chains and shifting from system ideation to data ideation has very practical consequences.

While the data supply chain creates the multidirectional pipelines needed for customer personalization, it has been the explosion of data availability, coupled with hyperscale computing, that has shifted industry capabilities and options moving forward.  For bankers and mortgage lenders, automating the data supply chain, as compared to process-driven system ideations, decreases the direct costs to consumers across the mortgage process.  It also reduces the dependency on volume to deliver profitability.

From the time needed to secure a mortgage to the fees charged, innovative firms are repurposing their delivery strategies to reflect current environmental pressures.  As the number of lenders and banking enterprises has continued to shrink over the last 18 months, the future of those remaining and praying for a fast rebound is increasingly opaque.  Without a fundamental shift in how they view and implement operational solutions to complement and streamline practices and partnerships, the image of playing chicken with a freight train comes to mind.

In conclusion, some say it is the definition of insanity to keep doing the same thing and hoping for a different outcome; I believe the mortgage and banking industries have arrived at this conclusion.  There will be a permanent and fundamental shift to data ideation, driven by the very demographics that bankers and loan officers are seeking as customers.  Yet, while the realization of change is now apparent, the demanding work is just beginning to be understood.

As the market lens becomes clearer and closer, so does the train carrying all the data needed for the products and services being offered.  In 2024, especially during these active budgeting periods of Q4, the realities of digital loans and assets (not digitized paper-based processes) cannot be derailed while we wait for someone to save us.  The train is growing in speed and volume—are we ready to use our skills, our understanding, and our people to determine the path of the train?

(Views expressed in this article do not necessarily reflect policies of the Mortgage Bankers Association, nor do they connote an MBA endorsement of a specific company, product or service. MBA NewsLink welcomes your submissions. Inquiries can be sent to Editor Michael Tucker or Editorial Manager Anneliese Mahoney.)
