
Feds to Lenders: Take AVMs Seriously

Regulators are signaling that they will be looking at how AVMs are used and whether lenders have appropriately tested them and continuously monitor them for valuation discrimination. This represents a shift in regulatory focus on AVMs, and it means every lender needs to prioritize AVM validation to avoid unfavorable attention from government regulators.

On February 12, the FFIEC issued a statement on its member agencies' examination practices. It specifically stated that the statement didn't represent a change in principles, a change in guidance, or even a change in focus. It was just a friendly announcement about the exam process, which will focus on whether institutions can identify and mitigate bias in residential property valuations.

Law firm Husch Blackwell published its interpretation a week later. Its analysis took into account the June 2023 FFIEC statement on the proposed AVM quality control rule, which would include bias as a “fifth factor” when evaluating AVMs. The firm reads these announcements as part of a theme: an extended signal to the industry that all valuations, and AVMs in particular, are going to receive additional scrutiny. Whether that is because bias is as important as quality or because being unbiased is an inherent aspect of quality, the subject of bias is drawing attention, and the result will be a thorough examination of all practices around valuation, including AVMs, from oversight to validation, training and auditing.

AVM quality has theoretically been an issue that regulators could enforce in some circumstances for over a decade. What we're seeing is not just an expansion from accuracy into questions of bias. We're also seeing an expansion from banks to all lenders, including non-bank lenders. And regulators are signaling that examinations will focus on bias, which elevates a theoretical requirement into an actual, manifest, serious one.

An Interview with Lee Kennedy: Trends, the Future, and Regulation

The AVMNews sat down with our publisher Lee Kennedy to discuss trends in the industry.

AVMNews: Lee, as the Managing Director at AVMetrics, you’re sitting at the center of the Automated Valuation Model (AVM) industry. What changes have you seen recently?

Lee: There’s a lot going on. We see firsthand how the evolution of the technology has affected the sector dramatically. The availability of data and the decline in the costs of storage and computing power have opened the doors to new competition. We see new entrants built by fresh faces and using new techniques. We still have a number of large players offering well-established AVMs. But we also see the larger players retiring some of their older models. The established AVM players have responded in some cases by raising their game, and in other cases by buying their upstart rivals. So, we’ve seen increased competition and increased consolidation at the same time.

And, it’s true that the tools keep getting better. It’s not evenly distributed, but on average they continue to do a better and better job.

AVMNews: In what ways do AVMs continue to get better?

Lee: AVMetrics has been conducting contemporaneous AVM testing for over a decade now, and we have many quantitative metrics showing how much better AVMs are getting. Specifically, we run statistical analyses comparing AVM estimates to sale prices that are unknown to the models. We have seen increases in model accuracy measured by percentage of predicted error (PPE), mean absolute error (MAE) and a host of other metrics. Models are getting better at predicting sale prices, and when they miss, they don’t miss by as much as they used to.
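To make those metrics concrete, below is a minimal illustrative sketch (with made-up numbers; not AVMetrics’ production methodology) of how MAE and a PPE-style hit rate can be computed when comparing AVM estimates against benchmark sale prices the models did not see:

```python
# Illustrative sketch: mean absolute error (MAE) and PPE10 (the share of
# estimates within +/-10% of the sale price). Inputs and the 10% band are
# assumptions chosen for the example, not AVMetrics' actual parameters.
import numpy as np

def avm_accuracy_metrics(avm_estimates, sale_prices, ppe_band=0.10):
    estimates = np.asarray(avm_estimates, dtype=float)
    benchmarks = np.asarray(sale_prices, dtype=float)

    # Signed percentage error of each estimate relative to the actual sale price.
    pct_error = (estimates - benchmarks) / benchmarks

    return {
        "MAE": float(np.mean(np.abs(estimates - benchmarks))),            # in dollars
        "median_pct_error": float(np.median(pct_error)),                  # bias check
        f"PPE{int(ppe_band * 100)}": float(np.mean(np.abs(pct_error) <= ppe_band)),
    }

# Example with invented numbers:
print(avm_accuracy_metrics([310_000, 195_000, 480_000],
                           [300_000, 210_000, 475_000]))
```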

AVMNews: What about on the regulatory side?

Lee: There is always a lot going on. The regulatory environment has eased in the last two years reflecting a whole new attitude in Washington, D.C. – one that is more open to input and more interested in streamlining. Take, for instance, the 2018 Treasury report that focuses on advancing technologies (See “A Financial System That Creates Economic Opportunities”).

Last November, I was at a key stakeholder forum for the Appraisal Subcommittee (ASC). One area of focus was harmonizing appraisal requirements across agencies. Another major focus was how to effectively employ new tools in support of the appraisal industry, including the growth of Alternative Valuation Products that utilize AVMs.

AVMNews: I know that you also wrote a letter to the Federal Financial Institutions Examination Council (FFIEC) about raising the de minimis threshold, below which some lending guidelines would NOT require an appraisal.  This year in July they elected to change the de minimis threshold from $250,000 to $400,000 for residential housing. What are your thoughts?

Lee: Well, I think that the question everyone is struggling with is “What does the future hold for appraisers and AVMs?” Obviously, the field of appraisers is shrinking, and AVMs are economical, faster and improving. How is this going to play out?

First, my strong feeling is that appraisers are a valuable and limited resource, and we need to employ them at their highest and best use. Trying to be a “manual AVM” is not their highest and best use. Their expertise should be focused on the qualitative aspects of the valuation process, such as condition, market and locational influences, not on quantitative facts such as bed and bath counts. Models do not capture and analyze the qualitative aspects of a property very well.

Several companies are developing ways of merging the robust data processing capabilities of an AVM with the qualitative assessment skills of appraisers.  Today, these products typically use an AVM at their core and then satisfy additional FFIEC evaluation criteria (physical property condition, market and location influences) with an additional service.  For example, the lender can wrap a Property Condition Report (PCR) around the AVM and reconcile that data in support of a Home Equity Line of Credit (HELOC) lending decision.  This type of hybrid product offering is on the track that we’re headed down.  Many AMCs and software developers have already created these types of products for proprietary use or for use on multiple platforms.

AVMNews: AVMs were supposed to take over the world. Can you tell us what happened?

Lee: Well, the Financial Crisis is one thing that happened. Lawsuits ensued, and everyone got a lot more conservative. And, the success of AVMs developed into hype that was obviously unrealistic. But, AVMs are starting to gain traction again. We are answering a lot more calls from lenders who want help implementing AVMs in their origination processes. They typically need our help with policies and procedures to stay on the right side of the Office of the Comptroller of the Currency (OCC) regulations, and so in the last year, we’ve done training at several banks.

Everyone is quick to point out that AVMs are not infallible, but AVMs are pretty incredible tools when you consider their speed, accuracy, cost and scalability. And, they are getting more impressive. Behind the curtain the models are using neural networks and machine learning algorithms. Some use creative techniques to adjust prices conditionally in response to situational or temporary conditions. We test them and talk to their developers, and we can see how that creativity translates into improved performance.

AVMNews: You consult for litigants about the use of AVMs in lawsuits. How do you think legal decisions and risk will affect the use of AVMs?

Lee: This is an area of our business, litigation support, where I am restricted from saying very much. It has been, and continues to be, an enlightening experience: some of the best minds are involved in all aspects of collateral valuation, and the “Experts” are truly that… experts in their fields as econometricians, statisticians, appraisers, modelers and so on. It is also very interesting, with over 50 cases behind us now, to get a look behind the curtain of the legal system and see how all of that works. I do want to emphasize that my comments in this interview are in the context of contemporaneous AVMs, tested at the time in question, and not retrospective AVMs looking back at those periods.

AVMNews: AVMetrics now publishes the AVM News – how did that come about?

Lee: As you and the many subscribers know, Perry Minus of Wells Fargo started that publication as a labor of love over a decade ago. When he retired recently, he asked if I would take over as the publisher. We were honored to be trusted with his creation, and we see it as a way to be good citizens and contribute to the industry as a whole.

AVMNews: I encourage anyone interested in receiving the quarterly newsletter for free to go to http://eepurl.com/cni8Db

 

The AVMNews is a quarterly newsletter that is a compilation of interesting and noteworthy articles, news items and press releases that are relevant to the AVM industry. Published by AVMetrics, the AVMNews endeavors to educate the industry and share knowledge about Automated Valuation Models for the betterment of everyone involved.

The Wild, Wild West of Automated Valuations

Recently the OCC, FDIC and the Federal Reserve proposed raising the de minimis threshold for residential properties, below which appraisals are not required to complete a home loan. Currently, most homes transacting above $250K require an appraisal, but federal regulators propose to raise that level to $400K. A November 30th Wall Street Journal article raises some interesting issues about the topic. It reported that the number of appraisers is down 21% since the housing crisis, while more homes require an appraiser, since more and more homes exceed the threshold each year. The article also states that these factors open the door for cheaper, faster and “largely untested” property valuations based on computer algorithms, also known as Automated Valuation Models (AVMs).

At AVMetrics, we have been continuously testing AVMs for over 15 years, so we’ve seen how they’ve performed over time. As an example, the accompanying chart shows model accuracy as measured by mean absolute error, a statistical measure of valuation error.  We use many statistical measures of model accuracy and precision, and they all show significant improvement in AVMs over time. And as these automated tools get better and the workforce of appraisers continues to shrink, the FFIEC members’ proposed change seems warranted, but that doesn’t mean it doesn’t have critics.

Mean Absolute Error of all tested AVM models for the last 10 years

Ratish Bansal of Appraisal Inc was quoted in The Journal describing the state of AVMs as “a wild, wild West,” inviting “abuse of all kind.” Furthermore, he contrasts that with the voluminous regulatory standards covering the use of appraisals.

We note that much of those voluminous standards represents nearly the same quality control that was in place before the Credit Crisis.  In other words, appraisals are not a guarantee against collateral risk.  They are simply one tool in the toolbox – an effective, but comparatively time-consuming and expensive, tool. Also of note, far from being the “wild, wild west,” AVMs are also governed by regulators, most notably through Appendix B of the Interagency Appraisal and Evaluation Guidelines (OCC 2010-42) and the Model Risk Management guidance (OCC 2011-12). These regulatory guidelines require that AVM developers be qualified, that users of AVMs employ robust controls, that incentives be appropriate, and that models be tested regularly and thoroughly against out-of-sample benchmarks. They require documentation of risk assessments and stipulate that a Board of Directors must oversee the use of all models. In other words, if AVMs were the “wild, wild west,” they would be rooted in a town under the oversight of the legendary Wyatt Earp.

My strong feeling is that appraisals should not be the sole and exclusive tool when evaluations can be effectively employed in appropriate, lower-risk scenarios. Appraisers are a valuable and limited resource, and they should be employed at (to use appraisal terminology) their highest and best use.  Trying to be a “manual AVM” is not the highest and best use of a highly qualified appraiser.  Their expertise should be focused on the qualitative aspects of property valuation, such as property condition and market and locational influences. They should also be focused on performing complex valuation assignments in non-homogeneous markets.  AVMs do not capture and analyze the qualitative aspects of a property very well, and they still stumble in markets with highly diverse housing stock or houses with less quantifiable attributes, such as view properties.

However, several companies are developing ways of merging the robust data processing capabilities of an AVM with the qualitative assessment skills of appraisers.  Today, these products typically use an AVM at their core and then satisfy additionally required evaluation criteria (physical property condition, market and location influences) with an additional service.  For example, a lender can wrap a Property Condition Report (PCR) around the AVM and reconcile that data in support of a lending decision.  This type of “Hybrid valuation” is on the track we’re headed down.  Many companies have already created these types of products for commercial and proprietary use.

We at AVMetrics believe in using the right tool for the job, and we believe there is a place for automated valuations in prudent lending practices. We think the smarter approach would be to marginally raise the de minimis threshold, but simultaneously to provide additional guidance for considering other aspects of a lending decision, specifically collateral considerations and eligibility criteria for appraisal exemptions such as neighborhood homogeneity, property conformity, market conditions and more.

Cascade vs Model Preference Table® – What’s the Difference?

In the AVM world, there is a bit of confusion about what exactly is a “cascade.” It’s time to clear that up.  Over the years, the terms “cascade” and “Model Preference Table®” have been used interchangeably, but at AVMetrics, we draw an important distinction that the industry would do well to adopt as a standard.

In the beginning, as AVM users contemplated which of several available models to use, they hit on the idea of starting with the preferred model, and if it failed to return a result, trying a second model, and then a third, etc.  This rather obvious sequential logic required a ranking, which was available from testing, and was designed to avoid “value shopping.”[1]  More sophisticated users ranked AVMs across many different niches, starting with geographical regions, typically counties.  Using a table, models were ranked across all regions, providing the necessary tool to allow a progression from primary AVM to secondary AVM and so on.

We use the term “Model Preference Table” for this straightforward ranking of AVMs, which can actually be fairly sophisticated if models are ranked within niches that include geography, property type and price range.

More sophisticated users realized that just because a model returned a value does not mean that they should use it.  Models typically deliver some form of confidence in the estimate, in the form of a confidence score, reliability grade, a “forecasted standard deviation” (FSD) or a similar measure derived through testing processes.  Based on these self-assessed outputs, and on testing results, an AVM result can be accepted or rejected in favor of the next AVM in the Model Preference Table.  This application reflects the merger of MPT rankings with decision logic, which in our terminology makes it a “cascade.”

Criteria                             AVM    MPT®    Cascade    “Custom” Cascade
Value Estimate                        X      X         X               X
AVM Ranking                                  X         X               X
Logic + Ranking                                        X               X
Risk Tolerance + Logic + Ranking                                       X
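Below is a minimal sketch, in Python, of the cascade logic described above; the model names, the call_avm() helper and the 25% FSD cutoff are hypothetical examples, not recommendations:

```python
# Illustrative simple cascade: walk the Model Preference Table ranking for the
# property's county and accept the first AVM whose estimate passes an
# across-the-board confidence test (here, a flat FSD cutoff). All names and
# thresholds below are hypothetical.

MODEL_PREFERENCE_TABLE = {
    "Los Angeles": ["AVM_A", "AVM_B", "AVM_C"],
    "Ventura":     ["AVM_B", "AVM_A", "AVM_C"],
}

MAX_FSD = 0.25  # reject any estimate whose forecasted standard deviation exceeds 25%

def run_cascade(property_record, call_avm):
    """call_avm(model_name, property_record) returns a dict with 'value' and 'fsd', or None."""
    for model_name in MODEL_PREFERENCE_TABLE.get(property_record["county"], []):
        result = call_avm(model_name, property_record)
        if result is None:            # no value returned: fall through to the next-ranked AVM
            continue
        if result["fsd"] <= MAX_FSD:  # decision logic: accept only sufficiently confident estimates
            return model_name, result
    return None, None                 # no model produced an acceptable estimate
```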

 

The final nuance is between a simple cascade and a “custom” cascade.  The former simply sets across-the-board risk/confidence limits and rejects value estimates when they fail to meet the standard.  For example, the builder of a simple cascade could choose to reject any value estimate with an FSD > 25%.  A “custom cascade” integrates the risk tolerances of the organization into the decision logic.  That might include lower FSD limits in certain regions or above certain property values, or it might reflect changing appetites for risk based on the application, e.g., HELOC lending decisions vs portfolio marketing applications.
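To illustrate the “custom” part, the short sketch below varies the FSD limit by application and property value rather than applying one flat cutoff; the specific numbers are invented for illustration and would in practice come from the institution’s own risk tolerances and testing:

```python
# Illustrative custom-cascade decision logic: the acceptable FSD depends on the
# institution's risk tolerance for the application and the size of the exposure.
# Every threshold below is an invented example, not a recommendation.

FSD_LIMITS = {
    "heloc":               0.15,  # tighter tolerance for lending decisions
    "portfolio_marketing": 0.30,  # looser tolerance for marketing analytics
}

def acceptable_fsd(application, estimated_value):
    limit = FSD_LIMITS[application]
    if estimated_value > 1_000_000:   # tighten further for high-value properties
        limit -= 0.05
    return limit

def accept_estimate(result, application):
    return result["fsd"] <= acceptable_fsd(application, result["value"])
```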

We think that these terms represent significant differences that shouldn’t be ignored or conflated when discussing the application of AVMs.

 

Lee Kennedy, principal of AVMetrics, founded the company in 2005 and has specialized in collateral valuation, AVM testing and related regulation for over three decades.  Over the years, AVMetrics has guided companies through regulatory challenges, helped them meet their AVM validation requirements, and commented on pending regulations. Lee is an author, speaker and expert witness on the testing and use of AVMs. Lee’s conviction is that independent, rigorous validation is the healthiest way to ensure that models serve their business purposes.

[1] OCC 2005-22 (and the 2010 Interagency Appraisal and Evaluation Guidelines) warn against “value shopping” by advising, “If several different valuation tools or AVMs are used for the same property, the institution should adhere to a policy for selecting the most reliable method, rather than the highest value.”

How AVMetrics Tests AVMs

Testing an AVM’s accuracy can actually be quite tricky.  It is easy to get an AVM estimate of value, and you can certainly accept that a fair sale on the open market is the benchmark against which to compare the AVM estimate, but that is really just the starting point.

There are four keys to fair and effective AVM testing, and applying all four can be challenging for many organizations.

  1. Your raw data must be cleaned up to ensure that there aren’t any “unusable” or “discrepant” characters in the data; differences such as “No.,” “#” and “Num” must be normalized (a sketch of this step follows the list).
  2. Once your test data is “scrubbed clean,” it must be assembled in a universal format, and it must be large enough to provide reliable test results even down to the segment level (each property type within each price level within each county, etc.), which might require hundreds of thousands of records.
  3. Timing must be managed so that each model receives the same sample data at the same time with the same response deadline.
  4. Last, and most difficult, the benchmark sales data must not be available to the models being tested.  In other words, if the model has access to the very recent sales price, it will be able to provide a near-perfect estimate by simply estimating that the value hasn’t changed (or changed very little) in the days or weeks since the sale. 
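As a simple illustration of the normalization in step 1, here is a minimal sketch; the token mapping and the example addresses are assumptions made for illustration, not AVMetrics’ actual cleaning rules:

```python
# Illustrative address-token normalization for step 1: collapse whitespace and
# map equivalent unit designators ("No.", "Num", "Apt", ...) to a single token.
# The mapping below is an assumption chosen for the example.
import re

UNIT_TOKENS = {"no": "#", "num": "#", "unit": "#", "apt": "#"}

def normalize_address(raw: str) -> str:
    cleaned = re.sub(r"\s+", " ", raw.strip())
    words = [UNIT_TOKENS.get(w.lower().rstrip("."), w) for w in cleaned.split(" ")]
    return " ".join(words)

print(normalize_address("123 Main St  No. 4"))   # -> "123 Main St # 4"
print(normalize_address("123 Main St Num 4"))    # -> "123 Main St # 4"
```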

AVMetrics tests every commercially available AVM continuously and aggregates the results into a quarterly report; AVMetrics’ testing process meets these criteria and many more, providing a truly objective measure of AVM performance.

The process starts with the identification of an appropriate sample of properties for which benchmark values have very recently been established.  These are the actual sales prices for arm’s-length transactions between willing buyers and sellers—the best and most reliable indicator of market value.  To properly conduct a “blind” test, these benchmark values must be unavailable or “unknown” to the vendors testing their model(s).  AVMetrics provides in excess of a half million test records annually to AVM vendors (without information as to their benchmark values).  The AVM vendors receive the records simultaneously, run these properties through their model(s) and return the predicted value of each property within 48 hours, along with a number of other model-specific outputs.  These outputs are received by AVMetrics, where the results are evaluated against the benchmark values.  A number of controls are used to ensure fairness, including the following:

  • ensuring that each AVM vendor receives the exact same property list (so no model has any advantage)
  • ensuring that each AVM is given the exact same parameters (since many allow input parameters that can affect the final valuation)
  • ensuring through multiple checks that no model had access to the recent sale data, which would provide an unfair advantage

In addition to quantitative testing, AVMetrics circulates a comprehensive vendor questionnaire twice annually.  Vendors that wish to participate in the testing process answer, for each model being tested, roughly 100 questions covering parameters, data, methodology, staffing and internal testing.  The answers enable AVMetrics, and more importantly our clients, to understand model differences in both testing and production contexts, and they help us and our clients satisfy certain regulatory requirements describing the evaluation and selection of models (see OCC 2010-42).

AVMetrics next performs a variety of statistical analyses on the results, breaking down each individual market, each price range, and each property type, and develops results which characterize each model’s success in terms of precision, usability and accuracy.  AVMetrics analyzes trends at the global, market and individual model levels, identifying where there are strengths and weaknesses, and improvements or declines in performance.
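A minimal sketch of that kind of segment-level breakdown appears below, assuming a pandas DataFrame of blind-test results; the column names, price bands and metrics shown are illustrative, not AVMetrics’ actual report layout:

```python
# Illustrative segment-level analysis: group blind-test results by model,
# county, property type and price band, then compute usability and accuracy
# measures per segment. Column names and bands are assumptions for the example.
import numpy as np
import pandas as pd

def segment_performance(results: pd.DataFrame) -> pd.DataFrame:
    df = results.copy()
    df["pct_error"] = (df["avm_value"] - df["sale_price"]) / df["sale_price"]
    df["price_band"] = pd.cut(
        df["sale_price"],
        bins=[0, 250_000, 400_000, 750_000, np.inf],
        labels=["<250K", "250-400K", "400-750K", "750K+"],
    )
    grouped = df.groupby(["model", "county", "property_type", "price_band"], observed=True)
    return grouped.agg(
        hit_rate=("avm_value", lambda s: s.notna().mean()),       # usability: share of records valued
        mae_pct=("pct_error", lambda s: s.abs().mean()),          # accuracy: mean absolute % error
        ppe10=("pct_error", lambda s: (s.abs() <= 0.10).mean()),  # precision: share within +/-10%
    ).reset_index()
```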

The last step in the process is for AVMetrics to provide an anonymized comprehensive comparative analysis for each model vendor, showing where their models stack up against all of the models in the test; this invaluable information facilitates the continuous improvement of each vendor’s model offerings.
