Guest Writer Dr. Eva Cruz: A Walk Down Model Risk Lane
Dr. Cruz is the CEO and Founder of QuantDi.
Our guest writer this week is Dr. Eva Cruz, PhD, MBA, FRM, who has over 20 years of corporate and academic experience with advanced analytics solutions in financial services, banking, and energy, among a wide range of applications. Eva was previously SVP, Head of Compliance & Financial Crimes Model Development and Monitoring at Truist. Before Truist, Eva grew and led teams of quantitative experts, with focus areas including model risk, at major financial services companies such as TD, Ally, Fannie Mae, Bank of America, and American Express. Currently, Eva is the CEO and founder of QuantDi, a decision and risk analytics consultancy that helps growing organizations and startups integrate advanced analytics into their decision processes and business strategy, and manage the risk of using, developing, deploying, or integrating AI/ML models in their products or operations.
For this article Eva adds as a footnote - "Original research and thoughts - No GenAI used in creating this post." We're excited to have her join us this week. Enjoy!
~~
𝚃̲𝚑̲𝚎̲ ̲𝙶̲𝚎̲𝚗̲𝚎̲𝚜̲𝚒̲𝚜̲ ̲𝚘̲𝚏̲ ̲𝚁̲𝚎̲𝚐̲𝚞̲𝚕̲𝚊̲𝚝̲𝚘̲𝚛̲𝚢̲ ̲𝙶̲𝚞̲𝚒̲𝚍̲𝚊̲𝚗̲𝚌̲𝚎̲ ̲𝚊̲𝚋̲𝚘̲𝚞̲𝚝̲ ̲𝙼̲𝚘̲𝚍̲𝚎̲𝚕̲ ̲𝚄̲𝚜̲𝚎̲ ̲𝚒̲𝚗̲ ̲𝙱̲𝚊̲𝚗̲𝚔̲𝚒̲𝚗̲𝚐̲
Did you know that the first U.S. supervisory guidance about the risks of using models in banking institutions was issued by the OCC back in 2000?
After the 1996 amendment to the Basel Capital Accord to incorporate market risks, banks could use internal models to estimate those risks and the capital allocated to their trading activities.
Not long afterwards, the demise of the hedge fund Long-Term Capital Management (LTCM) in 1998 was partly attributed to misspecified models, specifically incorrect assumptions in its market risk models. LTCM was known for using advanced mathematical models in its trading strategy. The potential repercussions of an LTCM bankruptcy on the financial system prompted the Fed to arrange a rescue package funded by a consortium of banks and brokerage firms that had lent to LTCM.
The OCC bulletin known as OCC 2000-16, addressed to CEOs and Compliance leaders of national banks, was titled “Model Validation”, and it specifically referred to the need for financial institutions to address the risks from “reliance on computer-based financial models that are improperly validated or tested”. The concern centered on potential financial losses due to incorrect estimates of balance sheet or trading risk exposure, or valuation of financial instruments.
Key requirements laid out in OCC 2000-16 were as follows:
Independence between model validation and development, both staffed with qualified parties.
Clear communication of model limitations and assumptions to business users.
Formal approval of a model before it goes into production.
Minimum requirements for model documentation.
Change control requirements, including approvals and model re-validation.
Audit’s role in ensuring that the validation process meets internal policies and procedures.
Validation scope: data, assumptions, the processing component (theoretical foundation, the math, and the code implementing it), model results, and clarity of the results presented to business users.
Prior to the 2008 financial crisis, large banks had built or started building independent model validation functions, as well as model inventories. So, 𝚠̲𝚑̲𝚊̲𝚝̲ ̲𝚠̲𝚎̲𝚗̲𝚝̲ ̲𝚠̲𝚛̲𝚘̲𝚗̲𝚐̲ with the instrument valuation and credit loss models at the dawn of the Great Financial Crisis? 𝚆̲𝚑̲𝚊̲𝚝̲ ̲𝚗̲𝚎̲𝚠̲ ̲𝚛̲𝚎̲𝚚̲𝚞̲𝚒̲𝚛̲𝚎̲𝚖̲𝚎̲𝚗̲𝚝̲𝚜̲ ̲𝚍̲𝚒̲𝚍̲ ̲𝚝̲𝚑̲𝚎̲ ̲𝚕̲𝚎̲𝚜̲𝚜̲𝚘̲𝚗̲𝚜̲ ̲𝚏̲𝚛̲𝚘̲𝚖̲ ̲𝚝̲𝚑̲𝚎̲ ̲𝙶̲𝙵̲𝙲̲ ̲𝚋̲𝚛̲𝚒̲𝚗̲𝚐̲ ̲𝚏̲𝚘̲𝚛̲𝚝̲𝚑̲, now enveloped within the Model Risk Management concept?
Link: https://www.occ.gov/static/rescinded-bulletins/bulletin-2000-16.pdf
𝚂̲𝚁̲–̲𝟷̲𝟷̲–̲𝟽̲:̲ ̲ ̲𝙼̲𝚘̲𝚍̲𝚎̲𝚕̲ ̲𝚅̲𝚊̲𝚕̲𝚒̲𝚍̲𝚊̲𝚝̲𝚒̲𝚘̲𝚗̲ ̲𝙰̲𝚕̲𝚘̲𝚗̲𝚎̲ ̲𝙽̲𝚘̲𝚝̲ ̲𝚂̲𝚞̲𝚏̲𝚏̲𝚒̲𝚌̲𝚒̲𝚎̲𝚗̲𝚝̲
After the Great Financial Crisis, and because of the role that, once again, poorly specified models played in underestimating the risks of financial instruments, the Federal Reserve Board and the OCC jointly issued new supervisory guidance on Model Risk Management.
That joint supervisory guidance is known as SR-11-7, for short. SR-11-7 replaced OCC 2000-16, and the FDIC also adopted the joint guidance (FIL-22-2017 letter: https://www.fdic.gov/news/financial-institution-letters/2017/fil17022.pdf).
So, what is new in SR-11-7 compared to OCC 2000-16?
The most obvious change is that the supervisory agencies elevated and formalized the concept of model risk, putting it on par with other risk categories such as credit or market risk: it requires Board and senior management oversight, and model risk must be included in the risk appetite statement.
Unlike OCC 2000-16, which referenced ‘computer-based financial models’, the definition of a model in SR-11-7 is quite broad: “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates” and “the definition of model also covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature”.
Unlike the market risk or valuation models that played a role in prior crises, a model under SR-11-7 does not need to involve complex algorithms or statistics-based predictions, nor be used only in high-risk or critical applications.
In SR-11-7, although independent model validation remains a key requirement, it is no longer the central theme it was in OCC 2000-16. The guidance extends model risk management across all three lines of defense, with defined roles and responsibilities, including “business units being responsible for the model risk associated with their business strategies” and “the model owner being accountable for model use and performance”. Model risk management must include governance and controls, policies and procedures, and an adequate organizational structure.
Model risk management is an ongoing process, not a one-time event that ends with the model validation report. Model development, model implementation, and model use are all stages throughout the model lifecycle when specific risks arise that need to be identified and mitigated until model retirement.
SR-11-7 also includes in its definition of model risk the inappropriate use of otherwise correct models, i.e., using them in ways not envisioned when they were developed.
Model governance establishes the mechanism for approval and guardrails on model use, and for ensuring awareness of model limitations and assumptions among decision makers.
Ongoing model monitoring is an essential process to confirm that the models continue to work as intended after they are implemented, and banks must have a program for ongoing testing as well as plans for addressing situations when models no longer meet performance specs.
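To make ongoing monitoring concrete, here is a minimal sketch of one common check: the Population Stability Index (PSI), which flags drift between a model input's (or score's) distribution at development time and in production. The data, thresholds, and function name below are illustrative assumptions on my part, not anything prescribed by SR-11-7.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline (development) sample and a recent
    (production) sample of a model input or score."""
    # Bin edges taken from the baseline distribution (decile bins)
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the shares to avoid division by zero / log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)   # scores at development time
drifted = rng.normal(0.5, 1.2, 10_000)    # scores observed in production
psi = population_stability_index(baseline, drifted)
# A common rule of thumb treats PSI above ~0.25 as material drift
print(f"PSI = {psi:.3f}")
```

A monitoring program would run checks like this on a schedule, compare results against pre-agreed thresholds, and trigger the escalation plans the guidance calls for when a model no longer meets its performance specs.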
Although OCC 2000-16 already stated that third-party (aka vendor) models must be validated, SR-11-7 declares that they need to be included in the broader model risk management framework and meet the same requirements as internally built models, with a few caveats to account for IP protection. SR-11-7 outlines not only what banks should require vendors to provide, but also that banks need to have a process for vendor model selection.
I would like to make a final observation based on personal experience with countless models developed, validated, and implemented in production, procedures built, and interactions with second-line reviewers, auditors, and regulators.
Supervision of model risk management is framed from the perspective of safety and soundness (i.e., the risk of financial losses due to incorrect models or their inappropriate use), and the experiences of LTCM and the Great Financial Crisis could lead us to believe that only valuation or capital models are in scope. In practice, however, the broad definition of what constitutes a model extends the expectations laid out in SR-11-7 across all processes within a bank.
Besides obvious statistics- or AI/ML-based predictive tools, models per the SR-11-7 definition can be spreadsheets used for capacity planning in a bank call center, rule-based scenarios in AML processes, name-matching algorithms, one-time tools used for strategic M&A assessments, or statistical inference tests (i.e., hypothesis testing) used for fair lending compliance monitoring. I list those examples because I have seen each of them categorized as a model and subjected to MRM policies, and not just at a single bank.
SR-11-7 does state that implementation of the guidance should be commensurate with the bank's risk exposures, business activities, and the complexity and extent of its model use. It is, however, on the bank's shoulders to demonstrate that it is not underestimating its model risk.
Link: https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf
𝚁̲𝚎̲𝚌̲𝚎̲𝚗̲𝚝̲ ̲𝙲̲𝚊̲𝚞̲𝚝̲𝚒̲𝚘̲𝚗̲𝚊̲𝚛̲𝚢̲ ̲𝙴̲𝚡̲𝚊̲𝚖̲𝚙̲𝚕̲𝚎̲ ̲𝚘̲𝚏̲ ̲𝙼̲𝚘̲𝚍̲𝚎̲𝚕̲ ̲𝙶̲𝚘̲𝚟̲𝚎̲𝚛̲𝚗̲𝚊̲𝚗̲𝚌̲𝚎̲ ̲𝙵̲𝚊̲𝚒̲𝚕̲𝚞̲𝚛̲𝚎̲𝚜̲:̲ ̲𝚂̲𝚅̲𝙱̲
We do not have to go back to the ’90s, or even the Great Financial Crisis, to find an example where model risk management failures played a role in a major financial institution’s undoing.
In the days following the run on deposits at Silicon Valley Bank and the subsequent FDIC intervention, much was written about the gaps in internal risk management and regulatory oversight that created the conditions for what happened to SVB. The FRB report concluded in no uncertain terms that the management at SVB had “failed to manage basic interest rate and liquidity risk” and acknowledged that the regulators did not take prompt and adequate measures.
A thorough read of the post-mortem reveals quite a few interesting findings relevant to model risk management, in particular, model governance.
In reference to the weaknesses in the liquidity risk management process, the FRB found that, after failing its internal liquidity stress tests, senior management at SVB adopted less conservative stress testing assumptions that “reduced the size of the modeled liquidity shortfall”. The report concludes that “changing model assumptions, rather than improving the actual liquidity position, is not an appropriate way to restore compliance with limits”.
Similarly, instead of addressing the interest rate risk due to overexposure to long-dated securities and duration mismatch, they “made counterintuitive modeling assumptions about the duration of deposits to address the limit breach rather than managing the actual risk”. They “made model changes that reduced the level of risk depicted by the model…management changed assumptions rather than the balance sheet to alter reported risks”. The report describes “a poorly supported change in assumption to increase the duration of its deposits based on a deposit study conducted by a consultant and in-house analysis…the change reduced the mismatch of durations between assets and liabilities and gave the appearance of reduced IRR; however, no risk had been taken off the balance sheet. The assumptions were unsubstantiated”.
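The mechanics of that assumption change are worth spelling out. A common summary measure of interest rate risk is the duration gap, D_assets − (L/A) × D_liabilities; raising the assumed duration of deposits (a liability) shrinks the reported gap without touching a single security on the balance sheet. The figures below are entirely hypothetical, chosen only to illustrate the arithmetic, not taken from SVB's books.

```python
# Hypothetical balance sheet figures ($bn) and durations (years)
assets, liabilities = 200.0, 180.0
d_assets = 5.6                      # long-dated securities portfolio
d_liab_old, d_liab_new = 1.0, 2.5   # assumed deposit duration, before/after

# Duration gap = D_assets - (L/A) * D_liabilities
gap_old = d_assets - (liabilities / assets) * d_liab_old
gap_new = d_assets - (liabilities / assets) * d_liab_new

print(f"reported duration gap: {gap_old:.2f} -> {gap_new:.2f} years")
# The gap shrinks on paper, but the securities portfolio -- and the
# economic interest rate risk -- are unchanged.
```

This is exactly the pattern the FRB report describes: the reported risk moves while the balance sheet does not.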
Furthermore, the report states that SVB “did not conduct back-testing, had limited sensitivity testing, and did not have an adequate second line function to provide review and challenge to decisions and model assumptions”.
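For readers unfamiliar with back-testing, a minimal sketch: compare realized outcomes against the model's forecasts and count exceptions. The example below uses a daily 99% VaR model purely as a familiar illustration; the data is simulated and the function is my own simplification, not SVB's (absent) procedure.

```python
import numpy as np

def var_backtest(pnl, var_forecasts):
    """Count exceptions: days on which the realized loss
    exceeded the model's forecast VaR."""
    return int(np.sum(np.asarray(pnl) < -np.asarray(var_forecasts)))

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, 250)   # hypothetical daily P&L, one trading year
var_99 = np.full(250, 2.33)       # 99% VaR under the model's assumptions

exceptions = var_backtest(pnl, var_99)
# At 99% confidence, roughly 2-3 exceptions per 250 days are expected;
# persistently more suggests the model understates risk.
print(f"{exceptions} exceptions in 250 days")
```

A second-line function with even this rudimentary check in place has an objective trigger for challenging model assumptions, which is precisely the review-and-challenge capability the report found missing.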
It is worth noting that in these situations the models, left alone without new assumptions superimposed on them, may have predicted worse outcomes than the senior leadership at SVB wanted to recognize. The changes in assumptions can be categorized as management-driven overlays that dampened the severity of the model-based outcomes. So, what about the Board and its responsibility to oversee such decisions? That is fodder for another discussion, but the FRB report noted the lack of oversight by independent risk management and internal audit, with the Financial Risk Management group failing to provide effective challenge and Internal Audit lacking the stature to enforce resolution of findings related to inadequate governance of IRR models.
Link: https://www.federalreserve.gov/publications/files/svb-review-20230428.pdf
~~
Thank you, Eva! Based on her final thoughts (and because you know we love multi-part writeups here at Fintech Compliance Chronicles), stay tuned as we'll likely have a follow-up down the road from Eva digging further into SVB's model governance failures. See you next week!