The evolution of OCC expectations for model risk management

On 18 August 2021, the Office of the Comptroller of the Currency (OCC) published the Model Risk Management booklet of the Comptroller’s Handbook, announced in OCC Bulletin 2021-39 (the 2021 handbook). As mentioned in the release note, the 2021 handbook presents the concepts and general principles of model risk management (MRM) and provides guidelines for examiners in planning and conducting examinations of model risk management.

The handbook provides insights into the OCC’s current and expected approach to model risk supervision. These insights are valuable both for firms seeking to meet the OCC’s supervisory expectations for MRM and for firms subject to MRM oversight by other banking regulators.

While it does not supplant the 2011 Supervisory Guidance on Model Risk Management, released by the OCC as Bulletin OCC 2011-12, the 2021 handbook extracts key elements of the 2011 guidance and provides supplementary explanations, as well as significant additional interpretations and clarifications.

The expansion of the 2011 guidance represents the evolution of model risk management examination practices consistent with industry developments and interim OCC issuances. These include the recent interagency statement on MRM for Bank Secrecy Act/anti-money laundering (BSA/AML) compliance (Interagency Statement on MRM in BSA/AML) and the 2020 FAQ on third-party relationships, which is being considered for inclusion in the proposed interagency guidance on third-party risk management (TPRM).

Key takeaways:

  • Scope and model definition. Clarifies the definition of a model and expectations surrounding appropriate governance of non-model analytical tools. Importantly, it emphasizes applying the appropriate level of risk management activities regardless of whether an analytical tool meets the bank’s definition of a model.
  • Risk-sensitive MRM. Addresses tailoring of risk management activities to the level of risk as an important theme. To allocate resources for MRM effectively, banks should use a model’s risk level to determine the breadth, depth, priority and frequency of MRM activities.
  • Risks associated with model use. Explains how models can increase or decrease other types of risk (credit, price, strategic) and outlines the approach to incorporate model risk into assessments for other types of risk. For example, credit risk assessments should incorporate the model risk associated with relevant loss and revenue models for credit products.
  • Risk management of artificial intelligence (AI) models and non-models. Singles out and emphasizes the need for appropriate risk management when using AI. The handbook notes that, while some AI does not produce quantitative estimates and might not meet the 2011 guidance definition of a model, the risks of AI can be high depending on the complexity of the methodology and its use. The handbook makes special note of analyzing the potential for implicit bias in AI models and tools.
  • MRM’s role in fair lending. Emphasizes the important role that MRM plays in a firm’s fair lending program and lays out detailed expectations on fair lending considerations in a bank’s MRM framework. This emphasis is consistent with the various issuances from bank regulators stating that fair lending is a top priority on their agendas.
  • Third-party risk management considerations. Provides a detailed discussion of TPRM considerations, largely reflecting the 2020 FAQ on third-party relationships. Also, it offers explicit references and important clarifications in the treatment of vendor-funded validations and how a bank might use them in reviewing vendor models.

Scope and model definition

The handbook clarifies the definition of a model in the 2011 guidance. It notes that models are characterized by “uncertainty associated with a model’s estimate of the outputs,” whereas non-model analytical tools produce “outcomes defined by the deterministic rules.” This distinction sharpens the model inventory process: the presence of uncertain output estimates is what separates models from non-model analytical tools.

While the OCC has heightened supervisory expectations for the risk management of models relative to non-model analytical tools, the handbook cautions that an excessively mechanical approach, one that does not consider the risk and complexity of non-model analytical tools, can itself promote model risk.

Specifically, the handbook states that risk management should be applied and be commensurate with the extent and complexity of a quantitative approach, irrespective of whether it meets the definition of a model. In other words, the critical issue is applying an appropriate level of risk management and controls regardless of an approach’s classification. For example, the machine learning clustering algorithm k-means may not be viewed as a model, but it can pose a substantial risk when applied to customer risk rating.

These statements represent an evolution in OCC thinking: less emphasis on whether a bank classifies an analytical tool as a model, and more on the adequacy of risk management regardless of categorization. This reflects the OCC’s concern that the emphasis placed on model definition, and possibly an over-emphasis on model/non-model determinations, may have had the unintended consequence of weakening governance and controls around complex and critical non-model analytical tools. The handbook’s clarification also echoes the recent Interagency Statement on MRM in BSA/AML, which states that “regardless of how a BSA/AML system is characterized, sound risk management is important.”

Risk-sensitive MRM

The preceding discussion reflects the OCC’s general principle that sound risk management requires allocating resources according to risk, with riskier areas receiving relatively greater attention. Although the 2011 guidance allowed the range and rigor of initial validation activities to depend on the potential risk of the model, it did not explicitly refer to risk-sensitive MRM. The handbook provides further guidance on how banks can effectively allocate MRM resources beyond initial validation. It explicitly calls for a risk-based approach to model validation, performed with a frequency appropriate to a bank’s risk profile, and states that the frequency and depth of ongoing monitoring should be commensurate with the risks involved.

Given the extensive model inventories of large financial firms, we have observed that many organizations have sought to implement risk-sensitive MRM requirements consistent with the preceding statements. A model’s risk level can determine the breadth, depth, priority and frequency of validation activities. For example, a high-risk model might require a periodic full revalidation every two years, whereas banks can validate low-risk models less frequently. Similarly, a model’s risk level might affect the extent and nature of testing or the acceptable level of transparency for complex models.
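A risk-tiered validation schedule of this kind can be sketched as a simple lookup. The tiers and cadences below are illustrative assumptions, not OCC-prescribed values; each bank sets its own based on its risk profile.

```python
# Hypothetical mapping of model risk tier to full-revalidation cadence (years).
# These numbers are illustrative only; they are not regulatory requirements.
REVALIDATION_YEARS = {"high": 2, "medium": 3, "low": 5}

def next_revalidation(risk_tier: str, last_validated: int) -> int:
    """Return the year a full revalidation is due for a model of this tier."""
    return last_validated + REVALIDATION_YEARS[risk_tier]

print(next_revalidation("high", 2021))  # -> 2023
print(next_revalidation("low", 2021))   # -> 2026
```

In practice the same tier would also drive the depth of testing and the intensity of ongoing monitoring, not just the revalidation date.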

As expected, firms that fail to differentiate models according to risk cannot implement risk-sensitive MRM. The handbook says model risk rating methodologies “are often based on the model type and objectives; complexity, uncertainty and materiality; interrelationships; data; and capabilities and limitations.” The handbook further highlights that model ratings should consider explainability for AI models.

Risks associated with model use

The handbook clarifies how the standard eight OCC risk types may be impacted by model risk. Specifically, model risk should be considered a distinct risk that can influence the OCC risk categories: credit, interest rate, liquidity, price, operational, compliance, strategic and reputation. For example, a bank’s strategic risk can increase due to failure to adjust model inputs and assumptions for a changing macroeconomic environment, market conditions, and consumer behaviors, translating into financial losses and reputation risk.

To determine the quantity of each risk associated with the bank’s model use, the handbook provides detailed considerations such as the nature, extent and complexity of models the bank uses. It also considers the extent to which models contribute to decision-making, and the level of uncertainty or inaccuracy of model inputs and assumptions. Thus, a firm should consider to what extent model risk should be incorporated into other risk assessments across the organization.

Firms can also benefit from incorporating this approach into their model risk reporting framework. In doing so, firms need a clear model-use taxonomy to classify model use, capturing different data dimensions of model use, including mapping each model use to associated risks. For example, models used in the compliance risk and AML compliance programs influence the compliance risk and the reputation risk posed by potential non-compliance.
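A model-use taxonomy of the kind described above can be represented as a mapping from each model use to its attributes and associated risk categories. The entries and risk mappings below are hypothetical examples, loosely following the article's own illustrations.

```python
# Hypothetical model-use taxonomy: each model use maps to descriptive
# dimensions and to the OCC risk categories it influences (illustrative).
MODEL_USE_TAXONOMY = {
    "aml_transaction_monitoring": {
        "business_line": "compliance",
        "decision_role": "alert generation",
        "associated_risks": ["compliance", "reputation"],
    },
    "credit_underwriting_scorecard": {
        "business_line": "retail lending",
        "decision_role": "approve/decline",
        "associated_risks": ["credit", "compliance", "strategic"],
    },
}

def models_affecting(risk: str):
    """List the model uses that contribute to a given risk category."""
    return sorted(use for use, meta in MODEL_USE_TAXONOMY.items()
                  if risk in meta["associated_risks"])

print(models_affecting("compliance"))
```

A structure like this lets model risk reporting be aggregated by risk category, supporting the incorporation of model risk into other risk assessments.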

Finally, the handbook also emphasizes the importance of risk appetite for model risk that defines the boundaries within which management can operate. It also highlights the need to limit the use of risk-propagating “exceptions, management overrides, policy deviations and limits on model use.”

AI risk management

The handbook lays out detailed expectations around sound AI risk management for both model and non-model tools. This includes an inventory of AI uses, risk identification, effective technology controls and effective processes to validate that AI use provides sound, fair and unbiased results. However, subsequent detailed references in the handbook refer only to AI classified as models. Therefore, the extent to which the activities described for AI models should be used for AI non-models needs to be determined based on the risk-sensitive principle for non-model risk governance discussed above.

The handbook sets an expectation that a bank’s MRM policy should include standards for documenting conceptual understanding of AI approaches. This is important because the underlying theory and logic might not be transparent for complex AI models. On the other hand, some AI approaches are conceptually simple but might pose a substantial risk due to their heuristic nature (that is, based on intuition rather than statistical or mathematical theories). For example, distance-based clustering might lead to opposite results depending on the scaling of the data.
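The scaling sensitivity of distance-based methods can be shown with a minimal sketch. As a stand-in for full clustering, the example below (with made-up customer data) finds each customer's nearest neighbor under Euclidean distance: on raw data, the dollar-scale income feature dominates entirely; after rescaling each feature to [0, 1], account activity matters and the grouping flips.

```python
import math

# Hypothetical customer features: (annual income in dollars, number of accounts).
customers = {
    "A": (100_000, 2),
    "B": (104_000, 9),   # income close to A, account activity very different
    "C": (120_000, 2),   # income farther from A, account activity identical
}

def nearest_to(name, points):
    """Return the other customer nearest to `name` under Euclidean distance."""
    return min((n for n in points if n != name),
               key=lambda n: math.dist(points[name], points[n]))

def min_max_scale(points):
    """Rescale each feature to [0, 1] so no single feature dominates."""
    dims = list(zip(*points.values()))
    lo = [min(d) for d in dims]
    span = [max(d) - min(d) for d in dims]
    return {n: tuple((v - l) / s for v, l, s in zip(p, lo, span))
            for n, p in points.items()}

print(nearest_to("A", customers))                 # raw dollars dominate -> "B"
print(nearest_to("A", min_max_scale(customers)))  # after scaling -> "C"
```

The same mechanism drives k-means and other distance-based clustering: a choice as mundane as feature units can silently change which customers are grouped together, which is why conceptual documentation should cover such preprocessing decisions.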

Relatedly, conceptual understanding should cover the model hyperparameters and the approach to their calibration. This is particularly important for AI approaches in a constant cycle of improvement (continuous training), in which conceptual design might drift unnoticed. It is critical to maintain detailed and current documentation, including details on reparameterization and recalibration processes and acceptable boundaries for model changes that can be considered routine updates (thus not requiring validation).

Regarding validation, the handbook states that policy and procedures should include the prioritization, scope and validation frequency, including AI models’ underlying algorithms and other frequently updated parameters. Frequent updating can benefit model accuracy, but it needs a dynamic approach to control activities such as backtesting and monitoring. For some approaches, such as decision-tree-based algorithms, recalibration might lead to different quantitative and qualitative outcomes. Frequent updating can also be wasteful in terms of time and processing power if the data patterns change with a different frequency than the model updating. Data fluidity and the extent of potential model changes should inform the frequency and scope of validation.

Recognizing that conceptual soundness assessment of AI models can be challenging, the handbook stresses transparency and explainability as critical considerations in managing the risk of complex AI models. The OCC’s guiding principle is that banks should tailor the level of explainability to the model’s use. For example, AI models used in credit underwriting would be subject to relatively high standards for documentation and validation to adequately demonstrate that the model is fair and operating as intended. In contrast, an AI image recognition model used for remote deposit capture does not require as much scrutiny.

MRM’s role in fair lending

Fair lending risks may be introduced through biased input data, algorithms that amplify data biases, and biased use of model output. MRM functions should play an essential role in the fair lending programs of financial firms that increasingly rely on models. The handbook reinforces this role and lays out detailed expectations on fair lending considerations in a bank’s MRM framework. It explicitly references MRM policies that should include fair lending considerations, including standards for ensuring models do not cause or promote discrimination through disparate treatment or disparate impact.

More specifically, a bank’s MRM framework should include controls for model development, implementation and use, including controls to monitor for potentially discriminatory outputs or results. We have observed that models affected by fair lending issues, such as transaction monitoring models, often rely on dynamic, frequently updated data.

A model that produces fair and unbiased outcomes might fail in this respect when the input data drifts. For such reasons, the ongoing monitoring framework for these models should include monitoring for biases in the data. Similarly, the handbook states that the MRM framework should establish appropriate standards for model validation to identify biases in data or model outcomes. Finally, the MRM framework should recognize that fair lending risks may be introduced through management overlays and adjustments.

The handbook also mentions models built solely to assess compliance with fair lending laws and discusses alternative testing approaches. As the relevant policies (such as credit underwriting) often guide the development of such models, the inability to find a good model fit may impair a firm’s ability to assess fair lending risks effectively. For the same reason, standard backtesting is not possible, as the models are built to reflect policies for a given period. Alternatively, manual file reviews should be carried out to highlight any data errors, identify factors not included in the statistical model, and assess whether those additional factors might explain the remaining differences between prohibited basis group members and control group members.

Importantly, these efforts need to be sized appropriately to be effective, and the OCC’s recently updated Sampling Methodologies booklet provides the necessary guidance.

Recognizing the prevalent use of AI models, the handbook calls for effective processes to validate that AI use provides sound, fair and unbiased results. As they are usually more complex, AI models can easily obscure biases in the data, modeling and outcomes.

The OCC states that it is not sufficient to establish the unbiasedness of individual variables, as the complex interactions typical of AI approaches can lead to unintended impacts or outcomes. Complex combinations of inputs implicitly considered by AI models can serve as proxies for prohibited classes. Models relying on such implicit proxies may capture a spurious relationship that results from inputs and outcomes both being affected by unobserved prohibited characteristics.

Also, appropriately defining a model’s output for bias evaluation is essential. While the nominal output of a classification model can be a binary label, the actual output is typically an estimated probability of each outcome. Whether the model provides unbiased results can depend on which type of output is used and analyzed. For example, a credit underwriting model can provide non-biased delinquent and not-delinquent projected statuses, but it might provide biased probabilities of these statuses. An analysis of projected status would not reveal the potential issues, which might be problematic if the probabilities are used to inform underwriting decisions.
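The label-versus-probability distinction can be made concrete with a small sketch. The scores below are invented numbers for two hypothetical applicant groups: thresholded labels come out identical across groups, while the underlying probabilities are systematically higher for one group.

```python
# Hypothetical predicted probabilities of delinquency for two applicant
# groups (illustrative numbers only). Labels use a 0.5 decision threshold.
group_a_probs = [0.10, 0.20, 0.30, 0.70]
group_b_probs = [0.30, 0.40, 0.45, 0.80]

def labels(probs, threshold=0.5):
    """Convert probabilities to binary delinquent / not-delinquent labels."""
    return [p >= threshold for p in probs]

# The binary labels are identical across the two groups...
assert labels(group_a_probs) == labels(group_b_probs)

# ...but the underlying probabilities are systematically higher for group B,
# which matters wherever probabilities (not labels) drive pricing or limits.
mean_a = sum(group_a_probs) / len(group_a_probs)
mean_b = sum(group_b_probs) / len(group_b_probs)
print(mean_a, mean_b)
```

An outcomes analysis run only on the labels would pass, while one run on the probabilities would flag the disparity, illustrating why the output chosen for bias evaluation should match the output actually used in decisions.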

Third-party risk management considerations

In the last decade, we have witnessed increasing reliance upon third-party service providers by financial institutions for critical aspects of their operations. This includes using third-party models and data and engaging third parties to perform model development and risk management services. Within this context, the handbook provides detailed third-party risk management considerations, largely reflecting the 2020 FAQ on third-party relationships.

One area in which the handbook appears to offer explicit reference and important clarifications is in the treatment of vendor-funded validations. The handbook states that “bank management should understand and evaluate the results of validation and risk control activities that third parties conduct… Bank management should conduct a risk-based review of each third-party model to determine whether it is working as intended and if the existing validation activities are sufficient.”

The preceding statement clarifies that banks can use vendor-funded validations in risk-based third-party model reviews. This clarification helps MRM functions avoid devoting unnecessary resources to replicating vendor-funded validations. Nonetheless, banks are expected to perform adequate due diligence on those validations. The handbook cautions that bank management should not take vendor-funded validation reports at face value and must understand “any of the limitations experienced by the validator in assessing the processes and codes used in the models.”

Model risk management remains central to risk management, given the growing importance of models and their impact on the various risk types across financial services firms. MRM is also important in addressing fair lending concerns and the risks associated with the growing use of artificial intelligence.

The publication of the handbook signals the importance that supervisors place on MRM and provides a useful guide for banking organizations to assess and strengthen their MRM programs.

Contact an IBM Promontory expert about your risk management needs
