IRB 2.0: Current validation "trends" and difficulties commonly observed in practice during validation exercises

To shorten the waiting time until our regulatory IRB Homeschooling starts, we would like to share some insights into our experience with validation exercises, gained with various national and international credit institutions.

Introduction:

As institutions develop models for an ever broader scope of risk management and decision-making purposes, the number of models has been rising dramatically (e.g. models for capital provisioning and stress testing, pricing, strategic planning as well as asset liquidity purposes). More complex models are being developed with more advanced analytics techniques (e.g. machine learning) to achieve higher performance. Big data and advanced analytics are opening up opportunities for ever more sophisticated models.

There are several reasons for this development: banks have started using more data sources on their retail lending activities, and new regulatory changes such as IFRS 9 have introduced the use of models that were not on banks' agendas some ten years ago.

For all these models, institutions need to ensure that they have adequate in-house knowledge to manage the risks associated with the management, use, validation and monitoring of these models. In this article, we present the most recent validation trends and highlight key issues observed in the market.

Recent validation “trends”

Based on our recent project experience, the points below summarize the most recent validation trends and open discussion points, which mainly concern the effective application of validation approaches in the grey areas of the regulatory guidelines:

1. Technically inadequate framework for the validation of LDPs

Current standards focus on high (or medium) default portfolios, where default data are usually sufficiently plentiful. The validation process for these types of models is described in depth in several guidelines (e.g. the CRR and the "Instructions for reporting the validation results of internal models"). However, there is no adequate framework describing the validation of low-default-portfolio (LDP) models. As a result, significant differences can be observed across banks in how LDP validation methods are applied. The most common methods for the validation (and development) of LDP PD* models are listed below. (*We focus on PD models because many LDP portfolios will migrate to Foundation IRB status under the Basel IV regime, so LGD models will no longer be used for such portfolios.)

 

  • Upper Confidence Bounds: The estimate is based on the most conservative boundary of a confidence region. Using the upper confidence bound of the estimated value is a well-established procedure in many financial institutions.
  • Bayesian Approach: The upper confidence bounds are determined as quantiles of a Beta distribution, which is identified as the posterior Bayesian distribution.
  • Shadow Rating Approach: The external ratings of obligors are used as the target variable for back-testing the internal (shadow rating) model. Default rates for these external ratings are taken from the default studies published regularly by the external rating agencies.

Other, less commonly used approaches include: (i) expert judgement, (ii) extended default horizon, (iii) inference from the pricing of observable market instruments, (iv) conservative DoD (definition of default) treatment. A simplified numerical illustration of the first two approaches is sketched below.
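For illustration only, a minimal sketch of the upper confidence bound and the Bayesian quantile for an LDP PD estimate is given below. The portfolio size, default count, confidence level and the uniform Beta(1, 1) prior are illustrative assumptions, not parameters taken from any specific engagement or guideline.

```python
# Minimal sketch: upper confidence bound and Bayesian quantile for an LDP PD.
# All inputs (n, d, gamma) and the uniform prior are illustrative assumptions.
from scipy.stats import beta

n = 500        # obligors in the rating grade (illustrative)
d = 1          # observed defaults (illustrative)
gamma = 0.90   # one-sided confidence level (illustrative)

# (1) Upper confidence bound: the largest PD still compatible, at level gamma,
#     with observing at most d defaults (Clopper-Pearson-type bound).
pd_ucb = beta.ppf(gamma, d + 1, n - d)

# (2) Bayesian approach: with a uniform Beta(1, 1) prior the posterior is
#     Beta(d + 1, n - d + 1); the conservative PD is a quantile of this posterior.
pd_bayes = beta.ppf(gamma, d + 1, n - d + 1)

print(f"Upper confidence bound PD: {pd_ucb:.4%}")
print(f"Bayesian posterior quantile PD: {pd_bayes:.4%}")
```

For a portfolio with zero observed defaults, the same upper bound reduces to the "most prudent estimation" case, where the bound solves (1 − p)^n = 1 − gamma.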

2. Implementation of the MoC framework

  • The lack of a sufficient framework for the implementation of the Margin of Conservatism (MoC) results in significant differences between the approaches applied across institutions – especially for LDP portfolios, where data are scarce. Banks are currently in the process of developing structured frameworks for the comprehensive quantification and calculation of the MoC, capable of efficiently capturing all relevant model uncertainties (a simplified sketch of such a quantification follows below).
  • It is commonly observed that institutions justify the existence of a model or data deficiency by raising an add-on in the form of MoC. Although MoC can be used to address model uncertainty, it cannot serve as a permanent counterweight for model deficiencies, which should be remediated within a specific time horizon (par. 50, "PD and LGD requirements").
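A simplified, hypothetical sketch of what a structured MoC quantification could look like is shown below. The split into deficiency categories, the purely additive aggregation and all figures are illustrative assumptions; an institution's actual MoC framework would have to follow its own methodology and the applicable guidelines.

```python
# Hypothetical sketch of a structured MoC quantification: each identified
# deficiency is quantified as an add-on and aggregated (here: simple sum) on
# top of the best-estimate PD. Categories, figures and the additive rule are
# illustrative assumptions, not a prescribed methodology.
from dataclasses import dataclass
from typing import List

@dataclass
class MoCComponent:
    category: str     # e.g. data deficiency, methodological deficiency, estimation error
    description: str
    add_on: float     # quantified impact on the PD estimate (absolute)

def total_moc(components: List[MoCComponent]) -> float:
    """Aggregate the individual MoC components (conservative choice: simple sum)."""
    return sum(c.add_on for c in components)

best_estimate_pd = 0.012   # long-run average default rate (illustrative)
components = [
    MoCComponent("data deficiency", "missing default triggers in legacy system", 0.0010),
    MoCComponent("methodological deficiency", "representativeness gap of the RDS", 0.0008),
    MoCComponent("general estimation error", "sampling uncertainty", 0.0005),
]

final_pd = best_estimate_pd + total_moc(components)
print(f"Best-estimate PD: {best_estimate_pd:.4f}  ->  final PD incl. MoC: {final_pd:.4f}")
```

Documenting each component separately, as sketched here, also makes it easier to demonstrate that a given add-on is temporary and tied to a remediation plan rather than a permanent counterweight.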

3. Impact of Basel IV on validation exercises

The expected migration of specific portfolios to Foundation IRB status results in a relative relaxation in the treatment of model deficiencies, as it gives development teams an additional argument that the model is migrating, or going to migrate, to a (more) conservative status. It should be highlighted that Basel IV is currently set to be implemented in 2023, and until then banks need to continue to maintain sound models.

Key issues observed at several financial institutions during TRIM and validation exercises

This section illustrates key issues that we have frequently observed across several EU banks under ECB supervision:

1. Data Issues

(1) Due to the existence of legacy systems (old systems integrated with new ones) as well as structural changes to complex credit products (i.e. products that have experienced several product-level changes over time), the definition of the portfolio perimeter is not always clear and well defined. Moreover, the final RDS ("Reference Data Set") is sometimes not representative of the corresponding current portfolio, and there are issues in defining the scope of application (e.g. overlapping entries between different portfolios or missing data).

(2) It is common across institutions to assess data quality in terms of (i) completeness, (ii) traceability, (iii) validity, (iv) timeliness, (v) availability, etc. These dimensions have been promoted by the TRIM data quality chapters, since TRIM inspections rigorously introduced such concepts into model validation and development practice. However, significant differences in the implementation of these tests can be found across institutions and even across models within the same institution. The reason is that there is no common framework for how these tests should be implemented, and thus:

  • data quality tests are not standardized (they may differ significantly from one institution to another or even from one model to another), and
  • the level of the data quality assessment may also differ across similar models within the same institution (e.g. some institutions perform data quality checks at the data source / database level, while others do so at the final RDS level); a simple standardized example is sketched after this list.
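As a simple illustration, the sketch below shows how a few of these dimensions could be checked in a standardized way on an RDS. The column names (obligor_id, reporting_date, default_flag), the pandas-based implementation and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of standardized data quality checks on an RDS using pandas.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def data_quality_report(rds: pd.DataFrame) -> dict:
    """Return simple completeness, validity, uniqueness and timeliness indicators."""
    return {
        # completeness: share of non-missing values per column
        "completeness": (1 - rds.isna().mean()).to_dict(),
        # validity: default flag must be 0/1
        "validity_default_flag": float(rds["default_flag"].isin([0, 1]).mean()),
        # uniqueness: no duplicated obligor / reporting date combinations
        "duplicate_rows": int(rds.duplicated(["obligor_id", "reporting_date"]).sum()),
        # timeliness: most recent snapshot available in the RDS
        "latest_reporting_date": str(rds["reporting_date"].max()),
    }

# Example usage with a toy RDS
rds = pd.DataFrame({
    "obligor_id": [1, 2, 2, 3],
    "reporting_date": pd.to_datetime(["2021-12-31"] * 4),
    "default_flag": [0, 1, 1, None],
})
print(data_quality_report(rds))
```

Running the same set of checks on both the source database extract and the final RDS would make the two assessment levels mentioned above directly comparable.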

(3) Until recently, institutions used testing and application datasets for independent purposes; as a result, the consistency between these databases / data sources was not challenged in depth. However, the new guidelines and standards (the validation reporting requirements to the ECB) require the use of application datasets for testing purposes. This is quite challenging from a data perspective for the following reasons:

  • data sources may have different levels of aggregation,
  • key columns required for testing purposes are often not contained or deployed in application systems,
  • discrepancies exist between the two sources (entries existing in only one dataset and not in the other); see the sketch after this list.
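A minimal sketch of how such discrepancies could be surfaced is given below, assuming both datasets carry a common obligor_id key (an illustrative assumption); in practice the join keys and aggregation levels would first need to be aligned.

```python
# Hypothetical sketch of a reconciliation between a testing RDS and an
# application dataset, using an outer join with a merge indicator.
import pandas as pd

def reconcile(testing: pd.DataFrame, application: pd.DataFrame, key: str = "obligor_id") -> pd.Series:
    """Count entries existing in both datasets or in only one of them."""
    merged = testing.merge(application, on=key, how="outer", indicator=True)
    return merged["_merge"].value_counts()

# Example usage with toy data
testing = pd.DataFrame({"obligor_id": [1, 2, 3]})
application = pd.DataFrame({"obligor_id": [2, 3, 4]})
print(reconcile(testing, application))
# "left_only" / "right_only" rows point to discrepancies that need to be explained
```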

Our recommendations:

  • Given that model and data deficiencies are usually addressed by different teams, central storage of all data issues/deficiencies (separately from model deficiencies) would help banks address them more efficiently.
  • Banks are advised to develop a data quality framework describing in detail how to assess the quality of each dataset given the characteristics / factor inputs of each model.
  • Application systems are expected to be used for testing purposes more and more over time. As a result, institutions are recommended to initiate projects with the relevant IT/data teams in order to enhance the consistency between the corresponding systems.

2. Implementation and Coding Issues

In several cases, we have observed that the codes and scripts submitted during model development are delivered without a corresponding document containing key/basic information on coding fields, tables and databases. This results in a number of questions from the validation teams about basic information on the scripts – which is usually not the first priority of validation teams – and consequently precious time is lost.

Our recommendations:

  • Model development teams are recommended to submit – along with the model scripts – a short description of what each variable/field/table represents. This can save the validators a lot of time.
  • Validation exercises are time consuming, while validation activities are increasing day by day. It is highly recommended to set up an automated process – at least for the standard tests – in order to perform them more effectively (a minimal sketch of such automation is given below).
  • It should be highlighted that there is a trend in the market towards developing models in Python, while the majority of model validation teams still perform their exercises in a SAS environment. To avoid data inconsistency issues, our recommendation is to use the same programming/statistical package for development and validation, so that both teams can work with the same set-up and a common library of tests in SAS and Python (or even R) that can be used to perform the validation exercises.
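By way of illustration, the sketch below automates two common standard tests: discriminatory power (AUC) and a per-grade binomial back-test of the calibrated PD. The column names and toy data are illustrative assumptions; the actual test suite and significance thresholds would follow the institution's validation framework.

```python
# Hypothetical sketch of automating two standard validation tests.
# Column names and toy data are illustrative assumptions.
import pandas as pd
from scipy.stats import binomtest
from sklearn.metrics import roc_auc_score

def standard_tests(df: pd.DataFrame) -> dict:
    results = {"auc": roc_auc_score(df["default_flag"], df["pd_estimate"])}
    # binomial back-test of the calibrated PD per rating grade
    for grade, g in df.groupby("rating_grade"):
        test = binomtest(k=int(g["default_flag"].sum()),
                         n=len(g),
                         p=float(g["pd_estimate"].mean()),
                         alternative="greater")  # are observed defaults above the estimate?
        results[f"backtest_pvalue_grade_{grade}"] = test.pvalue
    return results

# Example usage with toy data
df = pd.DataFrame({
    "rating_grade": ["A", "A", "B", "B", "B", "B"],
    "pd_estimate":  [0.01, 0.01, 0.05, 0.05, 0.05, 0.05],
    "default_flag": [0, 0, 1, 0, 0, 0],
})
print(standard_tests(df))
```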

3. Setting the validation framework for new types of models

The new types of ELBE and DT LGD models require an update of the current validation procedures and tests. However, there is an open discussion on the definition of severity as well as on the identification of the downturn period. Another open issue is the determination of the length of the historical data within which the downturn period is to be identified.
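As an illustration of the kind of challenge a validation team could apply, the sketch below flags candidate downturn years as the worst decile of an economic indicator over the available history. The choice of indicator, the length of the history and the 10% threshold are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical sketch: flag candidate downturn years as the worst decile of an
# economic indicator over the available history. Indicator, history length and
# the 10% threshold are illustrative assumptions.
import pandas as pd

gdp_growth = pd.Series(
    {2007: 3.0, 2008: -0.5, 2009: -5.6, 2010: 4.2, 2011: 3.9, 2012: 0.4,
     2013: 0.4, 2014: 2.2, 2015: 1.5, 2016: 2.2, 2017: 2.7, 2018: 1.1,
     2019: 1.1, 2020: -3.7},   # illustrative annual real GDP growth in %
    name="gdp_growth",
)

threshold = gdp_growth.quantile(0.10)                # worst decile of observed years
downturn_years = gdp_growth[gdp_growth <= threshold].index.tolist()
print(f"Candidate downturn years: {downturn_years} (threshold {threshold:.2f} %)")
```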

Our recommendations:

  • Institutions should enhance their validation procedures to cover how validation is to be performed on the new types of models. Especially for DT LGD, institutions should focus on challenging/validating the methodology used to identify the downturn period, as well as on assessing how its impact was quantified within the model.

4. Governance

(1) According to the regulatory requirements, the validation exercise should be completely independent, which implies that the validation team should autonomously construct the RDS for the performance of the relevant tests. However, this is not the case at many institutions with very complex database/sourcing systems, resulting in a dependent relationship between the validation and monitoring/development teams. In particular, at many banks the development team creates an RDS which is then delivered to the monitoring and validation teams to perform their exercises. Naturally, this is not compliant with the regulatory guidelines.

(2) There is confusion between the monitoring and validation scope. In some institutions the monitoring exercise overlaps with the validation exercise, while in others the monitoring exercises and tests are almost identical to those performed for validation purposes. Consequently, these overlapping exercises raise questions regarding the scope of each analysis. Currently, there are many discussions on how to efficiently differentiate between the monitoring and validation exercises (e.g. monitoring could be performed at a higher frequency but limited to the key tests, such as back-testing).

(3) The majority of EU institutions focus only on the validation of regulatory (i.e. capital provisioning) models, and not on the validation of 'business decision' models.

(4) The majority of institutions have a very structured set of requirements for performing annual validations – most banks have an internal document (annual validation procedures document) describing in detail how each annual validation should be performed (which assessments should be conducted for each model, how they are to be executed, etc.). However, this is not the case for initial validations, which are usually performed in an unstructured manner.

(5) Specific timelines should be set by model validation teams (MVT) in order to have sufficient time to perform all the necessary validations. This mainly refers to initial validations, where the model submission is often already late, so that there is not sufficient time left for a comprehensive model assessment.

Our recommendations:

  • Each MVT should establish autonomous and independent processes for creating the relevant RDS used for each validation exercise, so that it can construct the RDS without relying on the development team.
  • Institutions should define in detail the scope of the validation and monitoring exercises. They should focus on differentiating the corresponding exercises as far as possible and avoid performing the same assessments twice.
  • Institutions mainly focus on the assessment of regulatory (capital provisioning) models. However, given the criticality of other models (e.g. strategic planning, pricing, etc.), institutions should ensure that they have adequate in-house knowledge to address the associated risks by performing validation exercises for these non-regulatory models as well.
  • Institutions should enhance their framework for initial validations, clarifying the minimum types of tests that need to be performed for each model.
  • Institutions should enhance their internal validation frameworks by specifying the minimum time period needed for the completion of a full, comprehensive model validation/assessment. This mainly refers to initial validations, where time limitations might prevent the validation work from being conducted with the expected quality.

In case of any further questions about validation exercises, please contact our Quant team. If you found this topic interesting, please also have a look at the agenda for our IRB Homeschooling. Our IRB experts are looking forward to your participation and will answer all relevant questions during the sessions.

 

 

Kaan Aksel

Phone: +49 69 9585 5874

kaan.aksel@pwc.com

 

 

George Chartios

Phone: +49 69 9585 6880

george.chartios@pwc.com
