A bridge designed by the same team that built it, inspected by the same firm that constructed it, and opened to traffic on the builder’s own assurance of quality is a structural liability. The work might be excellent. The materials might be sound. But without an independent party verifying both, there is no objective basis for confidence, and no one to catch the errors the original team could not see in their own work.
RBI’s August 2024 circular on “Regulatory Principles for Management of Model Risks in Credit” applies the same logic to AI models used in lending and collections. An AI model that scores NPA risk, segments borrower portfolios, and drives recovery decisions is load-bearing infrastructure for an NBFC’s financial health. The circular’s position is direct: the team that builds a model cannot also be the team responsible for confirming it works correctly. Independent validation is not a best practice recommendation. It is a regulatory requirement, and it applies to every credit model an NBFC deploys, including those used for collections.
What the August 2024 RBI Circular Actually Requires
The circular, issued on August 5, 2024, addresses all Regulated Entities including all Non-Banking Financial Companies and Housing Finance Companies. Its scope is broad: any quantitative method that applies statistical, economic, financial, or mathematical principles to produce an output used for credit decisions falls within the definition of a “credit risk model.”
For collections operations, this definition covers NPA scoring models, propensity-to-pay models, collections segmentation models, recovery prioritisation algorithms, and any AI system whose output influences which accounts receive outreach, in what sequence, through which channel, or with what offer.
The circular’s governance requirements are specific. Every NBFC must put in place a detailed, Board-approved policy covering the entire model lifecycle. That policy must address:
- Governance and oversight arrangements, calibrated to the materiality of each model
- Processes for model development or selection
- Documentation standards for every deployed model
- Independent validation processes, separate from model development
- Change control procedures for model updates
- An ongoing monitoring and reporting framework
The NBFC must also maintain a model inventory listing every approved model in production, whether developed internally or sourced from a third party. The circular is explicit that outsourced and vendor models are subject to the same requirements. An NBFC cannot meet its obligations by pointing to a vendor’s proprietary validation documentation as a substitute for its own governance.
New models had to follow the guidelines from the circular's effective date, three months after its issue on August 5, 2024. Existing models had to be validated under the guidelines within six months of the issue date.

What Independent Validation Requires in Practice
The independence requirement is one of the most operationally significant aspects of the circular for NBFCs that have historically run model development and review within the same team or department.
Independent validation means the function responsible for validating a model must be separate from the function that developed or selected it. The circular requires that each model be validated before deployment, after any material amendment, and through periodic reviews at least annually.
For Base Layer NBFCs, independence can be achieved through a clearly separated internal review function or by engaging an external validator. For Upper Layer and Top Layer NBFCs, a dedicated Model Risk Management function with defined authority and reporting lines to the Risk Management Committee of the Board is the expected standard.
The independence requirement carries practical authority. The validator must be able to raise findings that can delay or block deployment. A validation function that documents concerns but has no power to act on them does not satisfy the spirit of the circular’s requirement. The RMCB must receive and formally consider validation outcomes before a model goes into production or is materially changed.
The Five Components Every Validation Must Cover
The August 2024 circular defines the minimum scope of a validation exercise with precision. Every validation must cover all five of the following components:
Assumption review: Every model is built on assumptions about how variables relate to outcomes, how borrower behavior patterns hold over time, and how the economic environment affects credit risk. The validator must examine whether these assumptions are valid, substantiated by evidence, and still hold in the current operating environment. For collections AI, this includes assumptions about the correlation between behavioral signals and payment likelihood, and about the stability of default patterns in the portfolio.
Data verification: The validator must confirm that the data used to build and run the model is accurate, complete, and sourced from reliable systems. Data lineage must be traceable from the source system through every transformation to the model input. If the model ingests bureau data, transaction data, and behavioral signals, the validator must verify the integrity of each data pipeline independently.
Regulatory compliance check: The validation must confirm that the model’s operation complies with all applicable regulatory and statutory requirements. For collections models, this includes fair lending requirements, explainability obligations under RBI’s Fair Practices Code, data localisation requirements from the Digital Lending Directions 2025, and DPDP Act 2023 data handling obligations.
Documentation assessment: The validator evaluates whether the model documentation is complete and accurate enough for regulators, senior management, and users to understand what the model does, how it was built, what data it uses, and what decisions it produces. The circular requires documentation to include sensitivity of model outputs to its assumptions and inputs.
Backtesting: The validator assesses the model’s predictive efficacy by comparing its predicted outcomes against actual historical outcomes on out-of-sample data. The circular specifies that backtesting results must be expressed in terms of “suitable and easily understood ex ante parameters” and compared against benchmarks defined in the policy. These results must be presented to the RMCB or its designated sub-committee.

Backtesting for NPA Collections AI: What It Looks Like Technically
Backtesting is the component of validation most likely to surface performance gaps that cannot be detected by reviewing model architecture or documentation alone. For NPA collections AI, a rigorous backtesting exercise evaluates several dimensions simultaneously.
Discriminatory power measures how well the model separates accounts that will default from accounts that will perform. The standard metrics are AUC-ROC (Area Under the Receiver Operating Characteristic Curve) and the KS (Kolmogorov-Smirnov) statistic. A model with strong discriminatory power assigns materially higher risk scores to accounts that subsequently default than to accounts that subsequently pay. The benchmark thresholds for acceptable discriminatory power must be defined in the NBFC’s model risk policy before backtesting is conducted, not after the results are reviewed.
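Both metrics can be sketched in a few lines. The following is an illustrative pure-Python computation on a toy sample, not a production backtesting harness; the variable names and sample data are assumptions for the sketch:

```python
# Discriminatory-power metrics for a backtest, assuming `scores` are model
# risk scores (higher = riskier) and `defaulted` marks actual outcomes.

def auc_roc(scores, defaulted):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen defaulter scores higher than a non-defaulter."""
    pos = [s for s, d in zip(scores, defaulted) if d]
    neg = [s for s, d in zip(scores, defaulted) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ks_statistic(scores, defaulted):
    """KS: maximum gap between the cumulative score distributions of
    defaulters and non-defaulters, scanned over all observed scores."""
    pos = [s for s, d in zip(scores, defaulted) if d]
    neg = [s for s, d in zip(scores, defaulted) if not d]
    def cdf(sample, t):
        return sum(s <= t for s in sample) / len(sample)
    return max(abs(cdf(pos, t) - cdf(neg, t)) for t in sorted(set(scores)))

# Toy out-of-sample backtest data:
scores    = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
defaulted = [1,   1,   0,    1,   0,   0,   1,   0]

print(round(auc_roc(scores, defaulted), 3))       # 0.75
print(round(ks_statistic(scores, defaulted), 3))  # 0.5
```

In a real backtest these values would be compared against the benchmark thresholds written into the model risk policy, and the comparison documented for the RMCB.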
Calibration accuracy measures whether the model’s predicted probabilities match the actual observed rates. A model that predicts 20% default probability for a group of accounts should see approximately 20% of that group default in reality. Poor calibration means the model’s scores cannot be used directly for provisioning or capital calculations without adjustment. For collections segmentation, poor calibration means accounts are being routed to inappropriate treatment buckets.
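A minimal calibration check compares mean predicted probability against the observed default rate within each score band. The band edges and sample data below are assumptions for the sketch:

```python
# Illustrative calibration check by score band, assuming `predicted` holds
# the model's default probabilities and `actual` the observed outcomes
# (1 = defaulted). Band edges are arbitrary for this sketch.

def calibration_by_band(predicted, actual, edges=(0.0, 0.1, 0.2, 0.5, 1.0)):
    """Per band: (lo, hi, count, mean predicted PD, observed default rate)."""
    report = []
    for lo, hi in zip(edges, edges[1:]):
        band = [(p, a) for p, a in zip(predicted, actual) if lo <= p < hi]
        if not band:
            continue
        mean_pred = sum(p for p, _ in band) / len(band)
        observed = sum(a for _, a in band) / len(band)
        report.append((lo, hi, len(band), round(mean_pred, 3), round(observed, 3)))
    return report

# A well-calibrated band: accounts scored at 20% PD default at roughly 20%.
predicted = [0.05] * 10 + [0.20] * 10
actual    = [0] * 10 + [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
for row in calibration_by_band(predicted, actual):
    print(row)
```

A large, persistent gap between the two columns in any band is the calibration failure described above: scores unfit for provisioning without adjustment, and accounts routed to the wrong treatment buckets.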
Stability testing evaluates whether the model’s input variable distributions have shifted between the training period and the current operating period. The Population Stability Index (PSI) is the standard measure for this. A PSI above 0.25 typically indicates significant population shift that requires revalidation, regardless of whether performance metrics have visibly degraded yet.
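The PSI calculation itself is simple. The band counts below are made-up numbers for illustration; in practice bands with zero counts need smoothing before the logarithm is taken:

```python
import math

def psi(expected_counts, actual_counts):
    """PSI = sum over bands of (actual% - expected%) * ln(actual% / expected%).
    `expected_counts` come from the training period and `actual_counts` from
    the current operating period, using the same score-band edges."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct, a_pct = e / e_total, a / a_total
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

print(round(psi([25, 25, 25, 25], [25, 25, 25, 25]), 4))  # identical mix: 0.0
print(round(psi([25, 25, 25, 25], [10, 15, 30, 45]), 4))  # shifted mix: above 0.25
```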
For collections-specific models, backtesting must also evaluate performance against collections outcomes, not just default prediction outcomes. This means testing propensity-to-pay predictions against actual payment behavior, right-party contact predictions against actual contact rates, and segmentation accuracy against recovery rates by bucket.
What Triggers Revalidation
The circular requires validation before deployment and at least annually thereafter. Beyond the scheduled annual cycle, specific events must trigger revalidation regardless of when the last review occurred:
- Any change to model inputs, including adding a new data source, removing a variable, or modifying how a feature is engineered
- Any change to model assumptions, including updates to the economic scenario assumptions embedded in the model
- Any structural change to the algorithm, including changes to model architecture or hyperparameter settings that materially affect model behavior
- Performance degradation detected through ongoing monitoring, where metrics fall below the thresholds defined in the model risk policy
- A significant shift in the borrower population the model is operating on, such as entry into a new product segment, geography, or borrower tier
- Regulatory changes that affect which inputs are permissible or what outputs must be explainable

The Model Inventory Requirement
Every model in production must be listed in a maintained model inventory, with key information recorded for each entry. The circular does not prescribe a fixed inventory format, but the information required by the governance framework makes the minimum content clear:
- Model name and unique identifier
- Purpose and the credit decisions it influences
- Development date and the team or vendor responsible
- Data inputs and sources
- Algorithm type and key assumptions
- Validation history including dates, findings, and outcomes
- Current approval status and the RMCB approval reference
- Ongoing monitoring status and last review date
- Materiality classification, which determines the depth of governance required
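One way to hold this minimum content in a system of record is a structured entry per model. The sketch below mirrors the fields listed above; the field names and sample values are assumptions, not a prescribed RBI format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValidationRecord:
    date: str
    findings: str
    outcome: str                        # e.g. "approved", "approved with conditions"

@dataclass
class InventoryEntry:
    """Minimum inventory content for one model, mirroring the list above."""
    model_id: str
    name: str
    purpose: str                        # credit decisions the model influences
    developed_by: str                   # internal team or third-party vendor
    development_date: str
    data_inputs: List[str]
    algorithm_type: str
    key_assumptions: List[str]
    validations: List[ValidationRecord] = field(default_factory=list)
    approval_status: str = "pending"    # current status plus RMCB reference
    rmcb_reference: str = ""
    last_monitoring_review: str = ""
    materiality: str = "high"           # drives the depth of governance applied

# Hypothetical entry for a vendor-supplied collections model:
entry = InventoryEntry(
    model_id="COLL-NPA-001",
    name="NPA propensity-to-pay scorer",
    purpose="Prioritises collections outreach by predicted payment likelihood",
    developed_by="Third-party vendor (hypothetical)",
    development_date="2024-01-15",
    data_inputs=["bureau data", "repayment history", "behavioural signals"],
    algorithm_type="gradient-boosted trees",
    key_assumptions=["behavioural signals correlate with payment likelihood"],
)
print(entry.model_id, entry.approval_status, entry.materiality)
```

Keeping vendor models in the same structure as internal ones makes the circular's parity requirement auditable: every entry carries the same validation history and approval fields regardless of who built the model.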
For NBFCs using vendor or third-party collections AI platforms, the inventory must cover those models as well. The contractual arrangements with third-party vendors must provide the NBFC with access to minimum technical documentation sufficient to understand the model’s design, configuration, and operation. RBI explicitly retains the right to engage external experts to validate outsourced models deployed by NBFCs, which means the documentation access right must be contractually guaranteed, not assumed.
Building Governance Infrastructure That Satisfies the Circular
For NBFC model risk officers and compliance heads, the August 2024 circular translates into a specific set of infrastructure requirements. The following elements represent the minimum build-out required for compliance:
- A Board-approved model risk policy reviewed at least annually, with defined materiality thresholds, validation scope, and revalidation triggers
- A maintained model inventory in a system of record, covering all models in production including vendor and third-party models
- Documented development records for every model covering data used, algorithm selected, assumptions made, and rationale for design decisions
- An independent validation function with authority to delay or block deployment pending satisfactory validation outcomes
- A deployment approval workflow requiring RMCB sign-off, with documented evidence of the committee’s consideration of validation findings
- An ongoing monitoring framework with defined alert thresholds and a calendar of formal review cycles
- A backtesting schedule with out-of-sample evaluation at intervals defined in the policy, with results formatted for RMCB presentation
- Change control procedures with documented materiality thresholds that determine when a model change requires revalidation
- Exam-ready documentation that compiles all of the above into a format reviewable by RBI supervisors within hours
How iTuring Supports This
The model governance infrastructure described in the RBI circular maps directly to what iTuring provides as standard platform functionality.
The iTuring platform maintains a universal model inventory covering all models in production, including third-party and vendor models. Every model carries complete development documentation, version-controlled change history, and a maker-checker approval record that satisfies the RMCB workflow requirement. Validation support includes SHAP and LIME explainability at the individual prediction level, which satisfies the circular’s requirement that model outcomes be “consistent, unbiased, explainable and verifiable.”
Ongoing monitoring runs across 60 parameters continuously post-deployment, with configurable alert thresholds that trigger when performance metrics, input distributions, or output distributions cross defined boundaries. When a revalidation trigger is detected, the monitoring system flags the model and surfaces the relevant drift evidence for the validation team.
One-click exam documentation compiles model lineage, approval history, validation records, and monitoring logs into a package that can be presented to RBI supervisors without manual reconstruction.
For NBFCs working to meet the August 2024 circular requirements, this means the governance infrastructure and the AI operations infrastructure are the same system.
Book a discovery session with iTuring to see how the platform’s model governance layer maps to RBI’s requirements for your specific NBFC tier and model portfolio.


