TL;DR

  • South Africa fully aligned with Basel III post-crisis reforms as of 1 July 2025
  • POPIA Section 71(1) prohibits automated credit decisions based solely on AI profiling
  • NCA credit risk assessment and NCA affordability assessment are legally distinct obligations that a combined AI workflow must satisfy separately
  • Prudential Authority D12-2025 requires prior written approval before any IRB model change
  • TCF Outcome 2 requires models to be designed for identified customer groups, with evidence of suitability monitoring

A South African bank CRO building or reviewing an AI credit risk scoring model in 2026 faces three regulatory demands simultaneously, each from a different supervisory body, each with its own documentation requirements, and each carrying its own examination consequences.

The Prudential Authority requires that IRB credit risk scoring models are validated to Basel III standards, that model changes receive prior written approval, and that the model governance documentation is produced in a format suitable for PA examination. The FSCA requires that credit products are designed for and suitable for their intended customer groups, that automated decisions can be explained to customers, and that the bank can demonstrate TCF outcomes across the portfolio on an ongoing basis. The National Credit Regulator requires that credit risk assessment and affordability assessment are conducted as legally distinct exercises, that affordability is verified through direct financial evidence rather than predicted through proxy variables, and that reckless lending obligations are satisfied regardless of what the AI model predicts.

These three demands do not conflict with one another. They do, however, require a credit risk modelling architecture that satisfies all three simultaneously rather than optimising for one at the expense of the others. Most AI credit underwriting platforms built outside South Africa are optimised for one regulatory environment. Building a credit risk model that passes PA examination, satisfies FSCA conduct supervision, and complies with NCA obligations requires understanding precisely what each framework requires and where the interaction points are.

The SA-Specific Credit Risk Modelling Challenge

The challenge that distinguishes South African credit risk scoring from peer markets is not the sophistication of the regulatory framework. South Africa’s regulatory environment is sophisticated by global standards, but so are those of the US, UK, and EU. The distinguishing challenge is that South Africa operates a Twin Peaks regulatory model in which prudential oversight and market conduct oversight are held by separate authorities with separate examination powers, and AI credit risk scoring models fall under both simultaneously. Structured model risk management that satisfies both regulators requires purpose-built governance architecture, not the adaptation of a framework designed for a single-regulator environment.

The February 2026 Financial Regulation Journal analysis of credit risk and affordability in South African credit regulation identifies the core operational tension with precision: within the Twin Peaks model, combined automated credit risk and affordability assessments create supervisory challenges. A singular credit decision may implicate prudential considerations (credit risk) and conduct considerations (affordability), but the outcome does not indicate which assessment failed. Without clarity, it becomes difficult for supervisors to understand market behaviour or emerging risks.

This is the architectural challenge for AI credit risk models in South Africa. The model must produce outputs that are separable by regulatory framework: the PA examiner must be able to review the credit risk model’s discriminatory power, calibration, and validation methodology. The FSCA examiner must be able to review the model’s TCF compliance, its suitability for the target customer group, and its fair treatment outcomes. The NCR must be able to verify that affordability assessment was conducted as a separate, statute-compliant exercise rather than as a sub-function of the credit risk scoring model.

Basel III Post-Crisis Reforms: What Changed for SA Banks on 1 July 2025

South Africa fully aligned with the outstanding components of the Basel III post-crisis reforms as of 1 July 2025, making it one of the few jurisdictions to implement on the internationally agreed timeline. With banks deploying AI across thousands of credit decisions daily, the new model validation and governance obligations under D12-2025 are more operationally demanding than those of the previous framework. The most significant changes for credit risk model governance affect IRB approach banks directly.

The KPMG Basel III Post-Crisis Reforms analysis of South African implementation identifies the key credit risk changes: a more detailed risk weighting approach replacing flat risk weights in selected cases, reduced reliance on external credit ratings requiring banks to develop internal assessments, and enhanced requirements for IRB model validation and documentation.

The Prudential Authority’s D12-2025 Credit Risk Roadmap governs IRB model implementation in three phases. Phase 1 covers the immediate Basel III reform alignment. Phase 2 requires banks to assess IRB credit risk model gaps arising from Phase 1 implementation plus gaps from the revised credit restructures directive. Phase 3 covers full compliance over 18 to 42 months from the effective date.

Three D12-2025 requirements are directly relevant for banks deploying AI credit risk models.

Prior written approval for model changes. Banks using the IRB approach must obtain prior written approval from the PA before making any model changes to IRB credit risk models related to the Basel III reform rollout. For AI models that self-update, this creates the same governance challenge identified in SA-1: a model that changes parameters continuously requires explicit governance criteria defining which parameter updates constitute a model change requiring PA approval and which constitute operational recalibration that can proceed under notification rather than approval.
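
Those governance criteria can be made explicit as a classification rule. The sketch below is illustrative: the distinction between methodology changes and recalibrations follows D12-2025's tiering, but the 1 percent risk-weighted-asset materiality threshold is a hypothetical placeholder, not a regulatory figure.

```python
def classify_model_change(methodology_changed: bool,
                          rwa_impact_pct: float,
                          materiality_threshold_pct: float = 1.0) -> str:
    """Classify an IRB model update for D12-2025 governance purposes.

    Tier 1 (prior written PA approval): any change to model methodology,
    or a recalibration with a material risk-weighted-asset impact.
    Tier 2 (PA notification with validation documentation): recalibration
    with no methodology change and no material impact.

    The 1% RWA materiality threshold is illustrative, not a regulatory figure.
    """
    if methodology_changed or abs(rwa_impact_pct) >= materiality_threshold_pct:
        return "Tier 1: prior written PA approval required"
    return "Tier 2: PA notification with validation documentation"
```

For a self-updating model, every scheduled retraining event would pass through a gate of this kind before the updated parameters reach production.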

Tier 2 model changes require PA communication. Tier 2 changes, defined as recalibration of models without changes to model methodology and without material impact, must be communicated to the PA with model validation documentation presented. This is a notification requirement, not an approval requirement, but the documentation obligation is the same: the PA must receive validation evidence that the recalibrated model remains within its approved performance parameters.

Validation documentation standards. All IRB model validation documentation must be maintained in a format that can be presented to the PA on examination. For AI credit risk models, this means the validation report must address discriminatory power, calibration accuracy, stability, and conceptual soundness in terms that a PA examiner trained in Basel III validation methodology can assess. The Gini coefficient and Kolmogorov-Smirnov statistic remain the standard discriminatory power metrics expected in SA PA examinations, with the Brier score serving as a measure of calibration accuracy.

What Makes a Credit Risk Model TCF-Compliant

FSCA TCF Outcome 2 requires that products and services marketed and sold in the retail market are designed to meet the needs of identified customer groups and are targeted accordingly. For a credit risk scoring model, this creates two specific compliance obligations that most model governance frameworks do not formally address.

Customer group suitability design documentation. The model must be designed with specific customer groups in mind, and the design documentation must show that the model’s features, outputs, and policy rules are appropriate for those groups. A model designed for salaried urban professionals is not automatically suitable for self-employed rural borrowers. If a bank deploys the same model across both groups without documented suitability review, it faces TCF Outcome 2 exposure regardless of the model’s aggregate performance.

Ongoing suitability monitoring. TCF requires demonstration, not assertion. A model validated as suitable for its target customer group at deployment must be monitored on an ongoing basis to confirm that suitability is maintained as the customer population, economic conditions, and available credit products evolve. This means tracking approval rates, default rates, pricing outcomes, and complaint rates disaggregated by customer segment, and documenting the monitoring process in a form reviewable by FSCA.
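
A minimal sketch of such segment-level monitoring, in pure Python: the record layout, baseline approval rates, and 5-percentage-point drift tolerance are illustrative assumptions, not FSCA-prescribed values.

```python
def suitability_monitor(decisions, baselines, tolerance=0.05):
    """Aggregate TCF Outcome 2 monitoring metrics per customer segment.

    decisions: list of dicts with 'segment', 'approved' (bool), and
               'defaulted' (bool, or None if not yet observed).
    baselines: dict segment -> approval rate documented at deployment.
    Flags a segment when its approval rate drifts from the documented
    baseline by more than `tolerance` (an illustrative threshold).
    """
    report = {}
    for seg in {d["segment"] for d in decisions}:
        rows = [d for d in decisions if d["segment"] == seg]
        approved = [d for d in rows if d["approved"]]
        observed = [d for d in approved if d["defaulted"] is not None]
        approval_rate = len(approved) / len(rows)
        default_rate = (sum(d["defaulted"] for d in observed) / len(observed)
                        if observed else None)
        report[seg] = {
            "approval_rate": approval_rate,
            "default_rate": default_rate,
            "drift_flag": abs(approval_rate - baselines[seg]) > tolerance,
        }
    return report
```

A production version would add pricing and complaint metrics per segment, but the structure is the same: metrics disaggregated by customer group, compared against the documented deployment baseline.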

TCF Outcome 4 adds a further dimension for AI credit models: customers must be given clear information and kept appropriately informed before, during, and after the point of sale. For automated credit decisioning, this means that customers who are declined must receive clear information about the reason for the decision, in terms they can understand, before they leave the interaction. The obligation extends beyond the Adverse Action Notice concept familiar from US practice: it covers the full information environment within which the customer experiences the credit decision.

POPIA Section 71: The Automated Decision-Making Obligation

POPIA Section 71 is the most frequently misunderstood compliance requirement in South African AI credit underwriting. The misunderstanding runs in both directions: some institutions treat it as an absolute prohibition on automated credit decisioning, while others treat it as a nominal disclosure requirement.

The Webber Wentzel analysis of POPIA implications for AI published in March 2026 sets out the operative text precisely: Section 71(1) prohibits a bank from making a decision to grant or reject a loan application based solely on the profile created by the AI system. The key word is solely. Credit risk decisioning is permitted under POPIA Section 71(2) where it is authorised by a law or code of conduct specifying appropriate protective measures, or where it is taken in connection with the conclusion or execution of a contract. Most credit agreements satisfy the Section 71(2)(a) contract exception. The critical requirement is in Section 71(3).

Section 71(3)(b) requires that the responsible party provide the data subject with sufficient information about the underlying logic of the automated processing to enable them to make representations. The Swart Law analysis of this obligation is precise: compliance requires clear and understandable information about the logic of the automated decision-making. Providing the actual AI code does not satisfy the requirement, because the code itself does not meet the standard of clear and understandable information.

In operational terms, this means every AI credit decision that results in a decline or a materially worse credit offer than the customer sought must be accompanied by an explanation of the model’s reasoning in plain language. SHAP-based explanations that identify the top contributing features to the decision, translated into customer-facing language, currently represent the most practical mechanism for satisfying Section 71(3)(b) at scale. The explanation must be provided proactively at the time of the decision, not reactively on request.
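
As a sketch of that mechanism, assuming a hypothetical feature set and wording: SHAP contributions that pushed the decision toward decline are ranked, and the largest are mapped to pre-approved plain-language statements.

```python
def plain_language_reasons(shap_values, translations, top_n=3):
    """Translate the largest adverse SHAP contributions into customer-facing
    reason statements for a POPIA Section 71(3)(b) decline explanation.

    shap_values: dict feature -> SHAP contribution toward default probability
                 (positive values pushed the decision toward decline).
    translations: dict feature -> pre-approved plain-language statement.
    Feature names and wording here are illustrative.
    """
    adverse = sorted(((v, f) for f, v in shap_values.items() if v > 0),
                     reverse=True)[:top_n]
    return [translations[f] for _, f in adverse]
```

The translation table would be authored and legally reviewed per feature, so the explanation delivered to the customer is deterministic and auditable rather than generated free-form.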

The NCA Affordability Assessment: A Separate Legal Exercise

The February 2026 Financial Regulation Journal analysis establishes the regulatory architecture that AI credit risk models must respect: credit risk assessment and affordability assessment are legally distinct obligations with different purposes, beneficiaries, and legal consequences.

Credit risk scoring is predictive and model-based: it estimates the probability that a borrower will default. Affordability assessment is statutory and evidence-based: it determines whether a borrower can meet the proposed repayment obligations without becoming over-indebted under the NCA’s definition. The National Credit Act requires affordability assessment through verified financial evidence, specifically existing obligations identified through a credit bureau file and gross income verified through payslips, bank statements, or other documentary evidence.

Underwriting automation can accelerate and systematise the affordability assessment process, ingesting payslips digitally, extracting income and obligation data from bank statements via Account Aggregator equivalents, and applying the NCA minimum expense norms automatically, but it cannot substitute a predictive model for the statutory verification requirement. The NCR has consistently resisted attempts to treat credit risk modelling as a substitute for affordability assessment. The fact that a predictive model incorporates more data variables than a traditional affordability calculation does not mean the model satisfies the NCA's affordability obligation. Predicting that a borrower is unlikely to default does not establish that they can repay instalments without becoming over-indebted.
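
The arithmetic core of the statutory exercise can be sketched as follows. The structure mirrors the discretionary-income logic of the affordability regulations, but the function and figures are illustrative; the actual minimum expense norm tables are published by the NCR and are not reproduced here.

```python
def nca_affordability_check(verified_net_income,
                            existing_obligations,
                            minimum_expense_norm,
                            proposed_instalment):
    """Affordability test using verified evidence, kept separate from
    credit risk scoring per the NCA.

    All inputs must come from documentary evidence (payslips, bank
    statements, credit bureau file), never from model predictions.
    Figures and the function itself are an illustrative sketch.
    """
    discretionary = (verified_net_income
                     - existing_obligations
                     - minimum_expense_norm)
    return {
        "discretionary_income": discretionary,
        "affordable": discretionary >= proposed_instalment,
    }
```

The point of the sketch is what is absent: no model score appears anywhere in the calculation, which is exactly what makes the assessment separable on examination.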

For South African banks deploying AI credit underwriting, this distinction requires a specific architectural decision: the AI system must produce two separable outputs. The credit risk score satisfies the prudential requirement for discriminatory power and risk-based pricing. The affordability assessment satisfies the NCA’s statutory requirement for verified financial capacity. The two outputs must be traceable to their respective input sources in the audit trail, so that a combined decline or approval can be attributed to the correct regulatory assessment by any supervisory authority reviewing the decision.
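
One way to sketch such a separable decision record, with illustrative field names, is a structure that carries both outcomes and attributes a combined decline to the specific assessment that failed:

```python
from dataclasses import dataclass, field

@dataclass
class CreditDecisionRecord:
    """Audit-trail record keeping the prudential and NCA outcomes separable.

    Field names are illustrative; the design point is that a combined
    decline is attributable to the assessment that failed.
    """
    application_id: str
    credit_risk_score: float        # PD estimate from the IRB model
    credit_risk_pass: bool          # against the bank's risk appetite cutoff
    affordability_pass: bool        # NCA verified-evidence assessment
    affordability_evidence: list = field(default_factory=list)

    def decision(self) -> str:
        if self.credit_risk_pass and self.affordability_pass:
            return "approve"
        failed = []
        if not self.credit_risk_pass:
            failed.append("credit_risk")
        if not self.affordability_pass:
            failed.append("affordability")
        return "decline: " + "+".join(failed)
```

A PA examiner reviews the credit risk fields, the NCR reviews the affordability fields and evidence list, and neither has to reverse-engineer a single combined score.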

The August 2025 draft amendments to the National Credit Act Affordability Assessment Regulations tighten several affordability verification requirements, including expanded minimum expense norms and enhanced requirements for verifying existing obligations. Banks deploying AI credit workflows must monitor these regulatory amendments and update their affordability assessment modules accordingly, independently of any credit risk model changes.

IRB Model Validation: What PA Examiners Assess

The Prudential Authority’s Basel III model validation standards for IRB credit risk models require examination across four technical dimensions. Understanding what PA examiners assess is the starting point for designing a model risk management program that passes examination rather than responding to it. Each dimension must be addressed in the validation documentation before the model enters production, and must be revisited at every material change event under the D12-2025 governance framework.

Discriminatory power. The model’s ability to rank borrowers by default probability, assessed using the Gini coefficient, the Area Under the ROC Curve, and the Kolmogorov-Smirnov statistic. The PA expects Gini coefficients above 40 percent for retail portfolios and above 30 percent for wholesale portfolios as minimum acceptable performance. For AI models, discriminatory power must be assessed on out-of-time test samples drawn from a different time period than the training data, not just out-of-sample test sets from the same period.
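
These metrics have compact reference definitions. In the sketch below, AUC is estimated as the Mann-Whitney probability that a defaulter receives a riskier score than a non-defaulter, the Gini coefficient follows the standard identity Gini = 2·AUC − 1, and KS is the maximum gap between the two empirical score distributions.

```python
def auc_from_scores(scores_bad, scores_good):
    """Mann-Whitney estimate of AUC: the probability that a defaulter
    scores higher (riskier) than a non-defaulter, ties counted half."""
    pairs = [(b, g) for b in scores_bad for g in scores_good]
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0 for b, g in pairs)
    return wins / len(pairs)

def gini(scores_bad, scores_good):
    """Gini coefficient via the identity Gini = 2 * AUC - 1."""
    return 2 * auc_from_scores(scores_bad, scores_good) - 1

def ks_statistic(scores_bad, scores_good):
    """Kolmogorov-Smirnov statistic: maximum gap between the empirical
    CDFs of the defaulter and non-defaulter score distributions."""
    thresholds = sorted(set(scores_bad) | set(scores_good))
    def cdf(sample, t):
        return sum(s <= t for s in sample) / len(sample)
    return max(abs(cdf(scores_bad, t) - cdf(scores_good, t))
               for t in thresholds)
```

The O(n·m) pairwise AUC estimate is fine for a definition sketch; validation tooling on full portfolios would use a rank-based implementation.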

Calibration accuracy. The model’s predicted default rates must be compared against actual observed default rates across rating grades and portfolio segments. Systematic over-prediction or under-prediction in specific segments is a calibration finding that requires model adjustment before PA approval. For AI models that update parameters, calibration must be verified after each material update.
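
A sketch of a grade-level calibration check along these lines: the 1.25x tolerance ratio used as a review trigger here is illustrative, not a PA threshold.

```python
def calibration_report(grades, tolerance_ratio=1.25):
    """Compare predicted PD against observed default rate per rating grade.

    grades: list of dicts with 'grade', 'predicted_pd', 'n' (accounts),
            and 'defaults' (observed defaults).
    Flags a grade when the observed default rate deviates from predicted
    PD by more than the (illustrative) tolerance ratio either way.
    """
    report = []
    for g in grades:
        observed = g["defaults"] / g["n"]
        ratio = observed / g["predicted_pd"]
        report.append({
            "grade": g["grade"],
            "predicted_pd": g["predicted_pd"],
            "observed_dr": observed,
            "flag": ratio > tolerance_ratio or ratio < 1 / tolerance_ratio,
        })
    return report
```

A flagged grade becomes a calibration finding requiring adjustment before approval; a production check would add statistical tests (e.g. a binomial test per grade) rather than a fixed ratio.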

Stability. Population Stability Index analysis tracking whether the distribution of model scores changes over time as the credit portfolio evolves. Significant score distribution shift is a governance trigger requiring investigation into whether the model’s predictive relationships remain valid in the current portfolio environment.
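
The PSI itself is a short computation over score-band proportions. The usual interpretation bands (below 0.10 stable, 0.10 to 0.25 monitor, above 0.25 investigate) are industry convention rather than a PA standard.

```python
import math

def population_stability_index(expected_props, actual_props):
    """PSI across score bands: sum of (a - e) * ln(a / e).

    expected_props: proportion per score band in the development sample.
    actual_props:   proportion per score band in the current portfolio.
    Bands with zero proportion must be merged before calling.
    """
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_props, actual_props))
```

A PSI above the investigation threshold is the governance trigger the paragraph above describes: it does not by itself invalidate the model, but it obliges the bank to establish why the portfolio has shifted.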

Conceptual soundness. The model’s methodology must be sound and its feature relationships must be directionally consistent with credit risk theory. For AI models, conceptual soundness assessment uses SHAP analysis to verify that feature contributions to credit decisions are directionally and economically coherent. A model that assigns high default probability to borrowers with more years of employment, or lower default probability to borrowers with higher existing debt-to-income ratios, has conceptual soundness issues that SHAP analysis will surface during validation.
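
A sketch of an automated directionality check of this kind, assuming per-applicant feature values and SHAP contributions are available: it compares the empirical co-movement of each feature with its SHAP contribution against the sign credit risk theory expects. Feature names and expected signs are illustrative.

```python
def directional_coherence_check(feature_values, shap_values, expected_signs):
    """Flag features whose empirical SHAP direction contradicts credit
    risk theory.

    feature_values / shap_values: dict feature -> per-applicant lists.
    expected_signs: +1 if increasing the feature should raise predicted
    PD (e.g. debt-to-income), -1 if it should lower it (e.g. years
    employed). Returns the features that contradict expectation.
    """
    def direction(xs, ys):
        # Sign of the sample covariance between feature and SHAP values.
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return 1 if cov > 0 else -1 if cov < 0 else 0

    return [f for f, sign in expected_signs.items()
            if direction(feature_values[f], shap_values[f]) == -sign]
```

Any feature returned by the check becomes a conceptual soundness finding for the validation report: either the relationship is explainable and documented, or the feature is re-engineered before approval.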

How iTuring Addresses This

iTuring’s AI credit risk scoring platform is designed for the specific multi-authority compliance environment South African banks operate in under the Twin Peaks regulatory model. The platform’s model risk management architecture produces governance documentation that satisfies PA, FSCA, and NCR examination requirements from a single integrated system, without requiring the institution to maintain separate governance processes for each regulator.

The platform produces two separable model outputs: a credit risk score satisfying PA IRB model validation requirements, and an affordability assessment module that processes verified financial evidence in compliance with NCA Regulation 23A standards. The audit trail for every credit decision separately attributes the credit risk outcome and the affordability outcome, enabling examination by either the PA, the FSCA, or the NCR without requiring the institution to reconstruct the decision logic from a single combined output.

SHAP-based account-level explanations are generated automatically for every credit risk decisioning event, with customer-facing translations available in all 11 official South African languages to satisfy POPIA Section 71(3)(b) requirements. TCF Outcome 2 suitability documentation is maintained for each customer segment the model is deployed against, with ongoing monitoring of approval rates, default rates, and pricing outcomes disaggregated by segment.

IRB model validation documentation is produced in the format required by D12-2025, covering discriminatory power, calibration, stability, and conceptual soundness in the structure PA examiners expect. Change governance workflows support the D12-2025 prior written approval process for Tier 1 changes and the PA communication process for Tier 2 recalibrations.

Regulatory Disclaimer
This article is for informational purposes only and does not constitute legal or compliance advice. Prudential Authority Basel III requirements under D12-2025, FSCA TCF obligations, POPIA Section 71 requirements, and NCA affordability assessment regulations are subject to change and ongoing regulatory development. The draft amendments to the NCA Affordability Assessment Regulations published in August 2025 have not been finalised as at the publication date of this article. Consult qualified South African legal and compliance professionals for guidance specific to your institution.

Sources: Prudential Authority D12-2025 Credit Risk Roadmap | KPMG: Basel III Post-Crisis Reforms South Africa | BIS: 2025 Principles for Credit Risk Management | Financial Regulation Journal: Credit Risk and Affordability SA 2026 | Webber Wentzel: AI Has POPIA Implications March 2026 | POPIA.co.za: Section 71 Automated Decision-Making | Swart Law: Transparency and Automated Decision-Making POPIA | FSCA: Treating Customers Fairly | Baker McKenzie: SARB and FSCA AI Insights | Dvara Research: TCF Policy South Africa | Mondaq: NCA Affordability Assessment Draft Amendments | EvalFin: Affordability Assessment Guide South Africa | SARB/FSCA: AI in SA Financial Sector 2025