
RBI AI Guidelines for Banks
Compliance Team Briefing [2025]

RBI AI guidelines for banks are not contained in a single direction — they are distributed across the Master Direction on IT Governance, the Fair Practices Code, Digital Lending Guidelines, and the Outsourcing Framework. Indian banks deploying artificial intelligence in credit scoring, fraud detection, customer-facing chatbots, and risk management must synthesise compliance obligations from at least six RBI frameworks, plus the Digital Personal Data Protection Act 2023 which introduces automated decision-making rights for Data Principals.

This briefing maps every applicable RBI requirement to AI use cases in banking, identifies the DPDP intersection points, and provides a compliance action plan for bank compliance teams. By Advocate Subodh Bajpai, who advises banks and NBFCs on regulatory technology compliance.

Subject: AI governance and compliance framework for Indian banks
Applicable Law: RBI Master Direction on IT Governance (2023), Fair Practices Code, DPDP Act 2023
Who Must Comply: Scheduled commercial banks, small finance banks, payments banks, NBFCs
Key Obligation: Human-in-the-loop for AI credit decisions, model explainability, data governance
Maximum Penalty: Up to Rs. 250 crore (DPDP) + RBI monetary penalties under the Banking Regulation Act

Table of Contents

  1. The RBI AI Compliance Framework — No Single Direction
  2. AI Use Cases in Banking and Applicable RBI Requirements
  3. Master Direction on IT Governance — What It Requires
  4. DPDP Act and Automated Decision-Making Rights
  5. Model Explainability — Practical Requirements for Banks
  6. Compliance Action Plan for Bank AI Systems
  7. FAQs — RBI AI Compliance
Regulatory Landscape

The RBI AI Compliance Framework — No Single Direction

Indian banks seeking a single RBI circular on AI governance will not find one. The Reserve Bank of India has chosen a principles-based, sectoral approach rather than issuing a standalone AI regulation. This means AI compliance for banks requires mapping across multiple existing frameworks — an exercise most bank compliance teams have not yet completed.

The primary frameworks that collectively govern AI in banking are: (1) the Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices dated 7 April 2023, which establishes IT governance including emerging technology adoption; (2) the Fair Practices Code and its revisions, which mandate transparency in lending decisions; (3) the Digital Lending Guidelines dated 2 September 2022, which impose disclosure requirements on digital lending models; (4) the Master Direction on Outsourcing of Information Technology Services dated 10 April 2023, which governs third-party AI vendors; (5) the Charter of Customer Rights dated 3 December 2014, which guarantees the right to fair treatment and transparency; and (6) the Framework on Regulation of Payment Aggregators and Payment Gateways, which applies to AI in payment systems.

Overlay on these RBI frameworks the Digital Personal Data Protection Act 2023 — which applies to all personal data processing including AI-driven processing by banks — and you have the complete regulatory envelope. Banks that treat AI governance as solely an IT function, or solely a legal function, will find gaps. AI compliance requires coordinated action across IT governance, legal, compliance, risk management, and business units.

AI Applications

AI Use Cases in Banking and Applicable RBI Requirements

AI Credit Scoring and Loan Underwriting

Banks increasingly use machine learning models for credit scoring, replacing or supplementing traditional scorecards. The RBI Fair Practices Code requires that loan rejection reasons be communicated to borrowers in writing. The Digital Lending Guidelines require that the names of credit information companies whose data was used be disclosed. Under DPDP Section 11, borrowers can challenge purely automated credit decisions. Compliance requirement: human-in-the-loop review for AI rejections, documented model validation, and borrower-facing explainability.

Fraud Detection and Anti-Money Laundering

AI-driven transaction monitoring, fraud scoring, and suspicious transaction detection are now standard in Indian banks. The Master Direction on KYC and the PMLA guidelines require that suspicious transaction reports (STRs) be based on reasonable grounds — an AI flag alone may not constitute reasonable grounds without human review. The IT Governance Direction requires that automated monitoring systems be subject to regular validation and testing. Banks must maintain audit trails of AI-driven fraud alerts and the human decisions made on them.
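The audit trail described above can be sketched as a simple append-only record that pairs each AI fraud alert with the human decision taken on it. This is a minimal illustration in plain Python; the field names and decision labels are assumptions for this sketch, not values prescribed by the RBI or PMLA guidelines.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical record structure: a minimal sketch of the audit trail the
# IT Governance Direction expects for AI-driven fraud alerts and the human
# decisions made on them. Field names are illustrative, not RBI-prescribed.
@dataclass
class FraudAlertAuditRecord:
    alert_id: str
    account_id: str
    model_id: str            # which model version raised the flag
    model_score: float       # raw AI fraud score
    threshold: float         # score threshold that triggered the alert
    reviewer_id: str         # human analyst who reviewed the alert
    decision: str            # e.g. "STR_FILED", "FALSE_POSITIVE", "ESCALATED"
    rationale: str           # the "reasonable grounds" recorded by the reviewer
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)

record = FraudAlertAuditRecord(
    alert_id="ALRT-2025-00042",
    account_id="ACC-9911",
    model_id="txn-monitor-v3.2",
    model_score=0.91,
    threshold=0.85,
    reviewer_id="analyst-17",
    decision="STR_FILED",
    rationale="Structuring pattern confirmed on manual review of 14 transactions.",
)
print(record.to_json())
```

The key design point is that the AI score and the human rationale are recorded together: the STR rests on the reviewer's documented reasonable grounds, with the model flag as the trigger.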

Customer-Facing Chatbots and Virtual Assistants

RBI-regulated chatbots that interact with customers on banking products are subject to the Fair Practices Code, the Integrated Ombudsman Scheme, and DPDP consent requirements. The chatbot must not make misleading representations about products. Customer data processed through the chatbot requires DPDP-compliant consent. The chatbot infrastructure — whether hosted internally or through a vendor — must comply with the IT Governance Direction and the Outsourcing Direction.

AI in Risk Management and Asset Classification

Banks using AI for internal risk rating, early warning signal detection, and NPA classification must ensure model governance under the IT Governance Direction. The RBI Master Direction on Classification, Valuation and Operation of Investment Portfolio requires that internal rating models be validated. AI models used for provisioning calculations must be auditable and their outputs explainable to statutory auditors and RBI inspection teams.

Core Direction

Master Direction on IT Governance — What It Requires

RBI/2023-24/04 · April 7, 2023
Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices — applicable to all scheduled commercial banks, small finance banks, and payments banks.

The IT Governance Direction is the closest the RBI has come to addressing AI governance. Chapter IV on IT and Information Security Risk Management requires banks to identify and manage risks arising from emerging technologies — explicitly including artificial intelligence and machine learning. Banks must document risk assessments for each AI deployment, including model risk, data quality risk, and vendor dependency risk.

Chapter V mandates a comprehensive IT audit framework. AI systems fall within the audit scope. Banks must ensure that their internal audit teams — or external auditors — have the competence to audit AI models, including validation of training data, model drift monitoring, and output accuracy. The Direction does not prescribe specific AI audit standards, but the expectation is that banks adopt internationally recognised frameworks such as NIST AI RMF or ISO/IEC 42001.

Chapter III on IT Governance requires board-level oversight of technology strategy, including AI adoption. The Board or its IT Committee must approve significant AI deployments. The Direction requires that IT governance policies address emerging technology adoption — banks that deploy AI without board-level or IT Committee approval are in non-compliance.

For third-party AI systems — including cloud-based AI scoring engines, vendor chatbot platforms, and outsourced fraud detection — the Outsourcing Direction applies in parallel. Banks must ensure that AI vendors provide audit access, that data processing occurs in compliance with RBI data localisation requirements, and that the bank retains ultimate responsibility for AI outputs regardless of the vendor relationship.

Data Protection Overlay

DPDP Act and Automated Decision-Making Rights

DPDP Act 2023 · Section 11
The Data Principal shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or significantly affects them.

Section 11 of the DPDP Act 2023 is the provision that most directly impacts AI in banking. It gives every Data Principal — every bank customer — the right to challenge decisions made solely by automated processing. For banks, this means that AI-only credit decisions, automated account closures, automated KYC rejections, and AI-driven product recommendations that have material financial impact all require a human review mechanism.

The practical compliance requirement is a documented human-in-the-loop process. When a customer exercises their Section 11 right, the bank must be able to route the AI decision to a human reviewer who can assess the decision on its merits, not merely rubber-stamp the algorithm. This requires that AI models produce interpretable outputs — or that the bank maintains a parallel manual review capability.
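The routing logic described above can be sketched as a simple gate that decides whether a decision may be finalised automatically or must land in a human review queue first. All names here (`Decision`, `route_for_review`, the queue labels) are hypothetical: Section 11 prescribes the right, not the implementation.

```python
from dataclasses import dataclass

# Minimal sketch of a Section 11 human-in-the-loop gate.
# Field and queue names are assumptions for illustration only.
@dataclass
class Decision:
    customer_id: str
    outcome: str              # "APPROVE" or "REJECT"
    fully_automated: bool     # no human judgment in the decision path
    materially_affects: bool  # e.g. credit rejection, account closure

def route_for_review(d: Decision, customer_challenged: bool = False) -> str:
    """Return the queue a decision should land in before it becomes final."""
    if d.fully_automated and d.materially_affects and (
        d.outcome == "REJECT" or customer_challenged
    ):
        # A human reviewer must assess the decision on its merits,
        # not merely rubber-stamp the algorithm.
        return "HUMAN_REVIEW_QUEUE"
    return "AUTO_FINALISE"

# An automated, materially-affecting rejection is routed to a human;
# an unchallenged approval is not.
print(route_for_review(Decision("C-1001", "REJECT", True, True)))
print(route_for_review(Decision("C-1002", "APPROVE", True, True)))
print(route_for_review(Decision("C-1002", "APPROVE", True, True),
                       customer_challenged=True))
```

A bank would attach this kind of gate at the point where the model output becomes a customer-facing outcome, so that a Section 11 challenge can always re-route the decision.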

Section 6 consent requirements add another layer. Using customer data to train AI models, to run credit scoring algorithms, or to perform behavioural profiling requires consent that is free, specific, informed, unconditional, and unambiguous. The consent notice under Section 5 must describe the AI processing in terms the customer can understand. Generic consent to “data processing for banking purposes” is unlikely to satisfy the specificity requirement for AI model training.

Banks that process children’s data through AI systems — for example, in insurance linked to education loans — must comply with Section 9, which requires verifiable parental consent and prohibits tracking, behavioural monitoring, and targeted advertising directed at children. AI-driven marketing platforms must be configured to exclude minors.

Technical Compliance

Model Explainability — Practical Requirements for Banks

Model explainability is where regulatory compliance meets technical implementation. Indian banks must be able to explain AI decisions to three audiences: (1) the customer, who under DPDP and the Fair Practices Code has the right to understand why a decision was made; (2) the RBI inspection team, which during on-site inspections will assess whether AI systems comply with governance directions; and (3) the internal audit function, which must validate AI model outputs.

For credit scoring models, global best practice — and the direction Indian regulation is heading — requires at minimum: feature importance rankings (which factors most influenced the score), counterfactual explanations (what would have to change for the outcome to be different), and confidence levels (how certain the model is of its prediction). Banks using black-box deep learning models for credit decisions face a higher regulatory risk than those using inherently interpretable models such as logistic regression and decision trees, or models paired with post-hoc explanation methods, such as gradient boosting with SHAP values.
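For an inherently interpretable model, a feature importance ranking falls directly out of the model's structure. The sketch below shows this for a logistic scorecard in plain Python; the coefficients, feature names, and applicant values are invented for illustration — a real scorecard would come from model development and validation.

```python
import math

# Invented scorecard for illustration only. Positive coefficients raise
# the approval probability; negative ones lower it.
COEFFS = {
    "debt_to_income": -2.1,
    "months_since_default": 0.9,
    "utilisation_ratio": -1.4,
    "account_age_years": 0.5,
}
INTERCEPT = 0.3

def score_and_explain(applicant: dict) -> tuple[float, list[tuple[str, float]]]:
    """Logistic score plus per-feature contributions, ranked by |impact|."""
    contributions = {f: COEFFS[f] * applicant[f] for f in COEFFS}
    logit = INTERCEPT + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return prob, ranked

prob, ranked = score_and_explain({
    "debt_to_income": 0.65,
    "months_since_default": 0.2,
    "utilisation_ratio": 0.9,
    "account_age_years": 3.0,
})
print(f"approval probability: {prob:.2f}")
for feature, impact in ranked:
    print(f"  {feature}: {impact:+.2f}")
```

The ranked contributions are exactly the "which factors most influenced the score" output the Fair Practices Code explanation needs; a customer-facing letter would translate the top entries into plain language.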

The Responsible AI principles articulated by NITI Aayog — safety, fairness, non-discrimination, transparency, accountability, and privacy — are not legally binding, but they signal the direction of Indian AI regulation. Banks that adopt these principles proactively will be better positioned when the RBI issues more prescriptive AI governance directions, which RBI Deputy Governor T. Rabi Sankar has publicly indicated are under consideration.

Documentation is critical. Every AI model deployed in a bank should have a model card documenting: the model’s purpose, training data characteristics, performance metrics, known limitations, bias testing results, and the human review process. This documentation serves both regulatory compliance (IT Governance Direction audit requirements) and legal defence (if a customer challenges an AI decision before the Banking Ombudsman or Data Protection Board).
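The model card items listed above can be captured in a simple structured record. The field set mirrors the paragraph; the structure and example values are assumptions for this sketch, not an RBI template.

```python
from dataclasses import dataclass, field

# Minimal model-card sketch. Fields mirror the documentation items the
# briefing lists; names and example values are illustrative assumptions.
@dataclass
class ModelCard:
    model_id: str
    purpose: str
    training_data: str              # provenance and period of training data
    performance_metrics: dict       # e.g. {"auc": 0.81, "ks": 0.42}
    known_limitations: list = field(default_factory=list)
    bias_testing: str = ""          # summary of fairness tests performed
    human_review_process: str = ""  # how Section 11 review is routed

card = ModelCard(
    model_id="credit-score-v4.1",
    purpose="Retail unsecured loan underwriting",
    training_data="Jan 2021 - Dec 2023 application outcomes plus CIC bureau data",
    performance_metrics={"auc": 0.81, "ks": 0.42},
    known_limitations=["Thin-file applicants under-represented in training data"],
    bias_testing="Disparate impact ratio checked across age and geography cohorts",
    human_review_process="All rejections routed to credit officer review queue",
)
print(card.model_id, "->", card.purpose)
```

Keeping the card as structured data rather than free text makes it straightforward to export for an internal audit pack or an RBI inspection request.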

Implementation Roadmap

Compliance Action Plan for Bank AI Systems

01

AI Inventory and Risk Assessment

Catalogue every AI system deployed — credit scoring, fraud detection, chatbots, risk models, marketing automation. For each, document the data inputs, processing logic, output use, and downstream decisions. Assess each against IT Governance Direction risk management requirements.

02

DPDP Data Mapping for AI Systems

Map personal data flows through every AI system. Identify lawful bases for processing (consent vs. legitimate use). Review consent mechanisms for specificity — does the consent notice specifically describe AI processing? Implement Section 11 human review mechanisms for all customer-affecting AI decisions.

03

Model Governance Framework

Establish a model governance framework covering model development, validation, deployment, monitoring, and retirement. Assign model owners for each AI system. Implement model drift monitoring. Create model cards for every deployed model. This framework must be board-approved under the IT Governance Direction.
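One concrete way to implement the drift monitoring step is the Population Stability Index (PSI), a common statistic for comparing a model's live score distribution against its validation baseline. The sketch below is a minimal pure-Python version; the bucketing and the conventional 0.1 / 0.25 thresholds are industry practice, not an RBI requirement.

```python
import math

# Population Stability Index (PSI): a common drift statistic a model
# governance framework might adopt. Bucketing scheme is illustrative.
def psi(expected: list[float], actual: list[float], buckets: int = 10) -> float:
    """PSI between a baseline score distribution and a live one."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[-1] = float("inf")  # catch live scores above the baseline max

    def proportions(values):
        counts = [0] * buckets
        for v in values:
            for i in range(buckets):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # below the baseline minimum
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                        # scores at deployment
live_stable = [i / 100 for i in range(100)]                     # unchanged distribution
live_shifted = [min(i / 100 + 0.3, 0.99) for i in range(100)]   # drifted upward

print(f"stable PSI:  {psi(baseline, live_stable):.3f}")
print(f"shifted PSI: {psi(baseline, live_shifted):.3f}")
```

Under the conventional thresholds, PSI below 0.1 suggests no action, 0.1 to 0.25 warrants investigation, and above 0.25 typically triggers model revalidation — a natural escalation rule for the model owner assigned under this framework.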

04

Vendor AI Due Diligence

For third-party AI systems, conduct due diligence under the Outsourcing Direction. Verify data localisation compliance. Ensure audit access to vendor AI systems. Confirm that vendor contracts include RBI inspection rights. Assess vendor AI model governance practices.

05

Explainability Infrastructure

Implement explainability tools for customer-facing AI decisions. At minimum, credit scoring models must produce feature importance and counterfactual explanations. Train customer service teams to communicate AI decisions in plain language. Create escalation paths for Section 11 challenges.
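For a linear scorecard, the counterfactual explanations required above have a closed form: the change in each feature (holding the others fixed) needed to cross the approval threshold. The sketch below illustrates this; the weights, intercept, and threshold are invented for illustration.

```python
# Minimal counterfactual sketch for a linear credit score. Weights and the
# logit cut-off are illustrative assumptions, not a real scorecard.
WEIGHTS = {"debt_to_income": -2.0, "utilisation_ratio": -1.5, "account_age_years": 0.4}
INTERCEPT = 0.2
THRESHOLD = 0.0  # approve when the logit score is >= 0

def counterfactuals(applicant: dict) -> dict:
    """Per-feature delta that would lift the score to the approval threshold."""
    score = INTERCEPT + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    shortfall = THRESHOLD - score
    if shortfall <= 0:
        return {}  # already approved; no counterfactual needed
    # For a linear model, changing feature f alone by shortfall / weight_f
    # moves the score exactly to the threshold.
    return {f: shortfall / WEIGHTS[f] for f in WEIGHTS}

deltas = counterfactuals({
    "debt_to_income": 0.7,
    "utilisation_ratio": 0.8,
    "account_age_years": 1.0,
})
for feature, delta in deltas.items():
    print(f"{feature}: change by {delta:+.2f} to reach approval")
```

A customer-facing letter would present only the feasible deltas (a borrower cannot retroactively add five years of account age), which is why the escalation path to a human reviewer remains essential.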

06

Board Reporting and Audit

Establish quarterly board reporting on AI deployments, incidents, and compliance status. Include AI systems in the internal audit plan. Ensure auditors have competence to assess AI model governance. Prepare for RBI on-site inspection questions on AI governance.

Frequently Asked Questions

FAQs — RBI AI Compliance

Does the RBI have a standalone AI regulation for banks?

No. As of 2025, the RBI does not have a standalone AI regulation. AI governance for banks is addressed through a combination of the Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices (2023), the Outsourcing Directions, the Fair Practices Code, the Customer Protection Framework, and various circulars on digital lending. Banks must synthesise compliance across these frameworks when deploying AI systems. A dedicated AI framework is expected but has not been issued.

Can a bank deny a loan based solely on an AI credit scoring model?

Not without safeguards. The RBI Fair Practices Code requires banks to communicate reasons for loan rejection in writing. If the rejection is based on an AI credit scoring model, the bank must still provide intelligible reasons — not merely state that the model produced a negative score. Additionally, under Section 11 of the DPDP Act 2023, Data Principals have the right not to be subject to decisions based solely on automated processing that significantly affect them. Banks must therefore maintain human-in-the-loop review mechanisms for AI-driven credit decisions, particularly for rejection cases.

What is the RBI Master Direction on IT Governance and how does it apply to AI?

The Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices (April 2023) applies to all scheduled commercial banks, small finance banks, and payments banks. It mandates IT governance frameworks, risk management practices, cybersecurity controls, and third-party IT risk management. While it does not mention AI explicitly, it covers "emerging technology" adoption — which includes AI and machine learning. Banks deploying AI must ensure their AI systems comply with the IT governance framework, undergo risk assessment, have documented change management, and are subject to internal audit.

Are banks required to explain AI model decisions to customers?

Yes, in practice. The RBI Fair Practices Code, the Charter of Customer Rights (2014), and the Digital Lending Guidelines (September 2022) collectively require that customers receive clear, intelligible explanations for decisions that affect them — including credit decisions, fraud alerts, and account restrictions. When these decisions are AI-driven, banks must be able to explain the key factors that led to the decision in a language the customer understands. This effectively requires model explainability or interpretability frameworks for customer-facing AI applications.

How does the DPDP Act intersect with AI in banking?

The Digital Personal Data Protection Act 2023 creates several intersection points with AI in banking. Section 4 (lawful purpose) requires that personal data processing through AI systems have a clear, lawful purpose. Section 5 (notice) requires banks to inform customers before using their data for AI processing. Section 6 (consent) requires specific consent for AI-driven profiling unless the processing falls under legitimate use exemptions. Section 11 (right regarding automated decisions) gives customers the right to challenge decisions based solely on automated processing. Banks must map these DPDP obligations onto every AI use case.

What penalties do banks face for non-compliant AI systems?

Penalties come from multiple regulators. The RBI can impose monetary penalties under Section 47A of the Banking Regulation Act for non-compliance with its directions — penalties that can run into crores. The Data Protection Board under DPDP can impose penalties up to Rs.250 crore for data processing violations. Additionally, the RBI can issue directions restricting or prohibiting specific technology deployments. For listed banks, SEBI disclosure requirements around operational risk events may be triggered. The combined regulatory exposure makes AI compliance a board-level risk for Indian banks.

AI Compliance Advisory for Banks

Unified Chambers advises banks and NBFCs on the intersection of RBI technology governance, DPDP compliance, and AI model governance. Advocate Subodh Bajpai available for compliance team briefings.

WhatsApp: +91 84008 60008