RBI AI Guidelines for Banks
Compliance Team Briefing [2025]
RBI AI guidelines for banks are not contained in a single direction — they are distributed across the Master Direction on IT Governance, the Fair Practices Code, Digital Lending Guidelines, and the Outsourcing Framework. Indian banks deploying artificial intelligence in credit scoring, fraud detection, customer-facing chatbots, and risk management must synthesise compliance obligations from at least six RBI frameworks, plus the Digital Personal Data Protection Act 2023 which introduces automated decision-making rights for Data Principals.
This briefing maps the applicable RBI requirements to AI use cases in banking, identifies the DPDP intersection points, and sets out a compliance action plan for bank compliance teams. By Advocate Subodh Bajpai, who advises banks and NBFCs on regulatory technology compliance.
Table of Contents
- The RBI AI Compliance Framework — No Single Direction
- AI Use Cases in Banking and Applicable RBI Requirements
- Master Direction on IT Governance — What It Requires
- DPDP Act and Automated Decision-Making Rights
- Model Explainability — Practical Requirements for Banks
- Compliance Action Plan for Bank AI Systems
- FAQs — RBI AI Compliance
The RBI AI Compliance Framework — No Single Direction
Indian banks seeking a single RBI circular on AI governance will not find one. The Reserve Bank of India has chosen a principles-based, sectoral approach rather than issuing a standalone AI regulation. This means AI compliance for banks requires mapping across multiple existing frameworks — an exercise most bank compliance teams have not yet completed.
The primary frameworks that collectively govern AI in banking are: (1) the Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices dated 7 November 2023, which establishes IT governance including emerging technology adoption; (2) the Fair Practices Code and its revisions, which mandate transparency in lending decisions; (3) the Digital Lending Guidelines dated 2 September 2022, which impose disclosure requirements on digital lending models; (4) the Master Direction on Outsourcing of Information Technology Services dated 10 April 2023, which governs third-party AI vendors; (5) the Charter of Customer Rights dated 3 December 2014, which guarantees the right to fair treatment and transparency; and (6) the Framework on Regulation of Payment Aggregators and Payment Gateways, which applies to AI in payment systems.
Overlay on these RBI frameworks the Digital Personal Data Protection Act 2023 — which applies to all personal data processing including AI-driven processing by banks — and you have the complete regulatory envelope. Banks that treat AI governance as solely an IT function, or solely a legal function, will find gaps. AI compliance requires coordinated action across IT governance, legal, compliance, risk management, and business units.
AI Use Cases in Banking and Applicable RBI Requirements
AI Credit Scoring and Loan Underwriting
Banks increasingly use machine learning models for credit scoring, replacing or supplementing traditional scorecards. The RBI Fair Practices Code requires that loan rejection reasons be communicated to borrowers in writing. The Digital Lending Guidelines require that the names of credit information companies whose data was used be disclosed. Under DPDP Section 11, borrowers can challenge purely automated credit decisions. Compliance requirement: human-in-the-loop review for AI rejections, documented model validation, and borrower-facing explainability.
Fraud Detection and Anti-Money Laundering
AI-driven transaction monitoring, fraud scoring, and suspicious transaction detection are now standard in Indian banks. The Master Direction on KYC and the PMLA guidelines require that suspicious transaction reports (STRs) be based on reasonable grounds — an AI flag alone may not constitute reasonable grounds without human review. The IT Governance Direction requires that automated monitoring systems be subject to regular validation and testing. Banks must maintain audit trails of AI-driven fraud alerts and the human decisions made on them.
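By way of illustration, the audit trail can be as simple as an append-only log that ties each model alert to the human disposition and its rationale. The sketch below is a minimal example; the file path, field names, and log_alert_disposition helper are hypothetical, not a prescribed RBI or FIU-IND format.

```python
# Illustrative append-only audit trail for AI fraud alerts and the human
# decision taken on each; the file path and field names are hypothetical,
# not a prescribed RBI or FIU-IND format.
import json
from datetime import datetime, timezone

def log_alert_disposition(alert_id: str, model_score: float, analyst: str,
                          decision: str, rationale: str,
                          path: str = "fraud_alert_audit.jsonl") -> None:
    """Append one immutable record linking the AI alert to the human call."""
    record = {
        "alert_id": alert_id,
        "model_score": model_score,
        "analyst": analyst,
        "decision": decision,      # e.g. "file_STR" or "close_false_positive"
        "rationale": rationale,    # the human 'reasonable grounds' narrative
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as handle:
        handle.write(json.dumps(record) + "\n")

log_alert_disposition("ALERT-00421", 0.93, "analyst.sharma",
                      "file_STR", "Structuring pattern across linked accounts")
```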
Customer-Facing Chatbots and Virtual Assistants
Chatbots deployed by RBI-regulated entities to interact with customers on banking products are subject to the Fair Practices Code, the Integrated Ombudsman Scheme, and DPDP consent requirements. The chatbot must not make misleading representations about products. Customer data processed through the chatbot requires DPDP-compliant consent. The chatbot infrastructure, whether hosted internally or through a vendor, must comply with the IT Governance Direction and the Outsourcing Direction.
AI in Risk Management and Asset Classification
Banks using AI for internal risk rating, early warning signal detection, and NPA classification must ensure model governance under the IT Governance Direction. The RBI Master Direction on Classification, Valuation and Operation of Investment Portfolio requires that internal rating models be validated. AI models used for provisioning calculations must be auditable and their outputs explainable to statutory auditors and RBI inspection teams.
Master Direction on IT Governance — What It Requires
The IT Governance Direction is the closest the RBI has come to addressing AI governance. Chapter IV on IT and Information Security Risk Management requires banks to identify and manage risks arising from emerging technologies — explicitly including artificial intelligence and machine learning. Banks must document risk assessments for each AI deployment, including model risk, data quality risk, and vendor dependency risk.
Chapter V mandates a comprehensive IT audit framework. AI systems fall within the audit scope. Banks must ensure that their internal audit teams — or external auditors — have the competence to audit AI models, including validation of training data, model drift monitoring, and output accuracy. The Direction does not prescribe specific AI audit standards, but the expectation is that banks adopt internationally recognised frameworks such as NIST AI RMF or ISO/IEC 42001.
Chapter III on IT Governance requires board-level oversight of technology strategy, including AI adoption. The Board or its IT Strategy Committee must approve significant AI deployments. The Direction requires that IT governance policies address emerging technology adoption; banks that deploy AI without Board or IT Strategy Committee approval are therefore non-compliant.
For third-party AI systems — including cloud-based AI scoring engines, vendor chatbot platforms, and outsourced fraud detection — the Outsourcing Direction applies in parallel. Banks must ensure that AI vendors provide audit access, that data processing occurs in compliance with RBI data localisation requirements, and that the bank retains ultimate responsibility for AI outputs regardless of the vendor relationship.
DPDP Act and Automated Decision-Making Rights
Section 11 of the DPDP Act 2023 is the provision that most directly impacts AI in banking. It gives every Data Principal — every bank customer — the right to challenge decisions made solely by automated processing. For banks, this means that AI-only credit decisions, automated account closures, automated KYC rejections, and AI-driven product recommendations that have material financial impact all require a human review mechanism.
The practical compliance requirement is a documented human-in-the-loop process. When a customer exercises their Section 11 right, the bank must be able to route the AI decision to a human reviewer who can assess the decision on its merits, not merely rubber-stamp the algorithm. This requires that AI models produce interpretable outputs — or that the bank maintains a parallel manual review capability.
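As a minimal sketch of what such a routing mechanism could look like, assuming a hypothetical in-memory case queue (neither the DPDP Act nor any RBI direction prescribes a particular design):

```python
# Minimal sketch of a Section 11 review route, assuming a hypothetical
# in-memory case queue; neither the DPDP Act nor RBI directions prescribe
# any particular design.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDecision:
    customer_id: str
    model_name: str
    outcome: str          # e.g. "reject"
    top_factors: list     # interpretable outputs supplied by the model

review_queue: list = []   # stand-in for a real case-management system

def route_for_human_review(decision: AIDecision, challenge_ref: str) -> dict:
    """Log the customer's challenge and queue the decision for an officer."""
    case = {
        "challenge_ref": challenge_ref,
        "received_at": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "status": "pending_human_review",  # the reviewer must record reasons
    }                                      # on the merits, not re-run the model
    review_queue.append(case)
    return case
```

In production, the queue would be a case-management system with access controls and retention aligned to the bank's record-keeping obligations.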
Section 6 consent requirements add another layer. Using customer data to train AI models, to run credit scoring algorithms, or to perform behavioural profiling requires consent that is free, specific, informed, unconditional, and unambiguous. The consent notice under Section 5 must describe the AI processing in terms the customer can understand. Generic consent to “data processing for banking purposes” is unlikely to satisfy the specificity requirement for AI model training.
Banks that process children’s data through AI systems — for example, in insurance linked to education loans — must comply with Section 9, which requires verifiable parental consent and prohibits tracking, behavioural monitoring, and targeted advertising directed at children. AI-driven marketing platforms must be configured to exclude minors.
Model Explainability — Practical Requirements for Banks
Model explainability is where regulatory compliance meets technical implementation. Indian banks must be able to explain AI decisions to three audiences: (1) the customer, who under DPDP and the Fair Practices Code has the right to understand why a decision was made; (2) the RBI inspection team, which during on-site inspections will assess whether AI systems comply with governance directions; and (3) the internal audit function, which must validate AI model outputs.
For credit scoring models, global best practice (and the direction in which Indian regulation is heading) requires at minimum: feature importance rankings (which factors most influenced the score), counterfactual explanations (what would have to change for the outcome to differ), and confidence levels (how certain the model is of its prediction). Banks using black-box deep learning models for credit decisions face higher regulatory risk than those using intrinsically interpretable models, such as logistic regression or decision trees, or explainable pipelines such as gradient boosting paired with SHAP attributions.
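By way of illustration, per-applicant feature attributions might be computed as in the sketch below, which uses the open-source shap library with a scikit-learn gradient boosting model; the feature names and data are hypothetical.

```python
# Illustrative per-applicant feature attributions using the open-source
# `shap` library with a scikit-learn gradient boosting model on synthetic
# data. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "credit_utilisation", "dpd_count", "account_age_months"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
applicant = X[:1]                               # the applicant under review
contributions = explainer.shap_values(applicant)[0]

# Rank features by absolute contribution to this decision (log-odds scale).
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.3f}")
```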
The Responsible AI principles articulated by NITI Aayog — safety, fairness, non-discrimination, transparency, accountability, and privacy — are not legally binding, but they signal the direction of Indian AI regulation. Banks that adopt these principles proactively will be better positioned when the RBI issues more prescriptive AI governance directions, which RBI Deputy Governor T. Rabi Sankar has publicly indicated are under consideration.
Documentation is critical. Every AI model deployed in a bank should have a model card documenting: the model’s purpose, training data characteristics, performance metrics, known limitations, bias testing results, and the human review process. This documentation serves both regulatory compliance (IT Governance Direction audit requirements) and legal defence (if a customer challenges an AI decision before the Banking Ombudsman or Data Protection Board).
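A model card can be a simple structured record. The sketch below mirrors the fields listed above; the schema and example values are our suggestion, not an RBI-prescribed format.

```python
# Illustrative model card record mirroring the fields listed above. The
# field names are a suggested schema, not an RBI-prescribed format.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    purpose: str
    training_data: str         # source, period, known sampling gaps
    performance_metrics: dict  # e.g. {"auc": 0.81, "ks": 0.42}
    known_limitations: list
    bias_testing: str          # attributes tested and results
    human_review_process: str  # Section 11 escalation path
    owner: str
    last_validated: str

card = ModelCard(
    model_name="retail_credit_scoring_v3",
    purpose="Probability-of-default scoring for unsecured retail loans",
    training_data="Internal originations FY2021-FY2024; thin-file applicants under-represented",
    performance_metrics={"auc": 0.81, "ks": 0.42},
    known_limitations=["Degrades for applicants with under 6 months of bureau history"],
    bias_testing="Adverse-impact ratios checked across age and gender bands",
    human_review_process="All rejections route to a credit officer queue within 48 hours",
    owner="Head of Retail Credit Risk",
    last_validated="2025-01-15",
)
```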
Compliance Action Plan for Bank AI Systems
AI Inventory and Risk Assessment
Catalogue every AI system deployed — credit scoring, fraud detection, chatbots, risk models, marketing automation. For each, document the data inputs, processing logic, output use, and downstream decisions. Assess each against IT Governance Direction risk management requirements.
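A repeatable tiering rule helps keep the assessment consistent across business units. The sketch below is illustrative only; the tiers and criteria are assumptions, not drawn from any RBI direction.

```python
# Minimal sketch of a repeatable risk-tiering rule for the AI inventory.
# The tiers and criteria are illustrative assumptions, not drawn from
# any RBI direction.
def risk_tier(customer_facing: bool, automated_decision: bool,
              processes_personal_data: bool) -> str:
    """Classify an AI system to set its governance intensity."""
    if automated_decision and customer_facing:
        return "high"    # e.g. credit scoring: board oversight, full validation
    if customer_facing or processes_personal_data:
        return "medium"  # e.g. chatbot: DPDP consent review, drift monitoring
    return "low"         # e.g. internal analytics: standard IT controls

print(risk_tier(customer_facing=True, automated_decision=True,
                processes_personal_data=True))        # -> high
```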
DPDP Data Mapping for AI Systems
Map personal data flows through every AI system. Identify lawful bases for processing (consent vs. legitimate use). Review consent mechanisms for specificity — does the consent notice specifically describe AI processing? Implement Section 11 human review mechanisms for all customer-affecting AI decisions.
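One data-map entry per AI system is a practical starting point. The schema below is a hypothetical suggestion for tracking DPDP obligations, not a statutory format.

```python
# Hypothetical data-map entry for one AI system. The schema is a
# suggestion for tracking DPDP obligations, not a statutory format.
data_map_entry = {
    "system": "retail_credit_scoring_v3",
    "personal_data": ["income", "bureau_score", "repayment_history"],
    "lawful_basis": "consent",                # DPDP s.6, or "legitimate use" (s.7)
    "consent_notice_ref": "NOTICE-2025-014",  # must describe the AI processing
    "section_11_human_review": True,          # review route exists and is tested
    "retention_period_days": 2555,            # illustrative value
}
```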
Model Governance Framework
Establish a model governance framework covering model development, validation, deployment, monitoring, and retirement. Assign model owners for each AI system. Implement model drift monitoring. Create model cards for every deployed model. This framework must be board-approved under the IT Governance Direction.
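Drift monitoring can start with the Population Stability Index (PSI), a common industry metric for score drift. In the sketch below, the reading thresholds are conventional rules of thumb, not RBI-mandated values.

```python
# Minimal Population Stability Index (PSI) check for score drift. PSI is a
# common industry metric; the reading thresholds in the comment are
# conventional rules of thumb, not RBI-mandated values.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between the validation-time and current score distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range scores
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, 10_000)    # scores at last validation
current = rng.beta(2.5, 5, 10_000)   # scores observed this quarter
# Conventional reading: <0.10 stable, 0.10-0.25 monitor, >0.25 investigate.
print(f"PSI = {psi(baseline, current):.3f}")
```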
Vendor AI Due Diligence
For third-party AI systems, conduct due diligence under the Outsourcing Direction. Verify data localisation compliance. Ensure audit access to vendor AI systems. Confirm that vendor contracts include RBI inspection rights. Assess vendor AI model governance practices.
Explainability Infrastructure
Implement explainability tools for customer-facing AI decisions. At minimum, credit scoring models must produce feature importance and counterfactual explanations. Train customer service teams to communicate AI decisions in plain language. Create escalation paths for Section 11 challenges.
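A counterfactual explanation answers the question "what would have to change for this application to be approved". The naive single-feature search below is a minimal sketch on a synthetic model; production systems typically use purpose-built libraries such as DiCE for realistic, multi-feature counterfactuals.

```python
# Naive single-feature counterfactual search on a synthetic model: find the
# smallest change to one feature that flips a rejection to an approval.
# Purpose-built libraries (e.g. DiCE) handle realistic multi-feature cases;
# this is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))              # hypothetical applicant features
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # synthetic approval rule
model = LogisticRegression().fit(X, y)

def counterfactual(x, feature, grid, threshold=0.5):
    """Nearest value of `feature` lifting approval probability past threshold."""
    for value in grid[np.argsort(np.abs(grid - x[feature]))]:
        candidate = x.copy()
        candidate[feature] = value
        if model.predict_proba(candidate.reshape(1, -1))[0, 1] >= threshold:
            return value
    return None                            # no flip within the searched range

applicant = np.array([-1.0, 0.5, 0.0])     # currently rejected
print(counterfactual(applicant, feature=0, grid=np.linspace(-3, 3, 61)))
```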
Board Reporting and Audit
Establish quarterly board reporting on AI deployments, incidents, and compliance status. Include AI systems in the internal audit plan. Ensure auditors have competence to assess AI model governance. Prepare for RBI on-site inspection questions on AI governance.
FAQs — RBI AI Compliance
Does the RBI have a standalone AI regulation for banks?
No. The RBI has adopted a principles-based, sectoral approach: obligations must be mapped across the IT Governance Direction, the Fair Practices Code, the Digital Lending Guidelines, the Outsourcing Direction, and the other frameworks discussed above.
Can a bank deny a loan based solely on an AI credit scoring model?
Not safely. The Fair Practices Code requires that rejection reasons be communicated in writing, and DPDP Section 11 allows borrowers to challenge purely automated decisions, so AI-driven rejections need a documented human-review route.
What is the RBI Master Direction on IT Governance and how does it apply to AI?
It is the Master Direction on Information Technology Governance, Risk, Controls and Assurance Practices. It requires board-level oversight of emerging technology adoption, documented risk assessments for each AI deployment, and inclusion of AI systems in the IT audit scope.
Are banks required to explain AI model decisions to customers?
In effect, yes. The Fair Practices Code requires that rejection reasons be communicated, and DPDP rights presuppose decisions that both the customer and a human reviewer can understand, which is why the explainability measures set out above matter.
How does the DPDP Act intersect with AI in banking?
Principally through Section 11 (challenges to automated decisions), Section 6 (specific, informed consent for AI processing and model training), and Section 9 (children's data, including the prohibition on tracking and targeted advertising).
What penalties do banks face for non-compliant AI systems?
Under the DPDP Act, the Data Protection Board can impose monetary penalties of up to ₹250 crore for the gravest defaults, and the RBI can levy penalties for breaches of its directions under the Banking Regulation Act, 1949, in addition to supervisory action and exposure before the Banking Ombudsman.
AI Compliance Advisory for Banks
Unified Chambers advises banks and NBFCs on the intersection of RBI technology governance, DPDP compliance, and AI model governance. Advocate Subodh Bajpai is available for compliance team briefings.