AI Compliance India
Algorithmic Accountability & DPDP
AI compliance in India requires navigating six regulatory frameworks simultaneously: the DPDP Act 2023 for any AI system processing personal data, RBI guidelines for AI in banking and credit scoring, SEBI regulations for algorithmic trading, IRDAI guidelines for AI underwriting, the Consumer Protection Act for AI-driven pricing and dark patterns, and the IT Act for AI-generated content and intermediary liability. Unified Chambers provides integrated AI governance advisory that maps obligations across all applicable regulators for enterprises deploying artificial intelligence in India.
India has no standalone AI Act — but AI is far from unregulated. Senior Partner Advocate Subodh Bajpai, LLM, MBA (XLRI Jamshedpur), advises banks, NBFCs, fintech companies, and corporates on building AI governance frameworks that satisfy regulatory expectations across sectors.
No AI Act — But Six Regulatory Frameworks Apply
India has deliberately chosen not to enact a horizontal AI Act, preferring sectoral regulation that addresses AI risks within the context of each industry. The NITI Aayog’s Responsible AI principles (2021) and the IndiaAI Mission (2024) set policy direction but are not legally enforceable. What is legally enforceable is the intersection of existing legislation that applies to AI systems — and this intersection creates a compliance matrix that most organisations have not mapped.
The DPDP Act 2023 is the most consequential piece of AI regulation in India, even though it never mentions “artificial intelligence” by name. The Act defines “processing” (Section 2(x)) as any “wholly or partly automated operation” performed on digital personal data — which covers AI model training, inference, and automated decision-making. Every AI system that processes personal data is therefore a data processing activity under DPDP, subject to consent requirements, purpose limitation, data principal rights, breach notification, and penalties up to Rs 250 crore per contravention.
Beyond DPDP, each sector regulator has begun addressing AI within its domain. RBI has flagged AI explainability in credit decisions. SEBI requires exchange approval for algorithmic trading strategies. IRDAI is developing norms for AI-driven insurance underwriting. The Central Consumer Protection Authority has investigated AI-driven dark patterns. And the IT Act’s intermediary liability framework applies to platforms that use AI for content recommendation and moderation. An enterprise deploying AI in India’s financial services sector may simultaneously face obligations from three or more of these frameworks.
Six Frameworks That Regulate AI in India Today
Automated Decision-Making Under DPDP
Data Principal Rights
The DPDP Act does not create an explicit “right to explanation” for automated decisions — unlike the GDPR’s Article 22, which grants data subjects the right not to be subject to purely automated decision-making with legal effects. However, DPDP’s existing provisions implicitly require transparency in AI-driven processing.
Section 6(1) requires that consent be “informed” — the data principal must know what processing will occur before consenting. If an AI model will process their data to make credit decisions, the privacy notice must disclose this. Section 11 grants the right to access information about processing — a data principal can request to know how their data has been processed, including by AI systems. Section 12 grants the right to correction and erasure — if an AI model retains training data that is inaccurate, the data principal can request correction or deletion.
The practical implication is significant. A borrower whose loan application is rejected by an AI credit model has the right under DPDP to: (a) know that AI was used in the decision (from the privacy notice); (b) access information about how their personal data was processed (Section 11); (c) request correction of any inaccurate data that influenced the model (Section 12). The RBI’s fair practices code further requires that banks provide reasons for loan rejection — together with DPDP, this effectively mandates explainability of AI credit decisions in Indian banking.
For AI model training specifically, the consent challenge is acute. An organisation that trains an ML model on historical customer data must determine whether the original consent (given for, say, loan processing) extends to model training. In most cases it does not — model training is a separate purpose that requires either fresh consent or reliance on a legitimate use under Section 7. Anonymisation is the alternative: if training data is genuinely anonymised such that no individual can be identified, DPDP does not apply. But the standard for effective anonymisation in the context of ML models is high — research has shown that models can memorise and reconstruct training data, potentially re-identifying anonymised records.
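The purpose-limitation point above can be made concrete. The sketch below is a minimal, hypothetical illustration (not a DPDP-mandated mechanism): consent records carry the purposes disclosed in the privacy notice, and a customer's data enters a training set only if "model_training" was among them — loan-processing consent alone does not qualify.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Consent captured from a data principal, with the purposes disclosed in the notice."""
    principal_id: str
    purposes: set = field(default_factory=set)

def eligible_for_training(record: ConsentRecord) -> bool:
    """Include a customer's data in an ML training set only if model training
    was a disclosed, consented purpose (DPDP purpose limitation)."""
    return "model_training" in record.purposes

customers = [
    ConsentRecord("C001", {"loan_processing"}),
    ConsentRecord("C002", {"loan_processing", "model_training"}),
]
training_set = [c.principal_id for c in customers if eligible_for_training(c)]
print(training_set)  # only C002 qualifies
```

The alternative paths noted above — fresh consent, a Section 7 legitimate use, or genuine anonymisation — would each replace this gate, not bypass it.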
RBI AI Guidelines for Banking & Credit Scoring Compliance
The Reserve Bank of India has progressively addressed AI risk in the financial sector through its IT governance framework, digital lending guidelines, and sector-specific circulars. While a comprehensive RBI AI regulation is pending, the existing framework creates substantial compliance obligations for banks, NBFCs, and fintech companies using AI.
Credit Scoring: AI-driven credit scoring models are the most widely deployed AI application in Indian banking. RBI requires that credit decisions be communicated with reasons — a borrower must be told why their application was rejected or approved at specific terms. For rule-based scoring systems, generating reasons is straightforward. For deep learning or ensemble models (XGBoost, random forests, neural networks), explainability is technically challenging. Banks must invest in model interpretability tools (SHAP, LIME) and document their explainability methodology for RBI inspection. Our DPDP for banks practice addresses the intersection with data principal rights.
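To illustrate what “reasons for rejection” means in practice: the sketch below ranks per-feature contributions for a toy linear scoring model. This is an assumption-laden illustration — the feature names, weights, and baselines are invented, and production deployments on tree ensembles would use interpretability libraries such as SHAP or LIME rather than hand-rolled arithmetic — but the underlying idea (attribute the score shortfall to individual features, then report the worst offenders) is the same.

```python
# Illustrative weights and baselines for a toy linear credit-scoring model.
# These values are invented for demonstration, not an actual scorecard.
WEIGHTS = {"income_lakhs": 0.8, "dti_ratio": -2.5, "late_payments_12m": -1.2}
BASELINE = {"income_lakhs": 10.0, "dti_ratio": 0.35, "late_payments_12m": 0.0}

def reason_codes(applicant: dict, top_n: int = 2) -> list:
    """Rank features by how much they pulled the score below baseline,
    yielding the individual-level reasons a fair practices code expects."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for c, f in negatives[:top_n]]

applicant = {"income_lakhs": 6.0, "dti_ratio": 0.55, "late_payments_12m": 3}
print(reason_codes(applicant))  # late payments and low income hurt the score most
```

For non-linear models the contributions are no longer a simple weight-times-deviation product, which is precisely why the explainability methodology must be documented for inspection.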
Fraud Detection: AI systems monitoring transactions for fraud operate in real time and make decisions that directly affect customers — freezing accounts, blocking transactions, flagging suspicious activity. These are automated decisions with immediate consequences for data principals. Under DPDP, a customer whose transaction is blocked by an AI fraud detection system has the right to know that an automated system made the decision and to challenge the decision. Banks must ensure their fraud detection AI has a human review mechanism for contested decisions.
Chatbots & Customer Service AI: AI chatbots deployed by banks and NBFCs for customer service process personal data — account balances, transaction history, complaint details — in conversational format. Under DPDP, each chatbot interaction is a processing activity. Banks must ensure: chatbot conversation logs are subject to storage limitation (not retained indefinitely), customers are informed they are interacting with AI (informed consent), and chatbot-collected data is not repurposed for marketing or model training without separate consent.
AI Governance Framework Advisory Services
Unified Chambers advises enterprises on building AI governance frameworks that satisfy regulatory expectations across all applicable sectors. Our advisory encompasses: board-level AI policy development, AI risk classification methodology, pre-deployment legal compliance review for high-risk AI use cases, data protection impact assessment for AI systems processing personal data, bias testing and fairness audit coordination, explainability standards aligned with RBI/SEBI/IRDAI expectations, and incident response procedures for AI system failures.
The framework we develop is practical, not theoretical. It maps each AI use case to its applicable regulatory requirements, identifies compliance gaps, and provides actionable remediation steps. For a bank using AI for credit scoring, fraud detection, and customer service chatbots, the governance framework addresses RBI IT governance requirements, DPDP consent and purpose limitation, fair practices code explainability, and data breach incident response — all within a single, auditable structure.
For organisations subject to scrutiny by the Data Protection Board of India (DPBI), a documented AI governance framework serves a critical evidentiary function. Section 33(2) of DPDP directs the Board to consider whether the data fiduciary made efforts to mitigate damage and acted in good faith. A comprehensive, board-approved AI governance framework — implemented before any regulatory inquiry — demonstrates exactly that. It transforms a potential Rs 250 crore penalty exposure into a documented compliance effort that supports mitigation.
Explore Our DPDP & AI Practice
AI Compliance India — Key Questions
Is there a standalone AI regulation or AI Act in India?
No. As of 2025, India does not have a standalone AI Act comparable to the EU AI Act. However, AI systems in India are regulated through a patchwork of existing legislation and sector-specific guidelines. The primary touchpoints are: (1) DPDP Act 2023 — applies to any AI system processing personal data; (2) RBI guidelines on AI in banking — credit scoring, fraud detection, customer service chatbots; (3) SEBI regulations on algorithmic trading; (4) IRDAI guidelines on AI-driven underwriting and claims assessment; (5) Consumer Protection Act 2019 — unfair trade practices through AI-driven pricing or dark patterns; (6) IT Act Section 79 — intermediary liability for AI-generated content. India's approach is sectoral regulation rather than horizontal AI law. The NITI Aayog's Responsible AI framework provides guiding principles but is not legally binding. This creates a compliance landscape where AI developers must navigate multiple regulatory frameworks simultaneously.
How does the DPDP Act apply to AI systems?
The DPDP Act applies to AI systems at every stage of the AI lifecycle that involves personal data. The Act's definition of “processing” (Section 2(x)) covers any wholly or partly automated operation performed on digital personal data — which includes data collection for training datasets, model training on personal data, inference (applying the model to make decisions about individuals), and storing model outputs that relate to identifiable persons. Key DPDP obligations for AI systems: (a) consent for each processing purpose — training a model is a different purpose than making credit decisions; (b) purpose limitation — data collected for one purpose cannot be repurposed for model training without separate consent; (c) data principal rights — individuals can request access to their data and information about how it is processed, including by AI systems; (d) reasonable security safeguards — AI models that contain or can reconstruct personal data must be secured; (e) storage limitation — training data must be erased when the purpose is served, subject to legitimate retention needs.
What are the RBI guidelines on AI in banking and credit scoring?
The RBI has issued guidance on AI through multiple channels: (a) the Report on Enabling Artificial Intelligence in Financial Services (2020) identifying risks of AI opacity, bias, and lack of explainability; (b) the Master Direction on IT Governance requiring banks to have board-level oversight of technology including AI systems; (c) the Digital Lending Guidelines requiring that credit decisions be communicated with reasons — implicitly requiring explainability of AI credit models; (d) fair practices code requirements under which banks must provide reasons for loan rejection — AI models must be capable of generating these reasons. The RBI's approach emphasises explainability, fairness (non-discrimination in credit decisions), and human oversight of AI-driven decisions. Banks and NBFCs using AI credit scoring must ensure their models can be audited, can provide individual-level explanations for credit decisions, and do not systematically discriminate against protected classes.
How does SEBI regulate AI and algorithmic trading?
SEBI regulates algorithmic trading through its Circular on Broad Guidelines on Algorithmic Trading (2012, updated periodically). Key requirements: (a) all algo strategies must be approved by the stock exchange before deployment; (b) the broker is responsible for all orders emanating from algos, including erroneous ones; (c) real-time monitoring systems are mandatory; (d) kill switches must be available to halt trading immediately; (e) audit trails must be maintained for all algo orders. For AI-driven trading (beyond rule-based algos), SEBI's concerns extend to: model risk — the AI may behave unpredictably in unprecedented market conditions; market manipulation — AI systems may inadvertently engage in spoofing or layering; and systemic risk — correlated AI strategies across multiple participants may amplify market volatility. Entities using AI for trading decisions must maintain documentation of model development, validation, and monitoring that can be produced on SEBI inspection.
What are the Consumer Protection Act implications for AI-driven decisions?
The Consumer Protection Act 2019 and the Consumer Protection (E-Commerce) Rules 2020 create liability for AI-driven practices that affect consumers. Key areas: (a) dark patterns — AI-driven personalised pricing, algorithmic nudging, and manipulative UX that qualifies as an "unfair trade practice" under Section 2(47); (b) discriminatory pricing — AI systems that charge different prices to different consumers based on their data profiles may violate fair trade practice norms; (c) product liability — if an AI system causes harm to a consumer (e.g., wrong medical diagnosis by an AI tool, defective autonomous driving), the manufacturer/service provider faces liability under the product liability provisions of Chapter VI (Sections 82–87); (d) misleading advertisements — AI-generated promotional content that makes false claims attracts penalties under Section 89. The CCPA (Central Consumer Protection Authority) has actively investigated dark patterns used by e-commerce platforms, and AI-driven personalisation techniques are squarely within its enforcement scope.
What is an AI governance framework and does my company need one?
An AI governance framework is an internal policy and procedural structure that governs how an organisation develops, deploys, monitors, and retires AI systems. It typically includes: (a) AI ethics principles adopted by the board; (b) risk classification of AI use cases (high-risk, medium-risk, low-risk); (c) pre-deployment review process including bias testing, security assessment, and legal compliance check; (d) human oversight requirements for high-risk AI decisions; (e) monitoring and audit protocols for deployed models; (f) incident response procedures for AI system failures; (g) documentation standards for model development and validation. Every company that uses AI to make decisions affecting individuals — credit decisions, insurance underwriting, hiring, pricing, medical diagnosis, legal risk assessment — needs a governance framework. This is not a statutory requirement yet, but regulators (RBI, SEBI, IRDAI) are increasingly expecting documented AI governance during inspections, and the DPDP Act's "reasonable security safeguards" requirement effectively mandates it for AI systems processing personal data.
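The risk-classification and pre-deployment review components above can be expressed as a small control matrix. The tiers, criteria, and control names below are assumptions chosen for illustration — a real framework would derive them from the organisation's regulatory mapping — but the structure (each use case gets a tier, each tier gets a mandatory checklist) is the core of the approach.

```python
from enum import Enum

class Risk(Enum):
    HIGH = "high"      # decisions with legal or financial effect on individuals
    MEDIUM = "medium"  # customer-facing but reversible
    LOW = "low"        # internal, no direct individual impact

# Illustrative control matrix -- tier names and controls are assumptions,
# not a statutory or regulator-issued list.
CONTROLS = {
    Risk.HIGH:   ["bias_testing", "human_review", "explainability_report", "dpia"],
    Risk.MEDIUM: ["monitoring", "incident_response"],
    Risk.LOW:    ["inventory_entry"],
}

def required_controls(use_case: str, risk: Risk) -> dict:
    """Return the pre-deployment checklist the framework attaches to a use case."""
    return {"use_case": use_case, "risk": risk.value, "controls": CONTROLS[risk]}

print(required_controls("credit_scoring", Risk.HIGH))
```

Keeping this matrix as a versioned artefact is what lets a board evidence the "documented AI governance" that regulators increasingly expect during inspections.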
Can Unified Chambers advise on AI compliance across multiple regulators?
Yes. The challenge with AI compliance in India is that no single regulator has comprehensive jurisdiction. A bank using AI for credit scoring must satisfy RBI, the DPBI (under DPDP), and potentially SEBI (if the bank also uses AI for investment advisory). An InsurTech company using AI for underwriting must satisfy IRDAI, the DPBI, and the Consumer Protection Authority (if AI-driven pricing is challenged as unfair). Unified Chambers provides integrated advisory that maps AI compliance obligations across all applicable regulators. Our differentiation: we already represent banks, NBFCs, and financial institutions before DRTs, DRAT, NCLT, and High Courts. The same institutions are the largest deployers of AI in India's regulated sectors. We understand both the financial regulation architecture and the data protection framework. Minimum engagement: Rs 50 lakhs.
AI Governance Starts With Understanding Your Exposure
WhatsApp Advocate Subodh Bajpai directly. Describe your AI use cases, your industry, and your regulatory landscape. We will map the compliance matrix across DPDP, RBI, SEBI, IRDAI, and the Consumer Protection Act — and build a governance framework that satisfies all of them. Minimum engagement: Rs 50 lakhs.