India AI Governance Bill
What to Expect & How to Prepare [2026]
India AI governance legislation is inevitable — the question is timing and form. Indian companies deploying artificial intelligence face a regulatory landscape that is fragmented but rapidly evolving: the MeitY advisory of March 2024, the DPDP Act’s provisions on automated processing of personal data, RBI and SEBI sectoral directions, and the proposed Digital India Act all contribute to an emerging AI regulatory framework. Understanding the trajectory allows companies to prepare before legislation is enacted.
This analysis maps India’s AI regulatory trajectory, identifies provisions already in force, predicts likely legislative content based on global trends, and provides a corporate preparation roadmap. By Advocate Subodh Bajpai.
India’s AI Regulatory Trajectory
India’s approach to AI regulation has evolved through four phases. Phase one (2018-2021) was the study and principles phase — NITI Aayog published the National Strategy for AI (2018), the Responsible AI for All report (2021), and the Approach Document on AI Ethics (2021). These documents established principles (safety, fairness, transparency, accountability, privacy) but created no legal obligations.
Phase two (2022-2023) was the indirect regulation phase. The DPDP Act 2023 brought AI systems that process personal data within a consent-and-rights framework: its definition of processing expressly covers automated operations, and data principals gained rights of access, correction, erasure, and grievance redressal (Sections 11-13) that apply to AI-driven processing. The IT Act intermediary guidelines were amended to address AI-generated content. Sectoral regulators — RBI, SEBI, IRDAI — began issuing directions applicable to AI in their domains.
Phase three (2024) was the advisory phase. MeitY’s March 2024 advisory on AI governance signalled a shift from principles to action, requiring AI labelling and pre-deployment conditions for certain AI models. The advisory was non-binding but demonstrated regulatory intent.
Phase four (2025-2026) is the legislative phase. The proposed Digital India Act — a comprehensive replacement for the IT Act 2000 — is expected to include AI governance provisions. Whether AI regulation comes as a chapter within the Digital India Act or as a standalone AI governance bill depends on the legislative calendar and political priorities. Either way, comprehensive AI legislation is in the pipeline. Companies have a 12- to 24-month preparation window.
Laws That Already Apply to AI in India
DPDP Act 2023 — Personal Data Processing by AI
The most significant existing regulation of AI. Sections 4-6 (lawful processing and consent), Section 8 (data fiduciary obligations, including security safeguards), Section 9 (children’s data), and Sections 11-13 (rights of access, correction, and grievance redressal) collectively regulate how AI systems can process personal data. Any company using AI to process personal data of individuals in India must comply with the DPDP Act. This covers: AI credit scoring, recommendation engines, fraud detection, chatbots, automated underwriting, and predictive analytics.
Information Technology Act 2000 — Liability and Content
Section 43A imposes liability for failure to protect sensitive personal data (applicable until fully superseded by DPDP). Section 66 covers computer-related offences — relevant to AI systems that cause damage. Section 69A empowers government to block content — applicable to AI-generated content. The Intermediary Guidelines (IT Rules 2021) impose obligations on platforms that host AI-generated content, including labelling requirements and takedown obligations for deepfakes.
Consumer Protection Act 2019 — AI Product Liability
AI-driven products and services that cause harm to consumers are subject to product liability under the Consumer Protection Act. The Central Consumer Protection Authority (CCPA) can issue directions against unfair trade practices involving AI. AI-driven pricing algorithms that engage in price gouging, AI recommendation systems that mislead consumers, and AI-driven services that fail to perform as advertised all create CPA liability.
Indian Contract Act 1872 — AI in Contract Formation
AI systems that negotiate, form, or execute contracts raise questions under the Indian Contract Act. An AI chatbot that makes a contractual commitment on behalf of a company creates a binding obligation if the customer reasonably relied on it. Companies deploying transactional AI must ensure their AI systems operate within authorised parameters and that contractual terms are clear.
MeitY Advisory and Its Implications
The MeitY advisory of March 2024 was the first direct regulatory intervention on AI by the Indian government. It directed that AI platforms must label AI-generated content to prevent misinformation; that AI models deployed to Indian users must not generate content that violates Indian law; and that under-tested or unreliable AI models must either obtain government permission before deployment or carry clear labelling of their limitations.
The pre-deployment permission requirement generated significant industry pushback. It was subsequently clarified that the requirement applies only to AI models deployed without adequate labelling — not to all AI models. The practical effect is a labelling mandate: AI platforms must clearly indicate where content is AI-generated and where AI models have known limitations.
The advisory is non-binding — it does not carry the force of law and there is no penalty provision. However, it signals three regulatory priorities that any future legislation is likely to incorporate: (1) AI content labelling and provenance tracking; (2) safety testing before deployment; and (3) accountability for AI-generated content that violates existing laws. Companies that align with these three priorities now will be better positioned for compliance when they become legally mandatory.
Expected Provisions — Based on Global Trends
India’s AI legislation, whenever it arrives, will draw from global precedents while adapting to Indian regulatory architecture. Based on the EU AI Act, US executive orders, China’s AI regulations, and India’s own policy documents, the following provisions are expected:
Risk-based classification of AI systems. High-risk categories will likely include: AI in healthcare diagnosis, AI in financial services (credit, insurance, investment), AI in employment decisions, AI in law enforcement, AI in education assessment, and AI in critical infrastructure. High-risk AI systems will face mandatory requirements — transparency, explainability, human oversight, accuracy testing, and bias auditing.
AI transparency and labelling requirements. AI-generated content (text, images, audio, video) will require machine-readable labels indicating AI generation. Deepfake-specific provisions are expected, building on the existing IT Act framework. AI systems interacting with humans (chatbots, virtual assistants) will be required to disclose their AI nature.
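A machine-readable label of the kind described above can be sketched as a small metadata record attached to generated content. No Indian labelling schema has been notified yet, so the function name and every field below are illustrative assumptions, not a standard:

```python
import json

# Hypothetical machine-readable label for AI-generated content.
# All field names are assumptions for illustration -- no official
# Indian schema exists at the time of writing.
def make_ai_content_label(generator: str, content_type: str,
                          limitations: list) -> str:
    """Return a JSON string to embed alongside AI-generated content."""
    label = {
        "ai_generated": True,          # disclosure of AI generation
        "content_type": content_type,  # "text", "image", "audio", "video"
        "generator": generator,        # model or platform identifier
        "limitations": limitations,    # known-limitation disclosure
    }
    return json.dumps(label, sort_keys=True)

label = make_ai_content_label("example-model-v1", "image",
                              ["may distort fine detail"])
```

A platform would embed this record in content metadata (or alongside a provenance watermark) so downstream intermediaries can detect AI generation programmatically.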
Accountability frameworks. AI deployers will bear primary liability for harm caused by AI systems. This includes: liability for biased AI decisions (hiring, lending, insurance), liability for AI errors in high-stakes domains (healthcare, autonomous vehicles), and liability for AI-generated content that violates existing laws. The liability framework may follow a negligence standard (failure to implement reasonable AI governance) rather than strict liability.
Sectoral enforcement. Rather than creating a new AI regulator, India is likely to empower existing sectoral regulators to enforce AI governance within their domains. RBI will regulate banking AI, SEBI will regulate financial markets AI, IRDAI will regulate insurance AI, and the Data Protection Board will handle AI-related data protection violations. A coordinating body — possibly under MeitY — may set horizontal standards.
Sectoral AI Regulations Already in Force
RBI — Banking and Finance
IT Governance Direction (AI risk management), Fair Practices Code (explainable AI credit decisions), Digital Lending Guidelines (AI disclosure), and Outsourcing Direction (vendor AI governance).
SEBI — Securities Markets
Algorithmic trading regulations (pre-approval, risk controls), AI in market surveillance, a forthcoming framework on AI advisory services, and disclosure requirements for AI-driven investment products.
IRDAI — Insurance
Guidelines on AI underwriting models, telematics data governance, AI claims processing transparency, and policyholder disclosure requirements for AI-driven decisions.
TRAI — Telecom
Recommendations on AI in network management, AI-driven customer service standards, and data handling in AI-powered telecom services.
Corporate Preparation Roadmap
AI System Inventory and Risk Classification
Catalogue every AI system deployed across the organisation. Classify each by risk level (high, medium, low) based on: the domain of application (healthcare, finance, employment = high risk), the type of decision (automated decisions affecting individuals = high risk), and the data sensitivity (personal data, children’s data, health data = elevated risk). This inventory becomes the foundation for all AI governance activities.
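The three classification criteria above can be expressed as simple rules applied to each inventory entry. The category sets and the decision order here are assumptions for illustration, not a statutory scheme:

```python
# Illustrative risk-classification rules following the three criteria
# in the text: domain, decision type, and data sensitivity.
# Category names and thresholds are assumptions, not law.
HIGH_RISK_DOMAINS = {"healthcare", "finance", "employment",
                     "law_enforcement", "education",
                     "critical_infrastructure"}
SENSITIVE_DATA = {"personal", "children", "health"}

def classify_ai_system(domain: str,
                       automated_decisions_on_individuals: bool,
                       data_categories: set) -> str:
    """Return 'high', 'medium', or 'low' for one inventory entry."""
    # High-risk domains and automated decisions affecting individuals
    # dominate the classification.
    if domain in HIGH_RISK_DOMAINS or automated_decisions_on_individuals:
        return "high"
    # Data sensitivity alone elevates risk one step.
    if data_categories & SENSITIVE_DATA:
        return "medium"
    return "low"
```

An inventory spreadsheet can then be scored row by row with this function as a first pass before deeper legal review of the high-risk entries.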
DPDP Compliance for AI Systems
Every AI system processing personal data must comply with the DPDP Act. Map data flows through AI systems. Ensure consent mechanisms cover AI processing purposes specifically. Build human review and grievance redressal pathways into automated decisions affecting individuals. Document lawful bases for AI training data. This is not future preparation — it is a current legal obligation.
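The consent-coverage step can be sketched as a set comparison between the purposes each AI system processes personal data for and the purposes recorded consent actually covers. All names below are illustrative placeholders:

```python
# Minimal consent-coverage check: for each AI system, report the
# processing purposes that recorded consent does not yet cover.
# System names and purpose strings are illustrative placeholders.
def consent_gaps(system_purposes: dict, consented_purposes: dict) -> dict:
    """Map system name -> set of purposes consent does not cover."""
    gaps = {}
    for name, purposes in system_purposes.items():
        missing = set(purposes) - set(consented_purposes.get(name, ()))
        if missing:
            gaps[name] = missing
    return gaps
```

Running such a check against the data-flow map surfaces systems whose consent notices need to be re-papered before the purposes can lawfully continue.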
AI Governance Framework
Establish an AI governance framework covering: AI ethics principles, model development standards, bias testing requirements, validation and deployment protocols, ongoing monitoring, incident response, and model retirement. Assign AI governance ownership — whether to a dedicated AI ethics officer, the DPO, or a cross-functional governance committee.
Sectoral Compliance Mapping
For regulated entities (banks, NBFCs, insurers, listed companies), map every AI system against applicable sectoral regulations. Ensure that each AI system meets the specific requirements of the relevant regulator. Maintain separate compliance documentation for each sectoral framework. Engage with regulators proactively on AI governance.
Documentation and Audit Trail
For every AI system, maintain: a model card (purpose, training data, performance metrics, limitations, bias testing), deployment approval records, monitoring logs (accuracy, drift, fairness metrics), incident reports, and human oversight records. This documentation serves both regulatory compliance and legal defence. Companies with comprehensive AI documentation will be better positioned than those without — regardless of what specific legislation requires.
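The model-card fields listed above can be captured in a minimal structured record. This is a hypothetical sketch; a real card would carry far more detail and version history:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical minimal model card covering the fields listed in the
# text; structure and field names are illustrative assumptions.
@dataclass
class ModelCard:
    purpose: str               # what the model is deployed to do
    training_data: str         # description of the training corpus
    performance_metrics: dict  # e.g. accuracy, AUC, error rates
    limitations: list          # known failure modes
    bias_tests: list = field(default_factory=list)  # fairness results

    def to_json(self) -> str:
        """Serialise the card for the audit trail."""
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    purpose="credit scoring",
    training_data="2019-2024 loan book, anonymised",
    performance_metrics={"auc": 0.81},
    limitations=["drift on new product lines"],
)
```

Storing each card as versioned JSON alongside deployment approvals and monitoring logs gives the documentation trail a consistent, machine-searchable form.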
FAQs — India AI Regulation
Does India currently have a dedicated AI law?
What did the MeitY AI advisory of March 2024 say?
Will India follow the EU AI Act risk-based model?
Which Indian regulators already regulate AI in their sectors?
How should Indian companies prepare for AI regulation?
What AI applications might be prohibited or restricted in India?
Does the DPDP Act already regulate AI?
AI Governance Advisory for Indian Corporates
Unified Chambers advises companies on AI governance frameworks, DPDP compliance for AI systems, and sectoral AI regulation. Advocate Subodh Bajpai is available for boardroom briefings.