AI Regulation · MeitY · IT Act · DPDP Act 2023 · Sectoral Compliance

India AI Governance Bill
What to Expect & How to Prepare [2026]

India AI governance legislation is inevitable — the question is timing and form. Indian companies deploying artificial intelligence face a regulatory landscape that is fragmented but rapidly evolving: the MeitY advisory of March 2024, the DPDP Act’s automated decision-making provisions, RBI and SEBI sectoral directions, and the proposed Digital India Act all contribute to an emerging AI regulatory framework. Understanding the trajectory allows companies to prepare before legislation is enacted.

This analysis maps India’s AI regulatory trajectory, identifies provisions already in force, predicts likely legislative content based on global trends, and provides a corporate preparation roadmap. By Advocate Subodh Bajpai.

WhatsApp Consultation · Schedule a Call
Subject: India’s AI regulatory trajectory and corporate preparation
Current Framework: IT Act 2000, DPDP Act 2023, MeitY advisory (March 2024), sectoral directions (RBI, SEBI, IRDAI)
Expected Legislation: Standalone AI governance bill or Digital India Act with AI chapter
Likely Model: Modified risk-based approach with sectoral enforcement
Preparation Window: 12-24 months before comprehensive AI legislation is operational

Table of Contents

  1. India’s AI Regulatory Trajectory
  2. Laws That Already Apply to AI in India
  3. MeitY Advisory and Its Implications
  4. Expected Provisions — Based on Global Trends
  5. Sectoral AI Regulations Already in Force
  6. Corporate Preparation Roadmap
  7. FAQs — India AI Regulation
Regulatory Evolution

India’s AI Regulatory Trajectory

India’s approach to AI regulation has evolved through four phases. Phase one (2018-2021) was the study and principles phase — NITI Aayog published the National Strategy for AI (2018), the Responsible AI for All report (2021), and the Approach Document on AI Ethics (2021). These documents established principles (safety, fairness, transparency, accountability, privacy) but created no legal obligations.

Phase two (2022-2023) was the indirect regulation phase. The DPDP Act 2023 introduced automated decision-making rights (Section 11), which directly impact AI systems processing personal data. The IT Act intermediary guidelines were amended to address AI-generated content. Sectoral regulators — RBI, SEBI, IRDAI — began issuing directions applicable to AI in their domains.

Phase three (2024) was the advisory phase. MeitY’s March 2024 advisory on AI governance signalled a shift from principles to action, requiring AI labelling and pre-deployment conditions for certain AI models. The advisory was non-binding but demonstrated regulatory intent.

Phase four (2025-2026) is the legislative phase. The proposed Digital India Act — a comprehensive replacement for the IT Act 2000 — is expected to include AI governance provisions. Whether AI regulation arrives as a chapter within the Digital India Act or as a standalone AI governance bill depends on the legislative calendar and political priorities. Either way, comprehensive AI legislation is in the pipeline, and companies have a 12-to-24-month preparation window.

Current Legal Framework

Laws That Already Apply to AI in India

DPDP Act 2023 — Personal Data Processing by AI

The most significant existing regulation of AI. Sections 4-6 (lawful processing and consent), Section 8 (security), Section 9 (children’s data), and Section 11 (automated decision-making rights) collectively regulate how AI systems can process personal data. Any company using AI to process personal data of individuals in India must comply with the DPDP Act. This covers: AI credit scoring, recommendation engines, fraud detection, chatbots, automated underwriting, and predictive analytics.

Information Technology Act 2000 — Liability and Content

Section 43A imposes liability for failure to protect sensitive personal data (applicable until fully superseded by DPDP). Section 66 covers computer-related offences — relevant to AI systems that cause damage. Section 69A empowers government to block content — applicable to AI-generated content. The Intermediary Guidelines (IT Rules 2021) impose obligations on platforms that host AI-generated content, including labelling requirements and takedown obligations for deepfakes.

Consumer Protection Act 2019 — AI Product Liability

AI-driven products and services that cause harm to consumers are subject to product liability under the Consumer Protection Act. The Central Consumer Protection Authority (CCPA) can issue directions against unfair trade practices involving AI. AI-driven pricing algorithms that engage in price gouging, AI recommendation systems that mislead consumers, and AI-driven services that fail to perform as advertised all create CPA liability.

Indian Contract Act 1872 — AI in Contract Formation

AI systems that negotiate, form, or execute contracts raise questions under the Indian Contract Act. An AI chatbot that makes a contractual commitment on behalf of a company creates a binding obligation if the customer reasonably relied on it. Companies deploying transactional AI must ensure their AI systems operate within authorised parameters and that contractual terms are clear.

Government Direction

MeitY Advisory and Its Implications

The MeitY advisory of March 2024 was the first direct regulatory intervention on AI by the Indian government. It directed that AI platforms must label AI-generated content to prevent misinformation; that AI models deployed to Indian users must not generate content that violates Indian law; and that under-tested or unreliable AI models must either obtain government permission before deployment or carry clear labelling of their limitations.

The pre-deployment permission requirement generated significant industry pushback. It was subsequently clarified that the requirement applies only to AI models deployed without adequate labelling — not to all AI models. The practical effect is a labelling mandate: AI platforms must clearly indicate where content is AI-generated and where AI models have known limitations.

The advisory is non-binding — it does not carry the force of law and there is no penalty provision. However, it signals three regulatory priorities that any future legislation is likely to incorporate: (1) AI content labelling and provenance tracking; (2) safety testing before deployment; and (3) accountability for AI-generated content that violates existing laws. Companies that align with these three priorities now will be better positioned for compliance when they become legally mandatory.

Legislative Forecast

Expected Provisions — Based on Global Trends

India’s AI legislation, whenever it arrives, will draw from global precedents while adapting to Indian regulatory architecture. Based on the EU AI Act, US executive orders, China’s AI regulations, and India’s own policy documents, the following provisions are expected:

Risk-based classification of AI systems. High-risk categories will likely include: AI in healthcare diagnosis, AI in financial services (credit, insurance, investment), AI in employment decisions, AI in law enforcement, AI in education assessment, and AI in critical infrastructure. High-risk AI systems will face mandatory requirements — transparency, explainability, human oversight, accuracy testing, and bias auditing.

AI transparency and labelling requirements. AI-generated content (text, images, audio, video) will require machine-readable labels indicating AI generation. Deepfake-specific provisions are expected, building on the existing IT Act framework. AI systems interacting with humans (chatbots, virtual assistants) will be required to disclose their AI nature.
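What a machine-readable label might contain can be sketched concretely. The field names below are illustrative assumptions, not a prescribed Indian standard; the final rules may instead mandate an established provenance scheme such as C2PA-style manifests.

```python
import json
from datetime import datetime, timezone

def make_ai_content_label(generator: str, content_type: str) -> str:
    """Build a minimal machine-readable provenance label for AI output.

    Field names are illustrative, not taken from any notified standard;
    align them with whatever schema the final legislation adopts.
    """
    label = {
        "ai_generated": True,
        "generator": generator,        # model or platform name
        "content_type": content_type,  # "text" | "image" | "audio" | "video"
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Attach the label to content metadata at generation time.
print(make_ai_content_label("example-llm-v1", "text"))
```

In practice such a label would travel with the content — embedded in image metadata, HTTP headers, or a sidecar manifest — so that downstream platforms can detect and display the AI-generated marker automatically.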

Accountability frameworks. AI deployers will bear primary liability for harm caused by AI systems. This includes: liability for biased AI decisions (hiring, lending, insurance), liability for AI errors in high-stakes domains (healthcare, autonomous vehicles), and liability for AI-generated content that violates existing laws. The liability framework may follow a negligence standard (failure to implement reasonable AI governance) rather than strict liability.

Sectoral enforcement. Rather than creating a new AI regulator, India is likely to empower existing sectoral regulators to enforce AI governance within their domains. RBI will regulate banking AI, SEBI will regulate financial markets AI, IRDAI will regulate insurance AI, and the Data Protection Board will handle AI-related data protection violations. A coordinating body — possibly under MeitY — may set horizontal standards.

Sector-Specific Compliance

Sectoral AI Regulations Already in Force

RBI — Banking and Finance

IT Governance Direction (AI risk management), Fair Practices Code (explainable AI credit decisions), Digital Lending Guidelines (AI disclosure), Outsourcing Direction (vendor AI governance)

SEBI — Securities Markets

Algorithmic trading regulations (pre-approval, risk controls), AI in market surveillance, forthcoming framework on AI advisory services, disclosure requirements for AI-driven investment products

IRDAI — Insurance

Guidelines on AI underwriting models, telematics data governance, AI claims processing transparency, policyholder disclosure requirements for AI-driven decisions

TRAI — Telecom

Recommendations on AI in network management, AI-driven customer service standards, data handling in AI-powered telecom services

Action Plan

Corporate Preparation Roadmap

01

AI System Inventory and Risk Classification

Catalogue every AI system deployed across the organisation. Classify each by risk level (high, medium, low) based on: the domain of application (healthcare, finance, employment = high risk), the type of decision (automated decisions affecting individuals = high risk), and the data sensitivity (personal data, children’s data, health data = elevated risk). This inventory becomes the foundation for all AI governance activities.
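The classification rubric above can be expressed as a simple, auditable rule. This is a minimal sketch of one possible inventory record and classifier; the domain and data-category names are assumptions for illustration, and real rubrics will need finer gradations.

```python
from dataclasses import dataclass

# Illustrative rubric following the criteria above: high-risk domains,
# automated decisions affecting individuals, and sensitive data.
HIGH_RISK_DOMAINS = {"healthcare", "finance", "employment"}
SENSITIVE_DATA = {"personal", "children", "health"}

@dataclass
class AISystem:
    name: str
    domain: str
    automated_decisions: bool   # decisions affecting individuals without human review
    data_categories: set[str]

def classify(system: AISystem) -> str:
    """Return 'high', 'medium', or 'low' per the inventory rubric above."""
    if system.domain in HIGH_RISK_DOMAINS or system.automated_decisions:
        return "high"
    if system.data_categories & SENSITIVE_DATA:
        return "medium"
    return "low"

# Hypothetical inventory entries:
inventory = [
    AISystem("credit-scoring", "finance", True, {"personal"}),
    AISystem("doc-search", "internal-tools", False, set()),
]
for s in inventory:
    print(s.name, classify(s))
```

Encoding the rubric this way makes classification repeatable and reviewable: the same system always receives the same rating, and the rule itself can be shown to a regulator or auditor.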

02

DPDP Compliance for AI Systems

Every AI system processing personal data must comply with the DPDP Act. Map data flows through AI systems. Ensure consent mechanisms cover AI processing purposes specifically. Implement Section 11 human review for automated decisions. Document lawful bases for AI training data. This is not future preparation — it is a current legal obligation.

03

AI Governance Framework

Establish an AI governance framework covering: AI ethics principles, model development standards, bias testing requirements, validation and deployment protocols, ongoing monitoring, incident response, and model retirement. Assign AI governance ownership — whether to a dedicated AI ethics officer, the DPO, or a cross-functional governance committee.

04

Sectoral Compliance Mapping

For regulated entities (banks, NBFCs, insurers, listed companies), map every AI system against applicable sectoral regulations. Ensure that each AI system meets the specific requirements of the relevant regulator. Maintain separate compliance documentation for each sectoral framework. Engage with regulators proactively on AI governance.

05

Documentation and Audit Trail

For every AI system, maintain: a model card (purpose, training data, performance metrics, limitations, bias testing), deployment approval records, monitoring logs (accuracy, drift, fairness metrics), incident reports, and human oversight records. This documentation serves both regulatory compliance and legal defence. Companies with comprehensive AI documentation will be better positioned than those without — regardless of what specific legislation requires.
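A model card need not be elaborate to be useful. The sketch below mirrors the documentation fields listed above as a structured record; every name and value is a hypothetical example, and the schema should be adapted to the requirements of the relevant sectoral regulator.

```python
import json

# Minimal model-card record mirroring the fields above.
# All names and values are hypothetical examples.
model_card = {
    "model": "loan-default-predictor",
    "purpose": "Predict default risk for retail loan applications",
    "training_data": "Internal loan book, 2019-2024, India only",
    "performance": {"auc": 0.82},
    "limitations": ["Not validated for applicants under 21"],
    "bias_testing": {
        "protected_attributes": ["gender", "age_band"],
        "last_tested": "2026-01-15",
    },
    "human_oversight": "Credit officer reviews all declined applications",
}

# Persist alongside deployment approvals and monitoring logs.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

Keeping the card as structured data rather than free text means it can be versioned with the model, validated against a schema, and produced quickly in response to a regulatory query.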

Frequently Asked Questions

FAQs — India AI Regulation

Does India currently have a dedicated AI law?

No. As of April 2026, India does not have standalone AI legislation. AI is currently governed through a patchwork of existing laws: the Information Technology Act 2000 (which covers computer-related offences and intermediary liability), the DPDP Act 2023 (which governs personal data processing, including by AI systems), sectoral regulations from RBI, SEBI, and IRDAI (which apply to AI in their respective sectors), and the Consumer Protection Act 2019 (which applies to AI-driven product and service defects). The MeitY advisory of March 2024 on AI governance was a non-binding policy direction, not legislation. A comprehensive AI governance bill is expected but has not been tabled.

What did the MeitY AI advisory of March 2024 say?

The Ministry of Electronics and Information Technology issued an advisory in March 2024 directing that AI platforms and large language models (LLMs) deployed in India must (a) label AI-generated content to prevent misinformation, (b) obtain government approval before deploying under-tested or unreliable AI models to Indian users, and (c) ensure that AI outputs do not violate existing laws. The advisory was initially controversial for the pre-deployment approval requirement, and was subsequently revised to apply only to AI models that are deployed without clear labelling of their limitations. The advisory is not legally binding but signals the government’s regulatory intent.

Will India follow the EU AI Act risk-based model?

India is likely to adopt a modified risk-based approach. The EU AI Act classifies AI systems into four risk tiers (unacceptable, high, limited, minimal) with corresponding obligations. India’s approach is expected to be less prescriptive — focusing on high-risk AI applications (healthcare, financial services, criminal justice, critical infrastructure) while adopting lighter-touch oversight for general-purpose AI. NITI Aayog’s Responsible AI principles and the proposed Digital India Act both suggest a sectoral approach in which existing regulators (RBI for banking AI, SEBI for financial markets AI, IRDAI for insurance AI) take the lead in their sectors, with a horizontal framework setting baseline requirements.

Which Indian regulators already regulate AI in their sectors?

Multiple sectoral regulators have already issued directions applicable to AI: (a) RBI — Master Direction on IT Governance covers AI in banking; Fair Practices Code requires explainability in AI credit decisions; Digital Lending Guidelines regulate AI-driven lending; (b) SEBI — regulations on algorithmic trading, AI in market surveillance, and forthcoming framework on AI in advisory services; (c) IRDAI — guidelines on AI-driven underwriting and claims processing, telematics data usage; (d) TRAI — recommendations on AI in telecom network management; (e) FSSAI — framework for AI in food safety inspection. Companies deploying AI must comply with applicable sectoral regulations even in the absence of a horizontal AI law.

How should Indian companies prepare for AI regulation?

Companies should prepare by: (1) inventorying all AI systems deployed and classifying them by risk level; (2) implementing AI governance frameworks covering model development, validation, deployment, monitoring, and retirement; (3) ensuring DPDP compliance for AI systems processing personal data; (4) complying with applicable sectoral regulations (RBI, SEBI, IRDAI); (5) adopting responsible AI principles — fairness, transparency, accountability, safety, privacy; (6) maintaining documentation including model cards, bias testing results, and impact assessments; (7) establishing human oversight mechanisms for high-risk AI decisions; and (8) training employees on AI ethics and compliance. Companies that build governance infrastructure now will be ahead when legislation arrives.

What AI applications might be prohibited or restricted in India?

Based on global trends and Indian policy directions, the following AI applications are likely to face prohibition or severe restriction: social scoring systems (assigning scores to citizens based on social behaviour); real-time biometric identification in public spaces (without judicial authorisation); AI-generated deepfakes for electoral manipulation (already partially covered under IT Act); AI systems that exploit cognitive vulnerabilities of specific groups; and subliminal AI manipulation techniques. AI in criminal sentencing and law enforcement may face heightened scrutiny and mandatory human oversight requirements. The exact prohibitions will depend on the final legislation, but companies deploying AI in these sensitive areas should exercise extreme caution.

Does the DPDP Act already regulate AI?

The DPDP Act 2023 regulates AI indirectly but significantly. Section 4 (lawful purpose) requires that AI data processing have a lawful basis. Section 5 (notice) requires disclosure of AI processing in consent notices. Section 6 (consent) requires specific consent for AI-driven profiling. Section 8 (security) requires reasonable security for AI-processed data. Section 9 (children) prohibits AI-driven tracking and behavioural monitoring of minors. Section 11 (automated decisions) gives individuals the right to challenge AI-only decisions. For companies deploying AI that processes personal data — which is most enterprise AI — the DPDP Act is effectively an AI regulation already in force.
Related Resources
RBI AI Guidelines for Banks · DPDP Consent Management · Significant Data Fiduciary · DPB Penalty Orders

AI Governance Advisory for Indian Corporates

Unified Chambers advises companies on AI governance frameworks, DPDP compliance for AI systems, and sectoral AI regulation. Advocate Subodh Bajpai available for boardroom briefings.

WhatsApp Now · +91 84008 60008
More on DPDP
DPDP Penalties Guide · Compliance Checklist · Data Breach 72-Hour Rule · DPDP vs GDPR · DPDP Lawyer — Overview · DPDP Compliance Guide · Data Protection Board