AI Regulation in India: What to Expect in 2026
Indian AI regulation is no longer speculative: it is evolving rapidly, with 2025 a pivotal year marked by the November release of the India AI Governance Guidelines by MeitY (the Ministry of Electronics and Information Technology). AI has gained prominence not just among businesses but also among the millions of users incorporating it into their daily lives. AI systems are built on complex models, and how those models arrive at specific outcomes is not easily explained. This raises questions about the reliability of AI-generated output.
These issues are particularly relevant in regulated sectors, where AI systems are increasingly used as part of decision-making processes rather than being confined to internal productivity functions. In such deployments, AI-assisted outputs may influence outcomes with legal or financial consequences, engaging compliance obligations under sector-specific regulations as well as the Information Technology Act, 2000 (IT Act) and, where personal data is involved, the Digital Personal Data Protection Act, 2023.
Separately, the development and operation of AI models rely on large volumes of training data, which raises legal issues relating to the lawful sourcing of data, licensing terms, authorship, and ownership under the Copyright Act, 1957. In addition, the generation and dissemination of AI-generated content, including synthetic media, have brought increased focus on content provenance and the allocation of liability for unlawful or harmful content under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
For banks and fintech companies, recent regulatory guidance shows that authorities are no longer looking only at whether an algorithm produces accurate results; they are increasingly interested in how these systems are designed, monitored, and controlled within an organisation. In August 2025, the Reserve Bank of India released the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report, the first sector-specific roadmap for AI adoption in the financial sector. The framework identifies model risk and governance as core concerns, linking AI risks to the need for monitoring and assurance mechanisms across the AI lifecycle. Regulated entities will therefore need governance frameworks that support transparency, oversight, and auditability of AI systems in addition to technical performance measures.
The India AI Governance Guidelines consider many of these risks and propose seven principles for AI governance: (1) trust as the foundation, (2) a people-first approach, (3) prioritising innovation over restraint, (4) promoting inclusion and avoiding discrimination, (5) accountability, (6) transparency and disclosures, and (7) safety, security, and sustainability of systems. The guidelines also offer some indication of what AI regulation in 2026 may look like:
No standalone AI law
India is unlikely to introduce a single, overarching AI statute. Instead, existing frameworks, such as the Information Technology Act, are likely to be amended to address risks arising from large-scale use of AI.
Changes in the copyright regime
Industry estimates suggest that more than 60 percent of Indian enterprises experimenting with generative AI rely on third-party or scraped data. This is prompting regulators to tighten policies and explore avenues to mitigate ownership risks around AI usage. The Department for Promotion of Industry and Internal Trade (DPIIT) has already released a working paper proposing a mandatory licensing and royalty regime for text and data mining for AI training. If this comes into force, enterprises may need to disclose sufficiently detailed summaries of the datasets they use to a centralised royalty collection agency, and creators may lose the ability to exclude their works from AI training.
Harmful deepfakes
Content authentication measures and provenance checks for AI-generated content are likely to be mandated to address this risk. Amendments along these lines have been proposed to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Guidelines).
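To make the idea of a provenance check concrete, the sketch below shows the core mechanism behind content authentication: binding a cryptographic tag to a piece of media so that any later alteration is detectable. This is a minimal illustration using only Python's standard library; real provenance systems follow standards such as C2PA and depend on proper key management, neither of which is shown here, and the signing key is hypothetical.

```python
# Illustrative only: a toy provenance check built from Python's
# standard library. Real deployments would use a provenance standard
# such as C2PA rather than ad hoc HMAC tags.
import hashlib
import hmac

SIGNING_KEY = b"example-key"  # hypothetical; real keys live in a KMS/HSM

def sign_content(content: bytes) -> str:
    """Produce a provenance tag binding the signer to this exact content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any alteration to the content invalidates it."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

media = b"...synthetic media payload..."
tag = sign_content(media)
assert verify_content(media, tag)             # untampered content passes
assert not verify_content(media + b"x", tag)  # any edit is detectable
```

The regulatory point is the workflow rather than the cryptography: for a platform to run provenance checks, the tag must travel with the content and there must be a trustworthy way to obtain the verification key.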
Accountability mechanisms
Measures such as maintaining an internal AI policy, mandatory publication of red-teaming reports or impact assessments, and procuring certifications from auditors or standard-setting bodies may be enforced. Algorithmic decision-making will increasingly be evaluated not only on outcomes but on governance. Accuracy alone may not be sufficient: transparency, auditability, and defensible decision logic may also be required, as the sketch below illustrates.
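As a rough illustration of what auditability could mean in practice, the sketch below records a single AI-assisted decision with enough context to reconstruct it later: model identity and version, a hash of the inputs, the output, and the human reviewer, if any. All field names are hypothetical, not drawn from the guidelines or any regulation; they simply show the kind of record an auditor might expect to see.

```python
# Illustrative only: a minimal audit record for an AI-assisted decision.
# Field names are hypothetical; real systems would add access controls,
# retention rules, and tamper-evident storage.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str,
                 inputs: dict, output: str,
                 human_reviewer: str | None) -> dict:
    """Capture what was decided, by which model, on what inputs, and when."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store raw inputs, which may contain personal
        # data subject to the Digital Personal Data Protection Act, 2023.
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,  # None if fully automated
    }

record = audit_record("credit-scorer", "2.3.1",
                      {"applicant_id": "A-104", "income_band": "B"},
                      "approve", human_reviewer="officer-17")
print(json.dumps(record, indent=2))
```

Hashing the inputs keeps the log reviewable without turning it into a second store of personal data; the trade-off is that the original inputs must remain retrievable elsewhere if a decision is challenged.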
Conclusion
Taken together, these developments indicate a gradual but deliberate shift from principle-based guidance to enforceable governance expectations. While framed as guidelines today, they signal the direction of future supervisory scrutiny, particularly for organisations deploying AI at scale or in sensitive use cases.
The broader insight for founders, boards, and senior leadership is this: India is not banning AI; it is setting the conditions under which AI will be trusted. In 2026, regulatory compliance will depend less on whether AI is deployed and more on how it is trained, governed, documented, and explained. AI strategy can no longer sit exclusively with technology teams; it requires legal, risk, compliance, and boardroom engagement.