
AIACT.IN

India's inaugural private AI regulation proposal, authored by Abhivardhan.

What is aiact.in?

AIACT.IN (also known as the Draft Artificial Intelligence (Development & Regulation) Bill, 2023) is India's first private proposal for the regulation of artificial intelligence technologies.

This bill was drafted and proposed by our Founder, Abhivardhan.

In addition, a New Artificial Intelligence Strategy for India, 2023 was also proposed, which was authored by Abhivardhan & Akash Manwani.

  • Version 3.0 Quick Explainer [Read]

    Version 3.0, June 17, 2024 [Read]

    Version 2.0, March 14, 2024 [Read]

    Version 1.0 (Original), November 7, 2023 [Read]


Draft Artificial Intelligence (Development & Regulation) Act

Version 4.0

November 8, 2024

Author: Abhivardhan, Managing Partner, Indic Pacific Legal Research

 

AIACT.IN Video Explainer

Hang on! If you think AIACT.IN is a long document, here is a detailed video explainer of AIACT.IN Version 3 by the author of this draft, Abhivardhan.

You can also access a shorter explainer of Version 3 here.


Chapter I: PRELIMINARY


Section 1 - Short Title and Commencement


(1) This Act may be called the Artificial Intelligence (Development & Regulation) Act, 2023.

(2) It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint and different dates may be appointed for different provisions of this Act and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision.


 

Section 2 – Definitions


[Please note: we have not provided all the definitions that may be required in this bill, only those most essential to signifying its legislative intent.]


In this Bill, unless the context otherwise requires—


(a)   “Artificial Intelligence”, “AI”, “AI technology”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such a system constitutes a diverse class of technology that includes various sub-categories of technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.

(b)   “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an artificial intelligence technology, including but not limited to text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application;

(c)    “Algorithmic Bias” includes –

(i)    the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and

(ii)   the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to:

(a)    programming errors;

(b)   flawed algorithmic logic; and

(c)    deficiencies in model training and validation, including but not limited to:

(1)   the incomplete or deficient data used for model training;

(d)   “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;

(e)    “Business end-user” means an end-user that is -

(i)             engaged in a commercial or professional activity and uses an AI system in the course of such activity; or

(ii)            a government agency or public authority that uses an AI system in the performance of its official functions or provision of public services.

(f)    “Combinations of intellectual property protections” means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;

(g)   “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including:

(i)    The source data, models, and algorithms used to generate the content;

(ii)   The individuals or entities involved in the creation, modification, and distribution of the content;

(iii) The date, time, and location of content creation and any subsequent modifications;

(iv)  The intended purpose, context, and target audience of the content;

(v)   Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and

(vi)  The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability.
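[Illustrative note: the provenance elements enumerated in clause (g) lend themselves to a structured record attached to each item of AI-generated content. The following is a minimal, non-normative sketch in Python; the class and field names (ProvenanceRecord, source_models, custody_chain, and so on) are the editor's assumptions and are not terms defined by this bill.]

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProvenanceRecord:
    # (i)   Source data, models, and algorithms used to generate the content.
    source_models: list[str]
    # (ii)  Individuals or entities involved in creation, modification, distribution.
    contributors: list[str]
    # (iii) Date, time, and location of creation and of subsequent modifications.
    created_at: datetime
    location: str
    modified_at: list[datetime] = field(default_factory=list)
    # (iv)  Intended purpose, context, and target audience of the content.
    intended_purpose: str = ""
    # (v)   External content, citations, or references, each with its own provenance.
    external_references: list["ProvenanceRecord"] = field(default_factory=list)
    # (vi)  Chain of custody: each transformation appends an entry, forming the
    #       content and citation/reference loop that enables traceability.
    custody_chain: list[str] = field(default_factory=list)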

(h)   “Corporate Governance” means the system of rules, practices, and processes by which an organization is directed and controlled, encompassing the mechanisms through which companies and organisations ensure accountability, fairness, and transparency in their relationships with stakeholders, including but not limited to employees, shareholders, customers, and the public.

(i)    “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;

(j)    “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;

(k)   “Data portability” means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where:

(i)    The personal data has been provided to the data fiduciary by the data principal;

(ii)   The processing is based on consent or the performance of a contract; and

(iii)  The processing is carried out by automated means.
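[Illustrative note: a minimal sketch of how the three conditions in clause (k) might gate an export in a “structured, commonly used, and machine-readable format” (here, JSON). The function name and dictionary keys are hypothetical, introduced only for illustration.]

import json
from typing import Optional

def export_portable_data(record: dict) -> Optional[str]:
    provided = record.get("provided_by_principal", False)    # condition (i)
    lawful = record.get("basis") in ("consent", "contract")  # condition (ii)
    automated = record.get("automated_processing", False)    # condition (iii)
    if provided and lawful and automated:
        # All three clause (k) conditions hold: release the data in a
        # structured, machine-readable format for onward transmission.
        return json.dumps(record.get("personal_data", {}), indent=2)
    return None  # portability does not apply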

(l)    “Data Principal” means the individual to whom the personal data relates and where such individual is—

(i)             a child, includes the parents or lawful guardian of such a child;

(ii)            a person with disability, includes her lawful guardian, acting on her behalf;

(m)  “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;

(n)   “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;

(o)   “Digital personal data” means personal data in digital form;

(p)   “Digital Public Infrastructure” or “DPI” means the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including but not limited to:

(i)    Digital identity systems that provide secure and verifiable identification for individuals and businesses;

(ii)   Digital payment systems that facilitate efficient, transparent, and inclusive financial transactions;

(iii)  Data exchange platforms that enable secure and interoperable sharing of data across various sectors and stakeholders;

(iv)  Digital registries and databases that serve as authoritative sources of information for various public and private services;

(v)   Open application programming interfaces (APIs) and standards that promote innovation, interoperability, and collaboration among different actors in the digital ecosystem.

(q)   “End-user” means -

(i)     an individual who ultimately uses or is intended to ultimately use an AI system, directly or indirectly, for personal, domestic or household purposes; or

(ii)    an entity, including a business or organization, that uses an AI system to provide or offer a product, service, or experience to individuals, whether for a fee or free of charge.

(r)    “Knowledge asset” includes, but is not limited to:

(i)    Intellectual property rights including but not limited to patents, copyrights, trademarks, and industrial designs;

(ii)   Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards;

(iii)  Tacit knowledge and expertise residing within the organization’s human capital, such as specialized skills, experiences, and know-how;

(iv)  Organizational processes, systems, and methodologies that enable the effective capture, organization, and utilization of knowledge;

(v)   Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences;

(vi)  Knowledge derived from data analysis, including patterns, trends, and predictive models; and

(vii) Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.

(s)    “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;

(t)    “IAIC” means Indian Artificial Intelligence Council, a statutory and regulatory body established to oversee the development & regulation of artificial intelligence systems and coordinate artificial intelligence governance across government bodies, ministries, and departments;

(u)   “Inherent Purpose”, and “Intended Purpose” means the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, and that it encompasses the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve;

(v)   “Insurance Policy” means measures and requirements concerning insurance for research & development, production, and implementation of artificial intelligence technologies;

(w)  “Interoperability considerations” means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments, which include:

(i)    Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement;

(ii)   Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and public good;

(iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems;

(iv)  Addressing the legal and ethical implications of using copyright-protected works including but not limited to music, images, and text, in the training of AI models, and ensuring that such use is consistent with existing frameworks of intellectual property rights; and

(v)   Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications, while respecting intellectual property rights and fostering a vibrant and competitive AI ecosystem.

(x)   “Open Source Software” means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.

(y)   “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical, commercial & risk-based features, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;

(z)   “Person” includes—

(i)             an individual;

(ii)            a Hindu undivided family;

(iii)          a company;

(iv)           a firm;

(v)            an association of persons or a body of individuals, whether incorporated or not;

(vi)           the State; and

(vii)         every artificial juristic person, not falling within any of the preceding sub-clauses;

(aa) “Post-Deployment Monitoring” means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;

(bb) “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;

(cc) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;

(dd) “Systemically Significant Digital Enterprise” (SSDE) means an entity classified as such under Chapter II of the Digital Competition Act, 2024[1], based on:

(i)    The quantitative and qualitative criteria specified in Section 5 of the Digital Competition Act, 2024; or

(ii)   The designation by the Competition Commission of India under Section 6 of the Digital Competition Act, 2024, due to the entity's significant presence in the relevant core digital service.

(ee) “Sociotechnical” means the recognition that artificial intelligence systems are not merely technical artifacts but are embedded within broader social contexts, organizational structures, and human-technology interactions, necessitating the consideration and harmonization of both social and technical aspects to ensure responsible and effective AI governance;

(ff)   “State” shall be construed as the State defined under Article 12 of the Constitution of India;

(gg) “Strategic sector” means a strategic sector as defined in the Foreign Exchange Management (Overseas Investment) Directions, 2022, and includes any other sector or sub-sector as deemed fit by the Central Government;

(hh) “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;

(ii)   “testing data” means data used for providing an independent evaluation of the artificial intelligence system subject to training and validation to confirm the expected performance of that artificial intelligence technology before its placing on the market or putting into service;

(jj)   “use case” means a specific application of an artificial intelligence technology, subject to their inherent purpose, to solve a particular problem or achieve a desired outcome;

(kk) “Whole-of-Government Approach” means a collaborative and integrated method of governance where all government entities, including ministries, departments, and agencies, work in a coordinated manner to achieve unified policy objectives, optimize resource utilization, and deliver services effectively to the public.


[1] It is assumed that the Draft Digital Competition Act, 2024 proposed to the Ministry of Corporate Affairs in March 2024 is in force. 


Chapter II: CATEGORIZATION AND PROHIBITION

Section 4 – Conceptual Methods of Classification

(1)    These methods as designated in clause (i) of sub-section (1) of Section 3 categorize artificial intelligence technologies through a conceptual assessment of their utilisation, development, maintenance, and proliferation to examine & recognise their inherent purpose. These methods are further categorised as –

(i)    Issue-to-Issue Concept Classification (IICC) as described in sub-section (2);

(ii)   Ethics-Based Concept Classification (EBCC) as described in sub-section (3);

(iii) Phenomena-Based Concept Classification (PBCC) as described in sub-section (4);

(iv)  Anthropomorphism-Based Concept Classification (ABCC) as described in sub-section (5).

 

(2)    Issue-to-Issue Concept Classification (IICC) involves determining the inherent purpose of artificial intelligence technologies on a case-to-case basis, examining & recognising that purpose on the basis of these factors of assessment:

(i)    Utilisation: Assessing the specific use cases and applications of the AI technology in various domains.

(ii)   Development: Evaluating the design, training, and deployment processes of the AI technology.

(iii) Maintenance: Examining the ongoing support, updates, and modifications made to the AI technology.

(iv)  Proliferation: Analysing the dissemination and adoption of the AI technology across different sectors and user groups.

 

Illustrations

 

(1) An AI system designed for medical diagnostics is classified based on its purpose to enhance patient outcomes. For instance, if an AI software assists doctors in diagnosing diseases more accurately, it is classified under medical AI applications.

(2) An AI system for financial trading is classified based on its purpose to optimize investment strategies. For example, if an AI-driven algorithm analyses market data to recommend stock trades, it is classified under financial AI applications.

 

(3)    Ethics-Based Concept Classification (EBCC) involves recognising the ethics-based relationship of artificial intelligence technologies in sector-specific & sector-neutral contexts, examining their inherent purpose on the basis of these factors:

(i)    Utilisation: Evaluating how AI technology impacts ethical principles during its use in specific sectors or across multiple domains.

(ii)   Development: Assessing whether ethical considerations were integrated during the design, training, and deployment phases of the AI technology.

(iii) Maintenance: Examining how ethical responsibilities are upheld during updates and modifications to the AI system.

(iv)  Proliferation: Analyzing how the widespread adoption of the AI system affects ethical standards across sectors and user groups.

 

Illustration

 

An AI for social media content moderation is assessed based on fairness and bias prevention. For example, if an AI filters hate speech and misinformation on social media platforms, it is classified under content moderation AI with an emphasis on ensuring unbiased and fair treatment of all users’ content.

 

(4)    Phenomena-Based Concept Classification (PBCC) involves the method of addressing rights-based issues associated with the use and dissemination of artificial intelligence technologies to examine & recognise their inherent purpose on the basis of these factors:

(i)    Utilisation: Assessing how the AI system affects individual or collective rights during its use in various domains.

(ii)   Development: Evaluating whether AI systems incorporate protections for rights recognized under Indian law during their design, training, and deployment phases, considering legal, constitutional, and commercial rights.

(iii) Maintenance: Reviewing how ongoing support and updates to the AI system protect user rights.

(iv)  Proliferation: Analysing the rights-based implications of AI technology dissemination and adoption across different sectors and user groups.

 

Illustrations

 

(1) An AI system that analyses personal data for targeted advertising is classified based on its compliance with data protection rights. For example, an AI that personalizes ads based on user behaviour is classified under advertising AI with data privacy considerations.

(2) An AI used in autonomous vehicles is classified based on its implications for road safety and user rights. For instance, an AI that controls self-driving cars is classified under automotive AI with a focus on safety and user rights.

 

 

(5)    Anthropomorphism-Based Concept Classification (ABCC) involves the method of evaluating scenarios where AI systems ordinarily simulate, imitate, replicate, or emulate human attributes, which include: 

(i)    Autonomy: The ability to operate and make decisions independently, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model autonomous decision-making processes using computational methods;

·       Imitation: AI systems learn from and reproduce human-like autonomous behaviours;

·       Replication: AI systems accurately reproduce specific human-like autonomous functions;

·       Emulation: AI systems replicate and potentially enhance human-like autonomy;

Illustration

An AI-powered drone delivery system that navigates through urban environments, avoiding obstacles and adapting its route based on real-time traffic conditions to efficiently deliver packages without human intervention.

 

(ii)   Perception: The ability to interpret and understand sensory information from the environment, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like perception using computational methods;

·       Imitation: AI systems learn from and reproduce specific human-like perceptual processes;

·       Replication: AI systems accurately reproduce specific human-like perceptual abilities;

Illustration

A service robot in a hotel uses computer vision and natural language processing to recognize and greet guests by name, interpret their facial expressions and tone of voice to gauge emotions, and respond appropriately to verbal requests.

 

(iii) Reasoning: The ability to process information, draw conclusions, and solve problems, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like reasoning using computational methods;

·       Imitation: AI systems learn from and reproduce specific human reasoning patterns;

·       Replication: AI systems accurately reproduce specific human-like reasoning abilities;

·       Emulation: AI systems surpass specific human-like reasoning abilities;

Illustration

A medical diagnosis AI system analyses a patient’s symptoms, medical history, test results and imaging scans. It uses this information to generate a list of probable diagnoses, suggest additional tests to rule out possibilities, and recommend an optimal treatment plan.

 

(iv)  Interaction: The ability to communicate and engage with humans or other AI systems, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like interaction using computational methods;

·       Imitation: AI systems learn from and reproduce specific human interaction patterns;

·       Replication: AI systems accurately reproduce specific human-like interaction abilities;

·       Emulation: AI systems enhance human-like interaction;

Illustration

An AI-powered virtual assistant engages in natural conversations with users, understanding context and nuance. It asks clarifying questions when needed, provides relevant information or executes tasks, and even interjects with suggestions or prompts.

 

(v)   Adaptation: The ability to learn from experiences and adjust behaviour accordingly, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like adaptation using computational methods.

·       Imitation: AI systems learn from and reproduce human adaptation behaviours.

·       Replication: AI systems reproduce human-like adaptation abilities, recognizing the inherent complexity.

·       Emulation: AI systems surpass human-like adaptation as an aspirational goal.

Illustration

An AI system for stock trading continuously analyses market trends, world events, and the performance of its own trades. It identifies patterns and correlations, learning which strategies work best in different scenarios. The AI optimizes its trading algorithms and adapts its approach based on accumulated experience, demonstrating adaptive abilities.

 

(vi)  Creativity: The ability to generate novel ideas, solutions, or outputs, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like creativity using computational methods;

·       Imitation: AI systems learn from and reproduce human creative processes;

·       Replication: AI systems accurately reproduce human-like creative abilities, acknowledging the complexity involved;

·       Emulation: AI systems enhance human-like creativity as a forward-looking objective;

Illustration

An AI music composition tool creates an original symphony. Given a theme and emotional tone, it generates unique melodies, harmonies and instrumentation. It iterates and refines the composition based on aesthetic evaluation models, ultimately producing a piece that is distinct from existing music in its training data.

Section 5 – Technical Methods of Classification

(1)   These methods as designated in clause (ii) of sub-section (1) of Section 3 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations, as follows –

(i)    General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);

(ii)   General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);

(iii) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);

 

(2)   General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i)    Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.

(ii)   Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.

(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.

(iv)  Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.

Illustration

An AI system used in healthcare for diagnostics, treatment recommendations, and patient management. This AI consistently performs well in various healthcare settings, adhering to medical standards and providing reliable outcomes. It is characterized by its large scale in handling diverse medical data and serving multiple institutions, its inherent purpose of assisting healthcare professionals in decision-making and care improvement, robust technical architecture and accuracy while adhering to privacy and security standards, and potential limitations in edge cases or rare conditions.

(3)   General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i)             Scale: The ability to address specific short-term needs or exploratory applications within relevant sectors at a medium scale.

(ii)            Inherent Purpose: Providing targeted solutions for emerging or temporary use cases, with the potential for future adaptation and expansion.

(iii)          Technical Features: Modular and adaptable architectures enabling rapid development and deployment in response to evolving requirements.

(iv)           Technical Limitations: Uncertainties regarding long-term viability, scalability, and compliance with changing industry standards and regulations.

Illustration

An AI system used in experimental smart city projects for traffic management, pollution monitoring, and public safety. Deployed at a medium scale in specific locations for limited durations, its inherent purpose is testing and validating AI feasibility and effectiveness in smart city applications. It features a modular, adaptable technical architecture to accommodate changing requirements and infrastructure integration, but faces potential limitations in scalability, interoperability, and long-term performance due to the experimental nature.

(4)   Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) are classified based on a technical method that evaluates the following factors:

(i)             Scale: The ability to address specific, well-defined problems or serve as proof-of-concept implementations at a small scale.

(ii)            Inherent Purpose: Providing specialized solutions for individual use cases or validating AI technique feasibility in controlled environments.

(iii)           Technical Features: Focused and optimized architectures tailored to the specific requirements of the standalone use case or test case.

(iv)           Technical Limitations: Constraints on generalizability, difficulties scaling beyond the initial use case, and challenges ensuring real-world robustness and reliability.

Illustration

An AI chatbot used by a company for customer service during a product launch. As a small-scale standalone application, its inherent purpose is providing automated support for a specific product or service. It employs a focused, optimized technical architecture for handling product-related queries and interactions, but faces limitations in handling queries outside the predefined scope or adapting to new products without significant modifications.
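[Illustrative note: the Section 5 taxonomy and its four assessment factors can be read as a simple classification schema. The sketch below, in Python, restates the sub-section (4) chatbot illustration as a record; the names TechnicalClass and TechnicalAssessment are the editor's assumptions, not defined terms of this bill.]

from dataclasses import dataclass
from enum import Enum

class TechnicalClass(Enum):
    GPAIS = "general purpose, multiple stable use cases"       # sub-section (2)
    GPAIU = "general purpose, short-run or unclear use cases"  # sub-section (3)
    SPAI = "specific purpose, standalone use or test cases"    # sub-section (4)

@dataclass
class TechnicalAssessment:
    # The four factors Section 5 applies to every class.
    scale: str
    inherent_purpose: str
    technical_features: str
    technical_limitations: str
    assigned_class: TechnicalClass

# The sub-section (4) illustration, restated as a record.
chatbot = TechnicalAssessment(
    scale="small-scale standalone deployment during a product launch",
    inherent_purpose="automated customer support for a specific product",
    technical_features="focused architecture optimised for product queries",
    technical_limitations="cannot handle queries outside the predefined scope",
    assigned_class=TechnicalClass.SPAI,
)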

Section 6 – Commercial Methods of Classification

(1)   These methods as designated in clause (iii) of sub-section (1) of Section 3 involve the categorisation of commercially produced and disseminated artificial intelligence technologies based on their inherent purpose and primary intended use, considering factors such as:

(i)    The core functionality and technical capabilities of the artificial intelligence technology;

(ii)   The main end-users or business end-users for the artificial intelligence technology, and the size of the user base or market share;

(iii)  The primary markets, sectors, or domains in which the artificial intelligence technology is intended to be applied, and the market influence or dominance in those sectors;

(iv)  The key benefits, outcomes, or results the artificial intelligence technology is designed to deliver, and the potential impact on individuals, businesses, or society;

(v)   The annual turnover or revenue generated by the artificial intelligence technology or the company developing and deploying it;

(vi)  The amount of data collected, processed, or utilized by the artificial intelligence technology, and the level of data integration across different services or platforms; and

(vii) Any other quantitative or qualitative factors that may be prescribed by the Central Government or the Indian Artificial Intelligence Council (IAIC) to assess the significance and impact of the artificial intelligence technology.

 

(2)   Based on an assessment of the factors outlined in sub-section (1), artificial intelligence technologies are classified into the following categories –

(i)    Artificial Intelligence as a Product (AI-Pro), as described in sub-section (3);

(ii)   Artificial Intelligence as a Service (AIaaS), as described in sub-section (4);

(iii) Artificial Intelligence as a Component (AI-Com) which includes artificial intelligence technologies directly integrated into existing products, services & system infrastructure, as described in sub-section (5);

(iv)  Artificial Intelligence as a System (AI-S), which includes layers or interfaces provided in AIaaS that facilitate the integration of the capabilities of artificial intelligence technologies into existing systems, in whole or in part, as described in sub-section (6);

(v)   Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital infrastructure, as described in sub-section (7);

(vi)  Artificial Intelligence for Preview (AI-Pre), as described in sub-section (8);

 

(3)   Artificial Intelligence as a Product (AI-Pro) refers to standalone AI applications or software that are developed and sold as individual products to end-users. These products are designed to perform specific tasks or provide particular services directly to the user;

Illustrations

(1) An AI-powered home assistant device as a product is marketed and sold as a consumer electronic device that provides functionalities like voice recognition, smart home control, and personal assistance.

(2) A commercial software package for predictive analytics is used by businesses to forecast market trends and consumer behaviour.

(4)   Artificial Intelligence as a Service (AIaaS) refers to cloud-based AI solutions that are provided to users on-demand over the internet. Users can access and utilize the capabilities of AI systems without the need to develop or maintain the underlying infrastructure;

Illustrations

(1) A cloud-based machine learning platform offers businesses and developers access to powerful AI tools and frameworks on a subscription basis.

(2) An AI-driven customer service chatbot service that businesses can integrate into their websites to handle customer inquiries and support.

(5)   Artificial Intelligence as a Component (AI-Com) refers to AI technologies that are embedded or integrated into existing products, services, or system infrastructures to enhance their capabilities or performance. In this case, the AI component is not a standalone product but rather a part of a larger system;

 

Illustrations

 

(1) An AI-based recommendation engine integrated into an e-commerce platform to provide personalized shopping suggestions to users.

(2) AI-enhanced cameras in smartphones that utilize machine learning algorithms to improve photo quality and provide features like facial recognition.

 

(6)   Artificial Intelligence as a System (AI-S) refers to end-to-end AI solutions that combine multiple AI components, models, and interfaces. These systems often involve the integration of AI capabilities into existing workflows or the creation of entirely new AI-driven processes in whole or in parts;

 

Illustrations

 

(1) An AI middleware platform that connects various enterprise applications to enhance their functionalities with AI capabilities, such as an AI layer that integrates with CRM systems to provide predictive sales analytics.

(2) An AI system used in smart manufacturing, where AI interfaces integrate with industrial machinery to optimize production processes and maintenance schedules.

 

(7)   Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) refers to the integration of AI technologies into the underlying computing, storage, and network infrastructure to optimize resource allocation, improve efficiency, and enable intelligent automation. This category focuses on the use of AI at the infrastructure level rather than at the application or service level.

 

Illustrations

 

(1) An AI-enabled traffic management system that integrates with city infrastructure to monitor and manage traffic flow, reduce congestion, and optimize public transportation schedules.

(2) AI-powered utilities management systems that are integrated into the energy grid to predict and manage energy consumption, enhancing efficiency and reducing costs.

 

(8)   Artificial Intelligence for Preview (AI-Pre) refers to AI technologies that are made available by companies for testing, experimentation, or early access prior to wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms and infrastructure at various stages of development. AI-Pre technologies are typically characterized by one or more of the following features, including but not limited to:

(i)    The AI technology is made available to a limited set of end users or participants in a preview program;

(ii)   Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality;

(iii)  The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.

(iv)  Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.

(v)   The AI-Pre technology may be provided free of charge, or under a separate pricing model from the company’s standard commercial offerings.

(vi)  After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.

 

Illustration

A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:

(1)   The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.

(2)   The AI system’s capabilities are not yet fully tested, documented or supported, and the company provides no warranties or guarantees.

(3)   The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.

(4)   After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms.
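[Illustrative note: the six commercial categories of Section 6(2) form a closed list, which a registry such as the one contemplated in Section 12 might encode as follows. This Python sketch is non-normative; the enum name and members are the editor's assumptions.]

from enum import Enum

class CommercialClass(Enum):
    AI_PRO = "Artificial Intelligence as a Product"     # sub-section (3)
    AI_AAS = "Artificial Intelligence as a Service"     # sub-section (4)
    AI_COM = "Artificial Intelligence as a Component"   # sub-section (5)
    AI_S = "Artificial Intelligence as a System"        # sub-section (6)
    AI_IAAS = "AI-enabled Infrastructure as a Service"  # sub-section (7)
    AI_PRE = "Artificial Intelligence for Preview"      # sub-section (8)

# Example: the home assistant device of sub-section (3), Illustration (1),
# would be recorded against CommercialClass.AI_PRO.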

Section 7 – Risk-centric Methods of Classification

(1)   These methods as designated in clause (iv) of sub-section (1) of Section 3 classify artificial intelligence technologies based on their outcome and impact-based risks –

(i)             Narrow risk AI systems as described in sub-section (2);

(ii)            Medium risk AI systems as described in sub-section (3);

(iii)           High risk AI systems as described in sub-section (4);

(iv)           Unintended risk AI systems as described in sub-section (5);

 

(2)   Narrow risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s scale, inherent purpose, technical features and limitations:

(i)    Limited scale of utilisation or expected deployment across sectors, domains or user groups, determined by the AI system’s inherent purpose and technical capabilities;

(ii)   Low potential for harm or adverse impact, with minimal severity and a small number of individuals potentially affected, due to the AI system’s technical features and limitations;

(iii) Feasible options for data principals or end-users to opt-out of the outcomes produced by the system;

(iv)  Low vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating risks associated with the use of the system, facilitated by the AI system’s transparency and interpretability arising from its technical architecture;

(v)   Outcomes produced by the system are typically reversible with minimal effort, owing to the AI system’s focused scope and well-defined operational boundaries.

 

Illustration

 

A virtual assistant AI integrated into a smartphone app to provide basic information lookup and task scheduling would be classified as a narrow risk AI system. Its limited scale of deployment on individual devices, low potential for harm beyond minor inconveniences, opt-out feasibility by disabling the virtual assistant, low user vulnerability due to transparency of its capabilities, and easily reversible outcomes through resetting the app, all contribute to its narrow risk designation.

 

(3)   Medium risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s technical features and limitations:

(i)    Potential for moderate harm or adverse impact, with the severity and number of potentially affected individuals or entities being higher than narrow risk systems;

(ii)   Limited feasibility for data principals or end-users to opt-out of, or exercise control over, the outcomes or decisions produced by the system in certain contexts;

(iii)  Moderate vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating the risks associated with the use of the system, due to factors such as information asymmetry or power imbalances;

(iv)  Considerable effort may be required to reverse or remediate the outcomes or decisions produced by the system in certain cases;

(v)   The inherent purpose, scale of utilisation or expected deployment of the system across sectors, domains or user groups shall not be primary determinants of its risk level.

(vi)  The system’s technical architecture, model characteristics, training data quality, decision-making processes, and other technical factors shall be the primary considerations in assessing its risk level.

 

Illustration

 

An AI-powered loan approval system used by a regional bank would likely be designated as a medium risk AI system. While its scale is limited to the bank’s customer base, the potential to deny loans unfairly or exhibit bias in decision-making poses moderate risks. Customers may have limited opt-out options once applying for a loan. Information asymmetry between the bank and customers regarding the AI’s decision processes creates moderate user vulnerability. And reversing an improper loan denial could require considerable effort, all pointing to a medium risk classification focused on the AI’s technical limitations rather than its inherent purpose.

 

(4)   High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:

(i)    Widespread utilisation or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;

(ii)   Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;

(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;

(iv)  High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;

(v)   Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.

(vi)  The high-risk designation shall apply irrespective of the AI system’s scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.

 

Illustration

 

An AI system used to control critical infrastructure like a power grid. Regardless of the system’s specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.

 

(5)   Unintended risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:

(i)    Lack of explicit design intent: The system emerges spontaneously from the complex interactions between its components, models, data, and infrastructure, without being deliberately engineered for a specific purpose.

(ii)   Unpredictable emergence: The system displays novel capabilities, decision-making processes or behavioural patterns that deviate from its original training objectives or intended functionality.

(iii) Uncontrolled evolution: The system continues to learn and evolve in uncontrolled ways after deployment, leading to changes in its behaviour that were not foreseen or accounted for.

(iv)  Inscrutable operation: The internal operations, representations and decision paths of the system become increasingly opaque, hindering interpretability and making it difficult to explain its outputs or behaviours.

 

Illustration

 

An autonomous vehicle navigation system that, through interactions between its various AI components (perception, prediction, path planning), develops unexpected emergent behaviour that was not intended by its designers, potentially leading to accidents.
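[Illustrative note: the following Python sketch condenses the Section 7 risk factors into a single designation routine. It is a deliberate simplification for illustration only; the bill's factors are qualitative and multi-dimensional, and the boolean inputs and function name here are the editor's assumptions.]

from enum import Enum

class RiskLevel(Enum):
    NARROW = "narrow"          # sub-section (2)
    MEDIUM = "medium"          # sub-section (3)
    HIGH = "high"              # sub-section (4)
    UNINTENDED = "unintended"  # sub-section (5); prohibited under Section 8

def designate_risk(emergent_unintended: bool, severe_harm: bool,
                   no_opt_out: bool, high_vulnerability: bool,
                   irreversible: bool) -> RiskLevel:
    # Sub-section (5): systems showing unpredictable emergence and uncontrolled
    # evolution are designated unintended risk, which Section 8 prohibits.
    if emergent_unintended:
        return RiskLevel.UNINTENDED
    # Sub-section (4)(vi): the high-risk designation applies whenever these
    # factors are present, irrespective of scale, purpose, or architecture.
    if severe_harm and no_opt_out and high_vulnerability and irreversible:
        return RiskLevel.HIGH
    # Sub-section (3): moderate harm, limited opt-out, or moderate vulnerability.
    if severe_harm or no_opt_out or high_vulnerability:
        return RiskLevel.MEDIUM
    # Sub-section (2): limited scale, low harm, feasible opt-out, reversible.
    return RiskLevel.NARROW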

 

Section 8 - Prohibition of Unintended Risk AI Systems

The development, deployment, and use of unintended risk AI systems, as classified under sub-section (5) of Section 7, are prohibited.

Section 9 - High-Risk AI Systems in Strategic Sectors


(1)   The Central Government shall designate strategic sectors where the development, deployment, and use of high-risk AI systems shall be subject to sector-specific standards and regulations, based on the risk classification methods outlined in Chapter II of this Act.

(2)   The sector-specific standards and regulations for high-risk AI systems in strategic sectors must address the following aspects:

(i)            Safety: Ensuring that high-risk AI systems operate in a safe and controlled manner, minimizing the potential for harm or unintended consequences to individuals, property, or the environment.

(ii)           Security: Implementing robust security measures to protect high-risk AI systems from unauthorized access, manipulation, or misuse, and safeguarding the integrity and confidentiality of data and systems.

(iii)         Reliability: Establishing mechanisms to ensure the consistent, accurate, and reliable performance of high-risk AI systems, including through rigorous testing, validation, and monitoring processes.

(iv)          Transparency: Promoting transparency in the development, deployment, and operation of high-risk AI systems, enabling stakeholders to understand the underlying algorithms, data sources, and decision-making processes.

(v)           Accountability: Defining clear lines of responsibility and accountability for the actions and outcomes of high-risk AI systems, including provisions for redressal and remediation in case of adverse impacts.

(vi)          Legitimate Uses: Ensuring that the development, deployment, and use of high-risk AI systems in strategic sectors comply with the legitimate uses designated in the provisions of Section 7 of the Digital Personal Data Protection Act, 2023.

(vii)        Any other aspect deemed necessary by the Central Government or the IAIC to mitigate the risks associated with high-risk AI systems in strategic sectors.

(3)   The IAIC shall collaborate with sector-specific regulatory bodies to develop harmonized guidelines and standards for high-risk AI systems in strategic sectors, taking into account the risk classification and associated requirements outlined in this Act.

(4)   In the event of any conflict between the provisions of this Act and sector-specific regulations concerning high-risk AI systems in strategic sectors, the provisions of this Act shall prevail, unless otherwise specified.


Chapter III: INDIAN ARTIFICIAL INTELLIGENCE COUNCIL

Section 10 - Composition and Functions of the Council

(1)   With effect from the date notified by the Central Government, there shall be established the Indian Artificial Intelligence Council (IAIC), a statutory body for the purposes of this Act.

(2)   The IAIC shall be an autonomous body corporate with perpetual succession, a common seal, and the power to acquire, hold and transfer property, both movable and immovable, and to contract and be contracted, and sue or be sued by its name.

(3)   The IAIC shall coordinate and oversee the development, deployment, and governance of artificial intelligence systems across all government bodies, ministries, departments, and regulatory authorities, adopting a whole-of-government approach.

(4)   The headquarters of the IAIC shall be located at the place notified by the Central Government.

(5)   The IAIC shall consist of a Chairperson and such number of other Members, not exceeding [X], as the Central Government may notify.

(6)   The Chairperson and Members shall be appointed by the Central Government through a transparent and merit-based selection process, as may be prescribed.

(7)   The Chairperson and Members shall be individuals of eminence, integrity and standing, possessing specialized knowledge or practical experience in fields relevant to the IAIC’s functions, including but not limited to:

(i)    Data and artificial intelligence governance, policy and regulation;

(ii)   Administration or implementation of laws related to consumer protection, digital rights and artificial intelligence and other emerging technologies;

(iii) Dispute resolution, particularly technology and data-related disputes;

(iv)  Information and communication technology, digital economy and disruptive technologies;

(v)   Law, regulation or techno-regulation focused on artificial intelligence, data protection and related domains;

(vi)  Any other relevant field deemed beneficial by the Central Government.

 

(8)   At least three Members shall be experts in law with demonstrated understanding of legal and regulatory frameworks related to artificial intelligence, data protection and emerging technologies.

 

(9)   The IAIC shall have the following functions:

(i)          Develop and implement policies, guidelines and standards for responsible development, deployment and governance of AI systems in India;

(ii)        Coordinate and collaborate with relevant ministries, regulatory bodies and stakeholders to ensure harmonised AI governance across sectors;

(iii)       Establish and maintain the National Registry of AI Use Cases as per Section 12;

(iv)       Administer the certification scheme for AI systems as specified in Section 11;

(v)         Develop and promote the National AI Ethics Code as outlined in Section 13;

(vi)       Facilitate stakeholder consultations, public discourse and awareness on societal implications of AI;

(vii)      Promote research, development and innovation in AI with a focus on responsibility and ethics;

(viii)    Engage with international AI regulatory bodies, standard-setting organizations, and global AI safety initiatives to promote knowledge exchange and align India’s AI governance framework with global best practices. This includes:

 

(a)   Developing bilateral and multilateral agreements to support collaborative research, data sharing, and risk management.

(b)   Participating in international AI safety and ethics dialogues to shape global AI norms.

(c)    Coordinating on cross-border data flow standards and AI certification criteria to ensure seamless compliance for international AI applications in India.

 

(ix)       Take regulatory actions to ensure compliance with the policies, standards, and guidelines issued by the IAIC under this Act, which may include:

(a)   Issuing show-cause notices requiring non-compliant entities to explain the reasons for non-compliance and outline corrective measures within a specified timeline;

(b)   Imposing monetary penalties based on the severity of non-compliance, the risk level involved, and the potential impact on individuals, businesses, or society, with penalties being commensurate with the financial capacity of the non-compliant entity;

(c)    Suspending or revoking certifications, registrations, or approvals related to non-compliant AI systems, preventing their further development, deployment, or operation until compliance is achieved;

(d)   Mandating independent audits of the non-compliant entity’s processes at their own cost, with audit reports to be submitted to the IAIC for review and further action;

(e)    Issuing directives to non-compliant entities to implement specific remedial measures within a defined timeline, such as enhancing data quality controls, improving governance frameworks, or strengthening decision-making procedures;

(f)    In cases of persistent or egregious non-compliance, recommending the temporary or permanent suspension of the non-compliant entity’s AI-related operations, subject to due process and the principles of natural justice;

(g)   Taking any other regulatory action deemed necessary and proportionate to ensure compliance with the prescribed standards and to safeguard the responsible development, deployment, and use of AI systems.

 

(x)   Advise the Central Government on matters related to AI policy, regulation and governance, and recommend legislative or regulatory changes as necessary;

(xi)  Perform any other functions necessary to achieve the objectives of this Act or as assigned by the Central Government.

 

(10) The IAIC may constitute advisory committees, expert groups or task forces as deemed necessary to assist in its functions.

(11) The IAIC shall endeavour to function as a digital office to the extent practicable, conducting proceedings, filings, hearings and pronouncements through digital means as per applicable laws.

 


CHAPTER III-A: INDIAN ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE

Section 10-A – Composition and Functions of the Institute


(1)   With effect from the date notified by the Central Government, there shall be established the Indian Artificial Intelligence Safety Institute (AISI), a statutory body for the purposes of this Act.

(2)   The Indian Artificial Intelligence Safety Institute (AISI) shall be established as an autonomous body corporate with perpetual succession, a common seal, and the power to acquire, hold and transfer property, both movable and immovable, and to contract and be contracted, and sue or be sued by its name.

(3)   The Governing Body of the Indian Artificial Intelligence Safety Institute shall consist of the following members:

(i)    A Director General of AI Safety, with at least 15 years of experience in artificial intelligence research, who shall serve as the Chief Executive Officer of AISI.

(ii)   One representative from the Ministry of Electronics and Information Technology (MeitY), not below the rank of Joint Secretary.

(iii) One representative from the Ministry of Science and Technology (DST), not below the rank of Joint Secretary.

(iv)  One representative from the Ministry of Defence, not below the rank of Joint Secretary.

(v)   One representative from the Ministry of Communications, not below the rank of Joint Secretary.

(vi)  One representative from NITI Aayog, not below the rank of Joint Secretary.

(vii) One representative from the Committee for AI Centers of Excellence (CoEs) as an ex-officio member;

(viii) One representative from the Committee for Technical Institutions in Critical AI Research as an ex-officio member; and

(ix) One representative from the Committee on AI Ethics and Safety as an ex-officio member.

 

(4)   In addition to the Governing Body, AISI shall include the following ex-officio members:

(i)    The Principal Scientific Advisor to the Government of India, or their nominee.

(ii)   One member from the Prime Minister’s Economic Advisory Council.

(iii) One representative, being a government official or expert appointed by the Central Government, responsible for coordinating with global AI safety institutes to ensure knowledge exchange and collaboration on emerging risks and best practices.

 

(5)   The AISI shall establish specialized committees as deemed necessary for fulfilling its mandate. These committees shall include but are not limited to:

(i)    Committee for AI Centers of Excellence (CoEs): This committee shall represent all AI-related Centers of Excellence across India.

(ii)   Committee for Technical Institutions in Critical AI Research: This committee shall coordinate with technical institutions engaged in critical research on AI systems.

(iii) Committee on AI Ethics and Safety: This committee shall guide AISI on ethical principles governing AI systems.

 

(6)   The AISI shall undertake the following functions under this Act:

(i)    Develop protocols for risk assessment, monitoring, and mitigation concerning high-risk AI applications, particularly in strategic sectors such as healthcare, defence, finance, and public administration.

(ii)   Formulate and establish safety standards for high-risk AI applications, for adoption by the IAIC. These standards shall be aligned with national security priorities and international norms governing AI safety.

(iii) Conduct annual audits of high-risk AI systems deployed across various sectors. The findings from these audits shall be reported to IAIC for further action or policy formulation.

(iv)  Undertake research initiatives focused on identifying emerging risks associated with new developments in artificial intelligence. Such research shall be conducted in partnership with IAIC, academic institutions, technical bodies and centres of excellence (CoEs), and international organizations dedicated to AI safety.

(v)   Submit an annual report to the Central Government and IAIC, detailing safety incidents, audit findings, and research advancements.

 

(7)   AISI may engage in international partnerships and dialogues, contributing to India’s leadership in responsible AI governance.

 

Chapter IV

Chapter IV: CERTIFICATION AND ETHICS CODE

Section 11 – Registration & Certification of AI Systems

(1)   The IAIC shall establish a voluntary certification scheme for AI systems based on their industry use cases and risk levels, in accordance with the means of classification set forth in Chapter II. The certification scheme shall be designed to promote responsible AI development and deployment.

 

(2)   The IAIC shall maintain a National Registry of Artificial Intelligence Use Cases as described in Section 12 to register and track the development and deployment of AI systems across various sectors. The registry shall be used to inform the development and refinement of the certification scheme and to promote transparency and accountability in artificial intelligence governance.

 

(2A) The certification scheme shall be based on a set of clear, objective, and risk-proportionate criteria that assess the inherent purpose, technical characteristics, and potential impacts of AI systems.

 

(3)   AI systems classified as narrow or medium risk under Section 7 and AI-Pre under sub-section (8) of Section 6 may be exempt from the certification requirement if they meet one or more of the following conditions (a non-normative sketch of this eligibility test follows the list):

(i)    The AI system is still in the early stages of development or testing and has not yet achieved technical or economic thresholds for effective standardisation;

(ii)  The AI system is being developed or deployed in a highly specialized or niche application area where certification may not be feasible or appropriate; or

(iii) The AI system is being developed or deployed by start-ups, micro, small & medium enterprises, or research institutions.
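
For readers implementing compliance checks, the exemption test above reduces to a simple predicate: an eligible risk or commercial class, plus at least one of conditions (i) through (iii). The following is a non-normative sketch only; the class, field names, and labels are illustrative assumptions, not terms defined by this Act.

```python
from dataclasses import dataclass

# Illustrative labels only; the actual classifications are set out in
# Sections 6 and 7 of this Act.
NARROW, MEDIUM, HIGH = "narrow", "medium", "high"

@dataclass
class AISystem:
    risk_class: str                  # Section 7 risk classification
    is_ai_pre: bool                  # AI-Pre under sub-section (8) of Section 6
    in_early_development: bool       # condition (i)
    niche_application: bool          # condition (ii)
    startup_msme_or_research: bool   # condition (iii)

def certification_exempt(s: AISystem) -> bool:
    """Sketch of Section 11(3): narrow/medium-risk or AI-Pre systems may be
    exempt if at least one of conditions (i)-(iii) holds."""
    eligible_class = s.risk_class in (NARROW, MEDIUM) or s.is_ai_pre
    any_condition = (s.in_early_development
                     or s.niche_application
                     or s.startup_msme_or_research)
    return eligible_class and any_condition
```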

 

(4)   AI systems that qualify for exemptions under sub-section (3) must establish and maintain the incident reporting and response protocols specified in Section 19. Failure to maintain these protocols may result in the revocation of the exemption.

 

(5)   Applicability of Section 4 Classification Methods:

 

(i)    The conceptual methods of classification outlined in Section 4 are intended for consultative and advisory purposes only. Their application is not mandatory for the National Registry of Artificial Intelligence Use Cases under Section 12. The IAIC is empowered to:

(a)    Issue advisories, clarifications, and guidance documents on the interpretation and application of the classification methods outlined in Section 4.

(b)   Provide sector-specific recommendations for the voluntary use of these classification methods by stakeholders, including developers, regulators, and industry professionals.

(c)    Encourage stakeholders to adopt these classification methods on a self-regulatory basis; although not mandatory, their voluntary application can help:

(i)     Enhance transparency in AI development.

(ii)   Promote responsible AI deployment across sectors.

(iii)  Facilitate alignment with ethical standards outlined in the National Artificial Intelligence Ethics Code (NAIEC) under Section 13.

 

(ii)   The IAIC may periodically review and update its advisories, clarifications and guidance documents to reflect advancements in AI technologies and emerging best practices, ensuring that stakeholders have access to the latest guidance for applying these conceptual methods.

 

(6)   Notwithstanding anything contained in sub-section (5), entities registering high-risk AI systems as defined in sub-section (4) of Section 7 and those associated with strategic sectors as specified in Section 9 must apply the conceptual classification methods outlined in Section 4.

 

(7)   The certification scheme and the methods of classification specified in Chapter II shall undergo periodic review and updating every 12 months to ensure their relevance and effectiveness in response to technological advancements and market developments. The review process shall include meaningful consultation with sector-specific regulators and market stakeholders.

Section 12 – National Registry of Artificial Intelligence Use Cases

(1)   The National Registry of Artificial Intelligence Use Cases shall include the metadata for each registered AI system as set forth in sub-sections (1)(i) through (1)(xvi) (an illustrative schema sketch follows the list):

 

(i)    Name and version of the AI system (required)

(ii)   Owning entity of the AI system (required)

(iii) Date of registration (required)

(iv)  Sector associated with the AI system and whether the AI system is associated with a strategic sector (required)

(v)   Specific use case(s) of the AI system (required)

(vi)  Technical classification of the AI system, as per Section 5 (required)

(vii)  Key technical characteristics of the AI system as per Section 5, including:

(a)    Type of AI model(s) used (required)

(b)   Training data sources and characteristics (required)

(c)    Performance metrics on standard benchmarks (where available, optional)

(viii) Commercial classification of the AI system as per Section 6 (required)

(ix) Key commercial features of the AI system as per Section 6, including:

(a)    Number of end-users and business end-users in India (required, where applicable)

(b)   Market share or level of market influence in the intended sector(s) of application (required, where ascertainable)

(c)    Annual turnover or revenue generated by the AI system or the company owning it (required, where applicable)

(d)   Amount & intended purpose of data collected, processed, or utilized by the AI system (required, where measurable)

(e)    Level of data integration across different services or platforms (required, where applicable)

 

(x)   Risk classification of the AI system as per Section 7 (required)

(xi)  Conceptual classification of the AI system as per Section 4 (required only for high-risk AI systems)

(xii)  Potential impacts of the AI system as per Section 7, including:

(a)    Inherent Purpose (required)

(b)   Possible risks and harms observed and documented by the owning entity (required)

 

(xiii) Certification status (required) (registered & certified / registered & not certified)

(xiv)  A detailed post-deployment monitoring plan as per Section 17 (required only for high-risk AI systems), including:

(a)    Performance metrics and key indicators to be tracked (optional)

(b)   Risk mitigation and human oversight protocols (required)

(c)    Data collection, reporting, and audit trail mechanisms (required)

(d)   Feedback and redressal channels for impacted stakeholders (optional)

(e)    Commitments to periodic third-party audits and public disclosure of:

(i)    Monitoring reports and performance indicators (optional)

(ii)   Descriptions of identified risks, incidents or failures as per sub-section (3) of Section 17 (required)

(iii)  Corrective actions and mitigation measures implemented (required)

 

(xv)   Incident reporting and response protocols as per Section 19 (required), including:

(a)    Description of the incident reporting mechanisms established (e.g. hotline, online portal)

(b)   Timelines committed for incident reporting based on risk classification

(c)    Procedures for assessing and determining incident severity levels

(d)   Information to be provided in incident reports as per guidelines

(e)    Confidentiality and data protection measures for incident data

(f)     Minimum mitigation actions to be taken upon incident occurrence

(g)   Responsible personnel/team for incident response and mitigation

(h)   Commitments on notifying and communicating with impacted parties

(i)     Integration with IAIC’s central incident repository and reporting channels

(j)     Review and improvement processes for incident response procedures

(k)   Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits;

(l)     Confirmation that the insurance coverage meets the minimum requirements specified in sub-section (3) of Section 25 based on the AI system’s risk classification;

(m)  Details of the risk assessment conducted to determine the appropriate level of insurance coverage, considering factors such as the AI system’s conceptual, technical, and commercial classifications as per Sections 4, 5, and 6;

(n)   Information on the claims process and timelines for notifying the insurer and submitting claims in the event of an incident covered under the insurance policy;

(o)   Commitment to maintain the insurance coverage throughout the lifecycle of the AI system and to notify the IAIC of any changes in coverage or insurer.

(xvi)  Contact information for the owning entity (required)
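
As a non-normative illustration, the metadata in clauses (i) through (xvi) can be sketched as a data structure. The field names and types below are assumptions made for illustration; the authoritative schema is whatever the IAIC prescribes under this Section.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentProtocols:
    # Clause (xv): incident reporting and response commitments.
    reporting_mechanism: str            # e.g. hotline or online portal
    reporting_timelines: str            # committed timelines by risk class
    severity_procedure: str             # how severity levels are assessed
    insurance_policy: Optional[str] = None   # Section 25 coverage details

@dataclass
class RegistryEntry:
    # Required metadata (clauses (i)-(vi), (viii), (x), (xiii), (xvi)).
    name_and_version: str
    owning_entity: str
    date_of_registration: str
    sector: str
    use_cases: list[str]
    technical_class: str                # Section 5
    commercial_class: str               # Section 6
    risk_class: str                     # Section 7
    certification_status: str           # registered & certified / not certified
    incident_protocols: IncidentProtocols
    contact: str
    # Conditionally required or optional metadata.
    conceptual_class: Optional[str] = None    # Section 4; high-risk only
    monitoring_plan: Optional[str] = None     # Section 17; high-risk only
    performance_metrics: Optional[str] = None # optional, where available
```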

 

Illustration

A technology company develops a new AI system for automated medical diagnosis using computer vision and machine learning techniques. This AI system would be classified as a high-risk system under Section 7(4) due to its potential impact on human health and safety. The company registers this AI system in the National Registry of Artificial Intelligence Use Cases, providing the following metadata:

(i)    Name and version: MedVision AI Diagnostic System v1.2

(ii)  Owning entity: ABC Technologies Pvt. Ltd.

(iii) Date of registration: 01/05/2024

(iv)  Sector: Healthcare

(v)   Use case: Automated analysis of medical imaging data (X-rays, CT scans, MRIs) to detect and diagnose diseases

(vi)  Technical classification: Specific Purpose AI (SPAI) under Section 5(4)

(vii)  Key technical characteristics:

 

·       Convolutional neural networks for image analysis

·       Trained on de-identified medical imaging datasets from hospitals

·       Achieved 92% accuracy on standard benchmarks

 

(viii) Commercial classification: AI-Pro under Section 6(3)

(ix)  Key commercial features:

 

·       Intended for use by healthcare providers across India

·       Not yet deployed, so no market share data

·       No revenue generated yet (pre-commercial)

 

(x)   Risk classification: High Risk under Section 7(4)

(xi)  Conceptual classification: Assessed under all four methods in Section 4 due to its high-risk classification

(xii)  Potential impacts:

 

·       Inherent purpose is to assist medical professionals in diagnosis

·       Documented risks include misdiagnosis, bias, lack of interpretability

 

(xiii) Certification status: Registered & certified

(xiv)  Post-deployment monitoring plan:

 

·       Performance metrics like accuracy, false positive/negative rates

·       Human oversight, periodic audits for bias/errors

·       Logging all outputs, decisions for audit trail

·       Channels for user feedback, grievance redressal

·       Commitments to third-party audits, public incident disclosure

 

(xv) Incident reporting protocols:

 

·       Dedicated online portal for incident reporting

·       Critical incidents to be reported within 48 hours

·       High/medium severity incidents within 7 days

·       Procedures for severity assessment, confidentiality measures

·       Minimum mitigation actions, impacted party notifications

·       Integration with IAIC incident repository

·       Insurance coverage details:

·       Professional indemnity policy from XYZ Insurance Co., policy #PI12345

·       Coverage limit of INR 50 crores, as required for high-risk AI under Section 25(3)(i)

·       Risk assessment considered technical complexity, healthcare impact, irreversible consequences

·       Claims to be notified within 24 hours, supporting documentation within 7 days

·       Coverage to be maintained throughout AI system lifecycle, IAIC to be notified of changes

 

(xvi)  Contact: info@abctech.com

 

(2)   The IAIC may, from time to time, expand or modify the metadata schema for the National Registry as it deems necessary to reflect advancements in AI technology and risk assessment methodologies. The IAIC shall give notice of any such changes at least 60 days prior to the date on which they shall take effect.

 

(3)   The owners of AI systems shall have the duty to provide accurate and current metadata at the time of registration and to notify the IAIC of any material changes to the registered information within the periods set out below (a non-normative sketch follows the list):

 

(i)    15 days of such change occurring for AI systems classified as High Risk under sub-section (4) of Section 7;

(ii)   30 days of such change occurring for AI systems classified as Medium Risk under sub-section (3) of Section 7;

(iii) 60 days of such change occurring for AI systems classified as Narrow Risk under sub-section (2) of Section 7;

(iv)  90 days of such change occurring for AI systems classified as Narrow Risk or Medium Risk under Section 7 that are exempted from certification under sub-section (3) of Section 11.
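
These notification windows reduce to a simple lookup keyed by the system's risk classification. A non-normative sketch, with illustrative category labels:

```python
from datetime import date, timedelta

# Section 12(3): days allowed to notify the IAIC of a material change,
# keyed by the system's Section 7 risk class (labels are illustrative).
NOTIFICATION_WINDOW_DAYS = {
    "high": 15,
    "medium": 30,
    "narrow": 60,
    "exempt_narrow_or_medium": 90,   # exempted under Section 11(3)
}

def notification_deadline(change_date: date, category: str) -> date:
    """Last date by which the IAIC must be notified of a material change."""
    return change_date + timedelta(days=NOTIFICATION_WINDOW_DAYS[category])

# Example: a high-risk system materially changes on 1 May 2024;
# the IAIC must be notified within 15 days, i.e. by 16 May 2024.
assert notification_deadline(date(2024, 5, 1), "high") == date(2024, 5, 16)
```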

 

(4)   Notwithstanding anything contained in sub-section (1), the owners of AI systems exempted under sub-section (3) of Section 11 shall be required to submit only the metadata specified in sub-sections (4)(i) through (4)(xi) to register their AI systems:

 

(i)    Name and version of the AI system (required)

(ii)   Owning entity of the AI system (required)

(iii) Date of registration (required)

(iv)  Sector associated with the AI system (optional)

(v)   Specific use case(s) of the AI system (required)

(vi)  Technical classification of the AI system, as per Section 5 (optional)

(vii)  Commercial classification of the AI system as per Section 6 (required)

(viii) Risk classification of the AI system as per Section 7 (required, narrow risk or medium risk only)

(ix) Certification status (required) (registered & certification is exempted under sub-section (3) of Section 11)

(x)   Incident reporting and response protocols as per Section 19 (required)

(a)         Description of the incident reporting mechanisms established (e.g. hotline, online portal)

(b)        Timelines committed for reporting high/critical severity incidents (within 14-30 days)

(c)         Procedures for assessing and determining incident severity levels (only high/critical)

(d)        Information to be provided in incident reports (incident description, system details)

(e)         Confidentiality measures for incident data based on sensitivity (scaled down)

(f)          Minimum mitigation actions to be taken upon high/critical incident occurrence

(g)        Responsible personnel/team for incident response and mitigation

(h)        Commitments on notifying and communicating with impacted parties

(i)          Integration with IAIC’s central incident repository and reporting channels

(j)          Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits (required for high-risk AI systems only);

(xi) Contact information for the owning entity (required)

 

Illustration

A small AI startup develops a chatbot for basic customer service queries using natural language processing techniques. As a narrow-risk AI system still in the early stages of development, the startup claims exemption under Section 11(3) and registers with the following limited metadata:

(i)    Name and version: ChatAssist v0.5 (beta)

(ii)   Owning entity: XYZ AI Solutions LLP

(iii)  Date of registration: 15/06/2024

(iv)  Sector: Not provided (optional)

(v)   Use case: Automated response to basic customer queries via text/voice

(vi)  Technical classification: Specific Purpose AI (SPAI) under Section 5(4) (optional)

(vii)         Commercial classification: AI-Pre under Section 6(8)

(viii)        Risk classification: Narrow Risk under Section 7(2)

(ix)  Certification status: Registered & certification exempted under Section 11(3)

(x)   Incident reporting protocols:

 

·       Email support@xyzai.com for incident reporting

·       High/critical severity incidents to be reported within 30 days

·       Severity assessment procedures limited to high/critical levels

·       Incident reports to include incident description and system details only

·       Standard data protection measures for incident data as per company policy

·       Mitigation by the product team, with customers notified of major incidents

·       Integration with IAIC’s central incident repository and reporting channels

 

(xi)  Contact: support@xyzai.com

 

(5)   The IAIC shall put in place mechanisms to validate the metadata provided and to audit registered AI systems for compliance with the reported information. Where the IAIC determines that any developer or owner has provided false or misleading information, it may impose penalties, including fines and revocation of certification, as it deems fit.
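
A minimal, non-normative sketch of the kind of metadata validation this sub-section contemplates, assuming a dictionary-shaped submission and the illustrative field names used in the sketch under sub-section (1); the required-field sets below are assumptions, not the IAIC's actual rules.

```python
# Fields assumed required of every registrant under sub-section (1);
# systems exempted under sub-section (3) of Section 11 file a reduced
# set per sub-section (4). All names are illustrative.
ALWAYS_REQUIRED = {"name_and_version", "owning_entity", "date_of_registration",
                   "use_cases", "commercial_class", "risk_class",
                   "certification_status", "incident_protocols", "contact"}
HIGH_RISK_REQUIRED = {"conceptual_class", "monitoring_plan"}

def validate_metadata(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = [f"missing field: {f}"
                for f in ALWAYS_REQUIRED if not entry.get(f)]
    if entry.get("risk_class") == "high":
        problems += [f"missing high-risk field: {f}"
                     for f in HIGH_RISK_REQUIRED if not entry.get(f)]
    return problems
```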

(6)   The IAIC shall publish aggregate statistics and analytics based on the metadata in the National Registry for the purposes of supporting evidence-based policymaking, research, and public awareness about AI development and deployment trends. Provided that commercially sensitive information and trade secrets shall not be disclosed.

(7)   Registration and certification under this Act shall be voluntary, and no penal consequences shall attach to the lack of registration or certification of an AI system, except as otherwise expressly provided in this Act.

 

(8)   The examination process for registration and certification of AI use cases shall be conducted by the IAIC in a transparent and inclusive manner, engaging with relevant stakeholders, including:

(i)    Technical experts and researchers in the field of artificial intelligence, who can provide insights into the technical aspects, capabilities, and limitations of the AI systems under examination.

(ii)   Representatives of industries developing and deploying AI technologies, who can offer practical perspectives on the commercial viability, use cases, and potential impacts of the AI systems.

(iii) Technology standards bodies, business associations, and consumer protection groups, who can represent the interests and concerns of end-users, affected communities, and the general public.

(iv)  Representatives from diverse communities and individuals who may be impacted by AI systems, to ensure their rights, needs, experiences and perspectives across different contexts are comprehensively accounted for during the examination process.

(v)   Any other relevant stakeholders or subject matter experts that the IAIC deems necessary for a comprehensive and inclusive examination of AI use cases.

 

(9)   The IAIC shall publish the results of its examinations for registration and certification of AI use cases, along with any recommendations for risk mitigation measures, regulatory actions, or guidelines, in an accessible format for public review and feedback. This shall include detailed explanations of the classification criteria applied, the stakeholder inputs considered, and the rationale behind the decisions made.


Section 13 – National Artificial Intelligence Ethics Code

(1)   A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilisation of artificial intelligence technologies;

 

(2)   The NAIEC shall be based on the following core ethical principles:

(i)          AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.

(ii)        AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes.

(iii)       AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system’s outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.

(iv)       AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.

(v)        AI systems should be designed and operated with a focus on safety and robustness, minimizing the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.

(vi)       AI systems should be developed and deployed with consideration for their environmental impact, promoting sustainability and minimizing negative ecological consequences throughout their lifecycle.

(vii)     AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented;

(viii)    AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.

(ix)       AI systems that are developed and deployed using frugal prompt engineering practices should optimize efficiency, cost-effectiveness, and resource utilisation while maintaining high standards of performance, safety, and ethical compliance in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.

 

(3)   The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific & sector-neutral laws and regulations. To this end:

(i)    The use of OSS shall be guided by a clear understanding of the open source development model, its scope, constraints, and the varying implementation approaches across different socio-economic and organisational contexts.

(ii)   AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licenses, fostering transparency and enabling public scrutiny, while also ensuring that sensitive components and intellectual property are adequately protected.

(iii) The use of OSS in AI development shall not exempt AI systems from complying with the principles and requirements set forth in this Ethics Code, including fairness, accountability, transparency, and adherence to applicable laws and regulations.

(iv)  AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems, and shall implement appropriate governance, quality assurance, and risk management processes.

(v)   The IAIC shall support research and development initiatives under the Digital India Programme that leverage OSS to create AI tools and frameworks that prioritize ethics, safety, inclusivity, and responsible innovation, while also providing guidance and best practices for the effective and sustainable use of OSS in AI development.

(vi)  The IAIC shall collaborate with relevant stakeholders, including open source communities, industry associations, and academic institutions, to develop guidelines and frameworks for the responsible and context-appropriate adoption of OSS in AI development, taking into account the unique challenges and opportunities across different sectors and organisational contexts.

 

(4)   The Ethics Code shall provide guidance on intellectual property and ownership considerations related to AI-generated content. To this end:

(i)    Appropriate mechanisms shall be established to determine ownership, attribution and intellectual property rights over content generated by AI systems, while fostering innovation and protecting the rights of human creators and innovators.

(ii)   Specific considerations shall include recognizing the role of human involvement in developing and deploying the AI systems, establishing guidelines on copyrightability and patentability of AI-generated works and inventions, addressing scenarios where AI builds upon existing protected works, safeguarding trade secrets and data privacy, balancing incentives for AI innovation with disclosure and access principles, and continuously updating policies as AI capabilities evolve.

(iii) The Ethics Code shall encourage transparency and responsible practices in managing intellectual property aspects of AI-generated content across domains such as text, images, audio, video and others.

(iv)  In examining IP and ownership issues related to AI-generated content, the Ethics Code shall be guided by the conceptual classification methods outlined in Section 4, particularly the Anthropomorphism-Based Concept Classification to evaluate scenarios where AI replicates or emulates human creativity and invention.

(v)   The technical classification methods described in Section 5, such as the scale, inherent purpose, technical features, and limitations of the AI system, shall inform the assessment of IP and ownership considerations for AI-generated content.

(vi)  The commercial classification factors specified in sub-section (1) of Section 6, including the user base, market influence, data integration, and revenue generation of the AI system, shall also be taken into account when determining IP and ownership rights over AI-generated content.

 

(5)   The Ethics Code shall provide guidance on frugal prompt engineering practices for the development of AI systems, including the following (a non-normative sketch follows the list):

(i)    Encouraging the use of concise and well-structured prompts that clearly define the desired outputs and constraints;

(ii)   Recommending the adoption of transfer learning and pre-trained models to reduce the need for extensive fine-tuning;

(iii) Promoting the use of data-efficient techniques, such as few-shot learning or active learning, to minimize the amount of training data required;

(iv)  Suggesting the implementation of early stopping mechanisms to prevent overfitting and improve generalisation;

(v)   Advocating for the utilisation of techniques like model compression, quantisation, or distillation to reduce computational complexity and resource requirements;

(vi)  Encouraging the documentation and maintenance of records on prompt engineering practices, including the rationale behind chosen techniques, performance metrics, and any trade-offs made between efficiency and effectiveness;

(vii)  Recommending the periodic review and updating of prompt engineering practices based on the latest research, industry standards, and the guidelines provided by the IAIC.
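
As a non-normative illustration of clause (i), the sketch below shows a concise, well-structured prompt template that states the task, output format, and constraints up front; the template text and variable names are illustrative assumptions, not prescribed wording.

```python
# Illustrative only: a compact prompt template that makes the task,
# output format, and constraints explicit, in the spirit of clause (i).
PROMPT_TEMPLATE = (
    "Task: {task}\n"
    "Output format: {output_format}\n"
    "Constraints: {constraints}\n"
    "Input: {user_input}\n"
)

prompt = PROMPT_TEMPLATE.format(
    task="Summarise the customer complaint",
    output_format="Three bullet points, plain English",
    constraints="No personal data; under 60 words",
    user_input="...",  # runtime input supplied by the application
)
print(prompt)
```

Stating the constraints inside the prompt itself both shortens iteration and documents the intended output, consistent with the record-keeping practice in clause (vi).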

 

(6)   The Ethics Code shall provide guidance on ensuring fair access rights for all stakeholders involved in the AI value and supply chain, including:

(i)    All stakeholders should have fair and transparent access to datasets necessary for training and developing AI systems. This includes promoting equitable data-sharing practices that ensure smaller entities or research institutions are not unfairly disadvantaged in accessing critical datasets.

(ii)   Ethical use of computational resources should be promoted by ensuring that all stakeholders have transparent access to these resources. Special consideration should be given to smaller entities or research institutions that may require preferential access or pricing models to support innovation.

(iii) Ethical guidelines should ensure that ownership rights over trained models, derived outputs, and intellectual property are clearly defined and respected. Stakeholders involved in the development process must have a clear understanding of their rights and obligations regarding the usage and commercialization of AI technologies.

(iv)  The benefits derived from AI technologies should be distributed equitably among all stakeholders involved in their development and commercialization. This includes ensuring that smaller players contributing critical resources like proprietary datasets or specialized algorithms are fairly compensated.

 

(7)   Adherence to the NAIEC shall be voluntary for all AI systems, including those exempted under sub-section (3) of Section 11. However, the IAIC may mandate adherence to specific principles of the NAIEC and sub-sections (3), (4), (5) and (6) for high-risk AI systems deployed in sensitive domains, strategic sectors or those with significant potential for societal or sociotechnical impact.

 

(8)   The NAIEC shall be reviewed and updated periodically by the IAIC to reflect advancements in AI technologies, emerging best practices, and evolving societal norms and values related to the responsible development and deployment of AI systems.

 

Chapter V

Chapter V: KNOWLEDGE MANAGEMENT AND DECISION-MAKING


Section 14 - Model Standards on Knowledge Management

(1)   The IAIC shall develop, document and promote comprehensive model standards on knowledge management practices concerning the development, maintenance, and governance of high-risk AI systems. These standards shall focus on the effective management of knowledge assets;

 

(2)   The model standards shall encompass the following areas:

(i)    Intellectual property management practices to safeguard and leverage AI-related intellectual property rights such as patents, copyrights, trademarks and industrial designs.

(ii)   Processes for documenting and organizing technical knowledge assets like research reports, manuals, standards and industrial practices related to AI systems.

(iii) Frameworks for capturing, retaining and transferring the tacit knowledge and expertise of human capital involved in AI development and deployment.

(iv)  Organisational systems and methodologies to enable effective knowledge capture, storage, retrieval and utilisation across the AI system lifecycle.

(v)   Mechanisms for leveraging customer-related knowledge assets such as data, feedback and insights to enhance AI system development and performance.

(vi)  Analytical techniques to derive knowledge from data analysis, including identifying patterns, trends and developing predictive models for AI systems.

(vii)  Collaborative practices to foster cross-functional knowledge sharing and generation through teams, communities of practice and other initiatives.

 

(3)   All entities engaged in the development, deployment, or utilisation of high-risk AI systems shall be bound by the model standards on knowledge management and decision-making as provided by this section. The compliance timeline for such high-risk AI systems shall be determined by the IAIC and may vary based on the technical, commercial and risk-based classification of those systems as recorded in the National Registry under Section 12.

(4)   The Central Government shall empower the IAIC or other designated agencies to establish a knowledge management registry process to enable the standardisation of various knowledge management practices and procedures associated with the life cycle of AI systems.

 

(5)   The entities responsible for the development of high-risk AI systems shall be required to submit regular audit reports to the IAIC, outlining their adherence to the model standards for knowledge management and decision-making.

 

(6)   For artificial intelligence technologies subject to commercial classification as determined by the factors outlined in sub-section (1) of Section 6, the requirement to comply with these model standards on knowledge management shall be assessed by the IAIC on a case-by-case basis, taking into consideration the specific commercial classification factors applicable to each AI technology.

 

Illustration

 

A startup has developed an AI-powered language translation app that allows users to translate text, documents, and speech between multiple Indian languages. Based on an assessment of the factors in Section 6(1), such as the app’s user base, market influence, and data integration, the IAIC may determine that this AI technology falls under the AI-Pro or AIaaS category. The IAIC will then evaluate if the startup needs to fully comply with the knowledge management standards or if certain requirements can be relaxed or made optional based on the app’s specific use case and commercial profile.

 

(7)   In determining the case-by-case application of these model standards to commercially classified AI technologies under sub-section (1) of Section 6, the IAIC shall take into account any relevant sector-specific standards, codes of practice, or regulatory guidelines pertaining to knowledge management practices in the sector to which the AI technology belongs or is intended to be deployed.

Illustration

An agritech startup has developed an AI system that analyzes satellite imagery and weather data to provide crop yield predictions and advisory services to farmers. As this AI technology falls within the agriculture sector, the IAIC’s assessment of its knowledge management requirements will consider any relevant guidelines or standards issued by bodies like the Indian Council of Agricultural Research (ICAR) or the Ministry of Agriculture & Farmers’ Welfare. These may include data governance norms for agricultural data, model validation protocols for AI-based advisory services, or best practices for maintaining data trails and audit logs in agritech applications.

(8)   Failure to adhere to the prescribed model standards for knowledge management and decision-making processes shall result in regulatory actions by the IAIC, which may include:

(i)    Issuance of show-cause notices to the non-compliant entity, requiring them to explain the reasons for non-compliance and outline corrective measures within a specified timeline.

(ii)   Imposition of monetary penalties, determined based on the severity of non-compliance, the risk level of the AI system involved, and the potential impact on individuals, businesses, or society. The monetary penalties shall be commensurate with the financial capacity of the non-compliant entity.

(iii) Suspension or revocation of certifications or registrations related to the non-compliant AI system, preventing its further development, deployment, or operation until compliance is achieved.

(iv)  Mandating independent audits of the non-compliant entity’s knowledge management and decision-making processes at their own cost, with the audit reports to be submitted to the IAIC for review and further action.

(v)   Issuing directives to the non-compliant entity to implement specific remedial measures, such as enhancing data quality controls, improving model governance frameworks, or strengthening decision-making procedures, within a defined timeline.

(vi)  In cases of persistent or egregious non-compliance, the IAIC may recommend the temporary or permanent suspension of the non-compliant entity’s AI-related operations, subject to due process and the principles of natural justice.

(vii)  Any other regulatory action deemed necessary and proportionate by the IAIC to ensure compliance with the prescribed model standards and to safeguard the responsible development, deployment, and use of high-risk AI systems.

 

(9)   The IAIC shall establish and publish clear guidelines and criteria for determining the appropriate regulatory actions, ensuring transparency and consistency in its decision-making process.

(10) The IAIC shall encourage the sharing of AI-related knowledge, including datasets, models, and algorithms, through open-source software repositories and platforms, subject to applicable intellectual property rights and the provisions of the Digital Personal Data Protection Act, 2023 and other relevant data protection and governance frameworks as may be prescribed.

 


Chapter VI

Chapter VI: ON GUIDANCE PRINCIPLES AND MONITORING


Section 15 - Guidance Principles for AI-related Agreements

(1)   The following guidance principles shall apply to AI-related agreements to promote transparent, fair, and responsible practices in the development, deployment, and use of AI technologies:

 

(i)    AI Software License Agreement (ASLA):

(a)    The AI Software License Agreement (ASLA) shall be mandatory for AI systems classified as AI-Pro or AI-Com as per Section 6, if they are designated as High Risk AI systems under Section 7.  

(b)   The ASLA shall clearly define:

(i)     The scope of rights granted to the licensee, including limitations on use, modification, and distribution of the AI software;

(ii)   Intellectual property rights and ownership provisions;

(iii)  Term, termination, warranties, and indemnification clauses.

 

(ii)   AI Service Level Agreement (AI-SLA):

(a)    The AI Service Level Agreement (AI-SLA) shall be mandatory for AI systems classified as AIaaS or AI-Com as per Section 6, if they are designated as High Risk or Medium Risk AI systems under Section 7.     

(b)   The AI-SLA shall establish:

(i)