
Chapter I: PRELIMINARY


Section 1 – Short Title and Commencement


(1) This Act may be called the Artificial Intelligence (Development & Regulation) Act, 2023.

(2) It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint; and different dates may be appointed for different provisions of this Act, and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision.


 

Section 2 – Definitions


[Please note: we have not provided all the definitions that may be required in this Bill. We have provided only those definitions that are most essential in signifying the legislative intent of the Bill.]


In this Bill, unless the context otherwise requires—


(a)   “Artificial Intelligence”, “AI”, “AI technology”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such systems constitute a diverse class of technology comprising various sub-categories of a technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.

(b)   “AI-Generated Content” means content, whether physical or digital, that has been created or significantly modified by an artificial intelligence technology, including but not limited to text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application;

(c)    “Algorithmic Bias” includes –

(i)    the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and

(ii)   the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to:

(a)    programming errors;

(b)   flawed algorithmic logic; and

(c)    deficiencies in model training and validation, including but not limited to the use of incomplete or deficient data for model training;
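
[Illustrative note, not part of the Bill: clause (c) describes algorithmic bias as systematic, repeatable deviation from objective, fair, or intended outcomes. A minimal sketch of one way such deviation could be measured is given below; the function names, group labels, and toy data are hypothetical and are not prescribed by the Bill.]

```python
# Illustrative sketch only, not part of the Bill: a hypothetical check for the
# systematic, repeatable error disparity that clause (c) calls algorithmic bias.

def error_rate(predictions, labels):
    """Fraction of outputs that deviate from the intended (labelled) result."""
    errors = sum(1 for p, y in zip(predictions, labels) if p != y)
    return errors / len(labels)

def bias_disparity(predictions, labels, groups):
    """Largest gap in error rates across groups; a repeatable gap indicates the
    systematic deviation from fair or intended outcomes described in (c)(i)."""
    rates = {}
    for group in set(groups):
        idx = [i for i, g in enumerate(groups) if g == group]
        rates[group] = error_rate([predictions[i] for i in idx],
                                  [labels[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical usage with made-up toy data:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group = bias_disparity(preds, labels, groups)
print(per_group, gap)  # e.g. error rates per group and the gap between them
```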

(d)   “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;

(e)    “Business end-user” means an end-user that is—

(i)             engaged in a commercial or professional activity and uses an AI system in the course of such activity; or

(ii)            a government agency or public authority that uses an AI system in the performance of its official functions or provision of public services.

(f)    “Combinations of intellectual property protections” means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;

(g)   “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including:

(i)    The source data, models, and algorithms used to generate the content;

(ii)   The individuals or entities involved in the creation, modification, and distribution of the content;

(iii) The date, time, and location of content creation and any subsequent modifications;

(iv)  The intended purpose, context, and target audience of the content;

(v)   Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and

(vi)  The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability.
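
[Illustrative note, not part of the Bill: clause (g) ties content provenance to origin, the persons involved, timing, intended purpose, external sources, and chain of custody. The sketch below shows a hypothetical provenance record covering sub-clauses (i) to (vi); the field names are invented, and hash chaining is only an assumed linking mechanism, not a technique mandated by the Bill.]

```python
# Illustrative sketch only, not part of the Bill: a hypothetical content
# provenance record that chains each modification to its predecessor so that
# origin, authenticity, and history remain traceable, per clause (g).
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, creator: str, purpose: str,
                      sources: list, parent_hash: str = None) -> dict:
    """Builds one link in a chain-of-custody loop (sub-clauses (i) to (vi))."""
    record = {
        "content_hash": hashlib.sha256(content).hexdigest(),   # (i) what was generated
        "creator": creator,                                    # (ii) who was involved
        "created_at": datetime.now(timezone.utc).isoformat(),  # (iii) when it was created
        "intended_purpose": purpose,                           # (iv) why and for whom
        "external_sources": sources,                           # (v) citations or references used
        "parent_record_hash": parent_hash,                     # (vi) chain of custody
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Hypothetical usage: an original generation followed by one modification.
original = provenance_record(b"AI-generated draft", "model-v1",
                             "news summary", ["https://example.org"])
revised = provenance_record(b"edited draft", "editor-2", "news summary", [],
                            parent_hash=original["record_hash"])
```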

(h)   “Corporate Governance” means the system of rules, practices, and processes by which an organisation is directed and controlled, encompassing the mechanisms through which companies and organisations ensure accountability, fairness, and transparency in their relationships with stakeholders, including but not limited to employees, shareholders, customers, and the public;

(i)    “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;

(j)    “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;

(k)   “Data portability” means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where:

(i)    The personal data has been provided to the data fiduciary by the data principal;

(ii)   The processing is based on consent or the performance of a contract; and

(iii)  The processing is carried out by automated means.
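
[Illustrative note, not part of the Bill: clause (k) requires personal data to be provided in a structured, commonly used, and machine-readable format. The sketch below assumes JSON as one such format and uses invented field names; it is not a format mandated by the Bill.]

```python
# Illustrative sketch only, not part of the Bill: a hypothetical export of a
# data principal's personal data in a structured, machine-readable form that
# could be transmitted to another data fiduciary, per clause (k).
import json

def export_personal_data(data_principal_id: str, records: dict) -> str:
    """Packages personal data provided by the data principal (condition (i))
    so it can be received and transmitted onward in machine-readable form."""
    portable = {
        "data_principal": data_principal_id,
        "format": "application/json",   # a commonly used, machine-readable format
        "records": records,
    }
    return json.dumps(portable, indent=2, ensure_ascii=False)

# Hypothetical usage with made-up data:
print(export_personal_data("DP-001", {"name": "A. Sharma", "preferences": ["email"]}))
```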

(l)    “Data Principal” means the individual to whom the personal data relates and where such individual is—

(i)             a child, includes the parents or lawful guardian of such a child;

(ii)            a person with disability, includes her lawful guardian, acting on her behalf;

(m)  “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;

(n)   “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;

(o)   “Digital personal data” means personal data in digital form;

(p)   “Digital Public Infrastructure” or “DPI” means the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including but not limited to:

(i)    Digital identity systems that provide secure and verifiable identification for individuals and businesses;

(ii)   Digital payment systems that facilitate efficient, transparent, and inclusive financial transactions;

(iii)  Data exchange platforms that enable secure and interoperable sharing of data across various sectors and stakeholders;

(iv)  Digital registries and databases that serve as authoritative sources of information for various public and private services;

(v)   Open application programming interfaces (APIs) and standards that promote innovation, interoperability, and collaboration among different actors in the digital ecosystem.

(q)   “End-user” means—

(i)     an individual who ultimately uses or is intended to ultimately use an AI system, directly or indirectly, for personal, domestic or household purposes; or

(ii)    an entity, including a business or organization, that uses an AI system to provide or offer a product, service, or experience to individuals, whether for a fee or free of charge.

(r)    “Knowledge asset” includes, but is not limited to:

(i)    Intellectual property rights including but not limited to patents, copyrights, trademarks, and industrial designs;

(ii)   Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards;

(iii)  Tacit knowledge and expertise residing within the organization’s human capital, such as specialized skills, experiences, and know-how;

(iv)  Organizational processes, systems, and methodologies that enable the effective capture, organization, and utilization of knowledge;

(v)   Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences;

(vi)  Knowledge derived from data analysis, including patterns, trends, and predictive models; and

(vii)  Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.

(s)    “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;

(t)    “IAIC” means the Indian Artificial Intelligence Council, a statutory and regulatory body established to oversee the development & regulation of artificial intelligence systems and to coordinate artificial intelligence governance across government bodies, ministries, and departments;

(u)   “Inherent Purpose” and “Intended Purpose” mean the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve;

(v)   “Insurance Policy” means measures and requirements concerning insurance for research & development, production, and implementation of artificial intelligence technologies;

(w)  “Interoperability considerations” means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments, which include:

(i)    Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement;

(ii)   Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and the public good;

(iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems;

(iv)  Addressing the legal and ethical implications of using copyright-protected works including but not limited to music, images, and text, in the training of AI models, and ensuring that such use is consistent with existing frameworks of intellectual property rights; and

(v)   Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications, while respecting intellectual property rights and fostering a vibrant and competitive AI ecosystem.

(x)   “Open Source Software” means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.

(y)   “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical, commercial & risk-based features, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;

(z)   “Person” includes—

(i)             an individual;

(ii)            a Hindu undivided family;

(iii)          a company;

(iv)           a firm;

(v)            an association of persons or a body of individuals, whether incorporated or not;

(vi)           the State; and

(vii)         every artificial juristic person, not falling within any of the preceding sub-clauses, including those otherwise referred to in clause (r);

(aa) “Post-Deployment Monitoring” means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;

(bb) “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;

(cc) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;

(dd) “Systemically Significant Digital Enterprise” (SSDE) means an entity classified as such under Chapter II of the Digital Competition Act, 2024[1], based on:

(i)    The quantitative and qualitative criteria specified in Section 5 of the Digital Competition Act, 2024; or

(ii)   The designation by the Competition Commission of India under Section 6 of the Digital Competition Act, 2024, due to the entity's significant presence in the relevant core digital service.

(ee) “Sociotechnical” means the recognition that artificial intelligence systems are not merely technical artifacts but are embedded within broader social contexts, organizational structures, and human-technology interactions, necessitating the consideration and harmonization of both social and technical aspects to ensure responsible and effective AI governance;

(ff)   “State” shall be construed as the State defined under Article 12 of the Constitution of India;

(gg)   “Strategic sector” means a strategic sector as defined in the Foreign Exchange Management (Overseas Investment) Directions, 2022, and includes any other sector or sub-sector as deemed fit by the Central Government;

(hh)   “training data” means data used for training an AI system through fitting its learnable parameters, which include the weights of a neural network;

(ii)   “testing data” means data used to provide an independent evaluation of an artificial intelligence system, after its training and validation, in order to confirm the expected performance of that artificial intelligence technology before it is placed on the market or put into service;
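
[Illustrative note, not part of the Bill: clauses (hh) and (ii) distinguish data used to fit an AI system's learnable parameters from data held out for an independent evaluation before the system is placed on the market or put into service. The sketch below illustrates that split with a toy dataset and a single learnable weight; the data, split ratio, and fitting method are assumptions, not requirements of the Bill.]

```python
# Illustrative sketch only, not part of the Bill: separating "training data"
# (clause (hh)), used to fit learnable parameters, from "testing data"
# (clause (ii)), held out for an independent evaluation of performance.
import random

random.seed(0)
data = [(float(x), 2.0 * x + random.uniform(-0.5, 0.5)) for x in range(100)]  # toy dataset
random.shuffle(data)
train, test = data[:80], data[80:]   # training data vs. held-out testing data

# Fit the learnable parameter (a single weight) on the training data only.
w = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Independent evaluation on testing data the model never saw during training.
mae = sum(abs(y - w * x) for x, y in test) / len(test)
print(f"fitted weight: {w:.3f}, test mean absolute error: {mae:.3f}")
```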

(jj)   “use case” means a specific application of an artificial intelligence technology, subject to its inherent purpose, to solve a particular problem or achieve a desired outcome;

(kk) “Whole-of-Government Approach” means a collaborative and integrated method of governance where all government entities, including ministries, departments, and agencies, work in a coordinated manner to achieve unified policy objectives, optimize resource utilization, and deliver services effectively to the public.


[1] It is assumed that the Draft Digital Competition Act, 2024, proposed to the Ministry of Corporate Affairs in March 2024, is in force.
