What is aiact.in?
AIACT.IN (also known as the Draft Artificial Intelligence (Development & Regulation) Bill, 2023) is India's first private proposal for the regulation of artificial intelligence technologies in India.
This bill was drafted and proposed by our Founder, Abhivardhan.
In addition, a New Artificial Intelligence Strategy for India, 2023 was also proposed, authored by Abhivardhan & Akash Manwani.
Navigation guide
AIACT.IN Video Explainer
Hang on! If you think AIACT.IN is a long document, here is a detailed video explainer of AIACT.IN Version 3 by the author of this draft, Abhivardhan.
You can also access a shorter explainer of Version 3 here.
Chapter I: PRELIMINARY
Section 1 - Short Title and Commencement
(1) This Act may be called the Artificial Intelligence (Development & Regulation) Act, 2023.
(2) It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint and different dates may be appointed for different provisions of this Act and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision.
Section 2 – Definitions
[Please note: we have not provided all definitions that may be required in this bill. We have only provided those definitions which are most essential in signifying the legislative intent of the bill.]
In this Bill, unless the context otherwise requires—
(a) “Artificial Intelligence”, “AI”, “AI technology”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such a system constitutes a diverse class of technology that includes various sub-categories of technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.
(b) “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an artificial intelligence technology, which includes, but is not limited to, text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application;
(c) “Algorithmic Bias” includes –
(i) the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and
(ii) the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to:
(a) programming errors;
(b) flawed algorithmic logic; and
(c) deficiencies in model training and validation, including but not limited to:
(1) incomplete or deficient data used for model training;
(d) “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;
(e) “Business end-user” means an end-user that is -
(i) engaged in a commercial or professional activity and uses an AI system in the course of such activity; or
(ii) a government agency or public authority that uses an AI system in the performance of its official functions or provision of public services.
(f) “Combinations of intellectual property protections” means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;
(g) “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including:
(i) The source data, models, and algorithms used to generate the content;
(ii) The individuals or entities involved in the creation, modification, and distribution of the content;
(iii) The date, time, and location of content creation and any subsequent modifications;
(iv) The intended purpose, context, and target audience of the content;
(v) Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and
(vi) The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability.
(h) “Corporate Governance” means the system of rules, practices, and processes by which an organization is directed and controlled, encompassing the mechanisms through which companies and organisations ensure accountability, fairness, and transparency in their relationships with stakeholders, including but not limited to employees, shareholders, customers, and the public.
(i) “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;
(j) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;
(k) “Data portability” means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where:
(i) The personal data has been provided to the data fiduciary by the data principal;
(ii) The processing is based on consent or the performance of a contract; and
(iii) The processing is carried out by automated means.
(l) “Data Principal” means the individual to whom the personal data relates and where such individual is—
(i) a child, includes the parents or lawful guardian of such a child;
(ii) a person with disability, includes her lawful guardian, acting on her behalf;
(m) “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;
(n) “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;
(o) “Digital personal data” means personal data in digital form;
(p) “Digital Public Infrastructure” or “DPI” means the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including but not limited to:
(i) Digital identity systems that provide secure and verifiable identification for individuals and businesses;
(ii) Digital payment systems that facilitate efficient, transparent, and inclusive financial transactions;
(iii) Data exchange platforms that enable secure and interoperable sharing of data across various sectors and stakeholders;
(iv) Digital registries and databases that serve as authoritative sources of information for various public and private services;
(v) Open application programming interfaces (APIs) and standards that promote innovation, interoperability, and collaboration among different actors in the digital ecosystem.
(q) “End-user” means -
(i) an individual who ultimately uses or is intended to ultimately use an AI system, directly or indirectly, for personal, domestic or household purposes; or
(ii) an entity, including a business or organization, that uses an AI system to provide or offer a product, service, or experience to individuals, whether for a fee or free of charge.
(r) “Knowledge asset” includes, but is not limited to:
(i) Intellectual property rights including but not limited to patents, copyrights, trademarks, and industrial designs;
(ii) Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards;
(iii) Tacit knowledge and expertise residing within the organization’s human capital, such as specialized skills, experiences, and know-how;
(iv) Organizational processes, systems, and methodologies that enable the effective capture, organization, and utilization of knowledge;
(v) Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences;
(vi) Knowledge derived from data analysis, including patterns, trends, and predictive models; and
(vii) Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.
(s) “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;
(t) “IAIC” means the Indian Artificial Intelligence Council, a statutory and regulatory body established to oversee the development & regulation of artificial intelligence systems and to coordinate artificial intelligence governance across government bodies, ministries, and departments;
(u) “Inherent Purpose” and “Intended Purpose” mean the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve;
(v) “Insurance Policy” means measures and requirements concerning insurance for research & development, production, and implementation of artificial intelligence technologies;
(w) “Interoperability considerations” means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments, which include:
(i) Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement;
(ii) Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and public good;
(iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems;
(iv) Addressing the legal and ethical implications of using copyright-protected works including but not limited to music, images, and text, in the training of AI models, and ensuring that such use is consistent with existing frameworks of intellectual property rights; and
(v) Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications, while respecting intellectual property rights and fostering a vibrant and competitive AI ecosystem.
(x) “Open Source Software” means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.
(y) “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical, commercial & risk-based features, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;
(z) “Person” includes—
(i) an individual;
(ii) a Hindu undivided family;
(iii) a company;
(iv) a firm;
(v) an association of persons or a body of individuals, whether incorporated or not;
(vi) the State; and
(vii) every artificial juristic person, not falling within any of the preceding sub-clauses;
(aa) “Post-Deployment Monitoring” means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;
(bb) “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;
(cc) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;
(dd) “Systemically Significant Digital Enterprise” (SSDE) means an entity classified as such under Chapter II of the Digital Competition Act, 2024[1], based on:
(i) The quantitative and qualitative criteria specified in Section 5 of the Digital Competition Act, 2024; or
(ii) The designation by the Competition Commission of India under Section 6 of the Digital Competition Act, 2024, due to the entity's significant presence in the relevant core digital service.
(ee) “Sociotechnical” means the recognition that artificial intelligence systems are not merely technical artifacts but are embedded within broader social contexts, organizational structures, and human-technology interactions, necessitating the consideration and harmonization of both social and technical aspects to ensure responsible and effective AI governance;
(ff) “State” shall be construed as the State defined under Article 12 of the Constitution of India;
(gg) “Strategic sector” means a strategic sector as defined in the Foreign Exchange Management (Overseas Investment) Directions, 2022, and includes any other sector or sub-sector as deemed fit by the Central Government;
(hh) “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;
(ii) “testing data” means data used for providing an independent evaluation of the artificial intelligence system subject to training and validation to confirm the expected performance of that artificial intelligence technology before its placing on the market or putting into service;
(jj) “use case” means a specific application of an artificial intelligence technology, subject to their inherent purpose, to solve a particular problem or achieve a desired outcome;
(kk) “Whole-of-Government Approach” means a collaborative and integrated method of governance where all government entities, including ministries, departments, and agencies, work in a coordinated manner to achieve unified policy objectives, optimize resource utilization, and deliver services effectively to the public.
[1] It is assumed that the Draft Digital Competition Act, 2024 proposed to the Ministry of Corporate Affairs in March 2024 is in force.
Chapter II: CATEGORIZATION AND PROHIBITION
Section 3 - Classification of Artificial Intelligence
(1) All artificial intelligence technologies are categorised on the basis of the means of classification provided as follows –
(a) Conceptual methods of classification: These methods as described in Section 4 categorize artificial intelligence technologies through a conceptual assessment of their utilization, development, maintenance, and proliferation to examine & recognise their inherent purpose. These methods include:
(1) Issue-to-Issue Concept Classification (IICC)
(2) Ethics-Based Concept Classification (EBCC)
(3) Phenomena-Based Concept Classification (PBCC)
(4) Anthropomorphism-Based Concept Classification (ABCC)
(b) Technical methods of classification: These methods as described in Section 5 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations. These methods include:
(1) General Purpose Artificial Intelligence Applications with Multiple Stable Use Cases (GPAIS)
(2) General Purpose Artificial Intelligence Applications with Multiple Short-Run or Unclear Use Cases (GPAIU)
(3) Specific-Purpose Artificial Intelligence Applications with One or More Associated Standalone Use Cases or Test Cases (SPAI)
(c) Commercial methods of classification: These methods as described in Section 6 involve the categorisation of commercially and industrially produced and disseminated artificial intelligence technologies subject to their inherent purpose.
(1) Artificial Intelligence as a Product (AI-Pro)
(2) Artificial Intelligence as a Service (AIaaS)
(3) Artificial Intelligence as a Component (AI-Com)
(4) Artificial Intelligence as a System (AI-S)
(5) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS)
(6) Artificial Intelligence for Preview (AI-Pre)
(d) Risk-centric methods of classification: These methods as described in Section 7 classify artificial intelligence technologies based on their outcome and impact-based risks.
(1) Narrow Risk AI Systems
(2) Medium Risk AI Systems
(3) High Risk AI Systems
(4) Unintended Risk AI Systems
Section 4 – Conceptual Methods of Classification
(1) These methods as designated in clause (a) of sub-section (1) of Section 3 categorize artificial intelligence technologies through a conceptual assessment of their utilization, development, maintenance, and proliferation to examine & recognise their inherent purpose. This classification is further categorised as –
(i) Issue-to-Issue Concept Classification (IICC) as described in sub-section (2)
(ii) Ethics-Based Concept Classification (EBCC) as described in sub-section (3)
(iii) Phenomena-Based Concept Classification (PBCC) as described in sub-section (4)
(iv) Anthropomorphism-Based Concept Classification (ABCC) as described in sub-section (5)
(2) Issue-to-Issue Concept Classification (IICC) involves determining the inherent purpose of artificial intelligence technologies on a case-by-case basis, examining & recognising that purpose on the basis of these factors of assessment:
(i) Utilization: Assessing the specific use cases and applications of the AI technology in various domains.
(ii) Development: Evaluating the design, training, and deployment processes of the AI technology.
(iii) Maintenance: Examining the ongoing support, updates, and modifications made to the AI technology.
(iv) Proliferation: Analysing the dissemination and adoption of the AI technology across different sectors and user groups.
Illustrations
(1) An AI system designed for medical diagnostics is classified based on its purpose to enhance patient outcomes. For instance, if an AI software assists doctors in diagnosing diseases more accurately, it is classified under medical AI applications.
(2) An AI system for financial trading is classified based on its purpose to optimize investment strategies. For example, if an AI-driven algorithm analyses market data to recommend stock trades, it is classified under financial AI applications.
(3) Ethics-Based Concept Classification (EBCC) involves recognising the ethics-based relationship of artificial intelligence technologies in sector-specific & sector-neutral contexts, to examine their inherent purpose on the basis of these factors:
(i) Utilization: Assessing the ethical implications of AI technology use in specific sectors and across different domains.
(ii) Development: Evaluating the ethical considerations in the design, training, and deployment of AI technologies.
(iii) Maintenance: Examining the ongoing ethical responsibilities in supporting, updating, and modifying AI technologies.
(iv) Proliferation: Analysing the ethical impact of AI technology dissemination and adoption across various sectors and user groups.
Illustration
An AI for social media content moderation is assessed based on fairness and bias prevention. For example, if an AI filters hate speech and misinformation on social media platforms, it is classified under content moderation AI with an emphasis on ensuring unbiased and fair treatment of all users’ content.
(4) Phenomena-Based Concept Classification (PBCC) involves addressing rights-based issues associated with the use and dissemination of artificial intelligence technologies, to examine & recognise their inherent purpose on the basis of these factors:
(i) Utilization: Assessing the impact of AI technology use on individual and collective rights in various domains.
(ii) Development: Evaluating the incorporation of rights-based considerations in the design, training, and deployment of AI technologies.
(iii) Maintenance: Examining the ongoing efforts to protect and uphold rights in the support, updates, and modifications of AI technologies.
(iv) Proliferation: Analysing the rights-based implications of AI technology dissemination and adoption across different sectors and user groups.
Illustrations
(1) An AI system that analyses personal data for targeted advertising is classified based on its compliance with data protection rights. For example, an AI that personalizes ads based on user behaviour is classified under advertising AI with data privacy considerations.
(2) An AI used in autonomous vehicles is classified based on its implications for road safety and user rights. For instance, an AI that controls self-driving cars is classified under automotive AI with a focus on safety and user rights.
(5) Anthropomorphism-Based Concept Classification (ABCC) involves the method of evaluating scenarios where AI systems ordinarily simulate, imitate, replicate, or emulate human attributes, which include:
(i) Autonomy: The ability to operate and make decisions independently, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model autonomous decision-making processes using computational methods;
· Imitation: AI systems learn from and reproduce human-like autonomous behaviours;
· Replication: AI systems accurately reproduce specific human-like autonomous functions;
· Emulation: AI systems replicate and potentially enhance human-like autonomy;
Illustration
An AI-powered drone delivery system that navigates through urban environments, avoiding obstacles and adapting its route based on real-time traffic conditions to efficiently deliver packages without human intervention.
(ii) Perception: The ability to interpret and understand sensory information from the environment, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model human-like perception using computational methods;
· Imitation: AI systems learn from and reproduce specific human-like perceptual processes;
· Replication: AI systems accurately reproduce specific human-like perceptual abilities;
Illustration
A service robot in a hotel uses computer vision and natural language processing to recognize and greet guests by name, interpret their facial expressions and tone of voice to gauge emotions, and respond appropriately to verbal requests.
(iii) Reasoning: The ability to process information, draw conclusions, and solve problems, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model human-like reasoning using computational methods;
· Imitation: AI systems learn from and reproduce specific human reasoning patterns;
· Replication: AI systems accurately reproduce specific human-like reasoning abilities;
· Emulation: AI systems surpass specific human-like reasoning abilities;
Illustration
A medical diagnosis AI system analyses a patient’s symptoms, medical history, test results and imaging scans. It uses this information to generate a list of probable diagnoses, suggest additional tests to rule out possibilities, and recommend an optimal treatment plan.
(iv) Interaction: The ability to communicate and engage with humans or other AI systems, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model human-like interaction using computational methods;
· Imitation: AI systems learn from and reproduce specific human interaction patterns;
· Replication: AI systems accurately reproduce specific human-like interaction abilities;
· Emulation: AI systems enhance human-like interaction;
Illustration
An AI-powered virtual assistant engages in natural conversations with users, understanding context and nuance. It asks clarifying questions when needed, provides relevant information or executes tasks, and even interjects with suggestions or prompts.
(v) Adaptation: The ability to learn from experiences and adjust behaviour accordingly, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model human-like adaptation using computational methods.
· Imitation: AI systems learn from and reproduce human adaptation behaviours.
· Replication: AI systems reproduce human-like adaptation abilities, recognizing the inherent complexity.
· Emulation: AI systems surpass human-like adaptation as an aspirational goal.
Illustration
An AI system for stock trading continuously analyses market trends, world events, and the performance of its own trades. It identifies patterns and correlations, learning which strategies work best in different scenarios. The AI optimizes its trading algorithms and adapts its approach based on accumulated experience, demonstrating adaptive abilities.
(vi) Creativity: The ability to generate novel ideas, solutions, or outputs, based on a set of corresponding scenarios including but not limited to:
· Simulation: AI systems model human-like creativity using computational methods;
· Imitation: AI systems learn from and reproduce human creative processes;
· Replication: AI systems accurately reproduce human-like creative abilities, acknowledging the complexity involved;
· Emulation: AI systems enhance human-like creativity as a forward-looking objective;
Illustration
An AI music composition tool creates an original symphony. Given a theme and emotional tone, it generates unique melodies, harmonies and instrumentation. It iterates and refines the composition based on aesthetic evaluation models, ultimately producing a piece that is distinct from existing music in its training data.
(6) Application of Conceptual Methods of Classification: The methods of classification as described in sub-sections (2) to (5) in this Section may be applied in the following aspects of artificial intelligence governance within the scope of the Act:
Section 5 – Technical Methods of Classification
(1) These methods as designated in clause (b) of sub-section (1) of Section 3 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations such as –
(i) General Purpose Artificial Intelligence Applications with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);
(ii) General Purpose Artificial Intelligence Applications with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);
(iii) Specific-Purpose Artificial Intelligence Applications with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);
(2) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:
(i) Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.
(ii) Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.
(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.
(iv) Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.
Illustration
An AI system used in healthcare for diagnostics, treatment recommendations, and patient management. This AI consistently performs well in various healthcare settings, adhering to medical standards and providing reliable outcomes. It is characterized by its large scale in handling diverse medical data and serving multiple institutions, its inherent purpose of assisting healthcare professionals in decision-making and care improvement, robust technical architecture and accuracy while adhering to privacy and security standards, and potential limitations in edge cases or rare conditions.
(3) General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:
(i) Scale: The ability to address specific short-term needs or exploratory applications within relevant sectors at a medium scale.
(ii) Inherent Purpose: Providing targeted solutions for emerging or temporary use cases, with the potential for future adaptation and expansion.
(iii) Technical Features: Modular and adaptable architectures enabling rapid development and deployment in response to evolving requirements.
(iv) Technical Limitations: Uncertainties regarding long-term viability, scalability, and compliance with changing industry standards and regulations.
Illustration
An AI system used in experimental smart city projects for traffic management, pollution monitoring, and public safety. Deployed at a medium scale in specific locations for limited durations, its inherent purpose is testing and validating AI feasibility and effectiveness in smart city applications. It features a modular, adaptable technical architecture to accommodate changing requirements and infrastructure integration, but faces potential limitations in scalability, interoperability, and long-term performance due to the experimental nature.
(4) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) are classified based on a technical method that evaluates the following factors:
(i) Scale: The ability to address specific, well-defined problems or serve as proof-of-concept implementations at a small scale.
(ii) Inherent Purpose: Providing specialized solutions for individual use cases or validating AI technique feasibility in controlled environments.
(iii) Technical Features: Focused and optimized architectures tailored to the specific requirements of the standalone use case or test case.
(iv) Technical Limitations: Constraints on generalizability, difficulties scaling beyond the initial use case, and challenges ensuring real-world robustness and reliability.
Illustration
An AI chatbot used by a company for customer service during a product launch. As a small-scale standalone application, its inherent purpose is providing automated support for a specific product or service. It employs a focused, optimized technical architecture for handling product-related queries and interactions, but faces limitations in handling queries outside the predefined scope or adapting to new products without significant modifications.
Section 6 – Commercial Methods of Classification
(1) These methods as designated in clause (c) of sub-section (1) of Section 3 involve the categorisation of commercially produced and disseminated artificial intelligence technologies based on their inherent purpose and primary intended use, considering factors such as:
(i) The core functionality and technical capabilities of the artificial intelligence technology;
(ii) The main end-users or business end-users for the artificial intelligence technology, and the size of the user base or market share;
(iii) The primary markets, sectors, or domains in which the artificial intelligence technology is intended to be applied, and the market influence or dominance in those sectors;
(iv) The key benefits, outcomes, or results the artificial intelligence technology is designed to deliver, and the potential impact on individuals, businesses, or society;
(v) The annual turnover or revenue generated by the artificial intelligence technology or the company developing and deploying it;
(vi) The amount of data collected, processed, or utilized by the artificial intelligence technology, and the level of data integration across different services or platforms; and
(vii) Any other quantitative or qualitative factors that may be prescribed by the Central Government or the Indian Artificial Intelligence Council (IAIC) to assess the significance and impact of the artificial intelligence technology.
(2) Based on an assessment of the factors outlined in sub-section (1), artificial intelligence technologies are classified into the following categories –
(i) Artificial Intelligence as a Product (AI-Pro), as described in sub-section (3);
(ii) Artificial Intelligence as a Service (AIaaS), as described in sub-section (4);
(iii) Artificial Intelligence as a Component (AI-Com) which includes artificial intelligence technologies directly integrated into existing products, services & system infrastructure, as described in sub-section (5);
(iv) Artificial Intelligence as a System (AI-S), which includes layers or interfaces provided within AIaaS that facilitate the integration of the capabilities of artificial intelligence technologies into existing systems, in whole or in part, as described in sub-section (6);
(v) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital infrastructure, as described in sub-section (7);
(vi) Artificial Intelligence for Preview (AI-Pre), as described in sub-section (8);
(3) Artificial Intelligence as a Product (AI-Pro) refers to standalone AI applications or software that are developed and sold as individual products to end-users. These products are designed to perform specific tasks or provide particular services directly to the user;
Illustrations
(1) An AI-powered home assistant device as a product is marketed and sold as a consumer electronic device that provides functionalities like voice recognition, smart home control, and personal assistance.
(2) A commercial software package for predictive analytics is used by businesses to forecast market trends and consumer behaviour.
(4) Artificial Intelligence as a Service (AIaaS) refers to cloud-based AI solutions that are provided to users on-demand over the internet. Users can access and utilize the capabilities of AI systems without the need to develop or maintain the underlying infrastructure;
Illustrations
(1) A cloud-based machine learning platform offers businesses and developers access to powerful AI tools and frameworks on a subscription basis.
(2) An AI-driven customer service chatbot service that businesses can integrate into their websites to handle customer inquiries and support.
(5) Artificial Intelligence as a Component (AI-Com) refers to AI technologies that are embedded or integrated into existing products, services, or system infrastructures to enhance their capabilities or performance. In this case, the AI component is not a standalone product but rather a part of a larger system;
Illustrations
(1) An AI-based recommendation engine integrated into an e-commerce platform to provide personalized shopping suggestions to users.
(2) AI-enhanced cameras in smartphones that utilize machine learning algorithms to improve photo quality and provide features like facial recognition.
(6) Artificial Intelligence as a System (AI-S) refers to end-to-end AI solutions that combine multiple AI components, models, and interfaces. These systems often involve the integration of AI capabilities into existing workflows or the creation of entirely new AI-driven processes, in whole or in part;
Illustrations
(1) An AI middleware platform that connects various enterprise applications to enhance their functionalities with AI capabilities, such as an AI layer that integrates with CRM systems to provide predictive sales analytics.
(2) An AI system used in smart manufacturing, where AI interfaces integrate with industrial machinery to optimize production processes and maintenance schedules.
(7) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) refers to the integration of AI technologies into the underlying computing, storage, and network infrastructure to optimize resource allocation, improve efficiency, and enable intelligent automation. This category focuses on the use of AI at the infrastructure level rather than at the application or service level.
Illustrations
(1) An AI-enabled traffic management system that integrates with city infrastructure to monitor and manage traffic flow, reduce congestion, and optimize public transportation schedules.
(2) AI-powered utilities management systems that are integrated into the energy grid to predict and manage energy consumption, enhancing efficiency and reducing costs.
(8) Artificial Intelligence for Preview (AI-Pre) refers to AI technologies that are made available by companies for testing, experimentation, or early access prior to wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms and infrastructure at various stages of development. AI-Pre technologies are typically characterized by one or more of the following features, which may include, but are not limited to:
(i) The AI technology is made available to a limited set of end users or participants in a preview program;
(ii) Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality;
(iii) The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.
(iv) Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.
(v) The AI-Pre technology may be provided free of charge, or under a separate pricing model from the company’s standard commercial offerings.
(vi) After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.
Illustration
A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:
(1) The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.
(2) The AI system’s capabilities are not yet fully tested, documented or supported, and the company provides no warranties or guarantees.
(3) The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.
(4) After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms.
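For readers who prefer a compact reference, the six commercial categories enumerated in sub-section (2) can be sketched as a small data model. This is purely an illustrative aid; the enumeration and its member names are hypothetical and form no part of the Bill.

```python
from enum import Enum

class CommercialClass(Enum):
    """Illustrative enumeration of the Section 6 commercial categories."""
    AI_PRO = "AI-Pro"    # AI as a Product, sub-section (3)
    AIAAS = "AIaaS"      # AI as a Service, sub-section (4)
    AI_COM = "AI-Com"    # AI as a Component, sub-section (5)
    AI_S = "AI-S"        # AI as a System, sub-section (6)
    AI_IAAS = "AI-IaaS"  # AI-enabled Infrastructure as a Service, sub-section (7)
    AI_PRE = "AI-Pre"    # AI for Preview, sub-section (8)

# Looking up a category from its statutory label:
assert CommercialClass("AI-Pre") is CommercialClass.AI_PRE
```

A registry or compliance tool built on the Bill could use such an enumeration to ensure every registered system carries exactly one commercial classification.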
Section 7 – Risk-centric Methods of Classification
(1) These methods as designated in clause (d) of sub-section (1) of Section 3 classify artificial intelligence technologies based on their outcome and impact-based risks –
(i) Narrow risk AI systems as described in sub-section (2);
(ii) Medium risk AI systems as described in sub-section (3);
(iii) High risk AI systems as described in sub-section (4);
(iv) Unintended risk AI systems as described in sub-section (5);
(2) Narrow risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s scale, inherent purpose, technical features and limitations:
(i) Limited scale of utilization or expected deployment across sectors, domains or user groups, determined by the AI system’s inherent purpose and technical capabilities;
(ii) Low potential for harm or adverse impact, with minimal severity and a small number of individuals potentially affected, due to the AI system’s technical features and limitations;
(iii) Feasible options for data principals or end-users to opt-out of the outcomes produced by the system;
(iv) Low vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating risks associated with the use of the system, facilitated by the AI system’s transparency and interpretability arising from its technical architecture;
(v) Outcomes produced by the system are typically reversible with minimal effort, owing to the AI system’s focused scope and well-defined operational boundaries.
Illustration
A virtual assistant AI integrated into a smartphone app to provide basic information lookup and task scheduling would be classified as a narrow risk AI system. Its limited scale of deployment on individual devices, low potential for harm beyond minor inconveniences, opt-out feasibility by disabling the virtual assistant, low user vulnerability due to transparency of its capabilities, and easily reversible outcomes through resetting the app, all contribute to its narrow risk designation.
(3) Medium risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s technical features and limitations:
(i) Potential for moderate harm or adverse impact, with the severity and number of potentially affected individuals or entities being higher than narrow risk systems;
(ii) Limited feasibility for data principals or end-users to opt-out of, or exercise control over, the outcomes or decisions produced by the system in certain contexts;
(iii) Moderate vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating the risks associated with the use of the system, due to factors such as information asymmetry or power imbalances;
(iv) Considerable effort may be required to reverse or remediate the outcomes or decisions produced by the system in certain cases;
(v) The inherent purpose, scale of utilization or expected deployment of the system across sectors, domains or user groups shall not be primary determinants of its risk level.
(vi) The system’s technical architecture, model characteristics, training data quality, decision-making processes, and other technical factors shall be the primary considerations in assessing its risk level.
Illustration
An AI-powered loan approval system used by a regional bank would likely be designated as a medium risk AI system. While its scale is limited to the bank's customer base, the potential to deny loans unfairly or exhibit bias in decision-making poses moderate risks. Customers may have limited opt-out options once they have applied for a loan. Information asymmetry between the bank and customers regarding the AI's decision processes creates moderate user vulnerability, and reversing an improper loan denial could require considerable effort. Together, these factors point to a medium risk classification focused on the AI's technical limitations rather than its inherent purpose.
(4) High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:
(i) Widespread utilization or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;
(ii) Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;
(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;
(iv) High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;
(v) Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.
(vi) The high-risk designation shall apply irrespective of the AI system’s scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.
Illustration
An AI system used to control critical infrastructure like a power grid. Regardless of the system’s specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.
(5) Unintended risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:
(i) Lack of explicit design intent: The system's behaviour emerges spontaneously from the complex interactions between its components, models, data, and infrastructure, without being deliberately engineered for a specific purpose.
(ii) Unpredictable emergence: The system displays novel capabilities, decision-making processes or behavioural patterns that deviate from its original training objectives or intended functionality.
(iii) Uncontrolled evolution: The system continues to learn and evolve in uncontrolled ways after deployment, leading to changes in its behaviour that were not foreseen or accounted for.
(iv) Inscrutable operation: The internal operations, representations and decision paths of the system become increasingly opaque, hindering interpretability and making it difficult to explain its outputs or behaviours.
Illustration
An autonomous vehicle navigation system that, through interactions between its various AI components (perception, prediction, path planning), develops unexpected emergent behaviour that was not intended by its designers, potentially leading to accidents.
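The four risk tiers in Section 7 turn on a handful of recurring factors: severity of potential harm, feasibility of opting out, reversibility of outcomes, and emergent unintended behaviour. The decision logic can be sketched, very loosely, as a triage function. The boolean factors and tier names below are drastic simplifications for illustration only; they are not the statutory test.

```python
from dataclasses import dataclass

@dataclass
class RiskFactors:
    """Hypothetical booleans standing in for the multi-factor Section 7 tests."""
    severe_harm_potential: bool          # cf. Section 7(4)(ii)
    opt_out_feasible: bool               # cf. Sections 7(2)(iii) and 7(4)(iii)
    outcomes_reversible: bool            # cf. Sections 7(2)(v) and 7(4)(v)
    emergent_unintended_behaviour: bool  # cf. Section 7(5)

def classify_risk(f: RiskFactors) -> str:
    # Unintended-risk systems are a category of their own (and prohibited by Section 8)
    if f.emergent_unintended_behaviour:
        return "unintended"
    # High risk: severe harm with no meaningful opt-out and irreversible outcomes
    if f.severe_harm_potential and not f.opt_out_feasible and not f.outcomes_reversible:
        return "high"
    # Narrow risk: users can opt out and outcomes are easily reversed
    if f.opt_out_feasible and f.outcomes_reversible:
        return "narrow"
    # Everything in between defaults to medium risk
    return "medium"
```

On this sketch, a power-grid controller with severe, irreversible impact and no opt-out lands in the high tier, mirroring the illustration in sub-section (4), while the smartphone virtual assistant of sub-section (2) lands in the narrow tier.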
Section 8 - Prohibition of Unintended Risk AI Systems
The development, deployment, and use of unintended risk AI systems, as classified under Section 7(5), are prohibited.
Section 9 - High-Risk AI Systems in Strategic Sectors
(1) The Central Government shall designate strategic sectors where the development, deployment, and use of high-risk AI systems shall be subject to sector-specific standards and regulations, based on the risk classification methods outlined in Chapter II of this Act.
(2) The sector-specific standards and regulations for high-risk AI systems in strategic sectors shall address the following aspects:
(i) Safety: Ensuring that high-risk AI systems operate in a safe and controlled manner, minimizing the potential for harm or unintended consequences to individuals, property, or the environment.
(ii) Security: Implementing robust security measures to protect high-risk AI systems from unauthorized access, manipulation, or misuse, and safeguarding the integrity and confidentiality of data and systems.
(iii) Reliability: Establishing mechanisms to ensure the consistent, accurate, and reliable performance of high-risk AI systems, including through rigorous testing, validation, and monitoring processes.
(iv) Transparency: Promoting transparency in the development, deployment, and operation of high-risk AI systems, enabling stakeholders to understand the underlying algorithms, data sources, and decision-making processes.
(v) Accountability: Defining clear lines of responsibility and accountability for the actions and outcomes of high-risk AI systems, including provisions for redressal and remediation in case of adverse impacts.
(vi) Ethical Considerations: Incorporating ethical principles and guidelines to ensure that high-risk AI systems respect human rights, promote fairness and non-discrimination, and align with societal values and norms.
(vii) Legitimate Uses: Ensuring that the development, deployment, and use of high-risk AI systems in strategic sectors comply with the legitimate uses designated in the provisions of Section 7 of the Digital Personal Data Protection Act, 2023.
(viii) Any other aspect deemed necessary by the Central Government or the IAIC to mitigate the risks associated with high-risk AI systems in strategic sectors.
(3) The IAIC shall collaborate with sector-specific regulatory bodies to develop harmonized guidelines and standards for high-risk AI systems in strategic sectors, taking into account the risk classification and associated requirements outlined in this Act.
(4) In the event of any conflict between the provisions of this Act and sector-specific regulations concerning high-risk AI systems in strategic sectors, the provisions of this Act shall prevail, unless otherwise specified.
Chapter III
Chapter III: INDIAN ARTIFICIAL INTELLIGENCE COUNCIL
Section 10 - Composition and Functions
(1) With effect from the date notified by the Central Government, there shall be established the Indian Artificial Intelligence Council (IAIC), a statutory body for the purposes of this Act.
(2) The IAIC shall be an autonomous body corporate with perpetual succession and a common seal, with the power to acquire, hold and transfer property, both movable and immovable, to contract and be contracted, and to sue or be sued in its name.
(3) The IAIC shall coordinate and oversee the development, deployment, and governance of artificial intelligence systems across all government bodies, ministries, departments, and regulatory authorities, adopting a whole-of-government approach.
(4) The headquarters of the IAIC shall be located at the place notified by the Central Government.
(5) The IAIC shall consist of a Chairperson and such number of other Members, not exceeding [X], as the Central Government may notify.
(6) The Chairperson and Members shall be appointed by the Central Government through a transparent and merit-based selection process, as may be prescribed.
(7) The Chairperson and Members shall be individuals of eminence, integrity and standing, possessing specialized knowledge or practical experience in fields relevant to the IAIC’s functions, including but not limited to:
(i) Data and artificial intelligence governance, policy and regulation;
(ii) Administration or implementation of laws related to consumer protection, digital rights and artificial intelligence and other emerging technologies;
(iii) Dispute resolution, particularly technology and data-related disputes;
(iv) Information and communication technology, digital economy and disruptive technologies;
(v) Law, regulation or techno-regulation focused on artificial intelligence, data protection and related domains;
(vi) Any other relevant field deemed beneficial by the Central Government.
(8) At least three Members shall be experts in law with demonstrated understanding of legal and regulatory frameworks related to artificial intelligence, data protection and emerging technologies.
(9) The IAIC shall have the following functions:
(i) Develop and implement policies, guidelines and standards for responsible development, deployment and governance of AI systems in India;
(ii) Coordinate and collaborate with relevant ministries, regulatory bodies and stakeholders to ensure harmonized AI governance across sectors;
(iii) Establish and maintain the National Registry of AI Use Cases as per Section 12;
(iv) Administer the certification scheme for AI systems as specified in Section 11;
(v) Develop and promote the National AI Ethics Code as outlined in Section 13;
(vi) Facilitate stakeholder consultations, public discourse and awareness on societal implications of AI;
(vii) Promote research, development and innovation in AI with a focus on responsibility and ethics;
(viii) Take regulatory actions to ensure compliance with the policies, standards, and guidelines issued by the IAIC under this Act, which may include:
(a) Issuing show-cause notices requiring non-compliant entities to explain the reasons for non-compliance and outline corrective measures within a specified timeline;
(b) Imposing monetary penalties based on the severity of non-compliance, the risk level involved, and the potential impact on individuals, businesses, or society, with penalties being commensurate with the financial capacity of the non-compliant entity;
(c) Suspending or revoking certifications, registrations, or approvals related to non-compliant AI systems, preventing their further development, deployment, or operation until compliance is achieved;
(d) Mandating independent audits of the non-compliant entity's processes at its own cost, with audit reports to be submitted to the IAIC for review and further action;
(e) Issuing directives to non-compliant entities to implement specific remedial measures within a defined timeline, such as enhancing data quality controls, improving governance frameworks, or strengthening decision-making procedures;
(f) In cases of persistent or egregious non-compliance, recommending the temporary or permanent suspension of the non-compliant entity’s AI-related operations, subject to due process and the principles of natural justice;
(g) Taking any other regulatory action deemed necessary and proportionate to ensure compliance with the prescribed standards and to safeguard the responsible development, deployment, and use of AI systems.
(ix) Advise the Central Government on matters related to AI policy, regulation and governance, and recommend legislative or regulatory changes as necessary;
(x) Perform any other functions necessary to achieve the objectives of this Act or as assigned by the Central Government.
(10) The IAIC may constitute advisory committees, expert groups or task forces as deemed necessary to assist in its functions.
(11) The IAIC shall endeavour to function as a digital office to the extent practicable, conducting proceedings, filings, hearings and pronouncements through digital means as per applicable laws.
Chapter IV
Chapter IV: CERTIFICATION AND ETHICS CODE
Section 11 – Registration & Certification of AI Systems
(1) The IAIC shall establish a voluntary certification scheme for AI systems based on their industry use cases and risk levels, on the basis of the means of classification set forth in Chapter II. The certification scheme shall be designed to promote responsible AI development and deployment.
(2) The IAIC shall maintain a National Registry of Artificial Intelligence Use Cases as described in Section 12 to register and track the development and deployment of AI systems across various sectors. The registry shall be used to inform the development and refinement of the certification scheme and to promote transparency and accountability in artificial intelligence governance.
(3) The certification scheme shall be based on a set of clear, objective, and risk-proportionate criteria that assess the inherent purpose, technical characteristics, and potential impacts of AI systems.
(4) AI systems classified as narrow or medium risk under Section 7 and AI-Pre under sub-section (8) of Section 6 may be exempt from the certification requirement if they meet one or more of the following conditions:
(a) The AI system is still in the early stages of development or testing and has not yet achieved technical or economic thresholds for effective standardization;
(b) The AI system is being developed or deployed in a highly specialized or niche application area where certification may not be feasible or appropriate; or
(c) The AI system is being developed or deployed by start-ups, micro, small & medium enterprises, or research institutions.
(5) AI systems that qualify for exemptions under sub-section (4) must establish and maintain the incident reporting and response protocols specified in Section 19. Failure to maintain these protocols may result in the revocation of the exemption.
(6) The Issue-to-Issue Concept Classification (IICC), Ethics-Based Concept Classification (EBCC), Phenomena-Based Concept Classification (PBCC), and Anthropomorphism-Based Concept Classification (ABCC) outlined in Section 4 are intended for consultative and advisory purposes only, and their application is not mandatory for registration in the National Registry of Artificial Intelligence Use Cases under Section 12.
(7) Notwithstanding anything contained in sub-section (6), the conceptual classification methods under Section 4 shall be mandatory for high-risk AI systems as defined in Section 7(4) and high-risk AI systems associated with strategic sectors as specified in Section 9.
(8) For AI systems not covered under sub-section (7), the conceptual classification methods shall serve as a framework to guide discussions, assessments, and decision-making related to AI systems, with the primary purpose of providing a structured approach for examining the inherent purpose, ethical implications, rights-based considerations, and anthropomorphic characteristics of such systems, which can inform policy development, stakeholder consultations, and adjudicatory processes.
(9) The certification scheme and the methods of classification specified in Chapter II shall undergo periodic review and updating every 12 months to ensure their relevance and effectiveness in response to technological advancements and market developments. The review process shall include meaningful consultation with sector-specific regulators and market stakeholders.
Section 12 – National Registry of Artificial Intelligence Use Cases
(1) The National Registry of Artificial Intelligence Use Cases shall include the metadata for each registered AI system as set forth in sub-sections (1)(a) through (1)(p):
(a) Name and version of the AI system (required)
(b) Owning entity of the AI system (required)
(c) Date of registration (required)
(d) Sector associated with the AI system and whether the AI system is associated with a strategic sector (required)
(e) Specific use case(s) of the AI system (required)
(f) Technical classification of the AI system, as per Section 5 (required)
(g) Key technical characteristics of the AI system as per Section 5, including:
(i) Type of AI model(s) used (required)
(ii) Training data sources and characteristics (required)
(iii) Performance metrics on standard benchmarks (where available, optional)
(h) Commercial classification of the AI system as per Section 6 (required)
(i) Key commercial features of the AI system as per Section 6, including:
(i) Number of end-users and business end-users in India (required, where applicable)
(ii) Market share or level of market influence in the intended sector(s) of application (required, where ascertainable)
(iii) Annual turnover or revenue generated by the AI system or the company owning it (required, where applicable)
(iv) Amount & intended purpose of data collected, processed, or utilized by the AI system (required, where measurable)
(v) Level of data integration across different services or platforms (required, where applicable)
(j) Risk classification of the AI system as per Section 7 (required)
(k) Conceptual classification of the AI system as per Section 4 (required only for high-risk AI Systems)
(l) Potential impacts of the AI system as per Section 7, including:
(i) Inherent Purpose (required)
(ii) Possible risks and harms observed and documented by the owning entity (required)
(m) Certification status (required) (registered & certified / registered & not certified)
(n) A detailed post-deployment monitoring plan as per Section 17 (required only for high-risk AI Systems), including:
(i) Performance metrics and key indicators to be tracked (optional)
(ii) Risk mitigation and human oversight protocols (required)
(iii) Data collection, reporting, and audit trail mechanisms (required)
(iv) Feedback and redressal channels for impacted stakeholders (optional)
(v) Commitments to periodic third-party audits and public disclosure of:
(a) Monitoring reports and performance indicators (optional)
(b) Descriptions of identified risks, incidents or failures as per sub-section (3) of Section 17 (required)
(c) Corrective actions and mitigation measures implemented (required)
(o) Incident reporting and response protocols as per Section 19 (required)
(i) Description of the incident reporting mechanisms established (e.g. hotline, online portal)
(ii) Timelines committed for incident reporting based on risk classification
(iii) Procedures for assessing and determining incident severity levels
(iv) Information to be provided in incident reports as per guidelines
(v) Confidentiality and data protection measures for incident data
(vi) Minimum mitigation actions to be taken upon incident occurrence
(vii) Responsible personnel/team for incident response and mitigation
(viii) Commitments on notifying and communicating with impacted parties
(ix) Integration with IAIC’s central incident repository and reporting channels
(x) Review and improvement processes for incident response procedures
(xi) Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits;
(xii) Confirmation that the insurance coverage meets the minimum requirements specified in Section 25(3) based on the AI system’s risk classification;
(xiii) Details of the risk assessment conducted to determine the appropriate level of insurance coverage, considering factors such as the AI system’s conceptual, technical, and commercial classifications as per Sections 4, 5, and 6;
(xiv) Information on the claims process and timelines for notifying the insurer and submitting claims in the event of an incident covered under the insurance policy;
(xv) Commitment to maintain the insurance coverage throughout the lifecycle of the AI system and to notify the IAIC of any changes in coverage or insurer.
(p) Contact information for the owning entity (required)
Illustration
A technology company develops a new AI system for automated medical diagnosis using computer vision and machine learning techniques. This AI system would be classified as a high-risk system under Section 7(4) due to its potential impact on human health and safety. The company registers this AI system in the National Registry of Artificial Intelligence Use Cases, providing the following metadata:
(a) Name and version: MedVision AI Diagnostic System v1.2
(b) Owning entity: ABC Technologies Pvt. Ltd.
(c) Date of registration: 01/05/2024
(d) Sector: Healthcare
(e) Use case: Automated analysis of medical imaging data (X-rays, CT scans, MRIs) to detect and diagnose diseases
(f) Technical classification: Specific Purpose AI (SPAI) under Section 5(4)
(g) Key technical characteristics:
· Convolutional neural networks for image analysis
· Trained on de-identified medical imaging datasets from hospitals
· Achieved 92% accuracy on standard benchmarks
(h) Commercial classification: AI-Pro under Section 6(3)
(i) Key commercial features:
· Intended for use by healthcare providers across India
· Not yet deployed, so no market share data
· No revenue generated yet (pre-commercial)
(j) Risk classification: High Risk under Section 7(4)
(k) Conceptual classification: Assessed under all four methods in Section 4 due to high-risk
(l) Potential impacts:
· Inherent purpose is to assist medical professionals in diagnosis
· Documented risks include misdiagnosis, bias, lack of interpretability
(m) Certification status: Registered & certified
(n) Post-deployment monitoring plan:
· Performance metrics like accuracy, false positive/negative rates
· Human oversight, periodic audits for bias/errors
· Logging all outputs, decisions for audit trail
· Channels for user feedback, grievance redressal
· Commitments to third-party audits, public incident disclosure
(o) Incident reporting protocols:
· Dedicated online portal for incident reporting
· Critical incidents to be reported within 48 hours
· High/medium severity incidents within 7 days
· Procedures for severity assessment, confidentiality measures
· Minimum mitigation actions, impacted party notifications
· Integration with IAIC incident repository
· Insurance coverage details:
· Professional indemnity policy from XYZ Insurance Co., policy #PI12345
· Coverage limit of INR 50 crores, as required for high-risk AI under Section 25(3)(i)
· Risk assessment considered technical complexity, healthcare impact, irreversible consequences
· Claims to be notified within 24 hours, supporting documentation within 7 days
· Coverage to be maintained throughout AI system lifecycle, IAIC to be notified of changes
(p) Contact: info@abctech.com
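For illustration only, the Registry entry above could be captured as a structured, machine-readable record. The field names and dictionary layout below are our own assumptions for the sketch; the Bill prescribes what metadata must be submitted, not any particular format.

```python
# Hypothetical machine-readable sketch of a National Registry entry,
# mirroring the MedVision illustration above. Field names are
# illustrative assumptions, not a format prescribed by the Bill.
medvision_entry = {
    "name_version": "MedVision AI Diagnostic System v1.2",
    "owning_entity": "ABC Technologies Pvt. Ltd.",
    "date_of_registration": "2024-05-01",
    "sector": "Healthcare",
    "use_case": "Automated analysis of medical imaging data to detect and diagnose diseases",
    "technical_classification": "SPAI (Section 5(4))",
    "commercial_classification": "AI-Pro (Section 6(3))",
    "risk_classification": "High Risk (Section 7(4))",
    "certification_status": "Registered & certified",
    "insurance": {
        "policy_type": "Professional indemnity",
        "insurer": "XYZ Insurance Co.",
        "policy_number": "PI12345",
        "coverage_limit_inr_crores": 50,
    },
    "contact": "info@abctech.com",
}

# Fields marked "(required)" in sub-section (1); a registration would be
# incomplete if any of these were missing.
REQUIRED_FIELDS = {"name_version", "owning_entity", "date_of_registration",
                   "use_case", "risk_classification", "certification_status",
                   "contact"}
missing = REQUIRED_FIELDS - medvision_entry.keys()
print(sorted(missing))  # → []
```

A registry operator could run a completeness check of this kind at submission time, before the deeper validation and audit mechanisms contemplated in sub-section (5).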
(2) The IAIC may, from time to time, expand or modify the metadata schema for the National Registry as it deems necessary to reflect advancements in AI technology and risk assessment methodologies. The IAIC shall give notice of any such changes at least 60 days prior to the date on which they shall take effect.
(3) The owners of AI systems shall have the duty to provide accurate and current metadata at the time of registration and to notify the IAIC of any material changes to the registered information within:
(i) 15 days of such change occurring for AI systems classified as High Risk under sub-section (4) of Section 7;
(ii) 30 days of such change occurring for AI systems classified as Medium Risk under sub-section (3) of Section 7;
(iii) 60 days of such change occurring for AI systems classified as Narrow Risk under sub-section (2) of Section 7;
(iv) 90 days of such change occurring for AI systems classified as Narrow Risk or Medium Risk under Section 7 that are exempted from certification under sub-section (3) of Section 11.
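The four notification windows in sub-section (3) reduce to a simple lookup keyed on the Section 7 risk classification. The following sketch is illustrative only; the function name and the `certification_exempt` flag are our assumptions, not terms used in the Bill.

```python
# Hypothetical helper mapping an AI system's risk classification under
# Section 7 to the deadline (in days) for notifying the IAIC of material
# changes to registered metadata, per sub-section (3).
def metadata_update_deadline_days(risk_class: str,
                                  certification_exempt: bool = False) -> int:
    # Clause (iv): narrow/medium-risk systems exempted from certification
    # under Section 11(3) get the longest window.
    if certification_exempt and risk_class in ("Narrow Risk", "Medium Risk"):
        return 90
    deadlines = {
        "High Risk": 15,    # clause (i)
        "Medium Risk": 30,  # clause (ii)
        "Narrow Risk": 60,  # clause (iii)
    }
    return deadlines[risk_class]

print(metadata_update_deadline_days("High Risk"))                               # → 15
print(metadata_update_deadline_days("Medium Risk", certification_exempt=True))  # → 90
```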
(4) Notwithstanding anything contained in sub-section (1), the owners of AI systems exempted under sub-section (3) of Section 11 shall only be required to submit the metadata specified in clauses (a) through (k) of this sub-section to register their AI systems:
(a) Name and version of the AI system (required)
(b) Owning entity of the AI system (required)
(c) Date of registration (required)
(d) Sector associated with the AI system (optional)
(e) Specific use case(s) of the AI system (required)
(f) Technical classification of the AI system, as per Section 5 (optional)
(g) Commercial classification of the AI system as per Section 6 (required)
(h) Risk classification of the AI system as per Section 7 (required, narrow risk or medium risk only)
(i) Certification status (required) (registered; certification exempted under sub-section (3) of Section 11)
(j) Incident reporting and response protocols as per Section 19 (required)
(i) Description of the incident reporting mechanisms established (e.g. hotline, online portal)
(ii) Timelines committed for reporting high/critical severity incidents (within 14-30 days)
(iii) Procedures for assessing and determining incident severity levels (only high/critical)
(iv) Information to be provided in incident reports (incident description, system details)
(v) Confidentiality measures for incident data based on sensitivity (scaled down)
(vi) Minimum mitigation actions to be taken upon high/critical incident occurrence
(vii) Responsible personnel/team for incident response and mitigation
(viii) Commitments on notifying and communicating with impacted parties
(ix) Integration with IAIC’s central incident repository and reporting channels
(x) Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits (required for high-risk AI systems only);
(k) Contact information for the owning entity (required)
Illustration
A small AI startup develops a chatbot for basic customer service queries using natural language processing techniques. As a narrow-risk AI system still in its early development stages, the startup claims exemption under Section 11(3) and registers with the following limited metadata:
(a) Name and version: ChatAssist v0.5 (beta)
(b) Owning entity: XYZ AI Solutions LLP
(c) Date of registration: 15/06/2024
(d) Sector: Not provided (optional)
(e) Use case: Automated response to basic customer queries via text/voice
(f) Technical classification: Specific Purpose AI (SPAI) under Section 5(4) (optional)
(g) Commercial classification: AI-Pre under Section 6(8)
(h) Risk classification: Narrow Risk under Section 7(2)
(i) Certification status: Registered & certification exempted under Section 11(3)
(j) Incident reporting protocols:
· Email support@xyzai.com for incident reporting
· High/critical incidents to be reported within 30 days
· Only incident description and system details required in incident reports
· Standard data protection measures as per company policy
· Mitigation by product team, notifying customers if major
(k) Contact: support@xyzai.com
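Comparing the two illustrations, the exempt registration under sub-section (4) is a strict subset of the full schema under sub-section (1). The sketch below makes that relationship explicit; the shorthand labels stand in for the statutory items (a) through (p) and are our assumptions, not terms from the Bill.

```python
# Shorthand labels for the full registration schema of sub-section (1),
# items (a) through (p), in order.
FULL_SCHEMA = ["name_version", "owning_entity", "date_of_registration",
               "sector", "use_case", "technical_classification",
               "technical_characteristics", "commercial_classification",
               "commercial_features", "risk_classification",
               "conceptual_classification", "potential_impacts",
               "certification_status", "monitoring_plan",
               "incident_protocols", "contact"]

# Systems exempted under Section 11(3) omit the items tied to certification
# and high-risk oversight: key technical characteristics, key commercial
# features, conceptual classification, documented impacts, and the
# post-deployment monitoring plan.
EXEMPT_OMITS = {"technical_characteristics", "commercial_features",
                "conceptual_classification", "potential_impacts",
                "monitoring_plan"}
EXEMPT_SCHEMA = [f for f in FULL_SCHEMA if f not in EXEMPT_OMITS]
print(len(FULL_SCHEMA), len(EXEMPT_SCHEMA))  # → 16 11
```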
(5) The IAIC shall put in place mechanisms to validate the metadata provided and to audit registered AI systems for compliance with the reported information. Where the IAIC determines that any developer or owner has provided false or misleading information, it may impose penalties, including fines and revocation of certification, as it deems fit.
(6) The IAIC shall publish aggregate statistics and analytics based on the metadata in the National Registry for the purposes of supporting evidence-based policymaking, research, and public awareness about AI development and deployment trends. Provided that commercially sensitive information and trade secrets shall not be disclosed.
(7) Registration and certification under this Act shall be voluntary, and no penal consequences shall attach to the lack of registration or certification of an AI system, except as otherwise expressly provided in this Act.
(8) The examination process for registration and certification of AI use cases shall be conducted by the IAIC in a transparent and inclusive manner, engaging with relevant stakeholders, including:
(i) Technical experts and researchers in the field of artificial intelligence, who can provide insights into the technical aspects, capabilities, and limitations of the AI systems under examination.
(ii) Representatives of industries developing and deploying AI technologies, who can offer practical perspectives on the commercial viability, use cases, and potential impacts of the AI systems.
(iii) Technology standards & business associations and consumer protection groups, who can represent the interests and concerns of end-users, affected communities, and the general public.
(iv) Representatives from diverse communities and individuals who may be impacted by AI systems, to ensure their rights, needs, experiences and perspectives across different contexts are comprehensively accounted for during the examination process.
(v) Any other relevant stakeholders or subject matter experts that the IAIC deems necessary for a comprehensive and inclusive examination of AI use cases.
(9) The IAIC shall publish the results of its examinations for registration and certification of AI use cases, along with any recommendations for risk mitigation measures, regulatory actions, or guidelines, in an accessible format for public review and feedback. This shall include detailed explanations of the classification criteria applied, the stakeholder inputs considered, and the rationale behind the decisions made.
Section 13 – National Artificial Intelligence Ethics Code
(1) A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilization of artificial intelligence technologies.
(2) The NAIEC shall be based on the following core ethical principles:
(i) AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.
(ii) AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes.
(iii) AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system’s outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.
(iv) AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.
(v) AI systems should be designed and operated with a focus on safety and robustness, minimizing the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.
(vi) AI systems should be developed and deployed with consideration for their environmental impact, promoting sustainability and minimizing negative ecological consequences throughout their lifecycle.
(vii) AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented;
(viii) AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.
(ix) AI systems that are developed and deployed using frugal prompt engineering practices should optimize efficiency, cost-effectiveness, and resource utilization while maintaining high standards of performance, safety, and ethical compliance in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.
(3) The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency,