
CHAPTER II: CATEGORISATION AND PROHIBITION

Section 3 – Classification of Artificial Intelligence

(1) All artificial intelligence technologies are categorised on the basis of the methods of classification provided as follows –


(i) Conceptual methods of classification: These methods, as described in Section 4, categorise artificial intelligence technologies through a conceptual assessment of their utilisation, development, maintenance, and proliferation to examine and recognise their inherent purpose. These methods include:

(a) Issue-to-Issue Concept Classification (IICC)

(b) Ethics-Based Concept Classification (EBCC)

(c) Phenomena-Based Concept Classification (PBCC)

(d) Anthropomorphism-Based Concept Classification (ABCC)


(ii) Technical methods of classification: These methods, as described in Section 5, classify artificial intelligence technologies according to their scale, inherent purpose, technical features, and technical limitations. These methods include:

(a) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS)

(b) General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU)

(c) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI)


(iii) Commercial methods of classification: These methods, as described in Section 6, involve the categorisation of commercially and industrially produced and disseminated artificial intelligence technologies according to their inherent purpose. These methods include:

(a) Artificial Intelligence as a Product (AI-Pro)

(b) Artificial Intelligence as a Service (AIaaS)

(c) Artificial Intelligence as a Component (AI-Com)

(d) Artificial Intelligence as a System (AI-S)

(e) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS)

(f) Artificial Intelligence for Preview (AI-Pre)


(iv) Risk-centric methods of classification: These methods, as described in Section 7, classify artificial intelligence technologies on the basis of their outcome-based and impact-based risks. These methods include:

(a) Narrow Risk AI Systems

(b) Medium Risk AI Systems

(c) High Risk AI Systems

(d) Unintended Risk AI Systems


Section 4 – Conceptual Methods of Classification

(1) These methods, as designated in clause (i) of sub-section (1) of Section 3, categorise artificial intelligence technologies through a conceptual assessment of their utilisation, development, maintenance, and proliferation to examine and recognise their inherent purpose. This classification is further divided as follows –

(i) Issue-to-Issue Concept Classification (IICC) as described in sub-section (2)

(ii) Ethics-Based Concept Classification (EBCC) as described in sub-section (3)

(iii) Phenomena-Based Concept Classification (PBCC) as described in sub-section (4)

(iv) Anthropomorphism-Based Concept Classification (ABCC) as described in sub-section (5)


(2) Issue-to-Issue Concept Classification (IICC) involves the method of determining the inherent purpose of artificial intelligence technologies on a case-to-case basis, examining and recognising that purpose on the basis of these factors of assessment:

(i) Utilisation: Assessing the specific use cases and applications of the AI technology in various domains.

(ii) Development: Evaluating the design, training, and deployment processes of the AI technology.

(iii) Maintenance: Examining the ongoing support, updates, and modifications made to the AI technology.

(iv) Proliferation: Analysing the dissemination and adoption of the AI technology across different sectors and user groups.


Illustrations


(1) An AI system designed for medical diagnostics is classified based on its purpose to enhance patient outcomes. For instance, if an AI software assists doctors in diagnosing diseases more accurately, it is classified under medical AI applications.

(2) An AI system for financial trading is classified based on its purpose to optimize investment strategies. For example, if an AI-driven algorithm analyses market data to recommend stock trades, it is classified under financial AI applications.


(3) Ethics-Based Concept Classification (EBCC) involves the method of recognising the ethics-based relationship of artificial intelligence technologies in sector-specific and sector-neutral contexts, to examine and recognise their inherent purpose on the basis of these factors:

(i) Utilisation: Evaluating how AI technology impacts ethical principles during its use in specific sectors or across multiple domains.

(ii) Development: Assessing whether ethical considerations were integrated during the design, training, and deployment phases of the AI technology.

(iii) Maintenance: Examining how ethical responsibilities are upheld during updates and modifications to the AI system.

(iv) Proliferation: Analysing how the widespread adoption of the AI system affects ethical standards across sectors and user groups.


Illustration


An AI for social media content moderation is assessed based on fairness and bias prevention. For example, if an AI filters hate speech and misinformation on social media platforms, it is classified under content moderation AI with an emphasis on ensuring unbiased and fair treatment of all users’ content.


(4) Phenomena-Based Concept Classification (PBCC) involves the method of addressing rights-based issues associated with the use and dissemination of artificial intelligence technologies to examine and recognise their inherent purpose on the basis of these factors:

(i) Utilisation: Assessing how the AI system affects individual or collective rights during its use in various domains.

(ii) Development: Evaluating whether AI systems incorporate protections for rights recognised under Indian law during their design, training, and deployment phases, considering legal, constitutional, and commercial rights.

(iii) Maintenance: Reviewing how ongoing support and updates to the AI system protect user rights.

(iv) Proliferation: Analysing the rights-based implications of AI technology dissemination and adoption across different sectors and user groups.


Illustrations


(1) An AI system that analyses personal data for targeted advertising is classified based on its compliance with data protection rights. For example, an AI that personalizes ads based on user behaviour is classified under advertising AI with data privacy considerations.

(2) An AI used in autonomous vehicles is classified based on its implications for road safety and user rights. For instance, an AI that controls self-driving cars is classified under automotive AI with a focus on safety and user rights.



(5) Anthropomorphism-Based Concept Classification (ABCC) involves the method of evaluating scenarios where AI systems ordinarily simulate, imitate, replicate, or emulate human attributes, which include:

(i) Autonomy: The ability to operate and make decisions independently, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model autonomous decision-making processes using computational methods;

• Imitation: AI systems learn from and reproduce human-like autonomous behaviours;

• Replication: AI systems accurately reproduce specific human-like autonomous functions;

• Emulation: AI systems replicate and potentially enhance human-like autonomy;

Illustration

An AI-powered drone delivery system that navigates through urban environments, avoiding obstacles and adapting its route based on real-time traffic conditions to efficiently deliver packages without human intervention.


(ii) Perception: The ability to interpret and understand sensory information from the environment, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model human-like perception using computational methods;

• Imitation: AI systems learn from and reproduce specific human-like perceptual processes;

• Replication: AI systems accurately reproduce specific human-like perceptual abilities;

Illustration

A service robot in a hotel uses computer vision and natural language processing to recognize and greet guests by name, interpret their facial expressions and tone of voice to gauge emotions, and respond appropriately to verbal requests.


(iii) Reasoning: The ability to process information, draw conclusions, and solve problems, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model human-like reasoning using computational methods;

• Imitation: AI systems learn from and reproduce specific human reasoning patterns;

• Replication: AI systems accurately reproduce specific human-like reasoning abilities;

• Emulation: AI systems surpass specific human-like reasoning abilities;

Illustration

A medical diagnosis AI system analyses a patient’s symptoms, medical history, test results and imaging scans. It uses this information to generate a list of probable diagnoses, suggest additional tests to rule out possibilities, and recommend an optimal treatment plan.


(iv) Interaction: The ability to communicate and engage with humans or other AI systems, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model human-like interaction using computational methods;

• Imitation: AI systems learn from and reproduce specific human interaction patterns;

• Replication: AI systems accurately reproduce specific human-like interaction abilities;

• Emulation: AI systems enhance human-like interaction;

Illustration

An AI-powered virtual assistant engages in natural conversations with users, understanding context and nuance. It asks clarifying questions when needed, provides relevant information or executes tasks, and even interjects with suggestions or prompts.


(v) Adaptation: The ability to learn from experiences and adjust behaviour accordingly, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model human-like adaptation using computational methods.

• Imitation: AI systems learn from and reproduce human adaptation behaviours.

• Replication: AI systems reproduce human-like adaptation abilities, recognizing the inherent complexity.

• Emulation: AI systems surpass human-like adaptation as an aspirational goal.

Illustration

An AI system for stock trading continuously analyses market trends, world events, and the performance of its own trades. It identifies patterns and correlations, learning which strategies work best in different scenarios. The AI optimizes its trading algorithms and adapts its approach based on accumulated experience, demonstrating adaptive abilities.


(vi) Creativity: The ability to generate novel ideas, solutions, or outputs, based on a set of corresponding scenarios including but not limited to:

• Simulation: AI systems model human-like creativity using computational methods;

• Imitation: AI systems learn from and reproduce human creative processes;

• Replication: AI systems accurately reproduce human-like creative abilities, acknowledging the complexity involved;

• Emulation: AI systems enhance human-like creativity as a forward-looking objective;

Illustration

An AI music composition tool creates an original symphony. Given a theme and emotional tone, it generates unique melodies, harmonies and instrumentation. It iterates and refines the composition based on aesthetic evaluation models, ultimately producing a piece that is distinct from existing music in its training data.

Section 5 – Technical Methods of Classification

(1) These methods, as designated in clause (ii) of sub-section (1) of Section 3, classify artificial intelligence technologies according to their scale, inherent purpose, technical features, and technical limitations, such as –

(i) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);

(ii) General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);

(iii) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);


(2) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i) Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.

(ii) Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.

(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.

(iv) Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.

Illustration

An AI system used in healthcare for diagnostics, treatment recommendations, and patient management. This AI consistently performs well in various healthcare settings, adhering to medical standards and providing reliable outcomes. It is characterized by its large scale in handling diverse medical data and serving multiple institutions, its inherent purpose of assisting healthcare professionals in decision-making and care improvement, robust technical architecture and accuracy while adhering to privacy and security standards, and potential limitations in edge cases or rare conditions.

(3) General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i) Scale: The ability to address specific short-term needs or exploratory applications within relevant sectors at a medium scale.

(ii) Inherent Purpose: Providing targeted solutions for emerging or temporary use cases, with the potential for future adaptation and expansion.

(iii) Technical Features: Modular and adaptable architectures enabling rapid development and deployment in response to evolving requirements.

(iv) Technical Limitations: Uncertainties regarding long-term viability, scalability, and compliance with changing industry standards and regulations.

Illustration

An AI system used in experimental smart city projects for traffic management, pollution monitoring, and public safety. Deployed at a medium scale in specific locations for limited durations, its inherent purpose is testing and validating AI feasibility and effectiveness in smart city applications. It features a modular, adaptable technical architecture to accommodate changing requirements and infrastructure integration, but faces potential limitations in scalability, interoperability, and long-term performance due to the experimental nature.

(4) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) are classified based on a technical method that evaluates the following factors:

(i) Scale: The ability to address specific, well-defined problems or serve as proof-of-concept implementations at a small scale.

(ii) Inherent Purpose: Providing specialized solutions for individual use cases or validating AI technique feasibility in controlled environments.

(iii) Technical Features: Focused and optimized architectures tailored to the specific requirements of the standalone use case or test case.

(iv) Technical Limitations: Constraints on generalizability, difficulties scaling beyond the initial use case, and challenges ensuring real-world robustness and reliability.

Illustration

An AI chatbot used by a company for customer service during a product launch. As a small-scale standalone application, its inherent purpose is providing automated support for a specific product or service. It employs a focused, optimized technical architecture for handling product-related queries and interactions, but faces limitations in handling queries outside the predefined scope or adapting to new products without significant modifications.

Section 6 – Commercial Methods of Classification

(1) These methods, as designated in clause (iii) of sub-section (1) of Section 3, involve the categorisation of commercially produced and disseminated artificial intelligence technologies based on their inherent purpose and primary intended use, considering factors such as:

(i) The core functionality and technical capabilities of the artificial intelligence technology;

(ii) The main end-users or business end-users for the artificial intelligence technology, and the size of the user base or market share;

(iii) The primary markets, sectors, or domains in which the artificial intelligence technology is intended to be applied, and the market influence or dominance in those sectors;

(iv) The key benefits, outcomes, or results the artificial intelligence technology is designed to deliver, and the potential impact on individuals, businesses, or society;

(v) The annual turnover or revenue generated by the artificial intelligence technology or the company developing and deploying it;

(vi) The amount of data collected, processed, or utilized by the artificial intelligence technology, and the level of data integration across different services or platforms; and

(vii) Any other quantitative or qualitative factors that may be prescribed by the Central Government or the Indian Artificial Intelligence Council (IAIC) to assess the significance and impact of the artificial intelligence technology.


(2) Based on an assessment of the factors outlined in sub-section (1), artificial intelligence technologies are classified into the following categories –

(i) Artificial Intelligence as a Product (AI-Pro), as described in sub-section (3);

(ii) Artificial Intelligence as a Service (AIaaS), as described in sub-section (4);

(iii) Artificial Intelligence as a Component (AI-Com), which includes artificial intelligence technologies directly integrated into existing products, services, and system infrastructure, as described in sub-section (5);

(iv) Artificial Intelligence as a System (AI-S), which includes layers or interfaces provided in AIaaS that facilitate the integration of the capabilities of artificial intelligence technologies into existing systems, in whole or in part, as described in sub-section (6);

(v) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital infrastructure, as described in sub-section (7);

(vi) Artificial Intelligence for Preview (AI-Pre), as described in sub-section (8).

(3) Artificial Intelligence as a Product (AI-Pro) refers to standalone AI applications or software that are developed and sold as individual products to end-users. These products are designed to perform specific tasks or provide particular services directly to the user;

Illustrations

(1) An AI-powered home assistant device as a product is marketed and sold as a consumer electronic device that provides functionalities like voice recognition, smart home control, and personal assistance.

(2) A commercial software package for predictive analytics is used by businesses to forecast market trends and consumer behaviour.

(4) Artificial Intelligence as a Service (AIaaS) refers to cloud-based AI solutions that are provided to users on-demand over the internet. Users can access and utilize the capabilities of AI systems without the need to develop or maintain the underlying infrastructure;

Illustrations

(1) A cloud-based machine learning platform offers businesses and developers access to powerful AI tools and frameworks on a subscription basis.

(2) An AI-driven customer service chatbot service that businesses can integrate into their websites to handle customer inquiries and support.

(5) Artificial Intelligence as a Component (AI-Com) refers to AI technologies that are embedded or integrated into existing products, services, or system infrastructures to enhance their capabilities or performance. In this case, the AI component is not a standalone product but rather a part of a larger system;


Illustrations


(1) An AI-based recommendation engine integrated into an e-commerce platform to provide personalized shopping suggestions to users.

(2) AI-enhanced cameras in smartphones that utilize machine learning algorithms to improve photo quality and provide features like facial recognition.


(6) Artificial Intelligence as a System (AI-S) refers to end-to-end AI solutions that combine multiple AI components, models, and interfaces. These systems often involve the integration of AI capabilities into existing workflows or the creation of entirely new AI-driven processes in whole or in parts;


Illustrations


(1) An AI middleware platform that connects various enterprise applications to enhance their functionalities with AI capabilities, such as an AI layer that integrates with CRM systems to provide predictive sales analytics.

(2) An AI system used in smart manufacturing, where AI interfaces integrate with industrial machinery to optimize production processes and maintenance schedules.


(7) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) refers to the integration of AI technologies into the underlying computing, storage, and network infrastructure to optimize resource allocation, improve efficiency, and enable intelligent automation. This category focuses on the use of AI at the infrastructure level rather than at the application or service level.


Illustrations


(1) An AI-enabled traffic management system that integrates with city infrastructure to monitor and manage traffic flow, reduce congestion, and optimize public transportation schedules.

(2) AI-powered utilities management systems that are integrated into the energy grid to predict and manage energy consumption, enhancing efficiency and reducing costs.


(8) Artificial Intelligence for Preview (AI-Pre) refers to AI technologies that are made available by companies for testing, experimentation, or early access prior to wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms, and infrastructure at various stages of development. AI-Pre technologies are typically characterised by one or more of the following features, including but not limited to:

(i) The AI technology is made available to a limited set of end users or participants in a preview program;

(ii) Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality;

(iii) The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.

(iv) Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.

(v) The AI-Pre technology may be provided free of charge, or under a separate pricing model from the company’s standard commercial offerings.

(vi) After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.


Illustration

A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:

(1) The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.

(2) The AI system’s capabilities are not yet fully tested, documented or supported, and the company provides no warranties or guarantees.

(3) The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.

(4) After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms.

Section 7 – Risk-centric Methods of Classification

(1) These methods, as designated in clause (iv) of sub-section (1) of Section 3, classify artificial intelligence technologies on the basis of their outcome-based and impact-based risks –

(i) Narrow risk AI systems as described in sub-section (2);

(ii) Medium risk AI systems as described in sub-section (3);

(iii) High risk AI systems as described in sub-section (4);

(iv) Unintended risk AI systems as described in sub-section (5);

(2) Narrow Risk AI Systems:

(i) Narrow risk AI systems are classified as those with minimal outcome and impact risks, where:

(a) The system is deployed in a limited scope for non-critical functions, so its outcomes do not significantly affect users or systems.

(b) The system causes minimal harm, with impacts limited to temporary inconvenience.

(c) Users can easily opt out of the system’s operations, ensuring they are not forced to accept its outcomes.

(d) The system is fully explainable, allowing users to understand and mitigate any risks from its outcomes.

(e) Errors in the system’s outcomes are easily reversible, with no lasting impact.

(ii) Risk recognition is achieved by assessing the system’s outcomes, such as errors in non-critical tasks, and their impacts, such as temporary inconvenience, ensuring the category provisions directly identify minimal risks without abstract definitions.

Illustration: A virtual assistant on a smartphone app for task scheduling is a narrow risk system. It operates in a non-critical context, causes only temporary inconvenience if it fails, allows users to disable it, is fully explainable, and errors are easily reversible by resetting the app.


(3) Medium Risk AI Systems:

(i) Medium risk AI systems are classified as those with moderate outcome and impact risks, where:

(a) The system causes moderate harm, with outcomes that may lead to incorrect decisions affecting users’ opportunities or resources.

(b) Users have limited ability to opt out or understand the system’s operations, increasing the impact of its outcomes.

(c) The system may produce inconsistent outputs due to technical bias, such as overfitting to training data, affecting the reliability of its outcomes.

(d) Correcting errors in the system’s outcomes requires active intervention, with impacts that may persist until addressed.


(ii) Risk assessment focuses on the system’s technical features, such as model complexity or unverified data, which contribute to its outcome risks.

(iii) Risk recognition is achieved by assessing the system’s outcomes, such as incorrect decisions in resource allocation, and their impacts, such as reduced opportunities for users, ensuring the category provisions directly identify moderate risks without abstract definitions.

Illustration: An AI loan approval system used by a regional bank is a medium risk system. It may lead to incorrect loan denials, limits users’ ability to opt out, may overfit to biased training data, requires intervention to correct errors, and its risks stem from technical features like model complexity.

(4) High Risk AI Systems:

(i) High risk AI systems are classified as those with severe outcome and impact risks, where:

(a) The system is deployed in critical sectors, with outcomes that can disrupt essential services or infrastructure.

(b) The system causes severe harm, with impacts that may lead to physical harm, economic loss, or societal disruption.

(c) Users cannot opt out or control the system’s operations, making its outcomes unavoidable.

(d) The system’s lack of transparency increases the risk of undetected errors, amplifying the impact of its outcomes.

(e) Errors in the system’s outcomes are irreversible or cause permanent harm, with significant long-term impacts.


(ii) Risk recognition is achieved by assessing the system’s outcomes, such as disruptions in critical services, and their impacts, such as economic loss or societal harm, ensuring the category provisions directly identify severe risks without abstract definitions.

Illustration: An AI system controlling a power grid is a high risk system. It operates in a critical sector, can cause outages leading to economic loss, offers no user opt-out, lacks transparency, and failures have irreversible impacts like societal disruption.

(5) Unintended Risk AI Systems:

(i) Unintended risk AI systems are classified as those with emergent and unpredictable outcome and impact risks, where:

(a) The system’s behaviour deviates from its intended design, leading to unexpected outcomes.

(b) The system processes data beyond its intended scope, increasing the risk of unintended impacts.

(c) The system evolves after deployment without oversight, causing outcomes that cannot be predicted or controlled.

(d) The system’s operations are not explainable, making it impossible to understand or mitigate the risks of its outcomes.

(ii) Risk recognition is achieved by assessing the system’s outcomes, such as unexpected behaviours in operation, and their impacts, such as unpredictable harm to users or systems, ensuring the category provisions directly identify emergent risks without abstract definitions.

Illustration: An autonomous vehicle navigation system with emergent behaviour is an unintended risk system. It deviates from its intended design, processes unintended data, evolves without oversight, and its operations are not explainable, leading to unpredictable outcomes like accidents.

Section 8 – Prohibition of Unintended Risk AI Systems

The development, deployment, and use of unintended risk AI systems, as classified under sub-section (5) of Section 7, are prohibited.

Section 9 – High-Risk AI Systems in Strategic Sectors

(1) The Central Government shall designate strategic sectors where the development, deployment, and use of high-risk AI systems shall be subject to sector-specific standards and regulations, based on the risk classification methods outlined in Chapter II of this Act.

(2) In the event of any conflict between the provisions of this Act and sector-specific regulations concerning high-risk AI systems in strategic sectors, the provisions of this Act shall prevail, unless otherwise specified.
