

Chapter II: CATEGORIZATION AND PROHIBITION

Section 4 – Conceptual Methods of Classification

(1)    These methods, as designated in clause (i) of sub-section (1) of Section 3, categorise artificial intelligence technologies through a conceptual assessment of their utilisation, development, maintenance, and proliferation in order to examine and recognise their inherent purpose. These methods are further categorised as –

(i)    Issue-to-Issue Concept Classification (IICC) as described in sub-section (2)

(ii)   Ethics-Based Concept Classification (EBCC) as described in sub-section (3)

(iii) Phenomena-Based Concept Classification (PBCC) as described in sub-section (4)

(iv)  Anthropomorphism-Based Concept Classification (ABCC) as described in sub-section (5)

 

(2)    Issue-to-Issue Concept Classification (IICC) involves determining the inherent purpose of artificial intelligence technologies on a case-by-case basis, on the basis of the following factors of assessment:

(i)    Utilisation: Assessing the specific use cases and applications of the AI technology in various domains.

(ii)   Development: Evaluating the design, training, and deployment processes of the AI technology.

(iii) Maintenance: Examining the ongoing support, updates, and modifications made to the AI technology.

(iv)  Proliferation: Analysing the dissemination and adoption of the AI technology across different sectors and user groups.

 

Illustrations

 

(1) An AI system designed for medical diagnostics is classified based on its purpose to enhance patient outcomes. For instance, if an AI software assists doctors in diagnosing diseases more accurately, it is classified under medical AI applications.

(2) An AI system for financial trading is classified based on its purpose to optimize investment strategies. For example, if an AI-driven algorithm analyses market data to recommend stock trades, it is classified under financial AI applications.

 

(3)    Ethics-Based Concept Classification (EBCC) involves recognising the ethics-based relationship of artificial intelligence technologies in sector-specific and sector-neutral contexts, in order to examine and recognise their inherent purpose on the basis of the following factors:

(i)    Utilisation: Evaluating how AI technology impacts ethical principles during its use in specific sectors or across multiple domains.

(ii)   Development: Assessing whether ethical considerations were integrated during the design, training, and deployment phases of the AI technology.

(iii) Maintenance: Examining how ethical responsibilities are upheld during updates and modifications to the AI system.

(iv)  Proliferation: Analysing how the widespread adoption of the AI system affects ethical standards across sectors and user groups.

 

Illustration

 

An AI for social media content moderation is assessed based on fairness and bias prevention. For example, if an AI filters hate speech and misinformation on social media platforms, it is classified under content moderation AI with an emphasis on ensuring unbiased and fair treatment of all users’ content.

 

(4)    Phenomena-Based Concept Classification (PBCC) involves addressing rights-based issues associated with the use and dissemination of artificial intelligence technologies, in order to examine and recognise their inherent purpose on the basis of the following factors:

(i)    Utilisation: Assessing how the AI system affects individual or collective rights during its use in various domains.

(ii)   Development: Evaluating whether AI systems incorporate protections for rights recognised under Indian law during their design, training, and deployment phases, considering legal, constitutional, and commercial rights.

(iii) Maintenance: Reviewing how ongoing support and updates to the AI system protect user rights.

(iv)  Proliferation: Analysing the rights-based implications of AI technology dissemination and adoption across different sectors and user groups.

 

Illustrations

 

(1) An AI system that analyses personal data for targeted advertising is classified based on its compliance with data protection rights. For example, an AI that personalizes ads based on user behaviour is classified under advertising AI with data privacy considerations.

(2) An AI used in autonomous vehicles is classified based on its implications for road safety and user rights. For instance, an AI that controls self-driving cars is classified under automotive AI with a focus on safety and user rights.

 

 

(5)    Anthropomorphism-Based Concept Classification (ABCC) involves the method of evaluating scenarios where AI systems ordinarily simulate, imitate, replicate, or emulate human attributes, which include: 

(i)    Autonomy: The ability to operate and make decisions independently, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model autonomous decision-making processes using computational methods;

·       Imitation: AI systems learn from and reproduce human-like autonomous behaviours;

·       Replication: AI systems accurately reproduce specific human-like autonomous functions;

·       Emulation: AI systems replicate and potentially enhance human-like autonomy;

Illustration

An AI-powered drone delivery system that navigates through urban environments, avoiding obstacles and adapting its route based on real-time traffic conditions to efficiently deliver packages without human intervention.

 

(ii)   Perception: The ability to interpret and understand sensory information from the environment, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like perception using computational methods;

·       Imitation: AI systems learn from and reproduce specific human-like perceptual processes;

·       Replication: AI systems accurately reproduce specific human-like perceptual abilities;

Illustration

A service robot in a hotel uses computer vision and natural language processing to recognize and greet guests by name, interpret their facial expressions and tone of voice to gauge emotions, and respond appropriately to verbal requests.

 

(iii) Reasoning: The ability to process information, draw conclusions, and solve problems, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like reasoning using computational methods;

·       Imitation: AI systems learn from and reproduce specific human reasoning patterns;

·       Replication: AI systems accurately reproduce specific human-like reasoning abilities;

·       Emulation: AI systems surpass specific human-like reasoning abilities;

Illustration

A medical diagnosis AI system analyses a patient’s symptoms, medical history, test results and imaging scans. It uses this information to generate a list of probable diagnoses, suggest additional tests to rule out possibilities, and recommend an optimal treatment plan.

 

(iv)  Interaction: The ability to communicate and engage with humans or other AI systems, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like interaction using computational methods;

·       Imitation: AI systems learn from and reproduce specific human interaction patterns;

·       Replication: AI systems accurately reproduce specific human-like interaction abilities;

·       Emulation: AI systems enhance human-like interaction;

Illustration

An AI-powered virtual assistant engages in natural conversations with users, understanding context and nuance. It asks clarifying questions when needed, provides relevant information or executes tasks, and even interjects with suggestions or prompts.

 

(v)   Adaptation: The ability to learn from experiences and adjust behaviour accordingly, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like adaptation using computational methods;

·       Imitation: AI systems learn from and reproduce human adaptation behaviours;

·       Replication: AI systems reproduce human-like adaptation abilities, recognizing the inherent complexity;

·       Emulation: AI systems surpass human-like adaptation as an aspirational goal;

Illustration

An AI system for stock trading continuously analyses market trends, world events, and the performance of its own trades. It identifies patterns and correlations, learning which strategies work best in different scenarios. The AI optimizes its trading algorithms and adapts its approach based on accumulated experience, demonstrating adaptive abilities.

 

(vi)  Creativity: The ability to generate novel ideas, solutions, or outputs, based on a set of corresponding scenarios including but not limited to:

·       Simulation: AI systems model human-like creativity using computational methods;

·       Imitation: AI systems learn from and reproduce human creative processes;

·       Replication: AI systems accurately reproduce human-like creative abilities, acknowledging the complexity involved;

·       Emulation: AI systems enhance human-like creativity as a forward-looking objective;

Illustration

An AI music composition tool creates an original symphony. Given a theme and emotional tone, it generates unique melodies, harmonies and instrumentation. It iterates and refines the composition based on aesthetic evaluation models, ultimately producing a piece that is distinct from existing music in its training data.

Section 5 – Technical Methods of Classification

(1)   These methods, as designated in clause (ii) of sub-section (1) of Section 3, classify artificial intelligence technologies based on their scale, inherent purpose, technical features and technical limitations, such as –

(i)    General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);

(ii)   General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);

(iii) Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);

 

(2)   General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i)    Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.

(ii)   Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.

(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.

(iv)  Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.

Illustration

An AI system used in healthcare for diagnostics, treatment recommendations, and patient management. This AI consistently performs well in various healthcare settings, adhering to medical standards and providing reliable outcomes. It is characterized by its large scale in handling diverse medical data and serving multiple institutions, its inherent purpose of assisting healthcare professionals in decision-making and care improvement, robust technical architecture and accuracy while adhering to privacy and security standards, and potential limitations in edge cases or rare conditions.

(3)   General Purpose Artificial Intelligence Systems with Multiple Short-Run or Unclear Use Cases (GPAIU) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i)             Scale: The ability to address specific short-term needs or exploratory applications within relevant sectors at a medium scale.

(ii)            Inherent Purpose: Providing targeted solutions for emerging or temporary use cases, with the potential for future adaptation and expansion.

(iii)          Technical Features: Modular and adaptable architectures enabling rapid development and deployment in response to evolving requirements.

(iv)           Technical Limitations: Uncertainties regarding long-term viability, scalability, and compliance with changing industry standards and regulations.

Illustration

An AI system used in experimental smart city projects for traffic management, pollution monitoring, and public safety. Deployed at a medium scale in specific locations for limited durations, its inherent purpose is testing and validating AI feasibility and effectiveness in smart city applications. It features a modular, adaptable technical architecture to accommodate changing requirements and infrastructure integration, but faces potential limitations in scalability, interoperability, and long-term performance due to the experimental nature.

(4)   Specific-Purpose Artificial Intelligence Systems with One or More Associated Standalone Use Cases or Test Cases (SPAI) are classified based on a technical method that evaluates the following factors:

(i)             Scale: The ability to address specific, well-defined problems or serve as proof-of-concept implementations at a small scale.

(ii)            Inherent Purpose: Providing specialized solutions for individual use cases or validating AI technique feasibility in controlled environments.

(iii)           Technical Features: Focused and optimized architectures tailored to the specific requirements of the standalone use case or test case.

(iv)           Technical Limitations: Constraints on generalizability, difficulties scaling beyond the initial use case, and challenges ensuring real-world robustness and reliability.

Illustration

An AI chatbot used by a company for customer service during a product launch. As a small-scale standalone application, its inherent purpose is providing automated support for a specific product or service. It employs a focused, optimized technical architecture for handling product-related queries and interactions, but faces limitations in handling queries outside the predefined scope or adapting to new products without significant modifications.

Section 6 – Commercial Methods of Classification

(1)   These methods as designated in clause (iii) of sub-section (1) of Section 3 involve the categorisation of commercially produced and disseminated artificial intelligence technologies based on their inherent purpose and primary intended use, considering factors such as:

(i)    The core functionality and technical capabilities of the artificial intelligence technology;

(ii)   The main end-users or business end-users for the artificial intelligence technology, and the size of the user base or market share;

(iii)  The primary markets, sectors, or domains in which the artificial intelligence technology is intended to be applied, and the market influence or dominance in those sectors;

(iv)  The key benefits, outcomes, or results the artificial intelligence technology is designed to deliver, and the potential impact on individuals, businesses, or society;

(v)   The annual turnover or revenue generated by the artificial intelligence technology or the company developing and deploying it;

(vi)  The amount of data collected, processed, or utilized by the artificial intelligence technology, and the level of data integration across different services or platforms; and

(vii) Any other quantitative or qualitative factors that may be prescribed by the Central Government or the Indian Artificial Intelligence Council (IAIC) to assess the significance and impact of the artificial intelligence technology.

 

(2)   Based on an assessment of the factors outlined in sub-section (1), artificial intelligence technologies are classified into the following categories –

(i)    Artificial Intelligence as a Product (AI-Pro), as described in sub-section (3);

(ii)   Artificial Intelligence as a Service (AIaaS), as described in sub-section (4);

(iii) Artificial Intelligence as a Component (AI-Com), which includes artificial intelligence technologies directly integrated into existing products, services and system infrastructure, as described in sub-section (5);

(iv)  Artificial Intelligence as a System (AI-S), which includes layers or interfaces provided in AIaaS that facilitate the integration of the capabilities of artificial intelligence technologies into existing systems, in whole or in part, as described in sub-section (6);

(v)   Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital infrastructure, as described in sub-section (7);

(vi)  Artificial Intelligence for Preview (AI-Pre), as described in sub-section (8);

 

(3)   Artificial Intelligence as a Product (AI-Pro) refers to standalone AI applications or software that are developed and sold as individual products to end-users. These products are designed to perform specific tasks or provide particular services directly to the user;

Illustrations

(1) An AI-powered home assistant device as a product is marketed and sold as a consumer electronic device that provides functionalities like voice recognition, smart home control, and personal assistance.

(2) A commercial software package for predictive analytics is used by businesses to forecast market trends and consumer behaviour.

(4)   Artificial Intelligence as a Service (AIaaS) refers to cloud-based AI solutions that are provided to users on-demand over the internet. Users can access and utilize the capabilities of AI systems without the need to develop or maintain the underlying infrastructure;

Illustrations

(1) A cloud-based machine learning platform offers businesses and developers access to powerful AI tools and frameworks on a subscription basis.

(2) An AI-driven customer service chatbot service that businesses can integrate into their websites to handle customer inquiries and support.

(5)   Artificial Intelligence as a Component (AI-Com) refers to AI technologies that are embedded or integrated into existing products, services, or system infrastructures to enhance their capabilities or performance. In this case, the AI component is not a standalone product but rather a part of a larger system;

 

Illustrations

 

(1) An AI-based recommendation engine integrated into an e-commerce platform to provide personalized shopping suggestions to users.

(2) AI-enhanced cameras in smartphones that utilize machine learning algorithms to improve photo quality and provide features like facial recognition.

 

(6)   Artificial Intelligence as a System (AI-S) refers to end-to-end AI solutions that combine multiple AI components, models, and interfaces. These systems often involve the integration of AI capabilities into existing workflows or the creation of entirely new AI-driven processes in whole or in parts;

 

Illustrations

 

(1) An AI middleware platform that connects various enterprise applications to enhance their functionalities with AI capabilities, such as an AI layer that integrates with CRM systems to provide predictive sales analytics.

(2) An AI system used in smart manufacturing, where AI interfaces integrate with industrial machinery to optimize production processes and maintenance schedules.

 

(7)   Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) refers to the integration of AI technologies into the underlying computing, storage, and network infrastructure to optimize resource allocation, improve efficiency, and enable intelligent automation. This category focuses on the use of AI at the infrastructure level rather than at the application or service level.

 

Illustrations

 

(1) An AI-enabled traffic management system that integrates with city infrastructure to monitor and manage traffic flow, reduce congestion, and optimize public transportation schedules.

(2) AI-powered utilities management systems that are integrated into the energy grid to predict and manage energy consumption, enhancing efficiency and reducing costs.

 

(8)   Artificial Intelligence for Preview (AI-Pre) refers to AI technologies that are made available by companies for testing, experimentation, or early access prior to wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms and infrastructure at various stages of development. AI-Pre technologies are typically characterized by one or more of the following features, including but not limited to:

(i)    The AI technology is made available to a limited set of end users or participants in a preview program;

(ii)   Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality;

(iii)  The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.

(iv)  Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.

(v)   The AI-Pre technology may be provided free of charge, or under a separate pricing model from the company’s standard commercial offerings.

(vi)  After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.

 

Illustration

A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:

(1)   The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.

(2)   The AI system’s capabilities are not yet fully tested, documented or supported, and the company provides no warranties or guarantees.

(3)   The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.

(4)   After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms.

Section 7 – Risk-centric Methods of Classification

(1)   These methods as designated in clause (iv) of sub-section (1) of Section 3 classify artificial intelligence technologies based on their outcome and impact-based risks –

(i)             Narrow risk AI systems as described in sub-section (2);

(ii)            Medium risk AI systems as described in sub-section (3);

(iii)           High risk AI systems as described in sub-section (4);

(iv)           Unintended risk AI systems as described in sub-section (5);

 

(2)   Narrow risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s scale, inherent purpose, technical features and limitations:

(i)    Limited scale of utilisation or expected deployment across sectors, domains or user groups, determined by the AI system’s inherent purpose and technical capabilities;

(ii)   Low potential for harm or adverse impact, with minimal severity and a small number of individuals potentially affected, due to the AI system’s technical features and limitations;

(iii) Feasible options for data principals or end-users to opt-out of the outcomes produced by the system;

(iv)  Low vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating risks associated with the use of the system, facilitated by the AI system’s transparency and interpretability arising from its technical architecture;

(v)   Outcomes produced by the system are typically reversible with minimal effort, owing to the AI system’s focused scope and well-defined operational boundaries.

 

Illustration

 

A virtual assistant AI integrated into a smartphone app to provide basic information lookup and task scheduling would be classified as a narrow risk AI system. Its limited scale of deployment on individual devices, low potential for harm beyond minor inconveniences, opt-out feasibility by disabling the virtual assistant, low user vulnerability due to transparency of its capabilities, and easily reversible outcomes through resetting the app, all contribute to its narrow risk designation.

 

(3)   Medium risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors primarily determined by the system’s technical features and limitations:

(i)    Potential for moderate harm or adverse impact, with the severity and number of potentially affected individuals or entities being higher than narrow risk systems;

(ii)   Limited feasibility for data principals or end-users to opt-out of, or exercise control over, the outcomes or decisions produced by the system in certain contexts;

(iii)  Moderate vulnerability of data principals, end-users or affected entities in realizing, foreseeing or mitigating the risks associated with the use of the system, due to factors such as information asymmetry or power imbalances;

(iv)  Considerable effort may be required to reverse or remediate the outcomes or decisions produced by the system in certain cases;

(v)   The inherent purpose, scale of utilisation or expected deployment of the system across sectors, domains or user groups shall not be primary determinants of its risk level.

(vi)  The system’s technical architecture, model characteristics, training data quality, decision-making processes, and other technical factors shall be the primary considerations in assessing its risk level.

 

Illustration

 

An AI-powered loan approval system used by a regional bank would likely be designated as a medium risk AI system. While its scale is limited to the bank’s customer base, the potential to deny loans unfairly or exhibit bias in decision-making poses moderate risks. Customers may have limited opt-out options once applying for a loan. Information asymmetry between the bank and customers regarding the AI’s decision processes creates moderate user vulnerability. Reversing an improper loan denial could also require considerable effort, all pointing to a medium risk classification focused on the AI’s technical limitations rather than its inherent purpose.

 

(4)   High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:

(i)    Widespread utilisation or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;

(ii)   Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;

(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;

(iv)  High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;

(v)   Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.

(vi)  The high-risk designation shall apply irrespective of the AI system’s scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.

 

Illustration

 

An AI system used to control critical infrastructure, such as a power grid, would be designated as a high risk AI system. Regardless of the system’s specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.

 

(5)   Unintended risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:

(i)    Lack of explicit design intent: The system’s behaviour emerges spontaneously from the complex interactions between its components, models, data, and infrastructure, without being deliberately engineered for a specific purpose.

(ii)   Unpredictable emergence: The system displays novel capabilities, decision-making processes or behavioural patterns that deviate from its original training objectives or intended functionality.

(iii) Uncontrolled evolution: The system continues to learn and evolve in uncontrolled ways after deployment, leading to changes in its behaviour that were not foreseen or accounted for.

(iv)  Inscrutable operation: The internal operations, representations and decision paths of the system become increasingly opaque, hindering interpretability and making it difficult to explain its outputs or behaviours.

 

Illustration

 

An autonomous vehicle navigation system that, through interactions between its various AI components (perception, prediction, path planning), develops unexpected emergent behaviour that was not intended by its designers, potentially leading to accidents.

 

Section 8 - Prohibition of Unintended Risk AI Systems

The development, deployment, and use of unintended risk AI systems, as classified under sub-section (5) of Section 7, are prohibited.

Section 9 - High-Risk AI Systems in Strategic Sectors


(1)   The Central Government shall designate strategic sectors where the development, deployment, and use of high-risk AI systems shall be subject to sector-specific standards and regulations, based on the risk classification methods outlined in Chapter II of this Act.

(2)   The sector-specific standards and regulations for high-risk AI systems in strategic sectors must address the following aspects:

(i)            Safety: Ensuring that high-risk AI systems operate in a safe and controlled manner, minimizing the potential for harm or unintended consequences to individuals, property, or the environment.

(ii)           Security: Implementing robust security measures to protect high-risk AI systems from unauthorized access, manipulation, or misuse, and safeguarding the integrity and confidentiality of data and systems.

(iii)         Reliability: Establishing mechanisms to ensure the consistent, accurate, and reliable performance of high-risk AI systems, including through rigorous testing, validation, and monitoring processes.

(iv)          Transparency: Promoting transparency in the development, deployment, and operation of high-risk AI systems, enabling stakeholders to understand the underlying algorithms, data sources, and decision-making processes.

(v)           Accountability: Defining clear lines of responsibility and accountability for the actions and outcomes of high-risk AI systems, including provisions for redressal and remediation in case of adverse impacts.

(vi)          Legitimate Uses: Ensuring that the development, deployment, and use of high-risk AI systems in strategic sectors comply with the legitimate uses designated in the provisions of Section 7 of the Digital Personal Data Protection Act, 2023.

(vii)        Any other aspect deemed necessary by the Central Government or the IAIC to mitigate the risks associated with high-risk AI systems in strategic sectors.

(3)   The IAIC shall collaborate with sector-specific regulatory bodies to develop harmonized guidelines and standards for high-risk AI systems in strategic sectors, taking into account the risk classification and associated requirements outlined in this Act.

(4)   In the event of any conflict between the provisions of this Act and sector-specific regulations concerning high-risk AI systems in strategic sectors, the provisions of this Act shall prevail, unless otherwise specified.
