Section 7 – Risk-centric Methods of Classification
(1) These methods, as designated in clause (iv) of sub-section (1) of Section 3, classify artificial intelligence technologies on the basis of their outcome-based and impact-based risks into the following categories (an illustrative encoding follows the list) –
(i) Narrow risk AI systems as described in sub-section (2);
(ii) Medium risk AI systems as described in sub-section (3);
(iii) High risk AI systems as described in sub-section (4);
(iv) Unintended risk AI systems as described in sub-section (5).
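Read together, clauses (i) to (iv) form a closed, four-member taxonomy. A minimal sketch of how a compliance tool might encode it (the RiskCategory name and its string values are illustrative assumptions, not terms used in this Section):

```python
from enum import Enum

class RiskCategory(Enum):
    """Illustrative encoding of the four categories in Section 7(1)."""
    NARROW = "narrow"          # sub-section (2)
    MEDIUM = "medium"          # sub-section (3)
    HIGH = "high"              # sub-section (4)
    UNINTENDED = "unintended"  # sub-section (5)
```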
(2) Narrow Risk AI Systems:
(i) Narrow risk AI systems are classified as those with minimal outcome and impact risks, where:
(a) The system is deployed in a limited scope for non-critical functions, so its outcomes do not significantly affect users or systems.
(b) The system causes minimal harm, with impacts limited to temporary inconvenience.
(c) Users can easily opt out of the system’s operations, ensuring they are not forced to accept its outcomes.
(d) The system is fully explainable, allowing users to understand and mitigate any risks from its outcomes.
(e) Errors in the system’s outcomes are easily reversible, with no lasting impact.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as errors in non-critical tasks, and their impacts, such as temporary inconvenience, so that the category provisions identify minimal risks directly rather than through abstract definitions.
Illustration: A virtual assistant on a smartphone app for task scheduling is a narrow risk system. It operates in a non-critical context, causes only temporary inconvenience if it fails, allows users to disable it, is fully explainable, and errors are easily reversible by resetting the app.
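Read conjunctively, clauses (a) to (e) function as a cumulative checklist: a system is narrow risk only if every condition holds. A minimal sketch of how an assessor might record this, assuming that conjunctive reading (the NarrowRiskCriteria class and its field names are hypothetical, not terms used in this Section):

```python
from dataclasses import dataclass, fields

@dataclass
class NarrowRiskCriteria:
    """Hypothetical checklist for Section 7(2)(i)(a)-(e)."""
    non_critical_scope: bool   # (a) limited scope, non-critical functions
    minimal_harm: bool         # (b) harm limited to temporary inconvenience
    easy_opt_out: bool         # (c) users can easily opt out
    fully_explainable: bool    # (d) users can understand and mitigate risks
    errors_reversible: bool    # (e) errors reversible, no lasting impact

    def qualifies(self) -> bool:
        # Conjunctive reading: every clause must be satisfied.
        return all(getattr(self, f.name) for f in fields(self))

# The task-scheduling assistant from the Illustration meets all five clauses.
assistant = NarrowRiskCriteria(True, True, True, True, True)
assert assistant.qualifies()
```

Encoding each clause as a named boolean field keeps every statutory condition individually auditable.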
(3) Medium Risk AI Systems:
(i) Medium risk AI systems are classified as those with moderate outcome and impact risks, where:
(a) The system causes moderate harm, with outcomes that may lead to incorrect decisions affecting users’ opportunities or resources.
(b) Users have limited ability to opt out or understand the system’s operations, increasing the impact of its outcomes.
(c) The system may produce inconsistent outputs due to technical bias, such as overfitting to training data, affecting the reliability of its outcomes.
(d) Correcting errors in the system’s outcomes requires active intervention, with impacts that may persist until addressed.
(ii) Risk assessment focuses on the system’s technical features, such as model complexity or unverified data, which contribute to its outcome risks.
(iii) Risk recognition is achieved by assessing the system’s outcomes, such as incorrect decisions in resource allocation, and their impacts, such as reduced opportunities for users, so that the category provisions identify moderate risks directly rather than through abstract definitions.
Illustration: An AI loan approval system used by a regional bank is a medium risk system. It may lead to incorrect loan denials, limits users’ ability to opt out, may overfit to biased training data, requires intervention to correct errors, and its risks stem from technical features like model complexity.
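On the same conjunctive assumption, sub-section (3) maps to a parallel checklist; clause (ii) additionally directs attention to the technical features behind the outcome risks, which can be recorded alongside the boolean conditions (again, all names below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class MediumRiskCriteria:
    """Hypothetical checklist for Section 7(3)(i)(a)-(d) and (ii)."""
    moderate_harm: bool       # (a) incorrect decisions affecting opportunities or resources
    limited_opt_out: bool     # (b) limited ability to opt out or understand operation
    technical_bias: bool      # (c) inconsistent outputs, e.g. overfitting to training data
    needs_intervention: bool  # (d) errors persist until actively corrected
    # (ii) contributing technical features, recorded for the risk assessment
    technical_features: tuple[str, ...] = ()

    def qualifies(self) -> bool:
        # Only the clause (i) conditions decide the classification;
        # clause (ii) features document where the risks come from.
        return (self.moderate_harm and self.limited_opt_out
                and self.technical_bias and self.needs_intervention)

# The regional-bank loan approval system from the Illustration.
loan_system = MediumRiskCriteria(True, True, True, True,
                                 technical_features=("model complexity",))
assert loan_system.qualifies()
```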
(4) High Risk AI Systems:
(i) High risk AI systems are classified as those with severe outcome and impact risks, where:
(a) The system is deployed in critical sectors, with outcomes that can disrupt essential services or infrastructure.
(b) The system causes severe harm, with impacts that may lead to physical harm, economic loss, or societal disruption.
(c) Users cannot opt out or control the system’s operations, making its outcomes unavoidable.
(d) The system’s lack of transparency increases the risk of undetected errors, amplifying the impact of its outcomes.
(e) Errors in the system’s outcomes are irreversible or cause permanent harm, with significant long-term impacts.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as disruptions in critical services, and their impacts, such as economic loss or societal harm, so that the category provisions identify severe risks directly rather than through abstract definitions.
Illustration: An AI system controlling a power grid is a high risk system. It operates in a critical sector, can cause outages leading to economic loss, offers no user opt-out, lacks transparency, and failures have irreversible impacts like societal disruption.
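The high-risk clauses follow the same pattern. A sketch under the same conjunctive assumption, with the power-grid Illustration as the worked case (names remain hypothetical):

```python
from dataclasses import dataclass

@dataclass
class HighRiskCriteria:
    """Hypothetical checklist for Section 7(4)(i)(a)-(e)."""
    critical_sector: bool  # (a) outcomes can disrupt essential services or infrastructure
    severe_harm: bool      # (b) physical harm, economic loss, or societal disruption
    no_opt_out: bool       # (c) users cannot opt out or control operations
    opaque: bool           # (d) lack of transparency risks undetected errors
    irreversible: bool     # (e) irreversible errors or permanent harm

    def qualifies(self) -> bool:
        return all([self.critical_sector, self.severe_harm, self.no_opt_out,
                    self.opaque, self.irreversible])

# The power-grid controller from the Illustration satisfies every clause.
grid_controller = HighRiskCriteria(True, True, True, True, True)
assert grid_controller.qualifies()
```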
(5) Unintended Risk AI Systems:
(i) Unintended risk AI systems are classified as those with emergent and unpredictable outcome and impact risks, where:
(a) The system’s behaviour deviates from its intended design, leading to unexpected outcomes.
(b) The system processes data beyond its intended scope, increasing the risk of unintended impacts.
(c) The system evolves after deployment without oversight, causing outcomes that cannot be predicted or controlled.
(d) The system’s operations are not explainable, making it impossible to understand or mitigate the risks of its outcomes.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as unexpected behaviours in operation, and their impacts, such as unpredictable harm to users or systems, so that the category provisions identify emergent risks directly rather than through abstract definitions.
Illustration: An autonomous vehicle navigation system with emergent behaviour is an unintended risk system. It deviates from its intended design, processes unintended data, evolves without oversight, and its operations are not explainable, leading to unpredictable outcomes like accidents.
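The unintended-risk clauses describe emergent behaviour rather than a severity tier, but they can be recorded the same way. A sketch under the same assumptions, using the navigation-system Illustration (names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class UnintendedRiskCriteria:
    """Hypothetical checklist for Section 7(5)(i)(a)-(d)."""
    deviates_from_design: bool       # (a) behaviour departs from the intended design
    out_of_scope_data: bool          # (b) processes data beyond the intended scope
    evolves_without_oversight: bool  # (c) evolves after deployment without oversight
    not_explainable: bool            # (d) operations cannot be explained or mitigated

    def qualifies(self) -> bool:
        # Conjunctive reading, as with the other categories.
        return (self.deviates_from_design and self.out_of_scope_data
                and self.evolves_without_oversight and self.not_explainable)

# The navigation system from the Illustration exhibits all four markers.
nav_system = UnintendedRiskCriteria(True, True, True, True)
assert nav_system.qualifies()
```

Note that this category is orthogonal to the severity tiers in sub-sections (2) to (4): the Section does not state how an overlap between an unintended-risk finding and a severity tier is to be resolved.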