
Section 11 – Registration & Certification of AI Systems

(1)   The IAIC shall establish a voluntary certification scheme for AI systems based on their industry use cases and risk levels, applying the methods of classification set forth in Chapter II. The certification scheme shall be designed to promote responsible AI development and deployment.

 

(1A)  The IAIC shall maintain a National Registry of Artificial Intelligence Use Cases as described in Section 12 to register and track the development and deployment of AI systems across various sectors. The registry shall be used to inform the development and refinement of the certification scheme and to promote transparency and accountability in artificial intelligence governance.

 

(2)   The certification scheme shall be based on a set of clear, objective, and risk-proportionate criteria that assess the inherent purpose, technical characteristics, and potential impacts of AI systems.

 

(3)   AI systems classified as narrow or medium risk under Section 7, and AI-Pre systems under sub-section (8) of Section 6, may be exempt from certification if they meet one or more of the following conditions:

(i)    The AI system is still in the early stages of development or testing and has not yet reached the technical or economic thresholds required for effective standardisation;

(ii)   The AI system is being developed or deployed in a highly specialised or niche application area where certification may not be feasible or appropriate; or

(iii)  The AI system is being developed or deployed by start-ups, micro, small and medium enterprises, or research institutions.

 

(4)   AI systems that qualify for an exemption under sub-section (3) must establish and maintain the incident reporting and response protocols specified in Section 19. Failure to maintain these protocols may result in revocation of the exemption.
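Read together, sub-sections (3) and (4) define a three-part decision rule: the system's risk tier must qualify, at least one exemption condition must be met, and the Section 19 protocols must be maintained. The following is a minimal illustrative sketch of that rule only; all class and field names are hypothetical and nothing here is prescribed by the Act:

```python
from dataclasses import dataclass

# Hypothetical model of the exemption test in sub-sections (3) and (4).
# Risk tiers follow Section 7; "ai-pre" follows sub-section (8) of Section 6.
ELIGIBLE_TIERS = {"narrow", "medium", "ai-pre"}

@dataclass
class AISystem:
    risk_tier: str              # e.g. "narrow", "medium", "high", "ai-pre"
    early_stage: bool           # condition (i): pre-standardisation development/testing
    niche_application: bool     # condition (ii): certification not feasible/appropriate
    smb_or_research: bool       # condition (iii): start-up, MSME, or research institution
    incident_protocols: bool    # sub-section (4): Section 19 protocols maintained

def certification_exempt(s: AISystem) -> bool:
    """True if the system may qualify for a certification exemption."""
    if s.risk_tier not in ELIGIBLE_TIERS:
        return False
    if not (s.early_stage or s.niche_application or s.smb_or_research):
        return False
    # Sub-section (4): the exemption lapses without incident protocols.
    return s.incident_protocols
```

Note that the sketch treats sub-section (4) as a continuing condition rather than a one-time check, mirroring the revocation language.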

 

(5)   Applicability of Section 4 Classification Methods:

 

(i)    The conceptual methods of classification outlined in Section 4 are intended for consultative and advisory purposes only, and their application is not mandatory for registration in the National Registry of Artificial Intelligence Use Cases under Section 12. The IAIC is empowered to:

(a)    Issue advisories, clarifications, and guidance documents on the interpretation and application of the classification methods outlined in Section 4.

(b)   Provide sector-specific recommendations for the voluntary use of these classification methods by stakeholders, including developers, regulators, and industry professionals.

(c)    Encourage stakeholders to adopt these classification methods on a self-regulatory basis. Although not mandatory, voluntary application of these methods can help:

(A)    Enhance transparency in AI development.

(B)    Promote responsible AI deployment across sectors.

(C)    Facilitate alignment with ethical standards outlined in the National Artificial Intelligence Ethics Code (NAIEC) under Section 13.

 

(ii)   The IAIC may periodically review and update its advisories, clarifications, and guidance documents to reflect advancements in AI technologies and emerging best practices, ensuring that stakeholders have access to current guidance for applying these conceptual methods.

 

(6)   Notwithstanding anything contained in sub-section (5), entities registering high-risk AI systems as defined in sub-section (4) of Section 7, and those associated with strategic sectors as specified in Section 9, must apply the conceptual classification methods outlined in Section 4.
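Sub-sections (5) and (6) reduce to a simple rule at registration time: the Section 4 methods are advisory by default and become mandatory only in two cases. A hedged sketch of that rule (the function and parameter names are illustrative, not drawn from the Act):

```python
def classification_methods_mandatory(high_risk: bool, strategic_sector: bool) -> bool:
    """Section 4 conceptual methods are advisory by default (sub-section (5)).
    They become mandatory for high-risk systems under sub-section (4) of
    Section 7, or for systems in strategic sectors under Section 9
    (sub-section (6))."""
    return high_risk or strategic_sector
```

All other registrants remain free to apply the methods on a self-regulatory basis as encouraged in sub-section (5).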

 

(7)   The certification scheme and the methods of classification specified in Chapter II shall be reviewed and updated every 12 months to ensure their continued relevance and effectiveness in response to technological advancements and market developments. The review process shall include meaningful consultation with sector-specific regulators and market stakeholders.


 

 

Related Indian AI Regulation Sources

Principles for Responsible AI (Part 1)

Operationalizing Principles for Responsible AI (Part 2)

Fairness Assessment and Rating of Artificial Intelligence Systems (TEC 57050:2023)

The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare

Policy Regarding Use of Artificial Intelligence Tools in District Judiciary

Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report
