Section 12 – National Registry of Artificial Intelligence Use Cases
(1) The National Registry of Artificial Intelligence Use Cases shall include the metadata for each registered AI system as set forth in sub-sections (1)(i) through (1)(xvi):
(i) Name and version of the AI system (required)
(ii) Owning entity of the AI system (required)
(iii) Date of registration (required)
(iv) Sector associated with the AI system and whether the AI system is associated with a strategic sector (required)
(v) Specific use case(s) of the AI system (required)
(vi) Technical classification of the AI system, as per Section 5 (required)
(vii) Key technical characteristics of the AI system as per Section 5, including:
(a) Type of AI model(s) used (required)
(b) Training data sources and characteristics (required)
(c) Performance metrics on standard benchmarks (where available, optional)
(viii) Commercial classification of the AI system as per Section 6 (required)
(ix) Key commercial features of the AI system as per Section 6, including:
(a) Number of end-users and business end-users in India (required, where applicable)
(b) Market share or level of market influence in the intended sector(s) of application (required, where ascertainable)
(c) Annual turnover or revenue generated by the AI system or the company owning it (required, where applicable)
(d) Amount & intended purpose of data collected, processed, or utilized by the AI system (required, where measurable)
(e) Level of data integration across different services or platforms (required, where applicable)
(x) Risk classification of the AI system as per Section 7 (required)
(xi) Conceptual classification of the AI system as per Section 4 (required only for high-risk AI Systems)
(xii) Potential impacts of the AI system as per Section 7, including:
(a) Inherent Purpose (required)
(b) Possible risks and harms observed and documented by the owning entity (required)
(xiii) Certification status (required) (registered & certified / registered & not certified)
(xiv) A detailed post-deployment monitoring plan as per Section 17 (required only for high-risk AI Systems), including:
(a) Performance metrics and key indicators to be tracked (optional)
(b) Risk mitigation and human oversight protocols (required)
(c) Data collection, reporting, and audit trail mechanisms (required)
(d) Feedback and redressal channels for impacted stakeholders (optional)
(e) Commitments to periodic third-party audits and public disclosure of:
(i) Monitoring reports and performance indicators (optional)
(ii) Descriptions of identified risks, incidents or failures as per sub-section (3) of Section 17 (required)
(iii) Corrective actions and mitigation measures implemented (required)
(xv) Incident reporting and response protocols as per Section 19 (required)
(a) Description of the incident reporting mechanisms established (e.g. hotline, online portal)
(b) Timelines committed for incident reporting based on risk classification
(c) Procedures for assessing and determining incident severity levels
(d) Information to be provided in incident reports as per guidelines
(e) Confidentiality and data protection measures for incident data
(f) Minimum mitigation actions to be taken upon incident occurrence
(g) Responsible personnel/team for incident response and mitigation
(h) Commitments on notifying and communicating with impacted parties
(i) Integration with IAIC’s central incident repository and reporting channels
(j) Review and improvement processes for incident response procedures
(k) Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits;
(l) Confirmation that the insurance coverage meets the minimum requirements specified in sub-section (3) of Section 25 based on the AI system’s risk classification;
(m) Details of the risk assessment conducted to determine the appropriate level of insurance coverage, considering factors such as the AI system’s conceptual, technical, and commercial classifications as per Sections 4, 5, and 6;
(n) Information on the claims process and timelines for notifying the insurer and submitting claims in the event of an incident covered under the insurance policy;
(o) Commitment to maintain the insurance coverage throughout the lifecycle of the AI system and to notify the IAIC of any changes in coverage or insurer.
(xvi) Contact information for the owning entity (required)
Illustration
A technology company develops a new AI system for automated medical diagnosis using computer vision and machine learning techniques. This AI system would be classified as a high-risk system under Section 7(4) due to its potential impact on human health and safety. The company registers this AI system in the National Registry of Artificial Intelligence Use Cases, providing the following metadata:
(i) Name and version: MedVision AI Diagnostic System v1.2
(ii) Owning entity: ABC Technologies Pvt. Ltd.
(iii) Date of registration: 01/05/2024
(iv) Sector: Healthcare
(v) Use case: Automated analysis of medical imaging data (X-rays, CT scans, MRIs) to detect and diagnose diseases
(vi) Technical classification: Specific Purpose AI (SPAI) under Section 5(4)
(vii) Key technical characteristics:
· Convolutional neural networks for image analysis
· Trained on de-identified medical imaging datasets from hospitals
· Achieved 92% accuracy on standard benchmarks
(viii) Commercial classification: AI-Pro under Section 6(3)
(ix) Key commercial features:
· Intended for use by healthcare providers across India
· Not yet deployed, so no market share data
· No revenue generated yet (pre-commercial)
(x) Risk classification: High Risk under Section 7(4)
(xi) Conceptual classification: Assessed under all four methods in Section 4 due to its high-risk classification
(xii) Potential impacts:
· Inherent purpose is to assist medical professionals in diagnosis
· Documented risks include misdiagnosis, bias, lack of interpretability
(xiii) Certification status: Registered & certified
(xiv) Post-deployment monitoring plan:
· Performance metrics like accuracy, false positive/negative rates
· Human oversight, periodic audits for bias/errors
· Logging all outputs, decisions for audit trail
· Channels for user feedback, grievance redressal
· Commitments to third-party audits, public incident disclosure
(xv) Incident reporting protocols:
· Dedicated online portal for incident reporting
· Critical incidents to be reported within 48 hours
· High/medium severity incidents within 7 days
· Procedures for severity assessment, confidentiality measures
· Minimum mitigation actions, impacted party notifications
· Integration with IAIC incident repository
· Insurance coverage details:
· Professional indemnity policy from XYZ Insurance Co., policy #PI12345
· Coverage limit of INR 50 crores, as required for high-risk AI under Section 25(3)(i)
· Risk assessment considered technical complexity, healthcare impact, irreversible consequences
· Claims to be notified within 24 hours, supporting documentation within 7 days
· Coverage to be maintained throughout AI system lifecycle, IAIC to be notified of changes
(xvi) Contact: info@abctech.com
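The metadata schema illustrated above can be sketched, purely for illustration, as a minimal completeness check. The field names, the validator, and the record below are assumptions for this sketch and carry no statutory meaning; they simply mirror the required items of sub-section (1) and the two items required only for high-risk systems.

```python
# Illustrative only: field names are assumptions, not statutory text.
# Mirrors the "required" items of Section 12(1) and the high-risk-only items.

REQUIRED_FIELDS = {
    "name_and_version",          # (i)
    "owning_entity",             # (ii)
    "date_of_registration",      # (iii)
    "sector",                    # (iv)
    "use_cases",                 # (v)
    "technical_classification",  # (vi), per Section 5
    "commercial_classification", # (viii), per Section 6
    "risk_classification",       # (x), per Section 7
    "certification_status",      # (xiii)
    "contact",                   # (xvi)
}

HIGH_RISK_ONLY_FIELDS = {
    "conceptual_classification",        # (xi), per Section 4
    "post_deployment_monitoring_plan",  # (xiv), per Section 17
}

def missing_fields(record: dict) -> set:
    """Return schema fields absent from a draft registry entry."""
    required = set(REQUIRED_FIELDS)
    if record.get("risk_classification") == "High Risk":
        required |= HIGH_RISK_ONLY_FIELDS
    return required - record.keys()

# Draft entry based on the MedVision illustration above
medvision = {
    "name_and_version": "MedVision AI Diagnostic System v1.2",
    "owning_entity": "ABC Technologies Pvt. Ltd.",
    "date_of_registration": "01/05/2024",
    "sector": "Healthcare",
    "use_cases": ["Automated analysis of medical imaging data"],
    "technical_classification": "SPAI",
    "commercial_classification": "AI-Pro",
    "risk_classification": "High Risk",
    "certification_status": "Registered & certified",
    "contact": "info@abctech.com",
}

print(sorted(missing_fields(medvision)))
# → ['conceptual_classification', 'post_deployment_monitoring_plan']
```

Because the draft entry is classified High Risk, the check flags the two additional items, (xi) and (xiv), that a complete high-risk registration must also supply.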
(2) The IAIC may, from time to time, expand or modify the metadata schema for the National Registry as it deems necessary to reflect advancements in AI technology and risk assessment methodologies. The IAIC shall give notice of any such changes at least 60 days prior to the date on which they take effect.
(3) The owners of AI systems shall have the duty to provide accurate and current metadata at the time of registration and to notify the IAIC of any material changes to the registered information within:
(i) 15 days of such change occurring for AI systems classified as High Risk under sub-section (4) of Section 7;
(ii) 30 days of such change occurring for AI systems classified as Medium Risk under sub-section (3) of Section 7;
(iii) 60 days of such change occurring for AI systems classified as Narrow Risk under sub-section (2) of Section 7;
(iv) 90 days of such change occurring for AI systems classified as Narrow Risk or Medium Risk under Section 7 that are exempted from certification under sub-section (3) of Section 11.
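The notification windows in sub-section (3) can be summarized, for illustration only, as a simple lookup keyed on risk classification and certification-exemption status (the function name and labels are assumptions, not statutory text):

```python
# Illustrative sketch of the Section 12(3) notification windows.
# Keys: (risk classification per Section 7, exempted under Section 11(3)?)
NOTIFICATION_DAYS = {
    ("High Risk", False): 15,    # Section 7(4)
    ("Medium Risk", False): 30,  # Section 7(3)
    ("Narrow Risk", False): 60,  # Section 7(2)
    ("Medium Risk", True): 90,   # exempted from certification
    ("Narrow Risk", True): 90,   # exempted from certification
}

def notification_deadline_days(risk: str, certification_exempted: bool = False) -> int:
    """Days within which a material change must be notified to the IAIC."""
    return NOTIFICATION_DAYS[(risk, certification_exempted)]

print(notification_deadline_days("High Risk"))
# → 15
```

For example, a narrow-risk system exempted under Section 11(3) would have 90 days, while the same system, if certified, would have 60.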
(4) Notwithstanding anything contained in sub-section (1), the owners of AI systems exempted under sub-section (3) of Section 11 shall only be required to submit the metadata specified in sub-sections (4)(i) through (4)(xi) to register their AI systems:
(i) Name and version of the AI system (required)
(ii) Owning entity of the AI system (required)
(iii) Date of registration (required)
(iv) Sector associated with the AI system (optional)
(v) Specific use case(s) of the AI system (required)
(vi) Technical classification of the AI system, as per Section 5 (optional)
(vii) Commercial classification of the AI system as per Section 6 (required)
(viii) Risk classification of the AI system as per Section 7 (required, narrow risk or medium risk only)
(ix) Certification status (required) (registered & certification exempted under sub-section (3) of Section 11)
(x) Incident reporting and response protocols as per Section 19 (required)
(a) Description of the incident reporting mechanisms established (e.g. hotline, online portal)
(b) Timelines committed for reporting high/critical severity incidents (within 14-30 days)
(c) Procedures for assessing and determining incident severity levels (only high/critical)
(d) Information to be provided in incident reports (incident description, system details)
(e) Confidentiality measures for incident data based on sensitivity (scaled down)
(f) Minimum mitigation actions to be taken upon high/critical incident occurrence
(g) Responsible personnel/team for incident response and mitigation
(h) Commitments on notifying and communicating with impacted parties
(i) Integration with IAIC’s central incident repository and reporting channels
(j) Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits (required for high-risk AI systems only);
(xi) Contact information for the owning entity (required)
Illustration
A small AI startup develops a chatbot for basic customer service queries using natural language processing techniques. As a narrow-risk AI system still in early development stages, they claim exemption under Section 11(3) and register with the following limited metadata:
(i) Name and version: ChatAssist v0.5 (beta)
(ii) Owning entity: XYZ AI Solutions LLP
(iii) Date of registration: 15/06/2024
(iv) Sector: Not provided (optional)
(v) Use case: Automated response to basic customer queries via text/voice
(vi) Technical classification: Specific Purpose AI (SPAI) under Section 5(4) (optional)
(vii) Commercial classification: AI-Pre under Section 6(8)
(viii) Risk classification: Narrow Risk under Section 7(2)
(ix) Certification status: Registered & certification exempted under Section 11(3)
(x) Incident reporting protocols:
· Email support@xyzai.com for incident reporting
· High/critical incidents to be reported within 30 days
· Only incident description and system details required in reports
· Standard data protection measures as per company policy
· Mitigation by product team, notifying customers if major
· Integration with IAIC’s central incident repository
(xi) Contact: support@xyzai.com
(5) The IAIC shall put in place mechanisms to validate the metadata provided and to audit registered AI systems for compliance with the reported information. Where the IAIC determines that any developer or owner has provided false or misleading information, it may impose penalties, including fines and revocation of certification, as it deems fit.
(6) The IAIC shall publish aggregate statistics and analytics based on the metadata in the National Registry for the purposes of supporting evidence-based policymaking, research, and public awareness about AI development and deployment trends. Provided that commercially sensitive information and trade secrets shall not be disclosed.
(7) Registration and certification under this Act shall be voluntary, and no penal consequences shall attach to the lack of registration or certification of an AI system, except as otherwise expressly provided in this Act.
(8) The examination process for registration and certification of AI use cases shall be conducted by the IAIC in a transparent and inclusive manner, engaging with relevant stakeholders, including:
(i) Technical experts and researchers in the field of artificial intelligence, who can provide insights into the technical aspects, capabilities, and limitations of the AI systems under examination.
(ii) Representatives of industries developing and deploying AI technologies, who can offer practical perspectives on the commercial viability, use cases, and potential impacts of the AI systems.
(iii) Technology standards & business associations and consumer protection groups, who can represent the interests and concerns of end-users, affected communities, and the general public.
(iv) Representatives from diverse communities and individuals who may be impacted by AI systems, to ensure their rights, needs, experiences and perspectives across different contexts are comprehensively accounted for during the examination process.
(v) Any other relevant stakeholders or subject matter experts that the IAIC deems necessary for a comprehensive and inclusive examination of AI use cases.
(9) The IAIC shall publish the results of its examinations for registration and certification of AI use cases, along with any recommendations for risk mitigation measures, regulatory actions, or guidelines, in an accessible format for public review and feedback. This shall include detailed explanations of the classification criteria applied, the stakeholder inputs considered, and the rationale behind the decisions made.