
Chapter IV: CERTIFICATION AND ETHICS CODE

Section 11 – Registration & Certification of AI Systems

(1)   The IAIC shall establish a voluntary certification scheme for AI systems based on their industry use cases and risk levels, using the means of classification set forth in Chapter II. The certification scheme shall be designed to promote responsible AI development and deployment, and shall be based on a set of clear, objective, and risk-proportionate criteria that assess the inherent purpose, technical characteristics, and potential impacts of AI systems.

 

(2)   The IAIC shall maintain a National Registry of Artificial Intelligence Use Cases as described in Section 12 to register and track the development and deployment of AI systems across various sectors. The registry shall be used to inform the development and refinement of the certification scheme and to promote transparency and accountability in artificial intelligence governance.

 

(3)   AI systems classified as narrow or medium risk under Section 7 and AI-Pre under sub-section (8) of Section 6 may be exempt from the certification requirement if they meet one or more of the following conditions:

(i)    The AI system is still in the early stages of development or testing and has not yet achieved technical or economic thresholds for effective standardisation;

(ii)  The AI system is being developed or deployed in a highly specialized or niche application area where certification may not be feasible or appropriate; or

(iii) The AI system is being developed or deployed by start-ups, micro, small & medium enterprises, or research institutions.

 

(4)   AI systems that qualify for exemptions under sub-section (3) must establish and maintain the incident reporting and response protocols specified in Section 19. Failure to maintain these protocols may result in revocation of the exemption.

 

(5)   Applicability of Section 4 Classification Methods:

 

(i)    The conceptual methods of classification outlined in Section 4 are intended for consultative and advisory purposes only. Their application is not mandatory for the National Registry of Artificial Intelligence Use Cases under this Section. The IAIC is empowered to:

(a)    Issue advisories, clarifications, and guidance documents on the interpretation and application of the classification methods outlined in Section 4.

(b)   Provide sector-specific recommendations for the voluntary use of these classification methods by stakeholders, including developers, regulators, and industry professionals.

(c)    Encourage stakeholders to adopt these classification methods on a self-regulatory basis, even though they are not mandatory. Voluntary application of these methods can help:

(i)     Enhance transparency in AI development.

(ii)   Promote responsible AI deployment across sectors.

(iii)  Facilitate alignment with ethical standards outlined in the National Artificial Intelligence Ethics Code (NAIEC) under Section 13.

 

(ii)   The IAIC may periodically review and update its advisories, clarifications and guidance documents to reflect advancements in AI technologies and emerging best practices, ensuring that stakeholders have access to the latest guidance for applying these conceptual methods.

 

(6)   Notwithstanding anything contained in sub-section (5), entities registering high-risk AI systems as defined in sub-section (4) of Section 7, and those associated with strategic sectors as specified in Section 9, must apply the conceptual classification methods outlined in Section 4.

 

(7)   The certification scheme and the methods of classification specified in Chapter II shall undergo periodic review and updating every 12 months to ensure their continued relevance and effectiveness in response to technological advancements and market developments. The review process shall include meaningful consultation with sector-specific regulators and market stakeholders.

Section 12 – National Registry of Artificial Intelligence Use Cases

(1)   The National Registry of Artificial Intelligence Use Cases shall include the metadata for each registered AI system as set forth in sub-sections (1)(i) through (1)(xvi):

 

(i)    Name and version of the AI system (required)

(ii)   Owning entity of the AI system (required)

(iii) Date of registration (required)

(iv)  Sector associated with the AI system and whether the AI system is associated with a strategic sector (required)

(v)   Specific use case(s) of the AI system (required)

(vi)  Technical classification of the AI system, as per Section 5 (required)

(vii)  Key technical characteristics of the AI system as per Section 5, including:

(a)    Type of AI model(s) used (required)

(b)   Training data sources and characteristics (required)

(c)    Performance metrics on standard benchmarks (where available, optional)

(viii)  Commercial classification of the AI system as per Section 6 (required)

(ix)   Key commercial features of the AI system as per Section 6, including:

(a)    Number of end-users and business end-users in India (required, where applicable)

(b)   Market share or level of market influence in the intended sector(s) of application (required, where ascertainable)

(c)    Annual turnover or revenue generated by the AI system or the company owning it (required, where applicable)

(d)   Amount & intended purpose of data collected, processed, or utilized by the AI system (required, where measurable)

(e)    Level of data integration across different services or platforms (required, where applicable)

 

(x)   Risk classification of the AI system as per Section 7 (required)

(xi) Conceptual classification of the AI system as per Section 4 (required only for high-risk AI Systems)

(xii)  Potential impacts of the AI system as per Section 7, including:

(a)    Inherent Purpose (required)

(b)   Possible risks and harms observed and documented by the owning entity (required)

 

(xiii)  Certification status (required) (registered & certified / registered & not certified)

(xiv)  A detailed post-deployment monitoring plan as per Section 17 (required only for high-risk AI Systems), including:

(a)    Performance metrics and key indicators to be tracked (optional)

(b)   Risk mitigation and human oversight protocols (required)

(c)    Data collection, reporting, and audit trail mechanisms (required)

(d)   Feedback and redressal channels for impacted stakeholders (optional)

(e)    Commitments to periodic third-party audits and public disclosure of:

(i)    Monitoring reports and performance indicators (optional)

(ii)   Descriptions of identified risks, incidents or failures as per sub-section (3) of Section 17 (required)

(iii)  Corrective actions and mitigation measures implemented (required)

 

(xv)   Incident reporting and response protocols as per Section 19, and insurance coverage details as per Section 25 (required)

(a)    Description of the incident reporting mechanisms established (e.g. hotline, online portal)

(b)   Timelines committed for incident reporting based on risk classification

(c)    Procedures for assessing and determining incident severity levels

(d)   Information to be provided in incident reports as per guidelines

(e)    Confidentiality and data protection measures for incident data

(f)     Minimum mitigation actions to be taken upon incident occurrence

(g)   Responsible personnel/team for incident response and mitigation

(h)   Commitments on notifying and communicating with impacted parties

(i)     Integration with IAIC’s central incident repository and reporting channels

(j)     Review and improvement processes for incident response procedures

(k)   Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits;

(l)     Confirmation that the insurance coverage meets the minimum requirements specified in sub-section (3) of Section 25 based on the AI system’s risk classification;

(m)  Details of the risk assessment conducted to determine the appropriate level of insurance coverage, considering factors such as the AI system’s conceptual, technical, and commercial classifications as per Sections 4, 5, and 6;

(n)   Information on the claims process and timelines for notifying the insurer and submitting claims in the event of an incident covered under the insurance policy;

(o)   Commitment to maintain the insurance coverage throughout the lifecycle of the AI system and to notify the IAIC of any changes in coverage or insurer.

(xvi)  Contact information for the owning entity (required)

 

Illustration

A technology company develops a new AI system for automated medical diagnosis using computer vision and machine learning techniques. This AI system would be classified as a high-risk system under Section 7(4) due to its potential impact on human health and safety. The company registers this AI system in the National Registry of Artificial Intelligence Use Cases, providing the following metadata:

(i)    Name and version: MedVision AI Diagnostic System v1.2

(ii)  Owning entity: ABC Technologies Pvt. Ltd.

(iii) Date of registration: 01/05/2024

(iv)  Sector: Healthcare

(v)   Use case: Automated analysis of medical imaging data (X-rays, CT scans, MRIs) to detect and diagnose diseases

(vi)  Technical classification: Specific Purpose AI (SPAI) under Section 5(4)

(vii)  Key technical characteristics:

 

·       Convolutional neural networks for image analysis

·       Trained on de-identified medical imaging datasets from hospitals

·       Achieved 92% accuracy on standard benchmarks

 

(viii)  Commercial classification: AI-Pro under Section 6(3)

(ix)  Key commercial features:

 

·       Intended for use by healthcare providers across India

·       Not yet deployed, so no market share data

·       No revenue generated yet (pre-commercial)

 

(x)   Risk classification: High Risk under Section 7(4)

(xi)  Conceptual classification: Assessed under all four methods in Section 4 due to its high-risk classification

(xii)  Potential impacts:

 

·       Inherent purpose is to assist medical professionals in diagnosis

·       Documented risks include misdiagnosis, bias, lack of interpretability

 

(xiii)  Certification status: Registered & certified

(xiv)  Post-deployment monitoring plan:

 

·       Performance metrics like accuracy, false positive/negative rates

·       Human oversight, periodic audits for bias/errors

·       Logging all outputs, decisions for audit trail

·       Channels for user feedback, grievance redressal

·       Commitments to third-party audits, public incident disclosure

 

(xv) Incident reporting protocols:

 

·       Dedicated online portal for incident reporting

·       Critical incidents to be reported within 48 hours

·       High/medium severity incidents within 7 days

·       Procedures for severity assessment, confidentiality measures

·       Minimum mitigation actions, impacted party notifications

·       Integration with IAIC incident repository

·       Insurance coverage details:

·       Professional indemnity policy from XYZ Insurance Co., policy #PI12345

·       Coverage limit of INR 50 crores, as required for high-risk AI under Section 25(3)(i)

·       Risk assessment considered technical complexity, healthcare impact, irreversible consequences

·       Claims to be notified within 24 hours, supporting documentation within 7 days

·       Coverage to be maintained throughout AI system lifecycle, IAIC to be notified of changes

 

(xvi)  Contact: info@abctech.com
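The registration metadata above is, in effect, a structured record with required and optional fields. Purely as an illustrative sketch (the Act mandates no particular encoding; every field name below is a hypothetical assumption, populated from the MedVision illustration), the required fields of the Section 12(1) schema might be checked as follows:

```python
# Hypothetical sketch, not part of the Act: one way an implementer might
# encode the Section 12(1) registry schema and verify that the fields the
# Act marks "required" are present. All field names are assumptions.
REQUIRED_FIELDS = [
    "name_and_version",           # (i)
    "owning_entity",              # (ii)
    "date_of_registration",       # (iii)
    "sector",                     # (iv)
    "use_cases",                  # (v)
    "technical_classification",   # (vi)
    "commercial_classification",  # (viii)
    "risk_classification",        # (x)
    "certification_status",       # (xiii)
    "contact_information",        # (xvi)
]

def missing_required(entry: dict) -> list:
    """Return the required fields absent from a registry entry."""
    return [field for field in REQUIRED_FIELDS if field not in entry]

# Entry drawn from the MedVision illustration above.
medvision = {
    "name_and_version": "MedVision AI Diagnostic System v1.2",
    "owning_entity": "ABC Technologies Pvt. Ltd.",
    "date_of_registration": "01/05/2024",
    "sector": "Healthcare",
    "use_cases": ["Automated analysis of medical imaging data"],
    "technical_classification": "SPAI (Section 5(4))",
    "commercial_classification": "AI-Pro (Section 6(3))",
    "risk_classification": "High Risk (Section 7(4))",
    "certification_status": "registered & certified",
    "contact_information": "info@abctech.com",
}
```

On this sketch, `missing_required(medvision)` returns an empty list, while an incomplete submission would surface the omitted fields for the registrant to supply.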

 

(2)   The IAIC may, from time to time, expand or modify the metadata schema for the National Registry as it deems necessary to reflect advancements in AI technology and risk assessment methodologies. The IAIC shall give notice of any such changes at least 60 days prior to the date on which they shall take effect.

 

(3)   The owners of AI systems shall have the duty to provide accurate and current metadata at the time of registration and to notify the IAIC of any material changes to the registered information within:

 

(i)    15 days of such change occurring for AI systems classified as High Risk under sub-section (4) of Section 7;

(ii)   30 days of such change occurring for AI systems classified as Medium Risk under sub-section (3) of Section 7;

(iii) 60 days of such change occurring for AI systems classified as Narrow Risk under sub-section (2) of Section 7;

(iv)  90 days of such change occurring for AI systems classified as Narrow Risk or Medium Risk under Section 7 that are exempted from certification under sub-section (3) of Section 11.
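The tiered timelines above amount to a simple lookup from risk classification to a notification window. As a minimal, hypothetical sketch (the class labels are illustrative assumptions, not statutory terms):

```python
# Hypothetical sketch, not part of the Act: the Section 12(3) notification
# windows expressed as a lookup table. Labels are illustrative assumptions.
from datetime import date, timedelta

# Days allowed to notify the IAIC of a material change, by risk class.
NOTIFICATION_WINDOW_DAYS = {
    "high": 15,    # sub-section (3)(i)
    "medium": 30,  # sub-section (3)(ii)
    "narrow": 60,  # sub-section (3)(iii)
    "exempt": 90,  # sub-section (3)(iv): narrow/medium risk, certification-exempt
}

def notification_deadline(change_date: date, risk_class: str) -> date:
    """Return the last date by which the IAIC must be notified."""
    return change_date + timedelta(days=NOTIFICATION_WINDOW_DAYS[risk_class])
```

For example, a material change to a high-risk system occurring on 1 May 2024 would, under this reading, have to be notified by 16 May 2024.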

 

(4)   Notwithstanding anything contained in sub-section (1), the owners of AI systems exempted under sub-section (3) of Section 11 shall only be required to submit the metadata specified in sub-sections (4)(i) through (4)(xi) to register their AI systems:

 

(i)    Name and version of the AI system (required)

(ii)   Owning entity of the AI system (required)

(iii) Date of registration (required)

(iv)  Sector associated with the AI system (optional)

(v)   Specific use case(s) of the AI system (required)

(vi)  Technical classification of the AI system, as per Section 5 (optional)

(vii)  Commercial classification of the AI system as per Section 6 (required)

(viii)  Risk classification of the AI system as per Section 7 (required, narrow risk or medium risk only)

(ix)   Certification status (required) (registered & certification exempted under sub-section (3) of Section 11)

(x)   Incident reporting and response protocols as per Section 19 (required)

(a)         Description of the incident reporting mechanisms established (e.g. hotline, online portal)

(b)        Timelines committed for reporting high/critical severity incidents (within 14-30 days)

(c)         Procedures for assessing and determining incident severity levels (only high/critical)

(d)        Information to be provided in incident reports (incident description, system details)

(e)         Confidentiality measures for incident data based on sensitivity (scaled down)

(f)          Minimum mitigation actions to be taken upon high/critical incident occurrence

(g)        Responsible personnel/team for incident response and mitigation

(h)        Commitments on notifying and communicating with impacted parties

(i)          Integration with IAIC’s central incident repository and reporting channels

(j)          Description of the insurance coverage obtained for the AI system, as per Section 25, including the type of policy, insurer, policy number, and coverage limits (required for high-risk AI systems only);

(xi) Contact information for the owning entity (required)

 

Illustration

A small AI startup develops a chatbot for basic customer service queries using natural language processing techniques. As a narrow-risk AI system still in early development stages, the startup claims exemption under Section 11(3) and registers with the following limited metadata:

(i)    Name and version: ChatAssist v0.5 (beta)

(ii)   Owning entity: XYZ AI Solutions LLP

(iii)  Date of registration: 15/06/2024

(iv)  Sector: Not provided (optional)

(v)   Use case: Automated response to basic customer queries via text/voice

(vi)  Technical classification: Specific Purpose AI (SPAI) under Section 5(4) (optional)

(vii)  Commercial classification: AI-Pre under Section 6(8)

(viii)  Risk classification: Narrow Risk under Section 7(2)

(ix)  Certification status: Registered & certification exempted under Section 11(3)

(x)   Incident reporting protocols:

 

·       Email support@xyzai.com for incident reporting

·       High/critical incidents to be reported within 30 days

·       Only incident description and system details required in reports

·       Standard data protection measures as per company policy

·       Mitigation by product team, notifying customers if major

·       Integration with IAIC’s central incident repository

 

(xi)  Contact: support@xyzai.com

 

(5)   The IAIC shall put in place mechanisms to validate the metadata provided and to audit registered AI systems for compliance with the reported information. Where the IAIC determines that any developer or owner has provided false or misleading information, it may impose penalties, including fines and revocation of certification, as it deems fit.

(6)   The IAIC shall publish aggregate statistics and analytics based on the metadata in the National Registry for the purposes of supporting evidence-based policymaking, research, and public awareness about AI development and deployment trends. Provided that commercially sensitive information and trade secrets shall not be disclosed.

(7)   Registration and certification under this Act shall be voluntary, and no penal consequences shall attach to the lack of registration or certification of an AI system, except as otherwise expressly provided in this Act.

 

(8)   The examination process for registration and certification of AI use cases shall be conducted by the IAIC in a transparent and inclusive manner, engaging with relevant stakeholders, including:

(i)    Technical experts and researchers in the field of artificial intelligence, who can provide insights into the technical aspects, capabilities, and limitations of the AI systems under examination.

(ii)   Representatives of industries developing and deploying AI technologies, who can offer practical perspectives on the commercial viability, use cases, and potential impacts of the AI systems.

(iii) Technology standards & business associations and consumer protection groups, who can represent the interests and concerns of end-users, affected communities, and the general public.

(iv)  Representatives from diverse communities and individuals who may be impacted by AI systems, to ensure their rights, needs, experiences and perspectives across different contexts are comprehensively accounted for during the examination process.

(v)   Any other relevant stakeholders or subject matter experts that the IAIC deems necessary for a comprehensive and inclusive examination of AI use cases.

 

(9)   The IAIC shall publish the results of its examinations for registration and certification of AI use cases, along with any recommendations for risk mitigation measures, regulatory actions, or guidelines, in an accessible format for public review and feedback. This shall include detailed explanations of the classification criteria applied, the stakeholder inputs considered, and the rationale behind the decisions made.


Section 13 – National Artificial Intelligence Ethics Code

(1)   A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilisation of artificial intelligence technologies.

 

(2)   The NAIEC shall be based on the following core ethical principles:

(i)          AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.

(ii)        AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes.

(iii)       AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system’s outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.

(iv)       AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.

(v)        AI systems should be designed and operated with a focus on safety and robustness, minimizing the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.

(vi)       AI systems should be developed and deployed with consideration for their environmental impact, promoting sustainability and minimizing negative ecological consequences throughout their lifecycle.

(vii)     AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented.

(viii)    AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.

(ix)       AI systems that are developed and deployed using frugal prompt engineering practices should optimize efficiency, cost-effectiveness, and resource utilisation while maintaining high standards of performance, safety, and ethical compliance in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.

 

(3)   The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific & sector-neutral laws and regulations. To this end:

(i)    The use of OSS shall be guided by a clear understanding of the open source development model, its scope, constraints, and the varying implementation approaches across different socio-economic and organisational contexts.

(ii)   AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licenses, fostering transparency and enabling public scrutiny, while also ensuring that sensitive components and intellectual property are adequately protected.

(iii) The use of OSS in AI development shall not exempt AI systems from complying with the principles and requirements set forth in this Ethics Code, including fairness, accountability, transparency, and adherence to applicable laws and regulations.

(iv)  AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems, and shall implement appropriate governance, quality assurance, and risk management processes.

(v)   The IAIC shall support research and development initiatives under the Digital India Programme that leverage OSS to create AI tools and frameworks that prioritize ethics, safety, inclusivity, and responsible innovation, while also providing guidance and best practices for the effective and sustainable use of OSS in AI development.

(vi)  The IAIC shall collaborate with relevant stakeholders, including open source communities, industry associations, and academic institutions, to develop guidelines and frameworks for the responsible and context-appropriate adoption of OSS in AI development, taking into account the unique challenges and opportunities across different sectors and organisational contexts.

 

(4)   The Ethics Code shall provide guidance on intellectual property and ownership considerations related to AI-generated content. To this end:

(i)    Appropriate mechanisms shall be established to determine ownership, attribution and intellectual property rights over content generated by AI systems, while fostering innovation and protecting the rights of human creators and innovators.

(ii)   Specific considerations shall include recognizing the role of human involvement in developing and deploying the AI systems, establishing guidelines on copyrightability and patentability of AI-generated works and inventions, addressing scenarios where AI builds upon existing protected works, safeguarding trade secrets and data privacy, balancing incentives for AI innovation with disclosure and access principles, and continuously updating policies as AI capabilities evolve.

(iii) The Ethics Code shall encourage transparency and responsible practices in managing intellectual property aspects of AI-generated content across domains such as text, images, audio, video and others.

(iv)  In examining IP and ownership issues related to AI-generated content, the Ethics Code shall be guided by the conceptual classification methods outlined in Section 4, particularly the Anthropomorphism-Based Concept Classification to evaluate scenarios where AI replicates or emulates human creativity and invention.

(v)   The technical classification methods described in Section 5, such as the scale, inherent purpose, technical features, and limitations of the AI system, shall inform the assessment of IP and ownership considerations for AI-generated content.

(vi)  The commercial classification factors specified in sub-section (1) of Section 6, including the user base, market influence, data integration, and revenue generation of the AI system, shall also be taken into account when determining IP and ownership rights over AI-generated content.

 

(5)   The Ethics Code shall provide guidance on frugal prompt engineering practices for the development of AI systems, including:

(i)    Encouraging the use of concise and well-structured prompts that clearly define the desired outputs and constraints;

(ii)   Recommending the adoption of transfer learning and pre-trained models to reduce the need for extensive fine-tuning;

(iii) Promoting the use of data-efficient techniques, such as few-shot learning or active learning, to minimize the amount of training data required;

(iv)  Suggesting the implementation of early stopping mechanisms to prevent overfitting and improve generalisation;

(v)   Advocating for the utilisation of techniques like model compression, quantisation, or distillation to reduce computational complexity and resource requirements;

(vi)  Encouraging the documentation and maintenance of records on prompt engineering practices, including the rationale behind chosen techniques, performance metrics, and any trade-offs made between efficiency and effectiveness;

(vii)  Recommending the periodic review and updating of prompt engineering practices based on the latest research, industry standards, and the guidelines provided by the IAIC.

 

(6)   The Ethics Code shall provide guidance on ensuring fair access rights for all stakeholders involved in the AI value and supply chain, including:

(i)    All stakeholders should have fair and transparent access to datasets necessary for training and developing AI systems. This includes promoting equitable data-sharing practices that ensure smaller entities or research institutions are not unfairly disadvantaged in accessing critical datasets.

(ii)   Ethical use of computational resources should be promoted by ensuring that all stakeholders have transparent access to these resources. Special consideration should be given to smaller entities or research institutions that may require preferential access or pricing models to support innovation.

(iii) Ethical guidelines should ensure that ownership rights over trained models, derived outputs, and intellectual property are clearly defined and respected. Stakeholders involved in the development process must have a clear understanding of their rights and obligations regarding the usage and commercialization of AI technologies.

(iv)  The benefits derived from AI technologies should be distributed equitably among all stakeholders involved in their development and commercialization. This includes ensuring that smaller players contributing critical resources like proprietary datasets or specialized algorithms are fairly compensated.

 

(7)   Adherence to the NAIEC shall be voluntary for all AI systems, including those exempted under sub-section (3) of Section 11. However, the IAIC may mandate adherence to specific principles of the NAIEC and to sub-sections (3), (4), (5) and (6) for high-risk AI systems deployed in sensitive domains, strategic sectors or those with significant potential for societal or sociotechnical impact.

 

(8)   The NAIEC shall be reviewed and updated periodically by the IAIC to reflect advancements in AI technologies, emerging best practices, and evolving societal norms and values related to the responsible development and deployment of AI systems.

 
