Section 22 – Shared Sector-Neutral & Sector-Specific Standards
(1) The IAIC shall coordinate the implementation and review of the following sector-neutral standards for the responsible development, deployment, and use of AI systems:
(i) Fundamental Principles of Liability as outlined in sub-sections (2), (3), and (4);
(2) Liability for harm or damage caused by an AI system shall be allocated based on the following principles:
(i) The party that developed, deployed, or operated the AI system shall be primarily liable for any harm or damage caused by the system, taking into account the system’s classification under the conceptual, technical, commercial, and risk-based methods.
(ii) Liability may be shared among multiple parties involved in the AI system’s lifecycle, based on their respective roles and responsibilities, as well as the system’s classification and associated requirements under Sections 8 and 9.
(iii) End-users shall not be held liable for harm or damage caused by an AI system, unless they intentionally misused or tampered with the system, or failed to comply with user obligations specified based on the system’s classification.
(3) To determine and adjudicate liability for harm caused by AI systems, the following factors shall be considered:
(i) The foreseeability of the harm, in light of the AI system’s intended purpose as identified by the Issue-to-Issue Concept Classification (IICC) under Section 4(2), its capabilities as specified in the Technical Classification under Section 5, and its limitations according to the Risk Classification under Section 7;
(ii) The degree of control exercised over the AI system, considering the human oversight and accountability requirements tied to its Risk Classification under Section 7, particularly the principles of Human Agency and Oversight as outlined in Section 13;
(4) Developers and operators of AI systems shall be required to obtain liability insurance to cover potential harm or damage caused by their AI systems. The insurance coverage shall be proportionate to the risk levels and potential impacts of the AI systems, as determined under the Risk Classification framework in Section 7, and the associated requirements for high-risk AI systems outlined in Section 9. This insurance policy shall ensure that compensation is available to affected individuals or entities in cases where liability cannot be attributed to a specific party.
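By way of illustration, the following is a minimal sketch of how coverage "proportionate to the risk levels" under sub-section (4) could be computed. The tier names and multipliers are assumptions for the example only; neither is prescribed by the Risk Classification framework in Section 7 or by this section.

```python
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers standing in for the Section 7 Risk Classification;
    # the statute does not enumerate tier names or coverage figures.
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

# Illustrative minimum-coverage multipliers (assumed, not prescribed).
COVERAGE_MULTIPLIER = {
    RiskTier.MINIMAL: 0.5,
    RiskTier.LIMITED: 2.0,
    RiskTier.HIGH: 10.0,
}

def minimum_coverage(annual_turnover: float, tier: RiskTier) -> float:
    """Return an illustrative minimum liability-insurance coverage,
    scaled to risk tier as sub-section (4) requires in principle."""
    return annual_turnover * COVERAGE_MULTIPLIER[tier]

print(minimum_coverage(1_000_000, RiskTier.HIGH))  # 10000000.0
```

The point of the sketch is only that coverage scales with the classified risk tier rather than being a flat amount; the actual schedule would be set by the IAIC or sector-specific regulators.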
(5) The IAIC shall enable coordination among sector-specific regulators for the responsible development, deployment, and use of AI systems in sector-specific contexts based on the following set of principles:
(i) Transparency and Explainability:
(a) AI systems should be designed and developed in a transparent manner, allowing users to understand how they work and how decisions are made.
(b) AI systems should be able to explain their decisions in a clear and concise manner, allowing users to understand the reasoning behind their outputs.
(c) Developers should provide clear documentation and user guides explaining the AI system’s capabilities, limitations, and potential risks (see the illustrative sketch following this list).
(d) The level of transparency and explainability required may vary based on the AI system’s risk classification and intended use case.
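A minimal sketch of the documentation contemplated by clause (i)(c) of sub-section (5) follows. The record resembles a "model card"; the field names are assumptions for illustration, not a format mandated by the section.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative documentation record for clause (i)(c); the field
    set is an assumption, not prescribed by the section."""
    system_name: str
    intended_purpose: str
    risk_classification: str      # per the Section 7 framework
    capabilities: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    known_risks: list = field(default_factory=list)

card = ModelCard(
    system_name="loan-screening-v2",
    intended_purpose="Pre-screening of consumer loan applications",
    risk_classification="high",
    capabilities=["ranks applications by estimated default risk"],
    limitations=["not validated for applicants with thin credit files"],
    known_risks=["possible disparate impact across demographic groups"],
)
print(json.dumps(asdict(card), indent=2))
```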
(ii) Fairness and Bias:
(a) AI systems should be regularly monitored for technical bias and discrimination, and appropriate mitigation measures should be implemented to address any issues identified in their sociotechnical context (see the illustrative sketch following this list).
(b) Developers should ensure that training data is diverse, representative, and free from biases that could lead to discriminatory outcomes.
(c) Ongoing audits and assessments should be conducted to identify and rectify any emerging biases during the AI system’s lifecycle.
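One widely used, though not statutorily mandated, screen for the monitoring and audits described in clauses (ii)(a) and (ii)(c) is the demographic parity gap: the spread in positive-outcome rates across groups. A minimal sketch, assuming the deployer sets its own alert threshold:

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Largest gap in positive-outcome rates between groups.

    `outcomes` is a list of (group, approved) pairs. One common
    fairness screen; it does not by itself establish discrimination.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += int(approved)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap([
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
])
print(rates)          # {'A': 0.666..., 'B': 0.333...}
print(round(gap, 3))  # 0.333 -> flag for review if above the set threshold
```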
(iii) Safety and Security:
(a) AI systems should be designed and developed with safety and security by design and by default.
(b) AI systems should be protected from unauthorized access, modification, or destruction.
(c) Developers should implement robust security measures, such as encryption, access controls, and secure communication protocols, to safeguard AI systems and their data (see the illustrative sketch following this list).
(d) AI systems should undergo rigorous testing and validation to ensure they perform safely and reliably under normal and unexpected conditions.
(e) Developers should establish incident response plans and mechanisms to promptly address any safety or security breaches.
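As one concrete instance of protecting systems from unauthorized modification under clauses (iii)(b) and (iii)(c), a deployer might sign model artifacts and verify them before loading. The sketch below uses only the Python standard library; it is a minimal illustration, not a full key-management design.

```python
import hmac, hashlib, secrets

def sign_artifact(artifact: bytes, key: bytes) -> str:
    """Integrity tag for a model artifact, guarding against
    unauthorized modification (clause (iii)(b))."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_artifact(artifact, key), tag)

key = secrets.token_bytes(32)  # in practice, held in a secrets manager
weights = b"...serialized model weights..."
tag = sign_artifact(weights, key)

assert verify_artifact(weights, key, tag)
assert not verify_artifact(weights + b"tampered", key, tag)
print("artifact integrity verified")
```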
(iv) Human Control and Oversight:
(a) AI systems should be subject to human control and oversight to ensure that they are used responsibly.
(b) There should be mechanisms in place for data principals to intervene in the operation of AI systems if necessary.
(c) Developers should implement human-in-the-loop or human-on-the-loop approaches, allowing for human intervention and final decision-making in critical or high-risk scenarios.
(d) Clear protocols should be established for escalating decisions to human operators when AI systems encounter situations beyond their designed scope or when unexpected outcomes occur (see the illustrative sketch following this list).
(e) Regular human audits and reviews should be conducted to ensure AI systems are functioning as intended and aligned with human values and societal norms.
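A minimal sketch of the escalation logic contemplated by clauses (iv)(c) and (iv)(d): the system acts autonomously only on confident, in-scope cases and otherwise routes the decision to a human operator. The confidence threshold is an assumption; the section leaves such parameters to deployers and sector-specific regulators.

```python
CONFIDENCE_FLOOR = 0.85  # assumed threshold, not prescribed by the section

def decide(prediction: str, confidence: float, in_scope: bool) -> dict:
    """Human-on-the-loop routing: escalate out-of-scope or
    low-confidence cases for final human decision-making."""
    if not in_scope or confidence < CONFIDENCE_FLOOR:
        reason = "out_of_scope" if not in_scope else "low_confidence"
        return {"action": "escalate_to_human", "reason": reason}
    return {"action": "auto_apply", "decision": prediction}

print(decide("approve", 0.97, in_scope=True))   # auto_apply
print(decide("approve", 0.60, in_scope=True))   # escalate: low_confidence
print(decide("approve", 0.97, in_scope=False))  # escalate: out_of_scope
```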
(v) Open Source and Interoperability:
(a) The development of shared sector-neutral standards for AI systems shall leverage open source software and open standards to promote interoperability, transparency, and collaboration.
(b) The IAIC shall encourage the participation of open source communities and stakeholders in the development of AI standards.
(c) Developers should strive to use open source components and frameworks when building AI systems to facilitate transparency, reusability, and innovation.
(d) AI systems should be designed with interoperability in mind, adhering to common data formats, protocols, and APIs to enable seamless integration and data exchange across different platforms and domains (see the illustrative sketch following this list).
(e) The IAIC shall promote the development of open benchmarks, datasets, and evaluation frameworks to assess and compare the performance of AI systems transparently.
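To make the "common data formats" of clause (v)(d) concrete, the sketch below round-trips a prediction record through JSON, the kind of shared exchange format that enables integration across platforms. The field set is a hypothetical schema for illustration, not one published by the IAIC.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PredictionRecord:
    """Hypothetical common exchange format for clause (v)(d)."""
    system_id: str
    timestamp: str   # ISO 8601 for cross-platform compatibility
    input_ref: str
    output: str
    confidence: float

record = PredictionRecord(
    system_id="loan-screening-v2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_ref="application/12345",
    output="approve",
    confidence=0.93,
)
wire = json.dumps(asdict(record))            # serialize for exchange
print(PredictionRecord(**json.loads(wire)))  # round-trip on the receiving side
```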