Section 25 – Insurance Policy for AI Technologies
(1) Developers, owners, and operators of high-risk AI systems, as classified under sub-section (4) of Section 7, shall be required to obtain and maintain comprehensive liability insurance coverage to manage and mitigate potential risks associated with the development, deployment, and operation of such systems.
(2) The insurance coverage requirements for high-risk AI systems shall be proportionate to their risk level and potential impacts, as determined by:
(i) Their conceptual classification based on sub-sections (3) and (4) of Section 4;
(ii) Their technical characteristics evaluated as per the criteria under sub-section (4) of Section 5 for Specific Purpose AI (SPAI) systems;
(iii) Their commercial risk factors, such as user base, market influence, data integration, and revenue generation, as specified under Section 6.
(3) The minimum insurance coverage required for high-risk AI systems shall be:
(i) For systems with potential widespread impact or lack of opt-out feasibility under Section 7(4)(a): INR 50 crores;
(ii) For systems with vulnerability factors or irreversible consequences under Section 7(4)(b): INR 25 crores;
(iii) For other high-risk AI systems under Section 7(4): INR 10 crores.
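Illustration (non-statutory): the tiered floors in sub-section (3) amount to a simple mapping from the Section 7(4) classification to a minimum sum insured. The sketch below shows one way to express that mapping; the classification labels and helper names are illustrative assumptions and are not defined by this Act (1 crore = INR 1,00,00,000).

```python
# Illustrative lookup of the minimum coverage floors in sub-section (3).
# The classification labels are hypothetical; the Act defines only the
# Section 7(4) categories and the rupee amounts.

CRORE_INR = 10_000_000  # one crore of rupees

MINIMUM_COVER_INR = {
    "widespread_impact_or_no_opt_out": 50 * CRORE_INR,      # Section 7(4)(a)
    "vulnerability_or_irreversible_harm": 25 * CRORE_INR,   # Section 7(4)(b)
    "other_high_risk": 10 * CRORE_INR,                      # residual Section 7(4) systems
}

def minimum_cover_inr(classification: str) -> int:
    """Return the minimum sum insured (in INR) for a high-risk classification."""
    return MINIMUM_COVER_INR[classification]

if __name__ == "__main__":
    print(minimum_cover_inr("widespread_impact_or_no_opt_out"))  # 500000000
```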
(4) The Insurance Regulatory and Development Authority of India (IRDAI) shall, in consultation with the IAIC and relevant stakeholders, specify the minimum insurance coverage standards for high-risk AI systems, which may include:
(i) Professional indemnity insurance to cover incidents involving inaccurate, inappropriate, or defamatory AI-generated content;
(ii) Cyber risk insurance to cover incidents related to data breaches, network security failures, or other cyber incidents;
(iii) General commercial liability insurance to cover incidents causing third-party injury, damage, or other legally liable scenarios.
(5) For general purpose AI systems classified under sub-sections (2) and (3) of Section 5, the IAIC, in coordination with IRDAI, shall examine and determine appropriate insurance requirements, considering factors such as:
(i) The scale and inherent purpose of the general purpose AI system;
(ii) The potential risks and impacts associated with its multiple use cases across different sectors and domains;
(iii) The technical features and limitations that may affect its safety, security, and reliability;
(iv) The commercial factors such as user base, market influence, and revenue generation.
(6) Based on the examination under sub-section (5), the IAIC may recommend to IRDAI the development of specialized insurance products or coverage requirements for general purpose AI systems, which may include:
(i) Umbrella liability insurance to cover a wide range of risks and liabilities arising from the diverse applications of the AI system;
(ii) Parametric insurance based on predefined triggers or performance metrics to address the unique challenges in assessing and quantifying risks associated with general purpose AI (one such trigger mechanism is sketched after this list);
(iii) Risk pooling or reinsurance arrangements to spread the risks among multiple insurers or stakeholders.
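Illustration (non-statutory): parametric cover under clause (ii) pays a predefined amount once an agreed metric crosses its trigger, rather than indemnifying an assessed loss. The sketch below assumes a hypothetical audited metric, threshold, and payout; none of these names or values are prescribed by this Act.

```python
# Minimal sketch of a parametric cover as contemplated in clause (ii):
# a fixed payout is released when a predefined performance metric breaches
# its agreed trigger, with no loss adjustment. Metric names, thresholds,
# and payouts are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ParametricTrigger:
    metric: str        # e.g. an audited "harmful_output_rate" per reporting period
    threshold: float   # trigger level agreed in the policy
    payout_inr: int    # fixed payout released when the trigger is breached

def payout_due(trigger: ParametricTrigger, observed_value: float) -> int:
    """Return the payout owed for one reporting period (0 if not triggered)."""
    return trigger.payout_inr if observed_value >= trigger.threshold else 0

# Example: a policy paying INR 2 crore if the audited metric exceeds 5%.
cover = ParametricTrigger("harmful_output_rate", 0.05, 20_000_000)
print(payout_due(cover, 0.07))  # 20000000 -> trigger breached
print(payout_due(cover, 0.01))  # 0 -> below the trigger
```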
(7) The IAIC and IRDAI shall collaborate to establish guidelines and best practices for underwriting, risk assessment, and claims handling related to general purpose AI systems, taking into account their distinct characteristics and potential impacts.
(8) Developers, owners, and operators of general purpose AI systems shall be encouraged to maintain adequate insurance coverage based on the recommendations and guidelines issued by the IAIC and IRDAI under sub-sections (6) and (7).
(9) Insurance providers offering AI-specific policies for high-risk systems must have adequate expertise, resources, and reinsurance arrangements to effectively assess risks, price premiums, and settle claims related to AI technologies.
(10) Developers, owners, and operators of high-risk AI systems shall submit proof of adequate insurance coverage to the IAIC as part of the registration and certification process outlined in Section 11.
(11) Failure to obtain and maintain the required insurance coverage for high-risk AI systems shall be treated as a breach of compliance under Section 19, and the IAIC may take appropriate enforcement actions, including:
(i) Issuing warnings and imposing penalties;
(ii) Suspending or revoking the system’s certification;
(iii) Prohibiting the deployment or operation of the AI system until compliance is achieved.
(12) The obligations under sub-sections (2), (3), and (4) of this Section shall apply to data fiduciaries employing high-risk AI systems, provided they are:
(i) Classified as Systemically Significant Digital Enterprises (SSDEs) under Chapter II of the Digital Competition Act, 2024, particularly:
(a) Based on the quantitative and qualitative criteria specified in Section 5 of that Act; or
(b) Designated as SSDEs by the Competition Commission of India under Section 6 of that Act, due to their significant presence in the relevant core digital service.
(ii) Notified as Significant Data Fiduciaries under sub-section (1) of Section 10 of the Digital Personal Data Protection Act, 2023, based on factors such as:
(a) The volume and sensitivity of personal data processed;
(b) The risk to the rights of data principals;
(c) The potential impact on the sovereignty, integrity, and security of India.
(13) For AI systems not classified as high-risk under sub-section (4) of Section 7, obtaining insurance coverage is recommended but not mandatory. The IAIC shall provide guidance on suitable insurance products and coverage levels based on the AI system’s risk profile and potential impacts.
(14) The IRDAI, in consultation with the IAIC and relevant stakeholders, shall develop guidelines and best practices for underwriting, risk assessment, and claims handling related to AI technologies. These guidelines shall address:
(i) Assessment methods to evaluate the unique risks and potential impacts of AI systems, taking into account their risk classification and associated factors as outlined in this Act;
(ii) Premium calculation models that consider the risk profile, scale of deployment, and potential consequences of AI systems (one illustrative form is sketched after this list);
(iii) Claims processing standards that ensure timely, fair, and transparent settlement of claims related to AI systems;
(iv) Data sharing and reporting requirements between insurers and the IAIC to facilitate the monitoring and analysis of AI-related incidents and claims;
(v) Capacity building and training programs for insurance professionals to enhance their understanding of AI technologies and their associated risks.
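Illustration (non-statutory): a premium calculation model of the kind contemplated in clause (ii) of sub-section (14) might scale a base rate by the Section 7(4) risk tier, the scale of deployment, and a severity loading. The multipliers, base rate, and function names below are illustrative assumptions only; actual models would follow IRDAI guidelines and actuarial practice.

```python
# Minimal sketch of a risk-weighted premium model per clause (ii) of
# sub-section (14). All factors and weights are illustrative assumptions.

RISK_MULTIPLIER = {           # hypothetical multipliers per Section 7(4) tier
    "7(4)(a)": 3.0,
    "7(4)(b)": 2.0,
    "other_high_risk": 1.5,
}

def annual_premium_inr(sum_insured_inr: int, risk_tier: str,
                       users_in_millions: float, severity_score: float) -> float:
    """Premium = base rate x sum insured x risk, scale, and severity loadings."""
    base_rate = 0.01                                              # assumed 1% of sum insured
    scale_loading = 1.0 + min(users_in_millions / 100.0, 1.0)     # capped at 2x
    severity_loading = 1.0 + max(0.0, min(severity_score, 1.0))   # severity score in [0, 1]
    return (sum_insured_inr * base_rate * RISK_MULTIPLIER[risk_tier]
            * scale_loading * severity_loading)

# Example: a Section 7(4)(a) system insured for the INR 50 crore floor.
print(round(annual_premium_inr(500_000_000, "7(4)(a)", 40, 0.6)))  # 33600000
```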