

Chapter X: CONTENT PROVENANCE


Section 23 - Content Provenance and Identification

(1)   AI systems that generate or manipulate content shall implement mechanisms to identify the source of such content and to maintain records of its origin. These mechanisms shall combine human oversight with technological methods.

 

(2)   Accountability for tracking AI-generated content shall be determined by the specific use cases of the AI system, such that:

 

(i)    For AI systems classified as high-risk under Section 7(4), accountability shall extend beyond specific use cases to encompass all technological components of the AI system;

 

(ii)   For AI systems exempted under sub-section (3) of Section 11, accountability for tracking AI-generated content shall be proportionate to the system’s risk profile and potential impact. This shall focus on responsible disclosure and coordinated mitigation between providers and users or testers;

 

(iii) For end-users and business end-users of AI systems, accountability and potential liability for AI-generated content shall be assessed on the basis of factors including:

(a)   Whether they intentionally misused or tampered with the AI system contrary to provided guidelines;

(b)   Whether they failed to exercise reasonable care and due diligence in the utilization of the AI system;

(c)    Whether they knowingly propagated or disseminated AI-generated content that could cause harm;

 

(3)   Developers, owners, and operators of AI systems classified as high-risk under sub-section (4) of Section 7 that are involved in generating or manipulating content shall obtain and maintain adequate liability insurance coverage, which shall include, but not be limited to:

(i)    Professional indemnity insurance to cover incidents involving inaccurate, inappropriate or defamatory AI-generated content;

(ii)   Cyber risk insurance to cover incidents related to data breaches, network security failures or other cyber incidents involving AI-generated content;

(iii) General commercial liability insurance to cover incidents causing third-party injury, damage or other legally liable scenarios involving AI-generated content.

 

(4)   The insurance coverage shall be proportionate to the risk level and potential impacts of the high-risk AI system, as determined by:

(i)    Its conceptual classification based on sub-sections (3) and (4) of Section 4;

(ii)   Its technical characteristics evaluated as a Specific Purpose AI (SPAI) system under sub-section (4) of Section 5;

(iii) Its commercial factors such as user base, market influence, data integration, and revenue generation specified under Section 6.

 

(5)   The minimum insurance coverage required for high-risk AI content generation systems shall be:

(i)    For systems with potential widespread impact or lack of opt-out feasibility under Section 7(4)(a): INR 50 crores

(ii)   For systems with vulnerability factors or irreversible consequences under Section 7(4)(b): INR 25 crores

(iii) For other high-risk AI content generation systems under Section 7(4): INR 10 crores

 

(6)   Proof of adequate insurance coverage, in accordance with this Section, shall be provided to the IAIC annually by the developers, owners, and operators of high-risk AI content generation systems.

 

(7)   Failure to obtain and maintain the required insurance coverage shall be treated as a breach of compliance under Section 19, and the IAIC may take appropriate enforcement actions, including but not limited to:

(i)    Issuing warnings and imposing penalties;

(ii)   Suspending or revoking the system’s certification;

(iii) Prohibiting the deployment or operation of the AI system until compliance is achieved.

 

(8)   For AI systems not classified as high-risk under Section 7(4), it is recommended that developers, owners, and operators obtain appropriate insurance coverage to mitigate potential risks and liabilities associated with AI-generated content. The IAIC shall provide guidance on suitable insurance products and coverage levels based on the AI system’s risk profile and potential impacts.

 

 

(9)   Intermediaries that host, publish, or make available AI-generated content, including but not limited to online platforms, content-sharing services, and cloud service providers, shall implement reasonable measures to identify and mitigate potential risks associated with AI-generated content, particularly content classified as high-risk under sub-section (4) of Section 7:

 

(i)    For high-risk AI-generated content, intermediaries shall:

(a)   Conduct due diligence to assess the potential risks and impacts of the content;

(b)   Implement content moderation practices to detect and address harmful, illegal, or infringing content;

(c)    Maintain records and audit trails to enable traceability and attribution of the content;

(d)   Cooperate with authorities and provide relevant information upon lawful requests.

 

(ii)   Intermediaries shall establish clear and accessible policies and procedures for handling complaints, takedown requests, and legal notices related to AI-generated content, ensuring timely and appropriate action.

 

(iii) Intermediaries shall maintain adequate insurance coverage to compensate for potential damages or harm caused by high-risk AI-generated content they host, publish, or make available, as per the guidelines issued by the IAIC in consultation with the Insurance Regulatory and Development Authority of India (IRDAI).

 

(iv)  The IAIC, in consultation with relevant stakeholders, shall develop guidelines and best practices for intermediaries regarding the handling of AI-generated content, including but not limited to:

(a)    Risk assessment methodologies;

(b)   Content moderation practices;

(c)    Transparency and disclosure requirements;

(d)   Cooperation with authorities and law enforcement;

(e)    Liability and insurance coverage requirements.

 

(10)  AI systems shall use watermarking techniques to embed identifying information into generated or manipulated content in a manner that is robust, accessible, and explainable, and that enables verification of the content’s authenticity and distinguishes AI-generated content from non-AI-generated content:

(i)    The liability, responsibility, and accountability for watermarking techniques, which embed identifying information in AI-generated content, shall also be determined according to the classification methods outlined in Chapter II of this Act in accordance with sub-section (2);

(ii)   The identifying watermark or information must be publicly accessible in a transparent manner, which may include publishing the watermark or making it available through an open API;

(iii) The IAIC shall develop and publish guidelines for implementing, licensing, and using watermarking and other identifying techniques in AI systems. These guidelines shall address the type of information to be embedded, licensing requirements, robustness of techniques, and accessibility of identifying information;

(iv)  The IAIC shall certify the use of watermarking techniques in AI systems and evaluate the effectiveness of these techniques in preventing the misuse of AI-generated content;

(v)   The IAIC shall establish and maintain a public registry of open-access technical methods to identify and examine AI-generated content, accessible to end-users, business users, and government officials. This registry shall provide clear instructions for using these methods and information on their validity;
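By way of illustration only, the embed-and-verify cycle contemplated in this sub-section can be sketched as a metadata watermark whose tag cryptographically binds content to its provider. This is a minimal, non-normative sketch: the record fields, tag format, and key management shown here are assumptions for demonstration, not requirements of this Act or of any IAIC guideline.

```python
# Illustrative sketch of the watermark embed/verify cycle in sub-section (10).
# Field names, tag format, and the provider-held key are assumptions.
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held-signing-key"  # assumed provider-managed secret


def embed_watermark(content: str, provider_id: str) -> dict:
    """Attach a provenance record whose tag binds content to its provider."""
    record = {"content": content, "provider_id": provider_id, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["watermark"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_watermark(record: dict) -> bool:
    """Recompute the tag; a mismatch means the content or metadata was altered."""
    claimed = record.get("watermark")
    stripped = {k: v for k, v in record.items() if k != "watermark"}
    payload = json.dumps(stripped, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claimed is not None and hmac.compare_digest(claimed, expected)


rec = embed_watermark("An AI-written paragraph.", "provider-042")
assert verify_watermark(rec)       # untampered record verifies
rec["content"] = "A tampered paragraph."
assert not verify_watermark(rec)   # any alteration breaks verification
```

A real scheme meeting the robustness requirement of this sub-section would embed the signal in the content itself rather than in detachable metadata; the sketch shows only the authenticity-verification logic.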

 

(11)  This Section shall apply to all AI systems that generate or manipulate content, regardless of the content’s purpose or intended use, including AI systems that generate text, images, audio, video, or any other forms of content in accordance with sub-section (2).
