Section 19 – Incident Reporting and Mitigation Protocols

(1) All developers, operators, and users of AI systems shall establish mechanisms for reporting incidents related to such AI systems.

(2) Incident reporting mechanisms must be easily accessible, user-friendly, and secure, such as a dedicated hotline, online portal, or email address.

(3) Incidents involving high-risk AI systems shall be treated as a priority and reported immediately, and in any event not later than 48 hours after becoming aware of the incident.

(4) For all other AI systems, incidents must be reported within 7 days of becoming aware of the incident.
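Illustrative note (non-normative): a minimal sketch of how the deadlines in sub-sections (3) and (4) could be computed in practice. The function name, risk labels, and lookup table are assumptions introduced for the example, not terms defined by this Act.

```python
from datetime import datetime, timedelta

# Reporting windows under sub-sections (3) and (4): 48 hours for
# high-risk AI systems, 7 days for all other systems.
REPORTING_WINDOWS = {
    "high-risk": timedelta(hours=48),
    "other": timedelta(days=7),
}

def reporting_deadline(aware_at: datetime, risk_level: str) -> datetime:
    """Latest time by which an incident must be reported, measured from
    when the entity became aware of it."""
    key = "high-risk" if risk_level == "high-risk" else "other"
    return aware_at + REPORTING_WINDOWS[key]

# A high-risk incident noticed on 1 Jan 2025 at 09:00 must be reported
# by 3 Jan 2025 at 09:00.
print(reporting_deadline(datetime(2025, 1, 1, 9, 0), "high-risk"))
```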

(5) All incident reports shall be submitted to a central repository established and maintained by the IAIC.

(6) The IAIC shall collect, analyse, and share incident data from this repository to identify trends, potential risks, and develop mitigation strategies.
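Illustrative note (non-normative): one minimal way trends could be surfaced from repository data under sub-sections (5) and (6), assuming hypothetical record fields (severity, system_type) that this Act does not prescribe.

```python
from collections import Counter

# Hypothetical records as they might sit in the IAIC repository; the
# field names are assumptions for the example.
incidents = [
    {"severity": "critical", "system_type": "credit-scoring"},
    {"severity": "high", "system_type": "chatbot"},
    {"severity": "high", "system_type": "credit-scoring"},
]

# Tally incidents by severity and by system type to surface trends.
by_severity = Counter(rec["severity"] for rec in incidents)
by_system = Counter(rec["system_type"] for rec in incidents)

print(by_severity.most_common())  # [('high', 2), ('critical', 1)]
print(by_system.most_common())    # [('credit-scoring', 2), ('chatbot', 1)]
```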

(7) The IAIC shall publish guidelines on incident reporting requirements, including the following; illustrative sketches of these requirements appear after this sub-section:

(i) Criteria for determining incident severity:

(a) Critical: Incidents involving high-risk AI systems posing an imminent threat to human life, safety, or fundamental rights;

(b) High: Incidents causing significant harm, disruption, or financial loss;

(c) Medium: Incidents with moderate impact or potential for risk escalation;

(d) Low: Incidents with minimal impact.

(ii) Information to be provided in incident reports:

(a) Detailed description of the incident and its impact;

(b) Details of the AI system (type, use case, risk level, deployment stage);

(c) For high-risk AI systems: Root cause analysis, mitigation actions, and supporting data.

(iii) Timelines and procedure for reporting:

(a) Critical incidents involving high-risk AI systems must be reported within 48 hours;

(b) High or medium severity incidents must be reported within 7 days if involving high-risk AI systems, and within 14 days for all other systems;

(c) Low severity incidents must be reported monthly.

(iv) Confidentiality measures for incident data. All AI systems shall implement:

(a) Data encryption at rest and in transit;

(b) Role-based access controls for incident data;

(c) Audit logs of all data access;

(d) Secure communication channels for data transmission;

(e) Data retention in line with requirements under applicable cyber security and data protection frameworks;

(f) Regular risk assessments on data confidentiality;

(g) Employee training on data protection and handling.

(v) In addition, all high-risk AI systems shall implement:

(a) Proper encryption key management practices;

(b) Encryption for removable media with incident data;

(c) Multi-factor authentication for data access;

(d) Physical security controls for data storage;

(e) Redaction or anonymization of personal information;

(f) Secure data disposal mechanisms;

(g) Periodic external audits on confidentiality;

(h) Disciplinary actions for violations.

(vi) The following measures are optional for low-risk AI systems:

(a) Key management practices (recommended);

(b) Removable media encryption (as needed);

(c) Multi-factor authentication (recommended);

(d) Physical controls (based on data sensitivity);

(e) Personal data redaction (as applicable);

(f) Secure disposal mechanisms (recommended).
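Illustrative note (non-normative): a sketch encoding the severity tiers of sub-clause (i) and the timelines of sub-clause (iii). The enum and function names are assumptions; the text does not specify a window for critical incidents in systems that are not high-risk, so the sketch raises an error for that case.

```python
from enum import Enum
from datetime import timedelta

class Severity(Enum):
    CRITICAL = "critical"  # imminent threat to life, safety, or fundamental rights
    HIGH = "high"          # significant harm, disruption, or financial loss
    MEDIUM = "medium"      # moderate impact or potential for risk escalation
    LOW = "low"            # minimal impact

def reporting_window(severity: Severity, high_risk: bool) -> timedelta | None:
    """Reporting deadline per sub-clause (iii); None means the incident
    goes into the monthly batch rather than being reported individually."""
    if severity is Severity.CRITICAL:
        # (iii)(a) addresses only critical incidents involving high-risk
        # systems; the non-high-risk case is not specified in the text.
        if high_risk:
            return timedelta(hours=48)
        raise ValueError("window unspecified for critical incidents in non-high-risk systems")
    if severity in (Severity.HIGH, Severity.MEDIUM):
        return timedelta(days=7) if high_risk else timedelta(days=14)
    return None  # (iii)(c): low-severity incidents are reported monthly
```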
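Illustrative note (non-normative): the report contents of sub-clause (ii) expressed as a record type, with a validation rule for the additional high-risk material in (ii)(c). All field names are illustrative, not prescribed by this Act.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """Report fields per sub-clause (ii); names are illustrative."""
    description: str                 # (ii)(a) incident description and impact
    system_type: str                 # (ii)(b) type of AI system
    use_case: str                    # (ii)(b) use case
    risk_level: str                  # (ii)(b) e.g. "high-risk" or "other"
    deployment_stage: str            # (ii)(b) e.g. "pilot", "production"
    root_cause_analysis: str = ""    # (ii)(c) high-risk systems only
    mitigation_actions: list[str] = field(default_factory=list)
    supporting_data: dict = field(default_factory=dict)

    def validate(self) -> None:
        """High-risk reports must carry the additional (ii)(c) material."""
        if self.risk_level == "high-risk" and not (
            self.root_cause_analysis and self.mitigation_actions
        ):
            raise ValueError(
                "high-risk reports require root cause analysis and mitigation actions"
            )
```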
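Illustrative note (non-normative): the confidentiality controls of sub-clauses (iv) through (vi) arranged as a checklist by risk tier. Under (vi) the additional controls are merely recommended or conditional for low-risk systems, so only the mandatory sets are returned here.

```python
# Baseline controls required of all AI systems under sub-clause (iv).
BASELINE_CONTROLS = [
    "encryption at rest and in transit",
    "role-based access controls for incident data",
    "audit logs of all data access",
    "secure communication channels",
    "retention per cyber security and data protection frameworks",
    "regular confidentiality risk assessments",
    "employee training on data protection",
]

# Additional controls mandatory for high-risk systems under (v), but only
# recommended or conditional for low-risk systems under (vi).
ADDITIONAL_CONTROLS = [
    "encryption key management",
    "removable media encryption",
    "multi-factor authentication",
    "physical security controls",
    "personal data redaction/anonymization",
    "secure data disposal",
]

def required_controls(risk_level: str) -> list[str]:
    """Controls that are mandatory for a given risk tier."""
    if risk_level == "high-risk":
        # (v) also mandates external audits and disciplinary measures.
        return BASELINE_CONTROLS + ADDITIONAL_CONTROLS + [
            "periodic external confidentiality audits",
            "disciplinary actions for violations",
        ]
    return list(BASELINE_CONTROLS)
```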


(8) All AI system developers, operators, and users shall implement the following minimum mitigation actions upon becoming aware of an incident; an illustrative sketch of this sequence follows the list:

(i) Assess the incident severity based on IAIC guidelines;

(ii) Contain the incident through isolation, disabling functions, or other measures;

(iii) Investigate the root cause of the incident;

(iv) Remediate the incident through updates, security enhancements, or personnel training;

(v) Communicate incident details and mitigation actions to impacted parties;

(vi) Review and improve internal incident response procedures.
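Illustrative note (non-normative): the mitigation sequence of sub-section (8) as a runnable sketch. Every step function is a stub standing in for an operator's own tooling; none of these names are defined by this Act.

```python
def assess(incident: dict) -> str:
    # (i) Assess severity based on IAIC guidelines (stubbed).
    return incident.get("severity", "medium")

def contain(incident: dict) -> None:
    # (ii) Isolate the system or disable affected functions (stubbed).
    print(f"containing incident {incident['id']}")

def investigate(incident: dict) -> str:
    # (iii) Determine the root cause (stubbed).
    return "root cause pending analysis"

def remediate(incident: dict, root_cause: str) -> None:
    # (iv) Apply updates, security enhancements, or training (stubbed).
    print(f"remediating {incident['id']}: {root_cause}")

def communicate(incident: dict) -> None:
    # (v) Notify impacted parties of details and mitigation actions (stubbed).
    print(f"notifying impacted parties for {incident['id']}")

def run_response(incident: dict) -> None:
    severity = assess(incident)
    contain(incident)
    root_cause = investigate(incident)
    remediate(incident, root_cause)
    communicate(incident)
    # (vi) Review and improve internal procedures after closure.
    print(f"post-incident review scheduled (severity: {severity})")

run_response({"id": "INC-001", "severity": "high"})
```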


(9) For AI systems exempted from certification under sub-section (3) of Section 11, the following guidelines shall apply regarding incident reporting and response protocols:

(i) Voluntary Incident Reporting: Developers, operators, and users of exempted AI systems are encouraged, but not required, to establish mechanisms for incident reporting related to such systems.

(ii) Focus on High/Critical Incidents: In cases where incident reporting mechanisms are established, the focus shall be on reporting high-severity or critical incidents that pose a clear potential for harm or adverse impact.

(iii) Reasonable Timelines: For high or critical incidents involving exempted AI systems, developers shall report such incidents to the IAIC within a reasonable timeline of 14 to 30 days of becoming aware of the incident; a sketch of this window follows this list.

(iv) Incident Description: Incident reports for exempted AI systems shall primarily include a description of the incident, its perceived severity and impact, and details about the AI system itself (type, use case, risk classification).

(v) Confidentiality Measures: Developers of exempted AI systems shall implement confidentiality measures for incident data that are proportionate to the data sensitivity and potential risks involved.

(vi) Coordinated Disclosure: The IAIC shall establish coordinated disclosure programs to facilitate responsible reporting and remediation of vulnerabilities or incidents related to exempted AI systems.

(vii) Knowledge Sharing: The IAIC shall maintain a knowledge base of reported incidents involving exempted AI systems and share anonymized information to promote learning and improve incident response practices.
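Illustrative note (non-normative): a sketch of the 14 to 30 day window in sub-clause (iii), treating 14 days as the target and 30 days as the outer bound; that split is an assumption, as the text only states a range.

```python
from datetime import datetime, timedelta

# Sub-clause (9)(iii): high/critical incidents involving exempted systems
# are to be reported within a "reasonable timeline" of 14 to 30 days.
TARGET = timedelta(days=14)       # earlier end of the reasonable window
OUTER_BOUND = timedelta(days=30)  # latest acceptable report date

def exempt_reporting_dates(aware_at: datetime) -> tuple[datetime, datetime]:
    """Return (target date, outer bound) for an exempted-system report."""
    return aware_at + TARGET, aware_at + OUTER_BOUND

target, bound = exempt_reporting_dates(datetime(2025, 3, 1))
print(target.date(), bound.date())  # 2025-03-15 2025-03-31
```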


(10) The IAIC shall provide support and resources to AI entities on request for effective incident mitigation, prioritizing high-risk AI incidents.

(11) The IAIC shall have the power to audit AI entities and impose penalties for non-compliance with this Section as per the provisions of this Act.

Related Indian AI Regulation Sources

Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules)

Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market participants

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021)

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023)

Digital Personal Data Protection Act, 2023 (DPDPA)

Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules)
