
CHAPTER VII: REPORTING AND SHARING

Section 18 - Third-Party Vulnerability Reporting

[***]

Section 19 - Incident Reporting and Mitigation Protocols

(1) All developers, operators, and users of AI systems shall establish mechanisms for reporting incidents related to such AI systems.

(2) Incident reporting mechanisms must be easily accessible, user-friendly, and secure, such as a dedicated hotline, online portal, or email address.

(3) Incidents involving high-risk AI systems shall be treated as a priority and reported immediately, and in no event later than 48 hours after becoming aware of the incident.

(4) For other AI systems, incidents must be reported within 7 days of becoming aware of such incidents.

(5) All incident reports shall be submitted to a central repository established and maintained by the IAIC.

(6) The IAIC shall collect, analyse, and share incident data from this repository to identify trends, potential risks, and develop mitigation strategies.

(7) The IAIC shall publish guidelines on incident reporting requirements, including:

(i) Criteria for determining incident severity:

(a) Critical: Incidents involving high-risk AI systems posing an imminent threat to human life, safety, or fundamental rights;

(b) High: Incidents causing significant harm, disruption, or financial loss;

(c) Medium: Incidents with moderate impact or potential for risk escalation;

(d) Low: Incidents with minimal impact.

(ii) Information to be provided in incident reports:

(a) Detailed description of the incident and its impact;

(b) Details of the AI system (type, use case, risk level, deployment stage);

(c) For high-risk AI systems: Root cause analysis, mitigation actions, and supporting data.

(iii) Timelines and procedure for reporting:

(a) Critical incidents with high-risk AI systems must be reported within 48 hours;

(b) High or medium severity incidents must be reported within 7 days if involving high-risk AI systems, and within 14 days for all other systems;

(c) Low severity incidents must be reported monthly.

(iv) Confidentiality measures for incident data, to be implemented for all AI systems:

(a) Encryption of incident data at rest and in transit;

(b) Role-based access controls for incident data;

(c) Audit logs of all access to incident data;

(d) Secure communication channels for data transmission;

(e) Retention of data as per requirements under applicable cyber security and data protection frameworks;

(f) Regular risk assessments of data confidentiality;

(g) Employee training on data protection and handling.

(v) Additional confidentiality measures to be implemented for all high-risk AI systems:

(a) Proper encryption key management practices;

(b) Encryption of removable media containing incident data;

(c) Multi-factor authentication for access to incident data;

(d) Physical security controls for data storage;

(e) Redaction or anonymization of personal information;

(f) Secure data disposal mechanisms;

(g) Periodic external audits of confidentiality controls;

(h) Disciplinary action for violations.

(vi) The following measures are optional for low-risk AI systems:

(a) Key management practices (recommended);

(b) Removable media encryption (as needed);

(c) Multi-factor authentication (recommended);

(d) Physical controls (based on data sensitivity);

(e) Personal data redaction (as applicable);

(f) Secure disposal mechanisms (recommended).

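Note (non-normative illustration): the interaction between the severity classes in clause (i) and the timelines in clause (iii), read with sub-sections (3) and (4), amounts to a simple lookup. The sketch below shows one way a reporting entity's internal compliance tooling might encode those deadlines; the function name, parameters, and the 30-day stand-in for the monthly cycle are illustrative assumptions, not requirements of this Act.

```python
from datetime import timedelta

def reporting_deadline(severity: str, high_risk: bool) -> timedelta:
    """Maximum reporting window per sub-sections (3), (4) and (7)(iii) (illustrative only)."""
    severity = severity.lower()
    if severity == "critical":
        # Critical incidents involve high-risk systems and must be reported within 48 hours.
        return timedelta(hours=48)
    if severity in ("high", "medium"):
        # 7 days where a high-risk AI system is involved, 14 days for all other systems.
        return timedelta(days=7) if high_risk else timedelta(days=14)
    if severity == "low":
        # Low-severity incidents are reported on a monthly cycle; 30 days used as a stand-in.
        return timedelta(days=30)
    raise ValueError(f"unknown severity: {severity!r}")

# Example: a medium-severity incident in a system not classified as high-risk.
print(reporting_deadline("medium", high_risk=False))  # -> 14 days, 0:00:00
```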

(8) All AI system developers, operators, and users shall implement the following minimum mitigation actions upon becoming aware of an incident:

(i) Assess the incident severity based on IAIC guidelines;

(ii) Contain the incident through isolation, disabling functions, or other measures;

(iii) Investigate the root cause of the incident;

(iv) Remediate the incident through updates, security enhancements, or personnel training;

(v) Communicate incident details and mitigation actions to impacted parties;

(vi) Review and improve internal incident response procedures.

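Note (non-normative illustration): sub-section (8) effectively prescribes an ordered response workflow. The sketch below shows how an operator's internal tooling might record completion of those steps in sequence; all class, field, and identifier names are hypothetical and not prescribed by this Act.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class MitigationStep(Enum):
    ASSESS = auto()       # (i)   assess severity per IAIC guidelines
    CONTAIN = auto()      # (ii)  isolate or disable affected functions
    INVESTIGATE = auto()  # (iii) establish the root cause
    REMEDIATE = auto()    # (iv)  updates, security enhancements, training
    COMMUNICATE = auto()  # (v)   notify impacted parties
    REVIEW = auto()       # (vi)  improve internal response procedures

@dataclass
class IncidentRecord:
    incident_id: str
    severity: str
    completed: list = field(default_factory=list)

    def complete(self, step: MitigationStep) -> None:
        """Record a step as done, enforcing the (i)-(vi) ordering."""
        if len(self.completed) >= len(MitigationStep):
            raise ValueError("all mitigation steps already completed")
        expected = list(MitigationStep)[len(self.completed)]
        if step is not expected:
            raise ValueError(f"expected {expected.name} before {step.name}")
        self.completed.append(step)

# Example usage: the first two steps for a high-severity incident.
record = IncidentRecord("INC-001", severity="high")
record.complete(MitigationStep.ASSESS)
record.complete(MitigationStep.CONTAIN)
```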

(9) For AI systems exempted from certification under sub-section (3) of Section 11, the following guidelines shall apply regarding incident reporting and response protocols:

(i) Voluntary Incident Reporting: Developers, operators and users of exempted AI systems are encouraged, but not required, to establish mechanisms for incident reporting related to such systems.

(ii) Focus on High-Severity and Critical Incidents: In cases where incident reporting mechanisms are established, the focus shall be on reporting high-severity or critical incidents that pose a clear potential for harm or adverse impact.

(iii) Reasonable Timelines: For high/critical incidents involving exempted AI systems, developers shall report such incidents to the IAIC within a reasonable timeline of 14-30 days from becoming aware of the incident.

(iv) Incident Description: Incident reports for exempted AI systems shall primarily include a description of the incident, its perceived severity and impact, and details about the AI system itself (type, use case, risk classification).

(v) Confidentiality Measures: Developers of exempted AI systems shall implement confidentiality measures for incident data that are proportionate to the data sensitivity and potential risks involved.

(vi) Coordinated Disclosure: The IAIC shall establish coordinated disclosure programs to facilitate responsible reporting and remediation of vulnerabilities or incidents related to exempted AI systems.

(vii) Knowledge Sharing: The IAIC shall maintain a knowledge base of reported incidents involving exempted AI systems and share anonymized information to promote learning and improve incident response practices.


(10) The IAIC shall, on request, provide support and resources to AI entities for effective incident mitigation, prioritizing incidents involving high-risk AI systems.

(11) The IAIC shall have the power to audit AI entities and impose penalties for non-compliance with this Section as per the provisions of this Act.


Section 20 - Responsible Information Sharing

[***]

Section 20A - Transparency and Accountability in AI-related Government Initiatives and Public-Private Partnerships

(1) This section applies to all AI-related initiatives undertaken by any governmental body, statutory authority, public sector entity, or public-private partnership (PPP) involving AI technologies for public services or infrastructure.


(2) Transparency Requirements: All entities under this section must comply with the Right to Information Act, 2005, by publicly disclosing the following information about AI initiatives:

(i) A clear statement of the project’s purpose and expected outcomes;

(ii) Details of funding, including public funds, subsidies, or PPP financial arrangements;

(iii) Summaries of risk assessments addressing privacy, security, and ethical impacts;

(iv) Descriptions of algorithms used in decision-making for public services, including their purpose and functionality;

(v) Key performance indicators (KPIs) to evaluate the AI system’s effectiveness.


(3) Additional Obligations for Public-Private Partnerships (PPPs): PPPs involving AI technologies must:

(i) Disclose key contractual terms, including payment structures, risk allocation, and responsibilities of each party;

(ii) Provide public access to data generated by AI systems in public service contexts, unless restricted under Section 8 of the RTI Act, 2005, or Section 6 of the DPDP Act, 2023;

(iii) Conduct annual independent audits to verify compliance with ethical standards and performance metrics, and publish the audit results.


(4) Algorithmic Accountability: AI systems used in government or PPP initiatives that impact individuals’ rights or access to public services must:

(i) Provide written explanations of algorithmic decisions upon request by affected individuals;

(ii) Document and disclose measures to prevent algorithmic bias, including details of data selection and validation processes;

(iii) Conduct and publish impact assessments before deployment, evaluating risks to vulnerable populations.


(5) Before launching large-scale AI projects or entering PPPs involving AI, the responsible government body must:

(i) Hold public consultations with stakeholders, including civil society, industry experts, academics, and affected communities;

(ii) Publish a summary of consultation feedback and explain how it was incorporated into the project plan.


(6) All entities under this section must submit an annual report to the Indian Artificial Intelligence Council (IAIC), to be published on official government websites, detailing:

(i) Progress on AI projects;

(ii) Results of audits or impact assessments;

(iii) Incidents of AI misuse or failure, with corrective actions taken;

(iv) Measures implemented to address transparency, accountability, and ethical concerns.


(7) Exemptions: Information may be withheld from disclosure if it:

(i) Compromises national security;

(ii) Violates personal privacy under the DPDP Act, 2023;

(iii) Interferes with ongoing investigations or enforcement actions;

(iv) Conflicts with legitimate use purposes as defined under Section 6 of the DPDP Act, 2023, read with Section 8 of the RTI Act, 2005.
