
Section 17 – Post-Deployment Monitoring of High-Risk AI Systems


(1)   High-risk AI systems, as classified under sub-section (4) of Section 7, shall be subject to ongoing monitoring and evaluation throughout their lifecycle to ensure their safety, security, reliability, transparency and accountability.

(2)   The post-deployment monitoring shall be conducted by the providers, deployers, or users of the high-risk AI systems, as appropriate, in accordance with the guidelines established by the IAIC.

(3)   The IAIC shall develop and establish comprehensive guidelines for the post-deployment monitoring of high-risk AI systems, which may include, but not be limited to, the following:

(i)    Identification and assessment of potential risks, including:

(a)    performance deviations,

(b)   malfunctions,

(c)    unintended consequences,

(d)   security vulnerabilities, and

(e)    data breaches;

 

(ii)   Evaluation of the effectiveness of risk mitigation measures and implementation of necessary updates, corrections, or remedial actions;

(iii) Continuous improvement of the AI system’s performance, reliability, and trustworthiness based on real-world feedback and evolving best practices; and

(iv)  Regular reporting to the IAIC on the findings and actions taken as a result of the post-deployment monitoring, including any incidents, malfunctions, or adverse impacts identified, and the measures implemented to address them.

 

(4)   The post-deployment monitoring facilitated by the IAIC shall involve collaboration and coordination among providers, deployers, users, and sector-specific regulatory authorities to ensure a comprehensive and inclusive approach to AI system oversight.

