
Chapter VI: ON GUIDANCE PRINCIPLES AND MONITORING


Section 15 - Guidance Principles for AI-related Agreements

(1)   The following guidance principles shall apply to AI-related agreements to promote transparent, fair, and responsible practices in the development, deployment, and use of AI technologies:

 

(i)    AI Software License Agreement (ASLA):

(a)    The AI Software License Agreement (ASLA) shall be mandatory for AI systems classified as AI-Pro or AI-Com as per Section 6, if they are designated as High Risk AI systems under Section 7.  

(b)   The ASLA shall clearly define:

(i)     The scope of rights granted to the licensee, including limitations on use, modification, and distribution of the AI software;

(ii)   Intellectual property rights and ownership provisions;

(iii)  Term, termination, warranties, and indemnification clauses.

 

(ii)   AI Service Level Agreement (AI-SLA):

(a)    The AI Service Level Agreement (AI-SLA) shall be mandatory for AI systems classified as AIaaS or AI-Com as per Section 6, if they are designated as High Risk or Medium Risk AI systems under Section 7.     

(b)   The AI-SLA shall establish:

(i)     Service levels, performance metrics, availability, and support commitments;

(ii)   Monitoring, measurement, change management, and problem resolution mechanisms;

(iii)  Data handling, security, and business continuity requirements.

 

(iii) AI End-User License Agreement (AI-EULA) or AI End-Client License Agreement (AI-ECLA):

(a)    The AI End-User License Agreement (AI-EULA) or AI End-Client License Agreement (AI-ECLA), as applicable, shall be mandatory for all AI system classifications intended for end-user or client deployment;

(b)   The AI-EULA or AI-ECLA shall specify:

(i)     Permitted uses and user obligations;

(ii)   Data privacy provisions aligned with the Digital Personal Data Protection Act, 2023 and other cyber and data protection frameworks;

(iii)  Intellectual property rights, warranties, and liability limitations.

 

(iv)  AI Explainability Agreement (AI-ExA):

(a)    The AI Explainability Agreement (AI-ExA) shall be mandatory for all High Risk AI systems under Section 7;

(b)   The AI-ExA shall specify:

(i)     Clear and understandable explanations for AI system outputs and decisions;

(ii)   Documentation and reporting on the AI system’s decision-making processes;

(iii)  Provisions for human review and intervention mechanisms.

 

(2)   The following agreements shall be voluntary in nature, but are recommended for adoption by entities engaged in the deployment of AI systems:

(i)    An AI Data Licensing Agreement, which shall govern the terms and conditions for licensing data sets used for training, testing, and validating AI systems;

(ii)   An AI Model Licensing Agreement, which shall cover the licensing of pre-trained AI models or model components for use in developing or deploying AI systems;

(iii) An AI Collaboration Agreement, which shall facilitate collaboration between multiple parties, such as research institutions, companies, or individuals, in the development or deployment of AI systems;

(iv)  An AI Consulting Agreement, which shall govern the terms and conditions under which an AI expert or consulting firm provides advisory services, technical assistance, or training related to the development, deployment, or use of AI systems;

(v)   An AI Maintenance and Support Agreement, which shall define the terms and conditions for ongoing maintenance, support, and updates for AI systems.

 

(3)   Agreements that are mandatory in nature must include provisions addressing the following:

(i)    Requirements for post-deployment monitoring of AI systems classified as High Risk AI systems;

(ii)   Protocols for incident reporting and response in the event of any issues or incidents related to the AI system;

(iii) Penalties or consequences for non-compliance with the terms of the agreement or any applicable laws or regulations.

 

(4)   The Indian Artificial Intelligence Council (IAIC) shall develop and publish model AI-related agreements incorporating these guidance principles, taking into account the unique characteristics and risks associated with different types of AI systems, such as:

(i)    The inherent purpose of the AI system, as determined by the conceptual classifications outlined in Section 4;

(ii)   The technical features and limitations of the AI system, as specified in Section 5;

(iii) The commercial factors associated with the AI system, as outlined in Section 6;

(iv)  The risk level of the AI system, as classified under Section 7.

 

(5)   Consultative Principles on AI Value & Supply Chain

(i)    Entities involved in the development, supply, distribution, and commercialization of AI technologies are encouraged to adopt the following consultative principles to guide their contractual practices when forming agreements:

(a)    Transparency in Ownership & Intellectual Property: Agreements should clearly define ownership rights over trained models, derived outputs, and any intellectual property generated during the development process.

 

Illustration

 

An AI startup develops a machine learning model for financial trading in collaboration with a large financial institution. The contract specifies that while the startup retains ownership of the trained model, the financial institution has exclusive rights to use the model’s outputs for its internal trading operations.

 

(b)   Liability & Risk Allocation: Contracts should clearly allocate risks associated with potential failures in AI systems, ensuring that liability is fairly distributed among stakeholders based on their role in the value chain.

 

Illustration

 

A manufacturer integrates an AI-powered quality control system into its production line. The contract with the AI provider specifies that if the system fails due to faulty training data provided by a third-party vendor, liability will be shared between the AI provider and the vendor, based on their respective contributions to the failure.

 

(c)    Data Sharing & Privacy Protections: Contracts should include provisions for secure data sharing between entities while ensuring compliance with the Digital Personal Data Protection Act, 2023.

 

Illustration

 

A healthcare provider contracts with an AI company to develop a diagnostic tool using patient data. The contract includes detailed clauses on anonymizing patient data before sharing it with the AI company, ensuring compliance with the Digital Personal Data Protection Act, 2023.

 

(ii)   The IAIC may issue advisories on best practices for drafting contracts related to different stages of the value & supply chains. These advisories will provide sector-specific guidance on how these principles can be applied effectively without imposing mandatory requirements.

(iii) While these principles are not mandatory under this Act, entities are encouraged to adopt them as part of a self-regulatory framework. Voluntary adherence to these principles can help foster trust among stakeholders and promote responsible commercialization of AI technologies.

 

(6)   Entities engaged in the development, deployment, or use of AI systems may adopt and customize the model templates provided by the IAIC to suit their specific contexts and requirements.

(7)   The IAIC may mandate the use of model agreements for certain high-risk sectors, high-risk use cases as per Section 6, or types of entities, where the potential risks associated with the AI system are deemed significant.

(8)   The model agreements shall be reviewed and updated periodically to reflect advancements in AI technologies, evolving best practices, and changes in the legal and regulatory landscape.

 

Section 16 - Guidance Principles for AI-related Corporate Governance

(1)   Entities involved in the development, deployment, and use of artificial intelligence (AI) techniques, tools or methods across their governance structures and decision-making processes must adhere to the following guidance principles as per the National Artificial Intelligence Ethics Code under Section 13:

(i)    Accountability and Responsibility:

(a)    Clear accountability for decisions and actions involving the use of AI techniques must be maintained within the organisation by the appropriate leadership or management.

(b)   Robust governance frameworks must be established to assign roles, responsibilities and oversight mechanisms related to the development, deployment and monitoring of AI systems used for corporate governance purposes.

 

(ii)   Transparency and Explainability:

(a)    AI systems used to aid corporate decision-making must employ transparent models and techniques that enable interpretability of their underlying logic, data inputs and decision rationales;

(b)   Comprehensive documentation must be maintained on the AI system’s architecture, training data, performance metrics and potential limitations or biases;

(c)    Internal policies, directives and guidelines must be established by entities to enable impacted stakeholders to access explanations of how AI-driven decisions were made and what factors influenced those decisions.

 

(iii) Human Agency and Oversight:

(a)   The use of AI techniques in corporate governance must be subject to meaningful human control, oversight and the ability to intervene in or override AI system outputs when necessary;

(b)   Appropriate human review mechanisms must be implemented, particularly for high-stakes decisions impacting all relevant stakeholders, including employees, shareholders, customers, and the public interest;

(c)   Company or organisation policies must clearly define the roles and responsibilities of humans versus AI systems in governance and decision-making processes.

 

(iv)  Intellectual Property and Ownership Considerations:

(a)   Corporate entities should establish clear policies and processes for determining ownership, attribution, and intellectual property rights over AI-generated content, inventions, and innovations.

(b)   These policies should recognize and protect the contributions of human creators, inventors, and developers involved in the development and deployment of AI systems.

(c)    Corporations should balance the need for incentivizing innovation through intellectual property protections with the principles of transparency, accountability, and responsible use of AI technologies.

 

(v)   Encouraging Open Source Adoption:

(a)    Companies and organisations are encouraged to leverage open-source software (OSS) and open standards in the development and deployment of AI systems, where appropriate.

(b)   The use of OSS can promote transparency, collaboration, and innovation in the AI ecosystem while ensuring compliance with applicable laws, regulations, and ethical principles outlined in Section 13.

(c)    Companies and organisations should contribute to and participate in open-source AI communities, fostering knowledge sharing and collective advancement of AI technologies.

 

(2)   For the purposes of these Guidance Principles, artificial intelligence (AI) techniques, tools or methods across governance structures and decision-making processes shall refer to:

(i)     AI systems that replicate or emulate human decision-making abilities through autonomy, perception, reasoning, interaction, adaptation and creativity, as evaluated under the Anthropomorphism-Based Concept Classification (ABCC) described in sub-section (5) of Section 4;

(ii)   AI systems whose development, deployment and utilization within corporate governance structures necessitates the evaluation and mitigation of potential ethical risks and implications, in accordance with the Ethics-Based Concept Classification (EBCC) under sub-section (3) of Section 4;

(iii) AI systems that may impact individual rights such as privacy, due process, non-discrimination as well as collective rights, requiring a rights-based assessment as per the Phenomena-Based Concept Classification (PBCC) outlined in sub-section (4) of Section 4;

(iv)  General Purpose AI Applications with Multiple Stable Use Cases (GPAIS) that can reliably operate across various governance functions as per the technical classification criteria specified in sub-section (2) of Section 5;

(v)   Specific Purpose AI Applications (SPAI) designed for specialized governance use cases based on the factors described in sub-section (4) of Section 5;

(vi)  AI systems classified as high-risk under sub-section (4) of Section 7 due to their potential for widespread impact, lack of opt-out feasibility, vulnerability factors or irreversible consequences related to corporate governance processes;

(vii)  AI systems classified as medium-risk under sub-section (3) of Section 7 that require robust governance frameworks focused on transparency, explainability and accountability aspects;

(viii) AI systems classified as narrow-risk under sub-section (2) of Section 7, where governance approaches should account for their technical limitations and vulnerabilities.

 

(3)   For AI systems exempted from certification under Section 11(3), companies and organisations may adopt a lean governance approach, focusing on:

(i)    Establishing basic incident reporting and response protocols as per Section 19, without the stringent requirements applicable to high-risk AI systems.

(ii)   Maintaining documentation and ensuring interpretability of the AI systems to the extent feasible, given their limited risk profile.

(iii) Conducting periodic risk assessments and implementing corrective measures as necessary, commensurate with the AI system’s potential impact.

 

(4)   The IAIC may mandate the application of the guidance principles outlined in this section for certain high-risk sectors, high-risk use cases as per Section 6, or types of entities, where the potential risks associated with the AI system are deemed significant.

(5)   The guidance principles shall be reviewed and updated periodically to reflect advancements in AI technologies, evolving best practices, and changes in the legal and regulatory landscape.


Section 17 - Post-Deployment Monitoring of High-Risk AI Systems

(1)   High-risk AI systems as classified in sub-section (4) of Section 7 shall be subject to ongoing monitoring and evaluation throughout their lifecycle to ensure their safety, security, reliability, transparency and accountability.

(2)   The post-deployment monitoring shall be conducted by the providers, deployers, or users of the high-risk AI systems, as appropriate, in accordance with the guidelines established by the IAIC.

(3)   The IAIC shall develop and establish comprehensive guidelines for the post-deployment monitoring of high-risk AI systems, which may include, but not be limited to, the following:

(i)    Identification and assessment of potential risks, which includes:

(a)    performance deviations,

(b)   malfunctions,

(c)    unintended consequences,

(d)   security vulnerabilities, and

(e)    data breaches;

 

(ii)   Evaluation of the effectiveness of risk mitigation measures and implementation of necessary updates, corrections, or remedial actions;

(iii) Continuous improvement of the AI system’s performance, reliability, and trustworthiness based on real-world feedback and evolving best practices; and

(iv)  Regular reporting to the IAIC on the findings and actions taken as a result of the post-deployment monitoring, including any incidents, malfunctions, or adverse impacts identified, and the measures implemented to address them.

 

(4)   The post-deployment monitoring facilitated by the IAIC shall involve collaboration and coordination among providers, deployers, users, and sector-specific regulatory authorities to ensure a comprehensive and inclusive approach to AI system oversight.

(5)   The IAIC shall establish mechanisms for the independent auditing and verification of the post-deployment monitoring activities of high-risk AI systems, as specified in the registration and certification metadata requirements under Section 12. This shall ensure transparency, accountability, and public trust in the governance of such AI systems through:

(i)    Mandatory documentation and reporting by providers on the monitoring protocols, performance metrics, risk mitigation measures, and human oversight mechanisms implemented for their high-risk AI systems;

(ii)   Periodic audits by accredited third-party auditors to validate the accuracy and completeness of the reported information against the certification criteria;

(iii) Public disclosure of audited monitoring reports and key performance indicators, subject to reasonable protection of confidential business information;

(iv)  Enabling mechanisms for relevant stakeholders and impacted communities to submit feedback and concerns regarding the real-world impacts of deployed high-risk AI systems.

(6)   Failure to comply with the post-deployment monitoring requirements and guidelines established by the IAIC may result in penalties as may be prescribed by the IAIC.
