
Section 15 – Guidance Principles for AI-related Agreements

(1)   The following guidance principles shall apply to AI-related agreements to promote transparent, fair, and responsible practices in the development, deployment, and use of AI technologies:

 

(i)    AI Software License Agreement (ASLA):

(a)    The AI Software License Agreement (ASLA) shall be mandatory for AI systems classified as AI-Pro or AI-Com as per Section 6, if they are designated as High Risk AI systems under Section 7.  

(b)   The ASLA shall clearly define:

(i)     The scope of rights granted to the licensee, including limitations on use, modification, and distribution of the AI software;

(ii)   Intellectual property rights and ownership provisions;

(iii)  Term, termination, warranties, and indemnification clauses.

 

(ii)   AI Service Level Agreement (AI-SLA):

(a)    The AI Service Level Agreement (AI-SLA) shall be mandatory for AI systems classified as AIaaS or AI-Com as per Section 6, if they are designated as High Risk or Medium Risk AI systems under Section 7.     

(b)   The AI-SLA shall establish:

(i)     Service levels, performance metrics, availability, and support commitments;

(ii)   Monitoring, measurement, change management, and problem resolution mechanisms;

(iii)  Data handling, security, and business continuity requirements.

 

(iii) AI End-User License Agreement (AI-EULA) or AI End-Client License Agreement (AI-ECLA):

(a)    The AI End-User License Agreement (AI-EULA) or the AI End-Client License Agreement (AI-ECLA), as applicable, shall be mandatory for all AI system classifications intended for end-user or client deployment;

(b)   The AI-EULA or AI-ECLA shall specify:

(i)     Permitted uses and user obligations;

(ii)   Data privacy provisions aligned with the Digital Personal Data Protection Act, 2023 and other cyber and data protection frameworks;

(iii)  Intellectual property rights, warranties, and liability limitations.

 

(iv)  AI Explainability Agreement (AI-ExA):

(a)    The AI Explainability Agreement (AI-ExA) shall be mandatory for all High Risk AI systems under Section 7;

(b)   The AI-ExA shall specify:

(i)     Clear and understandable explanations for AI system outputs and decisions;

(ii)   Documentation and reporting on the AI system’s decision-making processes;

(iii)  Provisions for human review and intervention mechanisms.

 

(2)   The following agreements shall be voluntary in nature, but are recommended for adoption by entities engaged in the deployment of AI systems:

(i)    An AI Data Licensing Agreement, which shall govern the terms and conditions for licensing data sets used for training, testing, and validating AI systems;

(ii)   An AI Model Licensing Agreement, which shall cover the licensing of pre-trained AI models or model components for use in developing or deploying AI systems;

(iii) An AI Collaboration Agreement, which shall facilitate collaboration between multiple parties, such as research institutions, companies, or individuals, in the development or deployment of AI systems;

(iv)  An AI Consulting Agreement, which shall govern the terms and conditions under which an AI expert or consulting firm provides advisory services, technical assistance, or training related to the development, deployment, or use of AI systems;

(v)   An AI Maintenance and Support Agreement, which shall define the terms and conditions for ongoing maintenance, support, and updates for AI systems.

 

(3)   Agreements that are mandatory in nature must include provisions addressing the following:

(i)    Requirements for post-deployment monitoring of AI systems classified as High Risk AI systems;

(ii)   Protocols for incident reporting and response in the event of any issues or incidents related to the AI system;

(iii) Penalties or consequences for non-compliance with the terms of the agreement or any applicable laws or regulations.

 

(4)   The IAIC shall develop and publish model AI-related agreements incorporating these guidance principles, taking into account the unique characteristics and risks associated with different types of AI systems, such as:

(i)    The inherent purpose of the AI system, as determined by the conceptual classifications outlined in Section 4;

(ii)   The technical features and limitations of the AI system, as specified in Section 5;

(iii) The commercial factors associated with the AI system, as outlined in Section 6;

(iv)  The risk level of the AI system, as classified under Section 7.

 

(5)   Consultative Principles on AI Value & Supply Chain

(i)    Entities involved in the development, supply, distribution, and commercialization of AI technologies are encouraged to adopt the following consultative principles when forming agreements. These principles should guide contractual practices:

(a)    Transparency in Ownership & Intellectual Property: Agreements should clearly define ownership rights over trained models, derived outputs, and any intellectual property generated during the development process.

 

Illustration

 

An AI startup develops a machine learning model for financial trading in collaboration with a large financial institution. The contract specifies that while the startup retains ownership of the trained model, the financial institution has exclusive rights to use the model’s outputs for its internal trading operations.

 

(b)   Liability & Risk Allocation: Contracts should clearly allocate risks associated with potential failures in AI systems, ensuring that liability is fairly distributed among stakeholders based on their role in the value chain.

 

Illustration

 

A manufacturer integrates an AI-powered quality control system into its production line. The contract with the AI provider specifies that if the system fails due to faulty training data provided by a third-party vendor, liability will be shared between the AI provider and the vendor, based on their respective contributions to the failure.

 

(c)    Data Sharing & Privacy Protections: Contracts should include provisions for secure data sharing between entities while ensuring compliance with the Digital Personal Data Protection Act, 2023.

 

Illustration

 

A healthcare provider contracts with an AI company to develop a diagnostic tool using patient data. The contract includes detailed clauses on anonymizing patient data before sharing it with the AI company, ensuring compliance with the Digital Personal Data Protection Act, 2023.

 

(ii)   The Indian Artificial Intelligence Council (IAIC) may issue advisories on best practices for drafting contracts related to different stages of the value & supply chains. These advisories will provide sector-specific guidance on how these principles can be applied effectively without imposing mandatory requirements.

(iii) While these principles are not mandatory under this Act, entities are encouraged to adopt them as part of a self-regulatory framework. Voluntary adherence to these principles can help foster trust among stakeholders and promote responsible commercialization of AI technologies.

 

(6)   Entities engaged in the development, deployment, or use of AI systems may adopt and customize the model templates provided by the IAIC to suit their specific contexts and requirements.

(7)   The IAIC may mandate the use of model agreements for certain high-risk sectors, high-risk use cases as per Section 6, or types of entities, where the potential risks associated with the AI system are deemed significant.

(8)   The model agreements shall be reviewed and updated periodically to reflect advancements in AI technologies, evolving best practices, and changes in the legal and regulatory landscape.

