Chapters VIII, VIII-A & IX
Chapter VIII: INTELLECTUAL PROPERTY PROTECTIONS
Section 21 - Intellectual Property Protections
(1) In recognition of the unique challenges and opportunities presented by the development and use of artificial intelligence systems, AI systems shall be protected through a combination of existing intellectual property (IP) rights, such as copyright, patents, and design rights, and new and evolving IP concepts specifically tailored to address the spatial aspects of AI systems.
(2) For the purposes of this Section, “spatial aspects of AI systems” shall refer to the unique capabilities of AI technologies, including but not limited to:
(i) Dynamically adapting and generating novel outputs based on changing inputs, environments, and interactions;
(ii) Operating with varying levels of autonomy in decision-making, task execution, and self-learning;
(iii) Integrating and analysing data from multiple spatial, temporal, and contextual sources;
(iv) Enabling novel applications, services, and experiences leveraging spatial computing technologies.
(3) The objectives of providing a combination of existing intellectual property rights are to:
(i) Encourage innovation by securing enforceable rights for AI developers over their creations, inventions, and generated outputs;
(ii) Enhance interoperability by ensuring contractual arrangements are not unduly hindered by restrictive IP terms;
(iii) Promote fair competition by preventing unauthorized exploitation of AI-related IP assets developed in India;
(iv) Protect individual privacy and data rights by aligning IP protections with provisions under the Digital Personal Data Protection Act, 2023 and other data protection frameworks.
(4) The IAIC shall establish consultative mechanisms, in cooperation with relevant IP authorities and stakeholders, to develop a comprehensive framework for the identification, protection, and enforcement of intellectual property rights related to AI systems, including:
(i) Defining the scope and limitations of combined IP protections for AI systems and their spatial aspects;
(ii) Assessing the compatibility of such protections with existing IP laws and international treaties;
(iii) Addressing interoperability considerations to enable seamless integration and data exchange among AI systems;
(iv) Examining IP implications of AI systems’ ability to process, learn from, and generate content based on copyrighted works or patented inventions;
(v) Developing guidelines for determining authorship, inventorship, and ownership of AI-generated content and innovations;
(vi) Establishing protocols for rights management, licensing, and commercialization of AI-related IP assets.
(5) The use of open-source software in AI systems shall be subject to the terms and conditions of the respective open-source licenses, with the IAIC providing guidance on compatibility between such licenses and the IP protections framework for AI systems.
(6) The IAIC shall periodically review and update the IP protections framework to accommodate advancements in AI technologies, evolving legal and regulatory landscapes, and emerging best practices in the field of AI and spatial computing.
CHAPTER VIII-A: INTERNATIONAL COOPERATION FRAMEWORK
Section 21A – Data Classification and Localisation Requirements
(1) The Central Government shall establish a data classification and tiering system that defines storage, access, and transfer requirements based on data sensitivity and strategic importance. The system shall include the following tiers:
(i) Tier 1: Critical National Security Data
(a) Characteristics: Includes data with direct national security implications, sensitive government infrastructure data, critical defence information, and biometric/sensitive personal identification data.
(ii) Tier 2: Strategic Sectoral Data
(a) Strategic Sectors Designated:
(i) Healthcare;
(ii) Financial Services;
(iii) Critical Infrastructure; and
(iv) Emerging Technology Research.
(iii) Tier 3: Commercial and Research Data
(a) Characteristics: Includes non-sensitive commercial data, academic and research collaboration data, and open-source AI training datasets.
(2) To promote responsible data management and adherence to localisation requirements among companies, the Central Government shall provide incentives aligned with the entity’s AI classification under Chapter II. Incentives include:
(i) Tax Benefits: Available for entities compliant with localisation protocols, with additional consideration given based on the AI system’s classification type under the commercial methods of classification in Section 6.
(ii) Expedited Cross-Border Approvals: Reserved for institutions with demonstrated responsible cross-border data management, particularly those operating high-risk AI systems or classified under AI-IaaS and AI-Com as per methods of classification in Section 5 due to their integration with sensitive digital infrastructure.
(iii) Recognition Certificates for Exemplary Management Practices: Granted to institutions that demonstrate best practices in data management, security, and AI governance, taking into account methods of classification in Sections 5 and 7.
(3) The framework shall be rolled out in phases over 24 months and include:
(i) Regular review and recalibration to adapt to emerging technological and policy challenges.
(ii) Stakeholder consultation mechanisms to incorporate feedback from industry, academia, and government entities.
(iii) Capacity building programs to support entities in implementing and maintaining compliance with these standards.
Chapter IX: SECTOR-NEUTRAL & SECTOR-SPECIFIC STANDARDS
Section 22 - Shared Sector-Neutral & Sector-Specific Standards
(1) The IAIC shall coordinate the implementation and review of the following sector-neutral standards for the responsible development, deployment, and use of AI systems:
(i) Fundamental Principles of Liability as outlined in sub-sections (2), (3), and (4);
(2) Liability for harm or damage caused by an AI system shall be allocated based on the following principles:
(i) The party that developed, deployed, or operated the AI system shall be primarily liable for any harm or damage caused by the system, taking into account the system’s classification under the conceptual, technical, commercial, and risk-based methods.
(ii) Liability may be shared among multiple parties involved in the AI system’s lifecycle, based on their respective roles and responsibilities, as well as the system’s classification and associated requirements under Sections 8 and 9.
(iii) End-users shall not be held liable for harm or damage caused by an AI system, unless they intentionally misused or tampered with the system, or failed to comply with user obligations specified based on the system’s classification.
(3) To determine and adjudicate liability for harm caused by AI systems, the following factors shall be considered:
(i) The foreseeability of the harm, in light of the AI system’s intended purpose as identified by the Issue-to-Issue Concept Classification (IICC) under Section 4(2), its capabilities as specified in the Technical Classification under Section 5, and its limitations according to the Risk Classification under Section 7;
(ii) The degree of control exercised over the AI system, considering the human oversight and accountability requirements tied to its Risk Classification under Section 7, particularly the principles of Human Agency and Oversight as outlined in Section 13;
(4) Developers and operators of AI systems shall be required to obtain liability insurance to cover potential harm or damage caused by their AI systems. The insurance coverage shall be proportionate to the risk levels and potential impacts of the AI systems, as determined under the Risk Classification framework in Section 7, and the associated requirements for high-risk AI systems outlined in Section 9. This insurance policy shall ensure that compensation is available to affected individuals or entities in cases where liability cannot be attributed to a specific party.
(5) The IAIC shall enable coordination among sector-specific regulators for the responsible development, deployment, and use of AI systems in sector-specific contexts based on the following set of principles:
(i) Transparency and Explainability:
(a) AI systems should be designed and developed in a transparent manner, allowing users to understand how they work and how decisions are made.
(b) AI systems should be able to explain their decisions in a clear and concise manner, allowing users to understand the reasoning behind their outputs.
(c) Developers should provide clear documentation and user guides explaining the AI system’s capabilities, limitations, and potential risks.
(d) The level of transparency and explainability required may vary based on the AI system’s risk classification and intended use case.
(ii) Fairness and Bias:
(a) AI systems should be regularly monitored, in their sociotechnical context, for bias and discrimination, and appropriate mitigation measures should be implemented to address any identified issues.
(b) Developers should ensure that training data is diverse, representative, and free from biases that could lead to discriminatory outcomes.
(c) Ongoing audits and assessments should be conducted to identify and rectify any emerging biases during the AI system’s lifecycle.
(iii) Safety and Security:
(a) AI systems should be designed and developed with safety and security by design and by default.
(b) AI systems should be protected from unauthorized access, modification, or destruction.
(c) Developers should implement robust security measures, such as encryption, access controls, and secure communication protocols, to safeguard AI systems and their data.
(d) AI systems should undergo rigorous testing and validation to ensure they perform safely and reliably under normal and unexpected conditions.
(e) Developers should establish incident response plans and mechanisms to promptly address any safety or security breaches.
(iv) Human Control and Oversight:
(a) AI systems should be subject to human control and oversight to ensure that they are used responsibly.
(b) There should be mechanisms in place for data principals to intervene in the operation of AI systems if necessary.
(c) Developers should implement human-in-the-loop or human-on-the-loop approaches, allowing for human intervention and final decision-making in critical or high-risk scenarios.
(d) Clear protocols should be established for escalating decisions to human operators when AI systems encounter situations beyond their designed scope or when unexpected outcomes occur.
(e) Regular human audits and reviews should be conducted to ensure AI systems are functioning as intended and aligned with human values and societal norms.
(v) Open Source and Interoperability:
(a) The development of shared sector-neutral standards for AI systems shall leverage open source software and open standards to promote interoperability, transparency, and collaboration.
(b) The IAIC shall encourage the participation of open source communities and stakeholders in the development of AI standards.
(c) Developers should strive to use open source components and frameworks when building AI systems to facilitate transparency, reusability, and innovation.
(d) AI systems should be designed with interoperability in mind, adhering to common data formats, protocols, and APIs to enable seamless integration and data exchange across different platforms and domains.
(e) The IAIC shall promote the development of open benchmarks, datasets, and evaluation frameworks to assess and compare the performance of AI systems transparently.