- Section 23 – Content Provenance and Identification | Indic Pacific
Section 23 – Content Provenance and Identification

(1) AI systems that generate or manipulate content must establish and maintain robust mechanisms for source attribution, origin documentation, and ethical data handling. These mechanisms shall integrate technical measures, human oversight, and compliance with applicable laws to ensure transparency and accountability in the following manner:
(i) Clearly document the origins of all content sources, ensuring that:
(a) Sources are identified with precision, including the website, database, or platform from which data is obtained;
(b) Only publicly available data or data acquired with explicit, documented consent from the data subject is utilised, where such data collection adheres to ethical practices, defined as:
(i) Ensuring transparency by publicly disclosing the purpose, scope, and intended use of data collection, enabling accountability across all applications of the AI system;
(ii) Complying with all applicable laws, including the Digital Personal Data Protection Act, 2023, and respecting the terms of service, intellectual property rights, and access restrictions of data sources, to safeguard the integrity of content generation and manipulation processes;
(iii) Avoiding the collection of sensitive personal data unless strictly necessary, legally permitted, and subject to heightened safeguards, including mandatory risk assessments for applications involving high-stakes decision-making or vulnerable populations;
(iv) Implementing measures to prevent unauthorised access, use, or distribution of the collected data, including the use of anonymisation or pseudonymisation techniques to minimise the risk of re-identification, where:
(a) Anonymisation refers to the irreversible process of transforming data into a form where the data subject cannot be identified, meeting standards of irreversibility as per best practices;
(b) Pseudonymisation refers to replacing identifying characteristics with artificial identifiers, ensuring that re-identification is only possible with additional, securely stored information;
(v) Permitting the use of in-copyright works for text and data mining (TDM) purposes, provided that:
(a) The TDM is conducted for non-commercial research, statistical, or operational optimisation purposes, supporting innovation while respecting the rights of content creators;
(b) The entity has lawful access to the data, either through public availability, consent, or authorised licensing;
(c) The TDM process does not involve the reproduction or distribution of the original copyrighted works beyond what is necessary for the mining process, and appropriate attribution is provided where feasible;
(vi) For AI systems deployed in strategic sectors under applicable regulations, additional compliance with sector-specific data security and national interest requirements shall apply, as prescribed by the relevant authority.
(c) Any use of web scraping adheres to the target website's terms of service and robots.txt protocols, with prior written permission obtained where required.
(ii) Maintain comprehensive and auditable technical documentation of data collection methods used in training datasets, which shall include:
(a) A detailed description of acquisition techniques, such as APIs, manual collection, or automated scraping, ensuring all methods comply with legal and ethical standards;
(b) Evidence of compliance with the Digital Personal Data Protection Act, 2023, for any personal data collected, including records of user consent where applicable;
(c) A commitment to data minimisation, ensuring that only data necessary for the specified purpose is collected and processed.
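Clause (1)(i)(c) requires any web scraping to honour a target website's robots.txt. As a non-authoritative sketch of what such a compliance gate might look like in practice, the check below uses Python's standard urllib.robotparser; the robots.txt content, agent name, and URLs are all hypothetical, not drawn from the Section.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for an illustrative target site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

def build_parser(robots_text: str) -> RobotFileParser:
    """Parse robots.txt rules offline, without a network round-trip."""
    parser = RobotFileParser()
    parser.parse(robots_text.splitlines())
    return parser

def may_fetch(parser: RobotFileParser, agent: str, url: str) -> bool:
    """Return True only when the parsed rules permit this agent to fetch the URL."""
    return parser.can_fetch(agent, url)

parser = build_parser(ROBOTS_TXT)
print(may_fetch(parser, "ExampleBot", "https://example.org/articles/1"))  # path is allowed
print(may_fetch(parser, "ExampleBot", "https://example.org/private/x"))   # path is disallowed
```

A collector honouring this clause would call such a check before every request and skip or seek written permission for any URL the rules disallow.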
(iii) Establish and maintain verifiable records of data provenance, categorising data as follows:
(a) Personal data, processed strictly in accordance with the Digital Personal Data Protection Act, 2023, with documented consent and purpose limitation;
(b) Non-personal data, collected through authorised and transparent methods, ensuring no violation of intellectual property rights or website terms of service;
(c) Synthetic data, generated by the AI system itself, with clear documentation of the generation process to distinguish it from real-world data and prevent misrepresentation.

(2) Accountability for tracking AI-generated content shall be determined by the specific use cases of the AI system, such that for end-users and business end-users of AI systems, accountability and liability for AI-generated content must be examined based on factors such as:
(i) Whether they intentionally misused or tampered with the AI system despite being aware of its key limitations;
(ii) Whether they failed to exercise reasonable care and due diligence in the utilisation of the AI system;
(iii) Whether they knowingly propagated or disseminated AI-generated content that could cause harm.

(3) Intermediaries that host, publish, or make available AI-generated content shall:
(i) Implement non-discriminatory content policies that:
(a) Prohibit demonetisation or de-prioritisation of content solely based on its AI-generated nature when properly watermarked and disclosed;
(b) Maintain parity in content recommendation algorithms between human-created and AI-generated works meeting provenance requirements;
(c) Provide appeal mechanisms for creators affected by automated moderation of AI-generated content.

(4) Watermarking techniques must incorporate machine-readable metadata containing:
(i) Scraping methodology classification;
(ii) Geographic origin of training data sources;
(iii) Licensing status of underlying datasets.

(5) Developers, owners, and operators of AI systems as described in sub-sections (3) to (7) of Section 6 shall obtain and maintain adequate liability insurance coverage proportionate to their commercial classification and risk profile. The coverage must include:
(i) Professional indemnity insurance to cover incidents involving inaccurate, inappropriate or defamatory AI-generated content;
(ii) Cyber risk insurance to cover incidents related to data breaches, network security failures or other cyber incidents involving AI-generated content;
(iii) General commercial liability insurance to cover incidents causing third-party injury, damage or other legally liable scenarios involving AI-generated content;
(iv) Specific coverage for claims arising from data scraping activities conducted in the development, training, or operation of the AI system.

(6) Exceptions for AI-Preview (AI-Pre) Systems: AI systems as described in sub-section (8) of Section 6 shall be exempt from sub-section (5) requirements only if:
(i) The user base remains below 50,000 real-time active testers;
(ii) No personal or sensitive data processing occurs;
(iii) The annual development budget remains under ₹5 crore;
(iv) The system displays prominent "Preview Version" watermarks;
(v) Revenue generation is limited to subscription fees for testing purposes, nominal one-time access fees, or cost recovery mechanisms that do not constitute full commercial deployment, provided that:
(a) Such revenue does not exceed 15% of the developing entity's total annual revenue;
(b) All monetary transactions are clearly disclosed as supporting a preview or test version;
(c) No claims of complete or commercial-grade functionality are made in marketing materials;
(vi) The system is not used to generate, simulate, or manipulate user consent for any purpose;
(vii) All interactions regarding terms of service, permissions, or agreements are conducted without AI intermediation;
(viii) Regular checks or audits verify that the system's inputs and outputs do not engage in preference or opinion manipulation;
(ix) The developer maintains comprehensive logs of all system prompts and responses that could influence user decision-making;
(x) Users are explicitly informed if the system utilises persuasive or preference-shaping techniques in its responses;
(xi) Educational implementations, provided that content generation capabilities are supervised;
(xii) Research applications, provided that in the case of research institutions, centres and firms:
(a) Usage is limited to verified research entities;
(b) Publication of findings adheres to responsible disclosure guidelines;
(c) Basic insurance coverage for potential third-party effects is maintained;
(xiii) Terms and conditions are easily accessible in clear and plain language, and a readily contactable person is designated in accordance with sub-sections (9) and (28) of Section 2 of the Consumer Protection Act, 2019, to handle user queries, complaints, or grievances;
(xiv) Appropriate insurance is maintained for any public-facing implementations.

(7) AI systems as described in sub-section (8) of Section 6 exceeding any criteria in sub-section (6) must:
(i) Obtain insurance within 30 days of the threshold breach;
(ii) Reclassify under the appropriate Section 6 commercial category.

(8) The minimum insurance coverage required for AI content generation systems shall be:
(i) ₹50 crores for AI-S (Artificial Intelligence as a System) and AI-IaaS (Artificial Intelligence-enabled Infrastructure as a Service) under sub-sections (6) and (7) of Section 6 respectively;
(ii) ₹25 crores for AI-Pro (Artificial Intelligence as a Product) and AIaaS (Artificial Intelligence as a Service) under sub-sections (3) and (4) of Section 6 respectively;
(iii) ₹10 crores for AI-Com (Artificial Intelligence as a Component) under sub-section (5) of Section 6;
(iv) ₹2 crores for AI-Pre (Artificial Intelligence for Preview) under sub-section (8) of Section 6 with public-facing implementations.

(9) The IAIC shall establish and maintain a public registry of open-access technical methods to identify and examine AI-generated content, accessible to end-users, business users, and government officials. This registry shall provide clear instructions for using these methods and information on their validity.

(10) This Section shall apply to all AI systems that generate or manipulate content, regardless of the content's purpose or intended use, including AI systems that generate text, images, audio, video, or any other forms of content.

Related Indian AI Regulation Sources:
- Amitabh Bachchan v. Rajat Nagi & Ors., CS(COMM) 819/2022, Delhi High Court, Order dated November 25, 2022
- Advisory on Ethical Use of Social Media and Deepfakes in Elections (May 2024)
- Advisory on Labeling AI-Generated and Synthetic Content in Elections (January 2025)
- Karan Johar v. India Pride Advisory Private Ltd. & Ors. ("Shaadi Ke Director Karan Aur Johar"), COM IPR Suit (L.) No. 17863/2024, Bombay High Court, Order dated March 7, 2025
- Advisory on Enhanced Standards for AI-Generated and Synthetic Content in Elections (Bihar Assembly Elections) (October 2025)
- Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors., CS(COMM) 956/2025, Delhi High Court, Order dated September 9, 2025
- Akkineni Nagarjuna v. www.bfxxx.org & Ors., CS(COMM) 1023/2025, Delhi High Court, Order dated September 25, 2025
- Sudhir Chaudhary v. Meta Platforms Inc & Ors., CS(COMM) 1089/2025, Delhi High Court, Order dated October 10, 2025
- Suniel V Shetty v. John Doe & Ashok Kumar, COM IP Suit (L) No. 32130/2025, Bombay High Court, Order dated October 10, 2025
- Hrithik Roshan v. Ashok Kumar/John Doe & Ors., CS(COMM) 1107/2025, Delhi High Court, Order dated October 15, 2025
- Raj Shamani v. John Doe & Ors., CS(COMM) 1233/2025, Delhi High Court, Order dated November 21, 2025
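Sub-section (4) of Section 23 above requires watermarks to carry machine-readable metadata with three fields. The sketch below illustrates one way such a record might be serialised and made tamper-evident, assuming JSON as the carrier format and SHA-256 for the digest; the field names and helper functions are illustrative, not prescribed by the Section.

```python
import hashlib
import json

def build_watermark_metadata(scraping_method: str, origin: str, licence: str) -> dict:
    """Assemble the three fields named in sub-section (4), plus an integrity digest."""
    payload = {
        "scraping_methodology": scraping_method,   # (i) scraping methodology classification
        "training_data_origin": origin,            # (ii) geographic origin of training data sources
        "dataset_licensing_status": licence,       # (iii) licensing status of underlying datasets
    }
    # Canonical serialisation (sorted keys, fixed separators) keeps the digest reproducible.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    payload["sha256"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return payload

def verify_watermark_metadata(payload: dict) -> bool:
    """Recompute the digest over the three fields and compare against the stored one."""
    body = {k: v for k, v in payload.items() if k != "sha256"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == payload.get("sha256")

meta = build_watermark_metadata("automated", "IN", "CC-BY-4.0")
print(verify_watermark_metadata(meta))  # True for an untampered record
```

A real deployment would embed such a record via a standardised provenance container rather than bare JSON, but the field set and tamper check are the substance the sub-section describes.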
- Synthetic Content | Glossary of Terms | Indic Pacific | IPLR
Synthetic Content
Date of Addition: 22 March 2025

Artificially generated information created algorithmically rather than captured from real-world events. This includes synthetic data, media, text, and other content types produced through generative AI techniques to mimic properties of authentic content. Synthetic content encompasses many forms, including media (computer-generated images, audio, video), text (artificially generated articles, dialogues), tabular data (synthetic database records), and unstructured data for training computer vision, speech recognition, and other AI systems.

Glossary terms of use: This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. A sample citation (in OSCOLA format): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.
- The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare | Indic Pacific | IPLR | indicpacific.com
ICMR's March 2023 guidance establishing an ethical framework for AI applications in Indian biomedical research and healthcare sectors.

The AIACT.IN India AI Regulation Tracker: This is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India's Federal System – Number 70". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

Date: March 2023
Issuing Authority: Indian Council of Medical Research (ICMR)
Type of Legal / Policy Document: Guidance documents with normative influence
Status: In Force
Regulatory Stage: Post-regulatory
Binding Value: Guidance documents with normative influence

Related draft AI law provisions (aiact.in): Section 11 – Registration & Certification of AI Systems; Section 13 – National Artificial Intelligence Ethics Code; Section 22 – Shared Sector-Neutral & Sector-Specific Standards.
- Omnipotence | Glossary of Terms | Indic Pacific | IPLR
Omnipotence
Date of Addition: 26 April 2024

In the context of Artificial Intelligence, this implies that any AI system, due to its inherent yet limited capacity to process inputs and generate outputs, could be effective in shaping multiple sectors, eventualities and legal dilemmas. In short, any omnipotent AI system could have first-, second- and third-order effects due to its actions. This was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019), Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001 (2021), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and many key publications by ISAIL & VLiGTA.
- Advisory on Enhanced Standards for AI-Generated and Synthetic Content in Elections (Bihar Assembly Elections) | Indic Pacific | IPLR | indicpacific.com
Election Commission of India's October 2025 directive mandating labeling of AI-generated political content and deepfakes during election campaigns.

Date: October 2025
Issuing Authority: Election Commission of India (ECI)
Type of Legal / Policy Document: Executive Instruments – Administrative Decisions
Status: In Force
Regulatory Stage: Regulatory
Binding Value: Non-binding but institutionally endorsed instruments

Related draft AI law provisions (aiact.in): Section 23 – Content Provenance and Identification.
- Section 13 – National Artificial Intelligence Ethics Code | Indic Pacific
Section 13 – National Artificial Intelligence Ethics Code

(1) A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilisation of artificial intelligence technologies.

(2) The NAIEC shall be based on the following core ethical principles:
(i) AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.
(ii) AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes, including caste and class.
(iii) AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system's outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.
(iv) AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.
(v) AI systems should be designed and operated with a focus on safety and robustness, minimising the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.
(vi) AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented.
(vii) AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.
(viii) AI systems that are developed and deployed using frugal prompt engineering practices should optimise efficiency, cost-effectiveness, and resource utilisation while maintaining high standards of performance, safety, and ethical compliance, in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.

(3) The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific & sector-neutral laws and regulations. To this end:
(i) The use of OSS shall be guided by a clear understanding of the open source development model, its scope, constraints, and the varying implementation approaches across different socio-economic and organisational contexts.
(ii) AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licences, fostering transparency and enabling public scrutiny, while also ensuring that sensitive components and intellectual property are adequately protected.
(iii) AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems, and shall implement appropriate governance, quality assurance, and risk management processes.

(4) The Ethics Code shall provide guidance on intellectual property and ownership considerations related to AI-generated content. To this end:
(i) Specific considerations shall include recognising the role of human involvement in developing and deploying the AI systems, establishing guidelines on copyrightability and patentability of AI-generated works and inventions, addressing scenarios where AI builds upon existing protected works, safeguarding trade secrets and data privacy, balancing incentives for AI innovation with disclosure and access principles, and continuously updating policies as AI capabilities evolve.
(ii) The Ethics Code shall encourage transparency and responsible practices in managing intellectual property aspects of AI-generated content across domains such as text, images, audio, video and others.
(iii) In examining IP and ownership issues related to AI-generated content, the Ethics Code shall be guided by the conceptual classification methods outlined in Section 4, particularly the Anthropomorphism-Based Concept Classification, to evaluate scenarios where AI replicates or emulates human creativity and invention.
(iv) The technical classification methods described in Section 5, such as the scale, inherent purpose, technical features, and limitations of the AI system, shall inform the assessment of IP and ownership considerations for AI-generated content.
(v) The commercial classification factors specified in sub-section (1) of Section 6, including the user base, market influence, data integration, and revenue generation of the AI system, shall also be taken into account when determining IP and ownership rights over AI-generated content.
(5) The Ethics Code shall provide guidance on frugal prompt engineering practices for the development of AI systems, ensuring efficiency, accessibility, and the equitable advancement of artificial intelligence, as follows:
(i) Encourage the use of concise and well-structured prompts that specify desired outputs and constraints, minimising unnecessary complexity in AI interactions;
(ii) Recommend the adoption of transfer learning and pre-trained models to reduce the need for extensive fine-tuning, thereby conserving computational resources;
(iii) Promote the use of data-efficient techniques, such as few-shot learning or active learning, to decrease the volume of training data required for effective model performance;
(iv) Suggest the implementation of early stopping mechanisms to prevent overfitting and enhance model generalisation, ensuring robust performance with minimal training;
(v) Advocate for the use of techniques such as model compression, quantisation, or distillation to reduce computational complexity and resource demands, making AI development more sustainable;
(vi) Require the documentation and maintenance of records on prompt engineering practices, detailing the techniques used, performance metrics achieved, and any trade-offs between efficiency and effectiveness, to ensure transparency and accountability;
(vii) Declare that prompt engineering, as a fundamental practice for optimising AI systems, constitutes a global commons and a shared resource for the benefit of all humanity, and as such:
(a) Shall not be monetised, commercialised, or subject to proprietary claims, ensuring that the knowledge and techniques of prompt engineering remain freely accessible to all;
(b) Shall be treated as a universal public good, akin to principles established in international agreements governing shared resources, to foster global collaboration and innovation in AI development & education.
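Sub-section (5)(iv) recommends early stopping. As a non-authoritative, framework-free sketch of the patience mechanism that term usually denotes, the function below halts training once validation loss has failed to improve for a set number of epochs; the loss values are simulated for illustration.

```python
def early_stop_index(val_losses, patience=2):
    """Return the epoch index at which training would stop: the first epoch
    after the best validation loss has gone `patience` epochs without improving."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, stale = loss, 0   # new best: reset the patience counter
        else:
            stale += 1              # no improvement this epoch
            if stale >= patience:
                return epoch        # stop here; keep the checkpoint from the best epoch
    return len(val_losses) - 1      # ran to completion without triggering the stop

# Simulated validation losses: improvement stalls after the third epoch.
losses = [0.90, 0.62, 0.55, 0.57, 0.58, 0.59]
print(early_stop_index(losses, patience=2))
```

The same counter-and-threshold logic is what training frameworks implement as an early-stopping callback; it saves the compute the clause aims to conserve by not training past the point of generalisation.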
(6) The Ethics Code shall provide guidance on ensuring fair access rights for all stakeholders involved in the AI value and supply chain, including:
(i) All stakeholders should have fair and transparent access to datasets necessary for training and developing AI systems. This includes promoting equitable data-sharing practices that ensure smaller entities or research institutions are not unfairly disadvantaged in accessing critical datasets.
(ii) Ethical use of computational resources should be promoted by ensuring that all stakeholders have transparent access to these resources. Special consideration should be given to smaller entities or research institutions that may require preferential access or pricing models to support innovation.
(iii) Ethical guidelines should ensure that ownership rights over trained models, derived outputs, and intellectual property are clearly defined and respected. Stakeholders involved in the development process must have a clear understanding of their rights and obligations regarding the usage and commercialisation of AI technologies.
(iv) The benefits derived from AI technologies should be distributed in a manner ensuring that smaller players contributing critical resources, such as proprietary datasets or specialised algorithms, are fairly compensated.

(7) Adherence to the NAIEC shall be voluntary for all AI systems, including those exempted under sub-section (3) of Section 11.
(8) Strategic Sector Safeguards: AI systems deployed in strategic sectors, particularly those classified as high-risk under Section 7, shall adhere to heightened ethical standards that prioritise:
(i) Safety Imperative: Developers and operators of AI systems shall design, implement, and maintain robust safety measures that minimise potential harm to individuals, property, society, and the environment throughout the system's lifecycle;
(ii) Security by Design: AI systems shall incorporate security measures from the earliest stages of development to protect against unauthorised access, manipulation, or misuse, with particular emphasis on safeguarding data integrity and system confidentiality;
(iii) Reliability and Resilience: All AI systems shall demonstrate consistent, accurate, and dependable performance through rigorous testing, validation, and continuous monitoring, with enhanced requirements for systems in critical infrastructure or essential services;
(iv) Transparent Operations: AI systems shall implement mechanisms that enable appropriate stakeholder understanding of underlying algorithms, data sources, and decision-making processes, adhering to disclosure needs in line with intellectual property protections;
(v) Accountable Governance: Clear lines of responsibility shall be established for AI system outcomes, with specified channels for redress and remediation in cases of adverse impacts, particularly for systems affecting fundamental rights or public welfare;
(vi) Legitimate Purpose Alignment: AI systems shall be developed and deployed exclusively for purposes that comply with the legitimate uses framework established under Section 7 of the Digital Personal Data Protection Act, 2023, and shall not be repurposed for unauthorised applications without appropriate review.
Related Indian AI Regulation Sources
- Principles for Responsible AI (Part 1), February 2021
- Operationalizing Principles for Responsible AI (Part 2), August 2021
- Fairness Assessment and Rating of Artificial Intelligence Systems (TEC 57050:2023), July 2023
- The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare, March 2023
- Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report, August 2025
- Strengthening AI Governance Through Techno-Legal Framework (White Paper, Part 2 of Emerging Policy Priorities Series), January 2026
- Democratising Access to AI Infrastructure (White Paper, Version 3.0) | Indic Pacific | IPLR | indicpacific.com
Democratising Access to AI Infrastructure (White Paper, Version 3.0)
Released on December 29, 2025, by the Office of the Principal Scientific Adviser to the Government of India (Prof. Ajay Kumar Sood), this is the first white paper in the series on "Emerging Policy Priorities for India's AI Ecosystem". The white paper defines democratising access to AI infrastructure as making foundational AI resources—compute capacity, high-quality datasets, and enabling tools—available beyond a limited set of large firms and major urban hubs, treating them as shared national resources.
The AIACT.IN India AI Regulation Tracker is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India's Federal System – Number 70". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers that do not reflect any direct or implicit legal impact.
December 2025
Issuing Authority: Office of the Principal Scientific Adviser (OPSA), Government of India
Type of Legal / Policy Document: Guidance documents with normative influence
Status: Enacted
Regulatory Stage: Pre-regulatory
Binding Value: Guidance documents with normative influence
Related draft AI Law Provisions of aiact.in
- Section 10 – Composition and Functions of the Council
- Section 15 – Guidance Principles for AI-related Agreements
- Section 16 – Guidance Principles for AI-related Corporate Governance
- Section 17 – Post-Deployment Monitoring of High-Risk AI Systems
- Responsible AI #AIforAll (Discussion Paper on Facial Recognition Technology) | Indic Pacific | IPLR | indicpacific.com
Responsible AI #AIforAll (Discussion Paper on Facial Recognition Technology)
NITI Aayog's November 2022 document applying responsible AI principles to facial recognition technology use cases, demonstrating practical implementation approaches.
November 2022
Issuing Authority: NITI Aayog
Type of Legal / Policy Document: Guidance documents with normative influence
Status: Enacted
Regulatory Stage: Pre-regulatory
Binding Value: Guidance documents with normative influence
Related draft AI Law Provisions of aiact.in
- Section 3 – Classification of Artificial Intelligence
- Section 7 – Risk-centric Methods of Classification
- Section 9 – High-Risk AI Systems in Strategic Sectors
- Prompt Injection | Glossary of Terms | Indic Pacific | IPLR
Prompt Injection
Date of Addition: 17 October 2025
A security vulnerability, classified as the #1 OWASP risk for LLMs, in which malicious user inputs override system instructions and safety guardrails through carefully crafted natural language commands. This attack vector exploits the fundamental inability of language models to distinguish between system-level instructions and user-provided content, enabling adversaries to manipulate model behavior, extract sensitive information, or bypass ethical constraints. Prompt injection represents a critical socio-technical challenge distinct from traditional cybersecurity vulnerabilities because it operates through semantic manipulation rather than code exploitation.

terms of use
This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com
- Proprietary Information | Glossary of Terms | Indic Pacific | IPLR
Proprietary Information
Date of Addition: 26 April 2024
Proprietary information, in the context of generative AI applications, is any information that is not publicly known and that gives a company or individual a competitive advantage. This can include information about the generative AI model itself, such as its training data, architecture, and parameters. It can also include information about the specific applications for the generative AI model, such as the products or services that it is used to create. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
- Digital Colony Risk | Glossary of Terms | Indic Pacific | IPLR
Digital Colony Risk
Date of Addition: 20 Feb 2026
The condition in which a politically sovereign state becomes progressively dependent on foreign-owned digital infrastructure, platforms, or AI systems — not through any discrete act of subjugation but through the incremental accrual of technical, economic, and regulatory concessions that, in aggregate, transfer effective control over the state's data economy and technological development to external corporate or state actors. The risk is characterised by its gradual onset: each individual dependency appears containable in isolation, while the cumulative structure renders domestic sovereignty increasingly nominal. The condition is most acute where the available remedies are themselves structurally compromised: judicial mechanisms may find their jurisdictional reach limited by the corporate architecture of foreign platforms, while executive instruments capable of compelling compliance tend to operate outside the framework of independent oversight — resolving the accountability gap against the platform, without necessarily resolving it in favour of the citizen. In either case, the locus of effective control remains external to, or unmediated by, the ordinary legal and democratic institutions of the state. Distinguished from formal colonialism by its operation through market mechanisms, contractual architecture, and institutional asymmetry rather than territorial control or legal compulsion.
- Section 8 – Prohibition of Unintended Risk AI Systems | Indic Pacific
Section 8 – Prohibition of Unintended Risk AI Systems PUBLISHED
The development, deployment, and use of unintended risk AI systems, as classified under sub-section (5) of Section 7, is prohibited.
Related Indian AI Regulation Sources
- Ferid Allani v. Union of India & Ors., W.P.(C) 7/2014 (Delhi High Court, Dec 12, 2019), December 2019
- Jaswinder Singh @ Jassi v. State of Punjab & Anr., CRM-M-22496-2022, order dated 27-3-2023, March 2023
- Md Zakir Hussain v. State of Manipur, W.P. (C) No. 1080 of 2023 (Manipur High Court, May 23, 2024), May 2024

