Search Results
- Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023 and I.A. 18237/2023-18243/2023, Delhi High Court, Order dated September 20, 2023 | Indic Pacific | IPLR | indicpacific.com
A pioneering Delhi High Court judgment of September 2023 protecting an actor's personality rights against AI-generated deepfakes and unauthorised digital exploitation.
About the tracker: The AIACT.IN India AI Regulation Tracker is a simple regulatory tracker consolidating information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". The tracker also includes case law alongside regulatory and governance documents, and excludes industry documents or policy papers that do not reflect any direct or implicit legal impact.
Date: September 2023
Issuing Authority: Delhi High Court
Type of Legal / Policy Document: Judicial Pronouncements – National Court Precedents
Status: In Force
Regulatory Stage: Regulatory
Binding Value: Legally binding instruments enforceable before courts
Related long-form insights on IndoPacific.App: Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]; Impact-Based Legal Problems around Generative AI in Publishing [IPLR-IG-010]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]
Related draft AI law provision of aiact.in: Section 21 – Intellectual Property Protections
- Distributed Ledger | Glossary of Terms | Indic Pacific | IPLR
Distributed Ledger
Date of Addition: 26 Apr 2024
A distributed ledger (also called a shared ledger, or distributed ledger technology, DLT) is the consensus of replicated, shared, and synchronized digital data that is geographically spread (distributed) across many sites, countries, or institutions.
Related long-form insights on IndoPacific.App: Reinventing & Regulating Policy Use Cases of Web3 for India [VLiGTA-TR-004]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; NIST Adversarial Machine Learning Taxonomies: Decoded [IPLR-IG-016]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition
Terms of use: This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source: include a link to this website and cite the title of the glossary. Here is a sample citation (in the OSCOLA citation format): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com
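The core idea in the definition above, identical replicated data whose integrity all sites can agree on, can be illustrated with a minimal hash-chained ledger copied across several nodes. This is only a sketch of the replication-and-tamper-evidence property; real DLTs additionally run a consensus protocol (e.g. proof-of-work or BFT voting) to agree on which entries get appended, which is omitted here.

```python
# Minimal illustration of a shared, replicated ledger: each node keeps an
# identical hash-chained log, so any tampering with an earlier entry changes
# every later hash and is detectable by comparing heads across sites.
# Illustrative only; no consensus protocol is modelled.

import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash an entry together with the hash of the entry before it."""
    record = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

class LedgerNode:
    """One geographically separate replica of the shared ledger."""
    def __init__(self):
        self.chain = []  # list of (hash, payload) tuples

    def append(self, payload: dict) -> None:
        prev = self.chain[-1][0] if self.chain else "genesis"
        self.chain.append((entry_hash(prev, payload), payload))

    def head(self) -> str:
        return self.chain[-1][0] if self.chain else "genesis"

# Replicate the same entries across three nodes, as across sites/institutions.
nodes = [LedgerNode() for _ in range(3)]
for tx in [{"amount": 10}, {"amount": 25}]:
    for node in nodes:
        node.append(tx)

# All replicas agree: identical head hashes.
assert len({node.head() for node in nodes}) == 1
```

Because each entry's hash folds in the previous hash, a replica whose history was altered ends up with a different head, so disagreement between sites is immediately visible.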
- Section 25 – Insurance Policy for AI Technologies | Indic Pacific
Section 25 – Insurance Policy for AI Technologies
PUBLISHED
(1) Developers, owners, and operators of high-risk AI systems, as classified under sub-section (4) of Section 7, shall be required to obtain and maintain comprehensive liability insurance coverage to manage and mitigate potential risks associated with the development, deployment, and operation of such systems.
(2) The insurance coverage requirements for high-risk AI systems shall be proportionate to their risk level and potential impacts, as determined by:
(i) Their conceptual classification based on sub-sections (3) and (4) of Section 4;
(ii) Their technical characteristics evaluated as per the criteria under sub-section (4) of Section 5 for Specific Purpose AI (SPAI) systems;
(iii) Their commercial risk factors such as user base, market influence, data integration, and revenue generation specified under Section 6.
(3) The minimum insurance coverage required for high-risk AI systems shall be:
(i) For systems with potential widespread impact or lack of opt-out feasibility under Section 7(4)(a): INR 50 crores;
(ii) For systems with vulnerability factors or irreversible consequences under Section 7(4)(b): INR 25 crores;
(iii) For other high-risk AI systems under Section 7(4): INR 10 crores.
(4) The Insurance Regulatory and Development Authority of India (IRDAI) shall, in consultation with the IAIC and relevant stakeholders, specify the minimum insurance coverage standards for high-risk AI systems, which may include:
(i) Professional indemnity insurance to cover incidents involving inaccurate, inappropriate, or defamatory AI-generated content;
(ii) Cyber risk insurance to cover incidents related to data breaches, network security failures, or other cyber incidents;
(iii) General commercial liability insurance to cover incidents causing third-party injury, damage, or other legally liable scenarios.
(5) For general purpose AI systems classified under sub-sections (2) and (3) of Section 5, the IAIC, in coordination with IRDAI, shall examine and determine appropriate insurance requirements, considering factors such as:
(i) The scale and inherent purpose of the general purpose AI system;
(ii) The potential risks and impacts associated with its multiple use cases across different sectors and domains;
(iii) The technical features and limitations that may affect its safety, security, and reliability;
(iv) Commercial factors such as user base, market influence, and revenue generation.
(6) Based on the examination under sub-section (5), the IAIC may recommend to IRDAI the development of specialized insurance products or coverage requirements for general purpose AI systems, which may include:
(i) Umbrella liability insurance to cover a wide range of risks and liabilities arising from the diverse applications of the AI system;
(ii) Parametric insurance based on predefined triggers or performance metrics to address the unique challenges in assessing and quantifying risks associated with general purpose AI;
(iii) Risk pooling or reinsurance arrangements to spread the risks among multiple insurers or stakeholders.
(7) The IAIC and IRDAI shall collaborate to establish guidelines and best practices for underwriting, risk assessment, and claims handling related to general purpose AI systems, taking into account their distinct characteristics and potential impacts.
(8) Developers, owners, and operators of general purpose AI systems shall be encouraged to maintain adequate insurance coverage based on the recommendations and guidelines issued by the IAIC and IRDAI under sub-sections (6) and (7).
(9) Insurance providers offering AI-specific policies for high-risk systems must have adequate expertise, resources, and reinsurance arrangements to effectively assess risks, price premiums, and settle claims related to AI technologies.
(10) Developers, owners, and operators of high-risk AI systems shall submit proof of adequate insurance coverage to the IAIC as part of the registration and certification process outlined in Section 11.
(11) Failure to obtain and maintain the required insurance coverage for high-risk AI systems shall be treated as a breach of compliance under Section 19, and the IAIC may take appropriate enforcement actions, including:
(i) Issuing warnings and imposing penalties;
(ii) Suspending or revoking the system’s certification;
(iii) Prohibiting the deployment or operation of the AI system until compliance is achieved.
(12) The obligations under sub-sections (2), (3), and (4) of this Section shall apply to data fiduciaries employing high-risk AI systems, provided they are:
(i) Classified as Systemically Significant Digital Enterprises (SSDEs) under Chapter II of the Digital Competition Act, 2024, particularly: (a) based on the quantitative and qualitative criteria specified in Section 5; or (b) designated as SSDEs by the Competition Commission of India under Section 6, due to their significant presence in the relevant core digital service;
(ii) Notified as Significant Data Fiduciaries under sub-section (1) of Section 10 of the Digital Personal Data Protection Act, 2023, based on factors such as: (a) the volume and sensitivity of personal data processed; (b) the risk to the rights of data principals; (c) the potential impact on the sovereignty, integrity, and security of India.
(13) For AI systems not classified as high-risk under sub-section (4) of Section 7, obtaining insurance coverage is recommended but not mandatory. The IAIC shall provide guidance on suitable insurance products and coverage levels based on the AI system’s risk profile and potential impacts.
(14) The Insurance Regulatory and Development Authority of India (IRDAI), in consultation with the IAIC and relevant stakeholders, shall develop guidelines and best practices for underwriting, risk assessment, and claims handling related to AI technologies. These guidelines shall address:
(i) Assessment methods to evaluate the unique risks and potential impacts of AI systems, taking into account their risk classification and associated factors as outlined in this Act;
(ii) Premium calculation models that consider the risk profile, scale of deployment, and potential consequences of AI systems;
(iii) Claims processing standards that ensure timely, fair, and transparent settlement of claims related to AI systems;
(iv) Data sharing and reporting requirements between insurers and the IAIC to facilitate the monitoring and analysis of AI-related incidents and claims;
(v) Capacity building and training programs for insurance professionals to enhance their understanding of AI technologies and their associated risks.
Related Indian AI Regulation Sources: Karnataka Platform-Based Gig Workers (Social Security and Welfare) Ordinance, 2025 (May 2025)
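The tiered minimums in sub-section (3) amount to a fixed lookup from the Section 7(4) classification basis to a coverage floor. The sketch below expresses that mapping; the enum and function names are illustrative inventions, not terms of the draft Act, and only the three amounts and their statutory bases come from the text.

```python
# Illustrative sketch of the Section 25(3) minimum-coverage tiers.
# The names here (HighRiskBasis, minimum_cover) are hypothetical; the draft
# Act defines only the amounts and the Section 7(4) classifications.

from enum import Enum

class HighRiskBasis(Enum):
    WIDESPREAD_IMPACT = "7(4)(a)"   # widespread impact / no opt-out feasibility
    VULNERABILITY = "7(4)(b)"       # vulnerability factors / irreversible consequences
    OTHER_HIGH_RISK = "7(4)"        # other high-risk AI systems

CRORE = 10_000_000  # 1 crore = 10 million INR

MINIMUM_COVER_INR = {
    HighRiskBasis.WIDESPREAD_IMPACT: 50 * CRORE,
    HighRiskBasis.VULNERABILITY: 25 * CRORE,
    HighRiskBasis.OTHER_HIGH_RISK: 10 * CRORE,
}

def minimum_cover(basis: HighRiskBasis) -> int:
    """Return the minimum liability cover in INR for a high-risk basis."""
    return MINIMUM_COVER_INR[basis]
```

For example, a system classified under Section 7(4)(a) would carry a floor of `minimum_cover(HighRiskBasis.WIDESPREAD_IMPACT)`, i.e. INR 50 crores.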
- Synthetic Confidence | Glossary of Terms | Indic Pacific | IPLR
Synthetic Confidence
Date of Addition: 8 May 2025
Synthetic confidence is the deceptive phenomenon where generative AI systems, particularly large language models (LLMs), produce fluent, authoritative outputs that mimic reasoning and certainty but often diverge from truth or accurate causality. Trained on vast, partially untraceable datasets to prioritise persuasiveness over veracity, these models generate convincing responses that mask reasoning failures and hallucinations: nonsensical or inaccurate outputs stemming from factors like overfitting, training data bias, and high model complexity. This artificially generated appearance of competence creates an illusion of control and understanding, obscuring the unpredictable and opaque nature of AI systems and their potential to propagate fluent misinformation.
Sources: OpenAI o3 and o4-mini System Card, April 16, 2025; The Urgency of Interpretability, April 2025; Analyzing o3 and o4-mini with ARC-AGI, April 22, 2025. The coinage of this term is attributed to Stephen Klein, Founder & CEO of Curiouser.AI, specifically in a LinkedIn post.
Related long-form insights on IndoPacific.App: Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]
- Section 7 – Risk-centric Methods of Classification | Indic Pacific
Section 7 – Risk-centric Methods of Classification
PUBLISHED
(1) These methods, as designated in clause (iv) of sub-section (1) of Section 3, classify artificial intelligence technologies based on their outcome and impact-based risks:
(i) Narrow risk AI systems as described in sub-section (2);
(ii) Medium risk AI systems as described in sub-section (3);
(iii) High risk AI systems as described in sub-section (4);
(iv) Unintended risk AI systems as described in sub-section (5).
(2) Narrow Risk AI Systems:
(i) Narrow risk AI systems are classified as those with minimal outcome and impact risks, where:
(a) The system is deployed in a limited scope for non-critical functions, so its outcomes do not significantly affect users or systems.
(b) The system causes minimal harm, with impacts limited to temporary inconvenience.
(c) Users can easily opt out of the system’s operations, ensuring they are not forced to accept its outcomes.
(d) The system is fully explainable, allowing users to understand and mitigate any risks from its outcomes.
(e) Errors in the system’s outcomes are easily reversible, with no lasting impact.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as errors in non-critical tasks, and their impacts, such as temporary inconvenience, ensuring the category provisions directly identify minimal risks without abstract definitions.
Illustration: A virtual assistant on a smartphone app for task scheduling is a narrow risk system. It operates in a non-critical context, causes only temporary inconvenience if it fails, allows users to disable it, is fully explainable, and errors are easily reversible by resetting the app.
(3) Medium Risk AI Systems:
(i) Medium risk AI systems are classified as those with moderate outcome and impact risks, where:
(a) The system causes moderate harm, with outcomes that may lead to incorrect decisions affecting users’ opportunities or resources.
(b) Users have limited ability to opt out of or understand the system’s operations, increasing the impact of its outcomes.
(c) The system may produce inconsistent outputs due to technical bias, such as overfitting to training data, affecting the reliability of its outcomes.
(d) Correcting errors in the system’s outcomes requires active intervention, with impacts that may persist until addressed.
(ii) Risk assessment focuses on the system’s technical features, such as model complexity or unverified data, which contribute to its outcome risks.
(iii) Risk recognition is achieved by assessing the system’s outcomes, such as incorrect decisions in resource allocation, and their impacts, such as reduced opportunities for users, ensuring the category provisions directly identify moderate risks without abstract definitions.
Illustration: An AI loan approval system used by a regional bank is a medium risk system. It may lead to incorrect loan denials, limits users’ ability to opt out, may overfit to biased training data, requires intervention to correct errors, and its risks stem from technical features like model complexity.
(4) High Risk AI Systems:
(i) High risk AI systems are classified as those with severe outcome and impact risks, where:
(a) The system is deployed in critical sectors, with outcomes that can disrupt essential services or infrastructure.
(b) The system causes severe harm, with impacts that may lead to physical harm, economic loss, or societal disruption.
(c) Users cannot opt out of or control the system’s operations, making its outcomes unavoidable.
(d) The system’s lack of transparency increases the risk of undetected errors, amplifying the impact of its outcomes.
(e) Errors in the system’s outcomes are irreversible or cause permanent harm, with significant long-term impacts.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as disruptions in critical services, and their impacts, such as economic loss or societal harm, ensuring the category provisions directly identify severe risks without abstract definitions.
Illustration: An AI system controlling a power grid is a high risk system. It operates in a critical sector, can cause outages leading to economic loss, offers no user opt-out, lacks transparency, and failures have irreversible impacts like societal disruption.
(5) Unintended Risk AI Systems:
(i) Unintended risk AI systems are classified as those with emergent and unpredictable outcome and impact risks, where:
(a) The system’s behaviour deviates from its intended design, leading to unexpected outcomes.
(b) The system processes data beyond its intended scope, increasing the risk of unintended impacts.
(c) The system evolves after deployment without oversight, causing outcomes that cannot be predicted or controlled.
(d) The system’s operations are not explainable, making it impossible to understand or mitigate the risks of its outcomes.
(ii) Risk recognition is achieved by assessing the system’s outcomes, such as unexpected behaviours in operation, and their impacts, such as unpredictable harm to users or systems, ensuring the category provisions directly identify emergent risks without abstract definitions.
Illustration: An autonomous vehicle navigation system with emergent behaviour is an unintended risk system. It deviates from its intended design, processes unintended data, evolves without oversight, and its operations are not explainable, leading to unpredictable outcomes like accidents.
Related Indian AI Regulation Sources: Ferid Allani v. Union of India & Ors., W.P.(C) 7/2014 (Delhi High Court, Dec 12, 2019), December 2019; Responsible AI #AIforAll (Discussion Paper on Facial Recognition Technology), November 2022; Jaswinder Singh @ Jassi v. State of Punjab & Anr., CRM-M-22496-2022, order dated 27-3-2023, March 2023; Md Zakir Hussain v. State of Manipur, W.P. (C) No. 1080 of 2023 (Manipur High Court, May 23, 2024), May 2024
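The criteria that distinguish the four categories (criticality of the sector, opt-out feasibility, explainability, reversibility of errors, and unsupervised post-deployment evolution) can be triaged with a rule-of-thumb sketch. The ordering and field names below are one illustrative reading of Section 7's sub-sections, not the statutory test; an actual classification would weigh all the enumerated factors, not a handful of booleans.

```python
# Rule-of-thumb triage of the Section 7 risk categories.
# The dataclass fields paraphrase a subset of the statutory criteria;
# they are illustrative only and carry no legal weight.

from dataclasses import dataclass

@dataclass
class SystemProfile:
    critical_sector: bool        # deployed in essential services / infrastructure
    opt_out_available: bool      # users can decline the system's outcomes
    explainable: bool            # operations can be understood by users
    errors_reversible: bool      # outcomes can be undone without lasting harm
    evolves_unsupervised: bool   # post-deployment drift without oversight

def classify(p: SystemProfile) -> str:
    if p.evolves_unsupervised and not p.explainable:
        return "unintended risk"   # Section 7(5): emergent, unpredictable outcomes
    if p.critical_sector and not p.opt_out_available and not p.errors_reversible:
        return "high risk"         # Section 7(4): severe, unavoidable, irreversible
    if not p.opt_out_available or not p.explainable:
        return "medium risk"       # Section 7(3): moderate, correctable with intervention
    return "narrow risk"           # Section 7(2): minimal, reversible, opt-out available
```

For instance, a power-grid controller (critical sector, no opt-out, irreversible failures) triages to "high risk", matching the illustration in sub-section (4), while a task-scheduling assistant triages to "narrow risk".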
- Advisory on Prohibition of AI Tools/Apps in Office Devices | Indic Pacific | IPLR | indicpacific.com
Issued by the Department of Expenditure in February 2025, this communication determined that AI tools and AI apps on office computers and devices pose risks to the confidentiality of government data and documents. The advisory strictly prohibits employees from using any AI tools or apps on office devices. This represents a complete-ban approach similar to ESIC's policy, but applied across the Finance Ministry.
Date: February 2025
Issuing Authority: Ministry of Finance, Department of Expenditure
Type of Legal / Policy Document: Executive Instruments – Administrative Decisions
Status: In Force
Regulatory Stage: Regulatory
Binding Value: Legally binding instruments enforceable before courts
Related long-form insights on IndoPacific.App: Reimaging and Restructuring MeiTY for India [IPLR-IG-007]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; Artificial Intelligence, Market Power and India in a Multipolar World
Related draft AI law provisions of aiact.in: Section 15 – Guidance Principles for AI-related Agreements; Section 16 – Guidance Principles for AI-related Corporate Governance
- Our Journey & Achievements | Indic Pacific Legal Research
We are humbled to present our key achievements and journey at Indic Pacific Legal Research.
Journey & Achievements
We are proud and delighted to highlight our journey and achievements at the Indic Pacific family and our knowledge ecosystem. Curious about our in-house insights and achievements? Go to indopacific.app and search "Indic Pacific Legal Research" or the "Indian Society of Artificial Intelligence and Law".
Featured in: Reinventing the wheel of the India AI story
Artificial Intelligence Ethics and International Law, 1st Edition & ISAIL: Abhivardhan's first authored book, "Artificial Intelligence Ethics and International Law", was originally published in 2019, inspiring his efforts to lay the foundation of the Indian Society of Artificial Intelligence and Law.
Discussing AI as an Entity in Prague: Abhivardhan presented an important paper on the Entitative Nature of Artificial Intelligence in International Law at the SOLAIR Conference 2019, jointly organised by the Czech Government and the Czech Academy of Sciences.
Indic Pacific & ISAIL, since 2019: Both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law were incorporated in 2019.
Abhivardhan's paper published at AIAI 2020: One of Abhivardhan's most important publications in AI and Law in 2020 is "The Ethos of Artificial Intelligence as a Legal Personality in a Globalized Space: Examining the Overhaul of the Post-Liberal Technological Order".
The 2020 & 2021 Handbooks on AI and International Law: Published by the Indian Society of Artificial Intelligence and Law, these two flagship ISAIL publications cover 40+ international legal domains affected by AI.
The IndoPacific.App was launched in 2023: The digital library section of the IndoPacific App (earlier known as the VLiGTA® App) was launched in 2023.
India's inaugural artificial intelligence regulation proposal, AIACT.IN: Abhivardhan drafted India's first privately proposed AI regulation bill for India / Bharat, to promote a democratic and inclusive discourse about AI standardisation and regulation in India.
Artificial Intelligence Ethics and International Law, 2nd Edition (2024): Abhivardhan's first book, "Artificial Intelligence Ethics and International Law", was revisited with a 2nd edition in November 2023 and was presented to experts and stalwarts including Arvind Subramaniyam (formerly of Intel), T Koshy (MD, ONDC), Dr Vivek Lall (General Atomics) and others.
Abhivardhan contributed a key GenAI + FinTech Moot Proposition for Responsible AI Education among Law Students: Abhivardhan was felicitated by Justice Hemant Gupta (Retd.) as the author of a GenAI + FinTech Moot Proposition promoting legal education on Responsible & Explainable AI-related legal disputes. The Moot Proposition can also be accessed at vligta.app.
The 2020 Handbook on AI and International Law is recognised by the Council of Europe: The Council of Europe has listed the 2020 Handbook on AI and International Law, one of the leading AI and International Law publications by the Indian Society of Artificial Intelligence and Law, as the only Indian AI initiative on its website apart from the NITI Aayog's 2018 National Strategy on Artificial Intelligence.
Abhivardhan at the Startup20 (G20 Brazil 2024) Engagement Group Session: Abhivardhan was invited to provide insights on the effective implementation of various National AI Strategies to the Startup20 Brazil Engagement Group, as part of G20 Brazil (2024).
The year 2018 marked a significant milestone for India, as the nation embarked on a transformative journey in the realm of artificial intelligence. The NITI Aayog's National Strategy for Artificial Intelligence (2018) unveiled a promising vision for billions of Indians, while the tabling of the Data Protection Bill in the Indian Parliament signalled a commitment to safeguarding citizens' rights in the digital age. These developments, following the landmark Right to Privacy judgment (Puttaswamy I) and the Aadhaar Act judgment (Puttaswamy II), set the stage for a new era of technological advancement and legal innovation in the country. On a global scale, AI was already making remarkable strides, with its pace of evolution accelerating at an unprecedented rate. The hype surrounding the rise of digital technologies like AI was palpable, as the world began to recognize the immense potential they held for transforming various aspects of our lives. Amidst this exciting landscape, our Founder, Abhivardhan, was honored to contribute to the growing body of knowledge in the field. His work, "Artificial Intelligence Ethics and International Law", published in mid-2019, aimed to shed light on the ethical and legal implications of AI on a global scale. Additionally, his engagement to speak at the SOLAIR Conference on the "Entitative Nature of Artificial Intelligence in International Law", jointly conducted by the Czech Government and the Czech Academy of Sciences, further underscored the importance of his research efforts. The critical appreciation and recognition of Abhivardhan's early research in AI and Law, at a time when these topics were not yet at the forefront of the Indian policy landscape, served as a catalyst for his vision. Inspired by the potential to make a meaningful impact, he laid the foundation for Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL) in mid and late 2019.
While Indic Pacific Legal Research focused on providing valuable consulting services, ISAIL became the embodiment of Abhivardhan's unwavering commitment to legal innovation and research in the field of AI. Despite the challenges posed by the COVID-19 pandemic, both organizations have not only survived but have continued to make significant strides in their respective domains. This page thereby serves as a humble testament to the key achievements and milestones that have shaped the journey of Indic Pacific Legal Research and ISAIL. It is a celebration of the tireless efforts, dedication, and vision of Abhivardhan and the teams behind these organisations. Through their work, they have not only contributed to the advancement of AI and law in India but have also inspired a new generation of researchers and innovators to push the boundaries of what is possible. As we reflect on the past and look towards the future, we remain hopeful and optimistic about the potential for AI and law to drive positive change in our society. The journey of Indic Pacific Legal Research and ISAIL serves as a reminder that with passion, perseverance, and a commitment to excellence, we can overcome challenges and make a lasting impact in the world. 
Our Brands. Unique Perspectives, Common Goals: Showcasing Our Law & Policy Products & Brands
- A digital library and ecosystem app, which offers a skill-testing experience in law & policy domains
- India's inaugural private AI regulation bill for India / Bharat, authored by Abhivardhan
- An independent industry forum for legal, policy & technology professionals which supports the AI ecosystem of start-ups and MSMEs to advocate and promote AI standardisation in India
- An interactive glossary of key terms used in domains such as technology law, artificial intelligence governance, and law & policy in our in-house insights
- A digital publication network featuring industry-conscious insights by Indic Pacific Legal Research
- A pioneering platform dedicated to the development and dissemination of AI standardization guidelines by the Indian Society of Artificial Intelligence and Law
- Section 10 – Composition and Functions of the Council | Indic Pacific
Section 10 – Composition and Functions of the Council PUBLISHED Previous Next Section 10 - Composition and Functions of the Council (1) With effect from the date notified by the Central Government, there shall be established the Indian Artificial Intelligence Council (IAIC), a statutory body for the purposes of this Act. (2) The IAIC shall be an autonomous body corporate with perpetual succession, a common seal, and the power to acquire, hold and transfer property, both movable and immovable, and to contract and be contracted, and sue or be sued by its name. (3) The IAIC shall coordinate and oversee the development, deployment, and governance of artificial intelligence systems across all government bodies, ministries, departments, and regulatory authorities, adopting a whole-of-government approach. (4) The headquarters of the IAIC shall be located at the place notified by the Central Government. (5) The IAIC shall consist of a Chairperson and such number of other Members, not exceeding [X], as the Central Government may notify. (6) The Chairperson and Members shall be appointed by the Central Government through a transparent and merit-based selection process, as may be prescribed. 
(7) The Chairperson and Members shall be individuals of eminence, integrity and standing, possessing specialized knowledge or practical experience in fields relevant to the IAIC’s functions, including but not limited to: (i) Data and artificial intelligence governance, policy and regulation; (ii) Administration or implementation of laws related to consumer protection, digital rights and artificial intelligence and other emerging technologies; (iii) Dispute resolution, particularly technology and data-related disputes; (iv) Information and communication technology, digital economy and disruptive technologies; (v) Law, regulation or techno-regulation focused on artificial intelligence, data protection and related domains; (vi) Any other relevant field deemed beneficial by the Central Government. (8) At least three Members shall be experts in law with demonstrated understanding of legal and regulatory frameworks related to artificial intelligence, data protection and emerging technologies. 
(9) The IAIC shall have the following functions:
(i) Develop and implement policies, guidelines and standards for responsible development, deployment and governance of AI systems in India;
(ii) Coordinate and collaborate with relevant ministries, regulatory bodies and stakeholders to ensure harmonised AI governance across sectors;
(iii) Establish and maintain the National Registry of AI Use Cases as per Section 12;
(iv) Administer the certification scheme for AI systems as specified in Section 11;
(v) Develop and promote the National AI Ethics Code as outlined in Section 13;
(vi) Facilitate stakeholder consultations, public discourse and awareness on the societal implications of AI;
(vii) Promote research, development and innovation in AI with a focus on responsibility and ethics;
(viii) Engage with international AI regulatory bodies, standard-setting organizations, and global AI safety initiatives to promote knowledge exchange and align India’s AI governance framework with global best practices. This includes:
(a) Developing bilateral and multilateral agreements to support collaborative research, data sharing, and risk management;
(b) Participating in international AI safety and ethics dialogues to shape global AI norms;
(c) Coordinating on cross-border data flow standards and AI certification criteria to ensure seamless compliance for international AI applications in India.
(ix) Take regulatory actions to ensure compliance with the policies, standards, and guidelines issued by the IAIC under this Act, which may include:
(a) Issuing show-cause notices requiring non-compliant entities to explain the reasons for non-compliance and outline corrective measures within a specified timeline;
(b) Imposing monetary penalties based on the severity of non-compliance, the risk level involved, and the potential impact on individuals, businesses, or society, with penalties being commensurate with the financial capacity of the non-compliant entity;
(c) Suspending or revoking certifications, registrations, or approvals related to non-compliant AI systems, preventing their further development, deployment, or operation until compliance is achieved;
(d) Mandating independent audits of the non-compliant entity’s processes at its own cost, with audit reports to be submitted to the IAIC for review and further action;
(e) Issuing directives to non-compliant entities to implement specific remedial measures within a defined timeline, such as enhancing data quality controls, improving governance frameworks, or strengthening decision-making procedures;
(f) In cases of persistent or egregious non-compliance, recommending the temporary or permanent suspension of the non-compliant entity’s AI-related operations, subject to due process and the principles of natural justice;
(g) Taking any other regulatory action deemed necessary and proportionate to ensure compliance with the prescribed standards and to safeguard the responsible development, deployment, and use of AI systems.
(x) Advise the Central Government on matters related to AI policy, regulation and governance, and recommend legislative or regulatory changes as necessary;
(xi) Perform any other functions necessary to achieve the objectives of this Act or as assigned by the Central Government.
(10) The IAIC may constitute advisory committees, expert groups or task forces as deemed necessary to assist in its functions.
(11) The IAIC shall endeavour to function as a digital office to the extent practicable, conducting proceedings, filings, hearings and pronouncements through digital means as per applicable laws.

Related Indian AI Regulation Sources
Report on AI Governance Guidelines Development (January 2025)
Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report (August 2025)
India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation (November 2025)
Digital Personal Data Protection Rules, 2025 (November 2025)
Working Paper on Generative AI and Copyright (Part 1): "One Nation One License One Payment" (December 2025)
Democratising Access to AI Infrastructure (White Paper, Version 3.0) (December 2025)
- Grounded AI Safety | Glossary of Terms | Indic Pacific | IPLR
Grounded AI Safety Explainers: The Complete Glossary

Terms of use: This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.

Grounded AI Safety
Date of Addition: 18 May 2025

Grounded AI Safety is a principle-driven approach to managing risks in AI systems, adopted by the Indian Society of Artificial Intelligence and Law for The Bharat Pacific Stack. It is rooted in the fundamental understanding that current AI systems, such as large language models, function as statistical pattern-matchers without true comprehension or reasoning ability.
This approach:
- Anchors in Observable Limitations: Risk mitigation begins with empirical evidence of AI's inherent constraints, such as struggles with tasks requiring conceptual understanding, like misinterpreting time differences across regions or failing to follow rules in strategic games, focusing on these measurable shortcomings rather than assumed capabilities.
- Centers on Human-Driven Risks: The primary dangers arise from human over-reliance on or misuse of these limited systems, such as deploying them in critical areas like scheduling or decision-making where their errors could lead to significant consequences, rather than from AI autonomously causing catastrophic outcomes.
- Rejects Speculative Existential Narratives: AI safety must exclude unproven predictions of AI-driven doomsday scenarios that lack evidence and inflate AI's potential, as these narratives misguide priorities and empower those who might exploit fear for profit, influence, or excessive control.
- Prioritises Evidence-Based Safeguards: Solutions involve systematic testing to identify and address specific failure modes, like errors in visual representations or logical reasoning, paired with transparent improvements, ensuring AI systems are used responsibly within their known boundaries.

This definition is inspired by a post by Dr Gary Marcus on X.

Related Long-form Insights on IndoPacific.App:
NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016
- Strategic Hedging | Glossary of Terms | Indic Pacific | IPLR
Strategic Hedging
Date of Addition: 26 April 2024

Strategic hedging means that a state spreads its risk by pursuing two opposing policies towards other countries, balancing and engagement, in order to prepare for both best-case and worst-case scenarios, with a calculated combination of its soft power and hard power. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Related Long-form Insights on IndoPacific.App:
Global Customary International Law Index: A Prologue [GLA-TR-00X]
An Indian Perspective on Special Purpose Acquisition Companies [GLA-TR-001]
India-led Global Governance in the Indo-Pacific: Basis & Approaches [GLA-TR-003]
Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches [ISAIL-TR-001]
Global Legalism, Volume 1
Global Relations and Legal Policy, Volume 1 [GRLP1]
South Asian Review of International Law, Volume 1
Indian International Law Series, Volume 1
Global Relations and Legal Policy, Volume 2
The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023
Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015
Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition
AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas
- Sudhir Chaudhary v. Meta Platforms Inc & Ors., CS(COMM) 1089/2025, Delhi High Court, Order dated October 10, 2025 | Indic Pacific | IPLR | indicpacific.com
Delhi High Court October 2025 interim injunction protecting journalist Sudhir Chaudhary's personality rights against AI-generated deepfake videos, with a 48-hour platform takedown mechanism.

Sudhir Chaudhary v. Meta Platforms Inc & Ors., CS(COMM) 1089/2025, Delhi High Court, Order dated October 10, 2025

The AIACT.IN India AI Regulation Tracker
This is a simple regulatory tracker compiling all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". We have also included case law along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

October 2025
Issuing Authority: Delhi High Court
Type of Legal / Policy Document: Judicial Pronouncements - National Court Precedents
Status: In Force
Regulatory Stage: Regulatory
Binding Value: Legally binding instruments enforceable before courts
Related Long-form Insights on IndoPacific.App:
Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]
Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010
Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]
The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]

Related draft AI Law Provisions of aiact.in:
Section 21 – Intellectual Property Protections
Section 23 – Content Provenance and Identification
- Section 30 – Power to Make Regulations | Indic Pacific
Section 30 – Power to Make Regulations

(1) The IAIC may, by notification, make regulations consistent with this Act and the rules made thereunder to carry out the provisions of this Act.
(2) In particular, and without prejudice to the generality of the foregoing power, such regulations may provide for all or any of the following matters, namely —
(a) The criteria and process for the classification of AI systems based on their conceptual, technical, commercial, and risk-based factors, as specified in Sections 4, 5, 6, and 7;
(b) The standards, guidelines, and best practices for the development, deployment, and use of AI systems, including those related to transparency, explainability, fairness, safety, security, and human oversight, as outlined in Section 13;
(c) The procedures and requirements for the registration and certification of AI systems, including the criteria for exemptions and the maintenance of the National Registry of Artificial Intelligence Use Cases, as specified in Sections 11 and 12;
(d) The guidelines and mechanisms for post-deployment monitoring of high-risk AI systems, as outlined in Section 17;
(e) The procedures and protocols for third-party vulnerability reporting, incident reporting, and responsible information sharing, as mentioned in Sections 18, 19, and 20;
(f) The guidelines and requirements for content provenance and identification in AI-generated content, as specified in Section 23;
(g) The insurance coverage requirements and risk assessment procedures for entities developing or deploying high-risk AI systems, as outlined in Section 25;
(h) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by regulations.
(3) Every regulation made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the regulation or both Houses agree that the regulation should not be made, the regulation shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that regulation.


