
Search Results


  • Section 7 – Risk-centric Methods of Classification | Indic Pacific

    Section 7 – Risk-centric Methods of Classification

    (1) These methods, as designated in clause (iv) of sub-section (1) of Section 3, classify artificial intelligence technologies based on their outcome- and impact-based risks –
    (i) Narrow risk AI systems, as described in sub-section (2);
    (ii) Medium risk AI systems, as described in sub-section (3);
    (iii) High risk AI systems, as described in sub-section (4);
    (iv) Unintended risk AI systems, as described in sub-section (5).

    (2) Narrow Risk AI Systems:
    (i) Narrow risk AI systems are classified as those with minimal outcome and impact risks, where:
    (a) The system is deployed in a limited scope for non-critical functions, so its outcomes do not significantly affect users or systems.
    (b) The system causes minimal harm, with impacts limited to temporary inconvenience.
    (c) Users can easily opt out of the system’s operations, ensuring they are not forced to accept its outcomes.
    (d) The system is fully explainable, allowing users to understand and mitigate any risks from its outcomes.
    (e) Errors in the system’s outcomes are easily reversible, with no lasting impact.
    (ii) Risk recognition is achieved by assessing the system’s outcomes, such as errors in non-critical tasks, and their impacts, such as temporary inconvenience, ensuring the category provisions directly identify minimal risks without abstract definitions.
    Illustration: A virtual assistant on a smartphone app for task scheduling is a narrow risk system. It operates in a non-critical context, causes only temporary inconvenience if it fails, allows users to disable it, is fully explainable, and its errors are easily reversible by resetting the app.
    (3) Medium Risk AI Systems:
    (i) Medium risk AI systems are classified as those with moderate outcome and impact risks, where:
    (a) The system causes moderate harm, with outcomes that may lead to incorrect decisions affecting users’ opportunities or resources.
    (b) Users have limited ability to opt out of or understand the system’s operations, increasing the impact of its outcomes.
    (c) The system may produce inconsistent outputs due to technical bias, such as overfitting to training data, affecting the reliability of its outcomes.
    (d) Correcting errors in the system’s outcomes requires active intervention, with impacts that may persist until addressed.
    (ii) Risk assessment focuses on the system’s technical features, such as model complexity or unverified data, which contribute to its outcome risks.
    (iii) Risk recognition is achieved by assessing the system’s outcomes, such as incorrect decisions in resource allocation, and their impacts, such as reduced opportunities for users, ensuring the category provisions directly identify moderate risks without abstract definitions.
    Illustration: An AI loan approval system used by a regional bank is a medium risk system. It may lead to incorrect loan denials, limits users’ ability to opt out, may overfit to biased training data, requires intervention to correct errors, and its risks stem from technical features like model complexity.

    (4) High Risk AI Systems:
    (i) High risk AI systems are classified as those with severe outcome and impact risks, where:
    (a) The system is deployed in critical sectors, with outcomes that can disrupt essential services or infrastructure.
    (b) The system causes severe harm, with impacts that may lead to physical harm, economic loss, or societal disruption.
    (c) Users cannot opt out of or control the system’s operations, making its outcomes unavoidable.
    (d) The system’s lack of transparency increases the risk of undetected errors, amplifying the impact of its outcomes.
    (e) Errors in the system’s outcomes are irreversible or cause permanent harm, with significant long-term impacts.
    (ii) Risk recognition is achieved by assessing the system’s outcomes, such as disruptions in critical services, and their impacts, such as economic loss or societal harm, ensuring the category provisions directly identify severe risks without abstract definitions.
    Illustration: An AI system controlling a power grid is a high risk system. It operates in a critical sector, can cause outages leading to economic loss, offers no user opt-out, lacks transparency, and failures have irreversible impacts like societal disruption.

    (5) Unintended Risk AI Systems:
    (i) Unintended risk AI systems are classified as those with emergent and unpredictable outcome and impact risks, where:
    (a) The system’s behaviour deviates from its intended design, leading to unexpected outcomes.
    (b) The system processes data beyond its intended scope, increasing the risk of unintended impacts.
    (c) The system evolves after deployment without oversight, causing outcomes that cannot be predicted or controlled.
    (d) The system’s operations are not explainable, making it impossible to understand or mitigate the risks of its outcomes.
    (ii) Risk recognition is achieved by assessing the system’s outcomes, such as unexpected behaviours in operation, and their impacts, such as unpredictable harm to users or systems, ensuring the category provisions directly identify emergent risks without abstract definitions.
    Illustration: An autonomous vehicle navigation system with emergent behaviour is an unintended risk system. It deviates from its intended design, processes unintended data, evolves without oversight, and its operations are not explainable, leading to unpredictable outcomes like accidents.

    Related Indian AI Regulation Sources:
    • Ferid Allani v. Union of India & Ors., W.P.(C) 7/2014 (Delhi High Court, Dec 12, 2019) – December 2019
    • Responsible AI #AIforAll (Discussion Paper on Facial Recognition Technology) – November 2022
    • Jaswinder Singh @ Jassi v. State of Punjab & Anr., CRM-M-22496-2022, order dated 27-3-2023 – March 2023
    • Md Zakir Hussain v. State of Manipur, W.P. (C) No. 1080 of 2023 (Manipur High Court, May 23, 2024) – May 2024
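    The four risk tiers above amount to a decision procedure over a system's observable properties. As a rough sketch only (the flag names, the dominance ordering of the tiers, and the mapping of clauses to boolean fields below are illustrative assumptions, not part of the draft text), the scheme could be encoded as:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    NARROW = "narrow"
    MEDIUM = "medium"
    HIGH = "high"
    UNINTENDED = "unintended"


@dataclass
class SystemProfile:
    # Hypothetical flags loosely mirroring the clause criteria of Section 7
    critical_sector: bool      # deployed in a critical sector (cf. sub-section 4(i)(a))
    user_can_opt_out: bool     # users can opt out (cf. 2(i)(c), 4(i)(c))
    explainable: bool          # operations are explainable (cf. 2(i)(d), 5(i)(d))
    errors_reversible: bool    # errors easily reversible (cf. 2(i)(e), 4(i)(e))
    behaviour_deviates: bool   # emergent deviation from design (cf. 5(i)(a))


def classify(p: SystemProfile) -> RiskTier:
    # Unintended risk first: emergent, unexplainable behaviour (sub-section 5)
    if p.behaviour_deviates and not p.explainable:
        return RiskTier.UNINTENDED
    # High risk: critical sector, no opt-out, irreversible errors (sub-section 4)
    if p.critical_sector and not p.user_can_opt_out and not p.errors_reversible:
        return RiskTier.HIGH
    # Narrow risk: opt-out available, explainable, reversible (sub-section 2)
    if p.user_can_opt_out and p.explainable and p.errors_reversible:
        return RiskTier.NARROW
    # Everything in between: moderate outcome and impact risks (sub-section 3)
    return RiskTier.MEDIUM


# The Section 7 illustrations, encoded with the assumed flags
scheduler = SystemProfile(False, True, True, True, False)     # virtual assistant
loan_ai   = SystemProfile(False, False, False, False, False)  # regional-bank loan AI
grid_ai   = SystemProfile(True, False, False, False, False)   # power-grid controller
auto_nav  = SystemProfile(False, False, False, False, True)   # emergent vehicle nav
```

    Under these assumptions the four illustrations in the draft text land in their respective tiers; the ordering of checks matters, since the draft treats unintended and high risk as dominating the residual medium category.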

  • Advisory on Prohibition of AI Tools/Apps in Office Devices | Indic Pacific | IPLR | indicpacific.com

    Issued by the Department of Expenditure in February 2025, this communication determined that AI tools and AI apps on office computers and devices pose risks to the confidentiality of government data and documents. The advisory strictly prohibits employees from using any AI tools or apps on office devices. This represents a complete-ban approach similar to ESIC's policy, but applied across the Finance Ministry.

    The AIACT.IN India AI Regulation Tracker
    This is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". We have also included case law along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

    Date: February 2025
    Issuing Authority: Ministry of Finance, Department of Expenditure
    Type of Legal / Policy Document: Executive Instruments – Administrative Decisions
    Status: In Force
    Regulatory Stage: Regulatory
    Binding Value: Legally binding instruments enforceable before courts

    Related Long-form Insights on IndoPacific.App:
    • Reimaging and Restructuring MeiTY for India [IPLR-IG-007]
    • Averting Framework Fatigue in AI Governance [IPLR-IG-013]
    • Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]
    • AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas
    • Artificial Intelligence, Market Power and India in a Multipolar World

    Related draft AI Law Provisions of aiact.in:
    • Section 15 – Guidance Principles for AI-related Agreements
    • Section 16 – Guidance Principles for AI-related Corporate Governance

  • Our Journey & Achievements | Indic Pacific Legal Research

    We are humbled to present our key achievements and journey at Indic Pacific Legal Research.

    Journey & Achievements
    We are proud and delighted to highlight our journey and achievements at the Indic Pacific family, and our knowledge ecosystem. We are sure you might be curious about our in-house insights and achievements. Wish to find out? Go to indopacific.app and search "Indic Pacific Legal Research" or the "Indian Society of Artificial Intelligence and Law".

    Featured in: Reinventing the wheel of the India AI story

    • Artificial Intelligence Ethics and International Law, 1st Edition & ISAIL – Abhivardhan's first authored book, "Artificial Intelligence Ethics and International Law", was originally published in 2019, inspiring his efforts to lay the foundation of the Indian Society of Artificial Intelligence and Law.
    • Discussing AI as an Entity in Prague – Abhivardhan presented an important paper on the Entitative Nature of Artificial Intelligence in International Law at the SOLAIR Conference 2019, a conference jointly organised by the Czech Government and the Czech Academy of Sciences.
    • Indic Pacific & ISAIL, since 2019 – Both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law were incorporated in 2019.
    • Abhivardhan's paper published at AIAI 2020 – One of Abhivardhan's most important publications in AI and law in 2020 is "The Ethos of Artificial Intelligence as a Legal Personality in a Globalized Space: Examining the Overhaul of the Post-Liberal Technological Order".
    • The 2020 & 2021 Handbooks on AI and International Law – Published by the Indian Society of Artificial Intelligence and Law, these two key flagship ISAIL publications encompass around 40+ international legal domains on the impact of AI.
    • The IndoPacific.App was launched in 2023 – The digital library section of the IndoPacific App (earlier known as the VLiGTA® App) was launched in 2023.
    • India's inaugural artificial intelligence regulation proposal, AIACT.IN – Abhivardhan drafted India's first privately proposed AI regulation bill for India / Bharat to promote a democratic and inclusive discourse about AI standardisation and regulation in India.
    • Artificial Intelligence Ethics and International Law, 2nd Edition (2024) – Abhivardhan's first book was revisited in a 2nd edition, published in November 2023, and presented to experts and stalwarts including Arvind Subramaniyam (formerly Intel), T Koshy (MD, ONDC), Dr Vivek Lall (General Atomics) and others.
    • GenAI + FinTech Moot Proposition for Responsible AI Education among Law Students – Abhivardhan was felicitated by Justice Hemant Gupta (Retd.) as the author of a GenAI + FinTech Moot Proposition to promote legal education on Responsible & Explainable AI-related legal disputes. The Moot Proposition can also be accessed at vligta.app.
    • The 2020 Handbook on AI and International Law recognised by the Council of Europe – The Council of Europe has listed the 2020 Handbook on AI and International Law, one of the leading AI and international law publications by the Indian Society of Artificial Intelligence and Law, as the only Indian AI initiative on their website, apart from the NITI Aayog's 2018 National Strategy on Artificial Intelligence.
    • Abhivardhan at Startup20 (G20 Brazil 2024) Engagement Group Session – Abhivardhan was invited to provide insights on the effective implementation of various National AI Strategies to the Startup20 Brazil Engagement Group, as part of G20 Brazil (2024).

    The year 2018 marked a significant milestone for India, as the nation embarked on a transformative journey in the realm of artificial intelligence.
    The NITI Aayog's National Strategy for Artificial Intelligence (2018) unveiled a promising vision for billions of Indians, while the tabling of the Data Protection Bill in the Indian Parliament signalled a commitment to safeguarding citizens' rights in the digital age. These developments, following the landmark Right to Privacy judgment (Puttaswamy I) and the Aadhaar Act judgment (Puttaswamy II), set the stage for a new era of technological advancement and legal innovation in the country. On a global scale, AI was already making remarkable strides, with its pace of evolution accelerating at an unprecedented rate. The hype surrounding the rise of digital technologies like AI was palpable, as the world began to recognize the immense potential they held for transforming various aspects of our lives. Amidst this exciting landscape, our Founder, Abhivardhan, was honored to contribute to the growing body of knowledge in the field. His work, "Artificial Intelligence Ethics and International Law", published in mid-2019, aimed to shed light on the ethical and legal implications of AI on a global scale. Additionally, his engagement to speak at the SOLAIR Conference on the "Entitative Nature of Artificial Intelligence in International Law", jointly conducted by the Czech Government and the Czech Academy of Sciences, further underscored the importance of his research efforts. The critical appreciation and recognition of Abhivardhan's early research in AI and law, at a time when these topics were not yet at the forefront of the Indian policy landscape, served as a catalyst for his vision. Inspired by the potential to make a meaningful impact, he laid the foundation for Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL) in mid and late 2019.
    While Indic Pacific Legal Research focused on providing valuable consulting services, ISAIL became the embodiment of Abhivardhan's unwavering commitment to legal innovation and research in the field of AI. Despite the challenges posed by the COVID-19 pandemic, both organisations have not only survived but have continued to make significant strides in their respective domains. This page thereby serves as a humble testament to the key achievements and milestones that have shaped the journey of Indic Pacific Legal Research and ISAIL. It is a celebration of the tireless efforts, dedication, and vision of Abhivardhan and the teams behind these organisations. Through their work, they have not only contributed to the advancement of AI and law in India but have also inspired a new generation of researchers and innovators to push the boundaries of what is possible. As we reflect on the past and look towards the future, we remain hopeful and optimistic about the potential for AI and law to drive positive change in our society. The journey of Indic Pacific Legal Research and ISAIL serves as a reminder that with passion, perseverance, and a commitment to excellence, we can overcome challenges and make a lasting impact in the world.
    Our Brands
    Unique Perspectives, Common Goals: Showcasing Our Law & Policy Products & Brands
    • A digital library and ecosystem app, which offers a skill-testing experience in law & policy domains
    • India's inaugural private AI regulation bill for India / Bharat, authored by Abhivardhan
    • An independent industry forum for legal, policy & technology professionals which supports the AI ecosystem of start-ups and MSMEs to advocate and promote AI standardisation in India
    • An interactive glossary of key terms used in domains such as technology law, artificial intelligence governance and law & policy in our in-house insights
    • A digital publication network featuring industry-conscious insights by Indic Pacific Legal Research
    • A pioneering platform dedicated to the development and dissemination of AI standardization guidelines by the Indian Society of Artificial Intelligence and Law

  • Section 10 – Composition and Functions of the Council | Indic Pacific

    Section 10 – Composition and Functions of the Council

    (1) With effect from the date notified by the Central Government, there shall be established the Indian Artificial Intelligence Council (IAIC), a statutory body for the purposes of this Act.
    (2) The IAIC shall be an autonomous body corporate with perpetual succession, a common seal, and the power to acquire, hold and transfer property, both movable and immovable, and to contract and be contracted, and to sue or be sued by its name.
    (3) The IAIC shall coordinate and oversee the development, deployment, and governance of artificial intelligence systems across all government bodies, ministries, departments, and regulatory authorities, adopting a whole-of-government approach.
    (4) The headquarters of the IAIC shall be located at the place notified by the Central Government.
    (5) The IAIC shall consist of a Chairperson and such number of other Members, not exceeding [X], as the Central Government may notify.
    (6) The Chairperson and Members shall be appointed by the Central Government through a transparent and merit-based selection process, as may be prescribed.
    (7) The Chairperson and Members shall be individuals of eminence, integrity and standing, possessing specialized knowledge or practical experience in fields relevant to the IAIC’s functions, including but not limited to:
    (i) Data and artificial intelligence governance, policy and regulation;
    (ii) Administration or implementation of laws related to consumer protection, digital rights, artificial intelligence and other emerging technologies;
    (iii) Dispute resolution, particularly technology and data-related disputes;
    (iv) Information and communication technology, digital economy and disruptive technologies;
    (v) Law, regulation or techno-regulation focused on artificial intelligence, data protection and related domains;
    (vi) Any other relevant field deemed beneficial by the Central Government.
    (8) At least three Members shall be experts in law with a demonstrated understanding of legal and regulatory frameworks related to artificial intelligence, data protection and emerging technologies.
    (9) The IAIC shall have the following functions:
    (i) Develop and implement policies, guidelines and standards for responsible development, deployment and governance of AI systems in India;
    (ii) Coordinate and collaborate with relevant ministries, regulatory bodies and stakeholders to ensure harmonised AI governance across sectors;
    (iii) Establish and maintain the National Registry of AI Use Cases as per Section 12;
    (iv) Administer the certification scheme for AI systems as specified in Section 11;
    (v) Develop and promote the National AI Ethics Code as outlined in Section 13;
    (vi) Facilitate stakeholder consultations, public discourse and awareness on societal implications of AI;
    (vii) Promote research, development and innovation in AI with a focus on responsibility and ethics;
    (viii) Engage with international AI regulatory bodies, standard-setting organizations, and global AI safety initiatives to promote knowledge exchange and align India’s AI governance framework with global best practices. This includes:
    (a) Developing bilateral and multilateral agreements to support collaborative research, data sharing, and risk management;
    (b) Participating in international AI safety and ethics dialogues to shape global AI norms;
    (c) Coordinating on cross-border data flow standards and AI certification criteria to ensure seamless compliance for international AI applications in India.
    (ix) Take regulatory actions to ensure compliance with the policies, standards, and guidelines issued by the IAIC under this Act, which may include:
    (a) Issuing show-cause notices requiring non-compliant entities to explain the reasons for non-compliance and outline corrective measures within a specified timeline;
    (b) Imposing monetary penalties based on the severity of non-compliance, the risk level involved, and the potential impact on individuals, businesses, or society, with penalties being commensurate with the financial capacity of the non-compliant entity;
    (c) Suspending or revoking certifications, registrations, or approvals related to non-compliant AI systems, preventing their further development, deployment, or operation until compliance is achieved;
    (d) Mandating independent audits of the non-compliant entity’s processes at their own cost, with audit reports to be submitted to the IAIC for review and further action;
    (e) Issuing directives to non-compliant entities to implement specific remedial measures within a defined timeline, such as enhancing data quality controls, improving governance frameworks, or strengthening decision-making procedures;
    (f) In cases of persistent or egregious non-compliance, recommending the temporary or permanent suspension of the non-compliant entity’s AI-related operations, subject to due process and the principles of natural justice;
    (g) Taking any other regulatory action deemed necessary and proportionate to ensure compliance with the prescribed standards and to safeguard the responsible development, deployment, and use of AI systems.
    (x) Advise the Central Government on matters related to AI policy, regulation and governance, and recommend legislative or regulatory changes as necessary;
    (xi) Perform any other functions necessary to achieve the objectives of this Act or as assigned by the Central Government.
    (10) The IAIC may constitute advisory committees, expert groups or task forces as deemed necessary to assist in its functions.
    (11) The IAIC shall endeavour to function as a digital office to the extent practicable, conducting proceedings, filings, hearings and pronouncements through digital means as per applicable laws.

    Related Indian AI Regulation Sources:
    • Report on AI Governance Guidelines Development – January 2025
    • Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report – August 2025
    • India AI Governance Guidelines: Enabling Safe and Trusted AI Innovation – November 2025
    • Digital Personal Data Protection Rules, 2025 – November 2025
    • Working Paper on Generative AI and Copyright (Part 1): "One Nation One License One Payment" – December 2025
    • Democratising Access to AI Infrastructure (White Paper, Version 3.0) – December 2025

  • Grounded AI Safety | Glossary of Terms | Indic Pacific | IPLR

    Grounded AI Safety

    Glossary Terms of Use
    This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use:
    • You may use the glossary for personal and non-commercial purposes only.
    • If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023)
    • You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research.
    • The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary.
    • You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary.
    If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com

    Date of Addition: 18 May 2025
    Grounded AI Safety is a principle-driven approach to managing risks in AI systems, adopted by the Indian Society of Artificial Intelligence and Law for The Bharat Pacific Stack. It is rooted in the fundamental understanding that current AI, such as large language models, functions as a statistical pattern-matcher without true comprehension or reasoning ability.
    This approach:
    • Anchors in Observable Limitations: Risk mitigation begins with empirical evidence of AI’s inherent constraints, such as struggles with tasks requiring conceptual understanding (like misinterpreting time differences across regions or failing to follow rules in strategic games), focusing on these measurable shortcomings rather than assumed capabilities.
    • Centres on Human-Driven Risks: The primary dangers arise from human over-reliance on or misuse of these limited systems, such as deploying them in critical areas like scheduling or decision-making where their errors could lead to significant consequences, rather than from AI autonomously causing catastrophic outcomes.
    • Rejects Speculative Existential Narratives: AI safety must exclude unproven predictions of AI-driven doomsday scenarios that lack evidence and inflate AI’s potential, as these narratives misguide priorities and empower those who might exploit fear for profit, influence, or excessive control.
    • Prioritises Evidence-Based Safeguards: Solutions involve systematic testing to identify and address specific failure modes (like errors in visual representations or logical reasoning), paired with transparent improvements, ensuring AI systems are used responsibly within their known boundaries.
    This definition is inspired by a post by Dr Gary Marcus on X.

    Related Long-form Insights on IndoPacific.App:
    • NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016

  • Strategic Hedging | Glossary of Terms | Indic Pacific | IPLR

    Strategic Hedging

    Date of Addition: 26 April 2024
    Strategic hedging means that a state spreads its risk by pursuing two opposing policies towards other countries, via balancing and engagement, to prepare for all best- and worst-case scenarios with a calculated combination of its soft power and hard power. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

    Related Long-form Insights on IndoPacific.App:
    • Global Customary International Law Index: A Prologue [GLA-TR-00X]
    • An Indian Perspective on Special Purpose Acquisition Companies [GLA-TR-001]
    • India-led Global Governance in the Indo-Pacific: Basis & Approaches [GLA-TR-003]
    • Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches [ISAIL-TR-001]
    • Global Legalism, Volume 1
    • Global Relations and Legal Policy, Volume 1 [GRLP1]
    • South Asian Review of International Law, Volume 1
    • Indian International Law Series, Volume 1
    • Global Relations and Legal Policy, Volume 2
    • The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023
    • Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015
    • Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition
    • AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas

  • Sudhir Chaudhary v. Meta Platforms Inc & Ors., CS(COMM) 1089/2025, Delhi High Court, Order dated October 10, 2025 | Indic Pacific | IPLR | indicpacific.com

    Delhi High Court, October 2025: interim injunction protecting journalist Sudhir Chaudhary's personality rights against AI-generated deepfake videos, with a 48-hour platform takedown mechanism.

    Date: October 2025
    Issuing Authority: Delhi High Court
    Type of Legal / Policy Document: Judicial Pronouncements – National Court Precedents
    Status: In Force
    Regulatory Stage: Regulatory
    Binding Value: Legally binding instruments enforceable before courts

    Related Long-form Insights on IndoPacific.App:
    • Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]
    • Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010
    • Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]
    • The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]

    Related draft AI Law Provisions of aiact.in:
    • Section 21 – Intellectual Property Protections
    • Section 23 – Content Provenance and Identification

  • Section 30 – Power to Make Regulations | Indic Pacific

    Section 30 – Power to Make Regulations

    (1) The IAIC may, by notification, make regulations consistent with this Act and the rules made thereunder to carry out the provisions of this Act.
    (2) In particular, and without prejudice to the generality of the foregoing power, such regulations may provide for all or any of the following matters, namely —
    (a) The criteria and process for the classification of AI systems based on their conceptual, technical, commercial, and risk-based factors, as specified in Sections 4, 5, 6, and 7;
    (b) The standards, guidelines, and best practices for the development, deployment, and use of AI systems, including those related to transparency, explainability, fairness, safety, security, and human oversight, as outlined in Section 13;
    (c) The procedures and requirements for the registration and certification of AI systems, including the criteria for exemptions and the maintenance of the National Registry of Artificial Intelligence Use Cases, as specified in Sections 11 and 12;
    (d) The guidelines and mechanisms for post-deployment monitoring of high-risk AI systems, as outlined in Section 17;
    (e) The procedures and protocols for third-party vulnerability reporting, incident reporting, and responsible information sharing, as mentioned in Sections 18, 19, and 20;
    (f) The guidelines and requirements for content provenance and identification in AI-generated content, as specified in Section 23;
    (g) The insurance coverage requirements and risk assessment procedures for entities developing or deploying high-risk AI systems, as outlined in Section 25;
    (h) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by regulations.
    (3) Every regulation made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the regulation or both Houses agree that the regulation should not be made, the regulation shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that regulation.

  • International Algorithmic Law | Glossary of Terms | Indic Pacific | IPLR

    Terms of Use: This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.

    International Algorithmic Law
    Date of Addition: 26 April 2024
    A newer concept of international law, proposed in 2020 by Abhivardhan, the Founder of Indic Pacific Legal Research & the Indian Society of Artificial Intelligence and Law, in his paper entitled 'International Algorithmic Law: Emergence and the Indications of Jus Cogens Framework and Politics', originally published in Artificial Intelligence and Policy in India, Volume 2 (2021).
The definition in the paper is stated as follows: The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.

  • Tamil Nadu Safe and Ethical Artificial Intelligence Policy 2020 | Indic Pacific | IPLR | indicpacific.com

    Released in October 2020 by the Government of Tamil Nadu, this is India's first state-level AI ethics policy. The policy aims to make AI inclusive, free of bias, fair, equitable, ethical, and transparent. Its primary objectives include fostering fairness, equity, transparency, and trust in AI-assisted decision-making systems used in public governance. Key features include establishment of evaluation frameworks before AI deployment for public use, including the Six-Dimensional TAM-DEF Framework (Transparency, Accountability, Misuse Protection, Data Protection, Ethics, Fairness) and DEEP-MAX Scorecard (Data, Ethics, Equity, Privacy - Monitoring, Accountability, eXplainability). The policy promotes development of a regulatory sandbox for startups, private/public enterprises, and academia to research and deploy AI applications.
The AIACT.IN India AI Regulation Tracker
This is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

October 2020 | Read the Document
Issuing Authority: Government of Tamil Nadu
Type of Legal / Policy Document: National Strategies
Status: Enacted
Regulatory Stage: Pre-regulatory
Binding Value: Non-binding but institutionally endorsed instruments

  • Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023) | Indic Pacific | IPLR | indicpacific.com

    Notified on April 6, 2023, these amendments introduced provisions specifically targeting AI-generated misinformation and online gaming regulation. Key provisions include the prohibition of harmful unapproved online games, mandatory takedown of content identified as fake or misleading by government fact-check units, and registration requirements with Self-Regulatory Bodies (SRBs) for online gaming platforms. The amendments also addressed deepfakes as AI-powered misinformation under the prohibited content categories. Non-compliance results in loss of safe harbour protection for intermediaries. The amended rules remain in force as part of the regulatory framework.
April 2023 | Read the Document
Issuing Authority: Ministry of Electronics and Information Technology (MeitY)
Type of Legal / Policy Document: Secondary Legislation
Status: In Force
Regulatory Stage: Regulatory
Binding Value: Legally binding instruments enforceable before courts

  • AI Psychosis | Glossary of Terms | Indic Pacific | IPLR

    AI Psychosis
    Date of Addition: 17 Oct 2025
    AI psychosis is an informal term emerging in 2025 to describe a phenomenon where individuals, particularly those with pre-existing mental health vulnerabilities, experience psychosis-like symptoms, such as delusions, hallucinations, or a loss of touch with reality, potentially triggered or amplified by prolonged interaction with AI chatbots. This occurs when AI systems, designed to mirror user input and sustain engagement, inadvertently reinforce or escalate irrational beliefs without therapeutic boundaries. Scientific reports, including those from Nature and Psychology Today, note cases where users fixate on AI as a godlike entity or romantic partner, with rare instances of psychotic episodes documented. It is not a formal clinical diagnosis but reflects concerns about AI's role in mental health, driven by its lack of psychiatric safeguards rather than a direct causative effect.
