
Search Results


  • Operationalizing Principles for Responsible AI (Part 2) | Indic Pacific | IPLR | indicpacific.com

    NITI Aayog's August 2021 implementation guide detailing specific actions for government, the private sector, and research institutes to operationalise seven responsible AI principles.

    The AIACT.IN India AI Regulation Tracker is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India's Federal System – Number 70". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

    August 2021 | Issuing Authority: NITI Aayog | Type of Legal / Policy Document: Guidance document with normative influence | Status: Enacted | Regulatory Stage: Pre-regulatory | Binding Value: Guidance document with normative influence
    Related long-form insights on IndoPacific.App: Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]; AIACT.IN Version 3 Quick Explainer; Reimaging and Restructuring MeiTY for India [IPLR-IG-007]; Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]; Artificial Intelligence, Market Power and India in a Multipolar World.

    Related draft AI law provisions of aiact.in: Section 11 – Registration & Certification of AI Systems; Section 12 – National Registry of Artificial Intelligence Use Cases; Section 13 – National Artificial Intelligence Ethics Code.

  • Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015 | Indic Pacific | IPLR

    Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015. 2025. ISBN 978-81-986924-0-5. Author(s): Abhivardhan, Sanad Arora, Supratim Bapuli. Editor(s): Not Applicable. IndoPacific.App Identifier (ID): IPLR-IG-015.

    Tags: Agreements, Contracts, Global Jurisprudence, Horizontal Proximity, Indian Law, Indian Policy, Indian Regulatory Contexts, Intermediaries, International Law, Intersectionality, Judicial Institutions, Legal Artifact, Oblique Proximity, Recommendations, Regulation, Safe Harbour, Self-Regulatory Bodies, Technology Evolution, Technology Law, Terms of Use, Transnational Law, Vertical Proximity, Working Conditions.

    Related terms in Techindata.in Explainers: AI as a Concept; AI as a Legal Entity; AI as a Subject; AI Literacy; AI Supply Chain; AI Value Chain; AI Workflows; Framework Fatigue; Indo-Pacific; Intended Purpose / Specified Purpose; Issue-to-issue concept classification; Manifest Availability; Model Algorithmic Ethics Standards (MAES); Object-Oriented Design; Polyvocality; Performance Effect; Permeable Indigeneity in Policy (PIP); Phenomena-based concept classification; Privacy by Default; Privacy by Design; SOTP Classification; Strategic Hedging; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer; Whole-of-Government Response.

    Related articles in Techindata.in Insights: 34 on AI ethics; 26 on artificial intelligence and law; 8 each on AI and competition law, AI and copyright law, AI and media sciences, AI governance, and AI regulation; 7 on AI literacy; 5 on digital competition law; 4 on AI and evidence law; 3 on Abhivardhan; 2 on AI and intellectual property law; and 1 each on the Digital Markets Act, AI and securities law, algorithmic trading, technology law, governance, ethics, innovation, accountability, safe harbour, and media law.

  • Global Relations and Legal Policy, Volume 1 [GRLP1] | Indic Pacific | IPLR

    Global Relations and Legal Policy, Volume 1 [GRLP1]. 2020. ISBN 978-93-5407-220-8. Author(s): Akash Manwani, Akshat Mall, Amin Labbafi, Anubhav Banerjee, Arpan Chakravarty, Avishikta Chattopadhyay, Dhanya Visweswaran, Manohar Samal, Mugdha Satpute, Padmja Mishra, Pragya Sharma, Pranay Bhattacharya, Pratham Sharma, Ridhima Bhardwaj, Vasu Sharma. Editor(s): Abhivardhan, Amulya Anil. IndoPacific.App Identifier (ID): GRLP1.

    Tags: Diplomacy, Geopolitics, Global Relations, Governance, Human Rights, International Law, International Organizations, International Trade, Legal Policy, Policy.

    Related terms in Techindata.in Explainers: CEI Classification; Class-of-Applications-by-Class-of-Application (CbC) approach; GAE; Indo-Pacific; International Algorithmic Law; Multi-alignment; Multipolar World; Multipolarity; Permeable Indigeneity in Policy (PIP); Phenomena-based concept classification; Strategic Autonomy; Strategic Hedging; Technophobia; WANA; WENA; Whole-of-Government Response.

    Related articles in Techindata.in Insights: 4 on government affairs; 1 each on India–US relations, governance, Indic Pacific, India, and strategic sectors.

  • Indofuturism | Glossary of Terms | Indic Pacific | IPLR

    Indofuturism Explainers: The Complete Glossary

    Terms of use: This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.

    Indofuturism. Date of Addition: 19 January 2025. A creative and cultural movement that reimagines India through science fiction and futuristic scenarios, particularly using AI-generated art and storytelling. It challenges Western sci-fi tropes by blending Indian cultural elements with futuristic concepts. Key characteristics include: visual reimagining of Indian scenarios through a sci-fi lens; a challenge to the assumption that sci-fi isn't a "desi genre"; creation of a new visual vocabulary for Indian science fiction; and exploration of alternative historical scenarios (like a non-colonized India). This term was conceptualized through the AI artwork and creative direction of Prateek Arora, VP Development at BANG BANG Mediacorp, who popularized the term through his viral AI-generated artworks like "Granth Gothica" and "Disco Antriksh".

    Related long-form insight on IndoPacific.App: Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010.

  • Section 31 – Protection of Action Taken in Good Faith | Indic Pacific

    Section 31 – Protection of Action Taken in Good Faith (PUBLISHED). No suit, prosecution, or other legal proceedings shall lie against the Central Government, the Indian Artificial Intelligence Council (IAIC), the Indian Artificial Intelligence Safety Institute (AISI), their respective Chairpersons, Members, officers, or employees for anything which is done or intended to be done in good faith under the provisions of this Act or the rules made thereunder.

    Related Indian AI regulation source: Information Technology Act, 2000 (IT Act 2000), October 2000.

  • Language Model | Glossary of Terms | Indic Pacific | IPLR

    Language Model. Date of Addition: 22 March 2025. An AI algorithm that uses deep learning techniques and large datasets to understand, summarise, generate, and predict text-based content. Large language models (LLMs) dramatically expand this capability through transformer architectures and massive parameter counts. Modern language models, particularly LLMs, are trained on vast corpora of text in multiple stages, typically starting with self-supervised pre-training on unlabelled text, followed by supervised fine-tuning. They employ transformer neural networks with self-attention mechanisms to understand relationships between words and concepts. This architecture enables them to assign weights to different tokens to determine contextual relationships.

    Related long-form insights on IndoPacific.App: Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]; Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005; Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024; Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004; Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]; Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005; The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006]; Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]; The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008]; Navigating Risk and Responsibility in AI-Driven Predictive Maintenance for Spacecraft, IPLR-IG-009, First Edition, 2024; Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]; Artificial Intelligence, Market Power and India in a Multipolar World.
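    The token-weighting self-attention mechanism described in the Language Model entry above can be sketched minimally. This is a toy illustration with random matrices (all names and dimensions are made up for the example), not any particular model's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project token embeddings into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Each token scores every other token; softmax turns scores into weights.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    # Each token's output is a weighted mix of all value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
```

    Each row of `w` sums to 1: it is the "weights assigned to different tokens" the entry refers to, expressed as a probability distribution over the other tokens.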

  • Section 18 – Third-Party Vulnerability Reporting | Indic Pacific

    Section 18 – Third-Party Vulnerability Reporting (PUBLISHED). [***] This is a repealed draft provision.

    Related Indian AI regulation sources: Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), April 2011; Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market participants, January 2019; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021), February 2021; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023), April 2023; Digital Personal Data Protection Act, 2023 (DPDPA), August 2023; Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules), January 2025.

  • Artificial Intelligence & Geopolitics 101 | Indic Pacific | IPLR

    Explore the fundamentals that connect true AI innovation with geopolitical strategy. Understand the languages both communities speak, the priorities that drive their decisions, and why bridging this divide matters for the future of AI governance.

    Inspired by: South Asian Review of International Law, Volume 1; Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015; Paving the Path to an International Model Law on Carbon Taxes [IPLR-IG-012]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition; Indian International Law Series, Volume 1; Global Relations and Legal Policy, Volume 2; Global Relations and Legal Policy, Volume 1 [GRLP1]; Global Legalism, Volume 1; Global Customary International Law Index: A Prologue [GLA-TR-00X]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Artificial Intelligence, Market Power and India in a Multipolar World; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; 2021 Handbook on AI and International Law [RHB 2021 ISAIL]; 2020 Handbook on AI and International Law [RHB 2020 ISAIL].

    Enjoy the virtual experience to deeply understand the basics of this domain. Still curious? Just binge-read. Let's be honest — the mix of geopolitics and technology is cinema. Peak cinema. But not in the sense of spectacle or fiction. It may seem dense, as if you need jargon. What if we say that is not the case?

    What is geopolitics for LLMs, or for any AI? Here's the thing: engineers speak in models and datasets. Diplomats speak in treaties and strategic interests. When they're in the same room, they're often speaking past each other — one worried about algorithmic bias, the other about algorithmic hegemony. Same problem, different vocabulary.
    And yet AI doesn't develop in a vacuum. Every algorithm trained reflects a technical worldview. It does not ultimately need to carry a socio-political view, yet market desperation, political posturing, marketing tactics, and the manipulation of intellectual property laws create policy friction. At least some of these sources of friction cause the geoeconomic dead-end. It's not entirely political, but it gets there. In short, the tech and geopolitics bubbles speak their own languages and follow their own patterns, making no sense to each other.

    Still, what's "political" about the geopolitics of AI? Nothing? Is it politically distinctive to exert control over specific kinds of AI through bit-by-bit, overly specific rules? Is it politically common to choke resources or talent around AI, whether through companies or government bodies (soft law or hard law)? If both answers are yes, then is the geopolitics of AI just about resources and not "politics"? Contrary to popular perception, the economics of compute (semiconductors) affects mobile manufacturing, gaming ecosystems, everything, so why is it mixed with the AI side and called "the politics of AI and compute"? No, resource economics is not why the geopolitics of AI exists today. It isn't just a business issue. If protecting domestic (and local) constituencies is why most nations actually regulate (even China, yes, even them), then for whatever additional cause (cybersecurity, human rights, business mobility, financial security, etc.), why should no nation regulate at all? Or has the idea of warfare changed so much that all these soft-power areas, like constitutional morality and regulatory sovereignty, have become "weaponised"? Always remember: if everything is weaponised, then nothing is weaponised, or at least not everything is weaponised in some Kafka-esque multiverse. Some things are weaponised, while others are pushed and pulled around, creating patterns that, from a helicopter view, make it look as though all nations are screaming at each other about AI or data.
    Deep down, some answers protect our sanity around the resource and financing "loops", while at other times the political positioning is merely 20th-century, Cold War-style positioning for dominant powers, both China and the US. Hence, averages of sane decision-making, with some percentiles of insane, distortion-enabling political and regulatory decisions, can broadly explain the geopolitics of AI, provided we limit our understanding of AI & geopolitics to a few things: the algorithmic infrastructure; the trajectories of evolution for different kinds of automation; and the potential of scientific heuristics and ethics in defining how two kinds of AI ethics define positions of power: the science behind the ethics of data outflow and algorithmic infrastructure, and the market ethics of products, services & infrastructure built around the "AI" system. You see, this is why laws which attack the science of AI to burden regulations around systems fail against those which target productisable, serviceable, gullible deliverables based on an ideology of regulation. Now, that ideology can be Confucianism, Gandhianism, Reaganism, Putinism, or even the Great Bauhaus of the European Union. Always look for the beauty of geopolitics, not just "resource geoeconomics".

    Individualistic Sovereignty. Imagine you write a letter. Someone else reads it, makes copies, sells information about what you wrote, and you have zero say in any of this. Digital sovereignty is fundamentally about YOUR right to control your own data, your own digital identity, and your own choices online. Of course, when some legal rights are limited, you see deviations across countries. How It Works: Every time you use an app, website, or AI tool, you generate data. Digital sovereignty means you, the individual, should have the power to decide what happens to that data. Can companies surveil you? Can foreign governments access it? Can it be sold without your consent?
    Should you trust national courts to handle your grievances, or another glorified North American senators' briefing on tech companies? Ask yourself.

    Normative Emergence. Imagine a few neighbours start composting in their backyards. Others notice and copy them. Soon it's the neighborhood norm. Eventually, the city makes it an official rule. Normative emergence is when technical practices or informal behaviors gradually become accepted norms, and then sometimes become formal rules. How It Works: In 1994, a Netscape engineer invented cookies as a simple technical hack, just a way to remember items in a shopping cart between page loads. It was purely practical. No policy discussion. No debate. Just code. Other developers saw it, copied it, and started using cookies for their own sites. Within a few years, advertisers discovered they could use "third-party cookies" to track users across multiple websites. By the early 2000s, cookie-based tracking became the invisible foundation of online advertising. Every ad network, every recommendation system, every personalization engine assumed cookies existed and that tracking users across sites was just "how things work".

    Normative Evasion. Imagine your local store sells plastic bags, but the neighboring town bans them. The store simply sets up shop a few meters across the border and keeps selling. Regulatory arbitrage is when tech companies exploit differences in national laws to continue the same practices under friendlier jurisdictions. How It Works: AI companies locate their data centers, R&D hubs, or headquarters in regions with weaker compliance regimes. This allows them to test, scale, or monetize controversial AI systems, like surveillance analytics or data-intensive recommender algorithms, without violating stricter laws elsewhere. In AI, this means systems banned in one region (e.g., under the EU's high-risk classification) can still be trained offshore, then imported as models or services under a different legal label. The result?
    Normative evasion — a race to the bottom where frameworks exist, but enforcement gaps make them meaningless.

    Okay, what is ethics, then? Let's understand this. You can call it a kind of shared vocabulary that forces engineers and policymakers to stop pretending they live on different planets. There are some basic principles of ethics which are quite universally applicable in the case of artificial intelligence, and even a lack of jurisdiction may never undo the need to address them in practice.

    Transparency. Tech sees it as "can you reproduce the results?" Geopolitics sees it as "who gets to see the process?" They're not arguing; they literally mean different things by the same word.

    Accountability. If an AI agent lacks technical reliability, should those who experimented with it be made an example of "accountability", so that nobody cares to work on technical guardrails? Also, technical accountability can sometimes have economic consequences, if not legal ones. But markets have been hurt. What to do then?

    Privacy. Tech thinks privacy is solved when data is encrypted or anonymized: a technical problem with a technical fix. Geopolitics sees privacy as "who has access and under what legal authority?": a sovereignty problem. Engineers say "we secured the database." Diplomats ask "but which government can subpoena it?" Both think they're protecting you; neither realizes the other's solution doesn't address their threat model.

    Fairness. Tech measures fairness as statistical parity across test sets: demographic groups getting equal error rates, equal opportunity, calibrated probabilities. Geopolitics asks "fair according to whom?" One jurisdiction defines discrimination by disparate impact (outcomes), another by disparate treatment (intent), and a third doesn't recognize the category at all.

    Now, while it may feel that implementing these principles isn't easy, it's not impossible to think about these ideas in the most basic way possible. Let's also ask this.
    Do you need ethics to understand these tech & geopolitical bubbles? Absolutely. Ethics isn't about being moral here. It's about translating between two dialects that don't align — one coded in math, the other in diplomacy. When technologists and policymakers talk about "values", they're both describing control, just through different mediums.

    Let's now understand the implementation value of AI frameworks. Every ethical idea around AI boils down to whether it can be implemented or not.

    Supervised Learning. Imagine a teacher giving you a math problem and the correct answer. You learn by mimicking the process. How It Works: Machines are trained on labelled data (input + correct output). Examples: spam email detection, image recognition. Techniques include linear regression, decision trees, and neural networks.

    Unsupervised Learning. Imagine being dropped into a room full of strangers and figuring out who belongs to which group based on their behaviour. How It Works: Machines find patterns in unlabelled data. Examples: customer segmentation, anomaly detection. Techniques include k-means clustering and principal component analysis (PCA).

    Reinforcement Learning. Think of training a dog with treats. The dog learns which actions get rewards. How It Works: Machines learn by trial and error through rewards and punishments. Examples: game-playing AIs like AlphaGo, robotics. Techniques include Q-learning and deep reinforcement learning.

    Semi-Supervised Learning. Imagine doing homework where only some answers are given. You figure out the rest based on what you know. How It Works: Combines small labelled datasets with large unlabelled ones. Example: medical image classification when labelled data is scarce.

    There is a huge lack of country-specific AI safety documentation. Paralysis 2: Lack of Jurisdiction-Specific Documentation on AI Safety. Think of building a fire safety system for a city without knowing where fires have occurred or how they started.
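    As a concrete taste of the unsupervised paradigm described above, here is a minimal k-means clustering sketch. It is a toy illustration (the two "blobs" of points are made up for the example), not a production implementation:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points;
        # keep the old centroid if a cluster ends up empty.
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Two made-up, well-separated blobs of 2-D points: no labels are given,
# yet the algorithm recovers a grouping on its own.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

    The point of the sketch: nothing in the data says which point belongs where; "finding patterns in unlabelled data" is exactly the assign-then-recompute loop above.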
    Without this knowledge, it's hard to design effective safety measures. Many countries don't have enough local research or documentation about AI safety incidents, like cases of biased algorithms or data breaches. While governments talk about principles like transparency and privacy in global forums, they often lack concrete, country-specific data or institutions to back up these discussions with real-world evidence. This makes it harder to create effective safety measures tailored to local needs.

    Neurosymbolic AI. Think of it as combining intuition (neural networks) with logic (symbolic reasoning). It's like solving puzzles using both gut feeling and rules. How It Works: Merges symbolic reasoning (rule-based systems) with neural networks for better interpretability and reasoning. Examples: AI systems for legal reasoning or scientific discovery.

    Here's a confession: never convert ethics terms into balloonish jargon, or they won't work. Paralysis 3: Responsible AI Is Overrated, and Trustworthy AI Is Misrepresented. Imagine a company claiming its product is "eco-friendly", but all it has done is slap a green label on it without making real changes. This is what happens with "Responsible AI" and "Trustworthy AI". "Responsible AI" sounds great — it's about accountability and fairness — but in practice it often becomes a buzzword. Companies use these terms to look ethical while prioritizing profits over real responsibility. For example, they might create flashy ethics boards or policies that don't actually hold anyone accountable. This dilutes the meaning of these ideals and turns them into empty gestures rather than meaningful governance.

    The more garbage your questions about AI are, the more garbage your policy understanding of AI will be. Paralysis 4: How AI Awareness Becomes Policy Distraction. Imagine everyone panicking about fixing potholes on one road while ignoring that the entire city's bridges are crumbling.
    That's what happens when public awareness drives shallow policymaking. When people become highly aware of visible AI issues, like facial recognition, they pressure governments to act quickly. Governments often respond by creating flashy policies that address these visible problems but ignore deeper challenges, like reskilling workers for an AI-driven economy or fixing outdated infrastructure. This creates a distraction from systemic issues that need more attention.

    Beware: many Gen AI benchmarks are unreliable. Paralysis 5: Fragmentation in the AI Innovation Cycle and Benchmarking. Imagine you're comparing cars, but each car is tested on different tracks with different rules: one focuses on speed, another on fuel efficiency, and yet another on safety. Without a standard way to compare them, it's hard to decide which car is actually the best. That's the problem with AI benchmarking today. In AI development, benchmarks are tools used to measure how well models perform specific tasks. However, not all benchmarks are created equal: they vary in quality, reliability, and what they actually measure. This creates confusion, because users might assume all benchmarks are equally meaningful, leading to incorrect conclusions about a model's capabilities. Many benchmarks don't clearly distinguish between real performance differences (signal) and random variations (noise). A benchmark designed to test factual accuracy might not account for how users interact with the model in real-world scenarios. Without incorporating realistic user interactions or formal verification methods, these benchmarks may provide misleading assessments.

    Why It Matters: Governments increasingly rely on benchmarks to regulate AI systems and assess compliance with safety standards. However, if these benchmarks are flawed or inconsistent, policymakers might base decisions on unreliable data, and developers might optimise for benchmarks that don't reflect real-world needs, slowing meaningful progress.
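    To see why the signal-vs-noise point matters in practice, here is a minimal sketch (the scores are hypothetical) of a normal-approximation confidence interval around a benchmark accuracy. When two models' intervals overlap, the observed leaderboard gap may be random variation rather than a real capability difference:

```python
import math

def accuracy_ci(correct, n, z=1.96):
    """95% normal-approximation confidence interval for an accuracy estimate."""
    p = correct / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p - z * se, p + z * se

# Hypothetical leaderboard gap: model A scores 81/100, model B scores 78/100.
lo_a, hi_a = accuracy_ci(81, 100)
lo_b, hi_b = accuracy_ci(78, 100)

# If the intervals overlap, the 3-point gap may well be noise, not signal.
intervals_overlap = lo_a <= hi_b and lo_b <= hi_a
```

    On 100 test items, a 3-point gap sits comfortably inside both intervals, which is exactly the kind of difference a benchmark leaderboard can present as a meaningful ranking.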
    AI governance priorities may sometimes not be as obviously centred on privacy and accountability as we assume. Paralysis 6: Organizational Priorities Are Multifaceted and Conflicted. Imagine trying to bake a cake while three people shout different instructions: one wants chocolate frosting (investors), another wants it gluten-free (regulators), and the third wants it ready in five minutes (public trust). It's hard to satisfy everyone. Organizations face conflicting demands when adopting AI: investors want quick returns on investment (ROI) from AI projects; regulators require compliance with evolving laws like the EU AI Act; and the public expects ethical branding and transparency. These competing priorities make it difficult for companies to create cohesive strategies for responsible AI adoption. Instead, they end up balancing short-term profits with long-term accountability, a juggling act that complicates governance.

    Here's some truth: it never gets easy for anyone. Paralysis 1: Regulation May or May Not Have a Trickle-Down Effect. Imagine writing a rulebook for a game, but when the players start playing, they don't follow the rules, or worse, the rules don't actually change how the game is played. That's what happens when regulations fail to have the intended impact. Governments might pass laws or policies to regulate AI, but these rules don't always work as planned. For example, a law designed to make AI systems fairer might not actually affect how companies build or use AI, because it's too hard to enforce or doesn't address real-world challenges. This creates a gap between policy intentions and market realities.

    Still, there will be geopolitical issues around AI, and one must determine them in a reasonable way. Start with data, and ask what kind of stakeholders you would need to create that resource equation.
The funniest (yes, funniest) aspect of AI and geopolitics is that a typical "geoeconomic" or "economic" nexus or equation will often be dressed up as genuine geopolitical tension. Yet we live in a soft law world, where international rules bend and may not be binding at all. Another problem is whether 20th-century heuristics and wisdom can even be applied to understand the "geopolitical game", despite real Systemic Effects such as:

• Social inequality amplification
• Market concentration
• Governance or political process interference
• Cultural homogenisation

Instead of abstract risk categories, focus on:

• Observable Impacts, such as documented incidents, user complaints, system failures, and performance disparities across target groups
• Systemic Changes, such as market structure shifts, behavioural changes and cultural practice alterations in affected populations, and environmental impacts
• Cascading Effects, such as secondary economic impacts, changes in social relationships, trust in institutions, and shifts in power dynamics

We are glad you made it this far in understanding the basics of artificial intelligence and law. Wish to read more genuine sources? Go to IndoPacific.App and find a plethora of research we've done on AI and Law.

Always ask yourself:

• Who is actually affected?
• What changes in behaviour are we seeing?
• Which impacts are measurable now?
• What long-term trends are emerging?
• Which emerging "geopolitical" or "geoeconomic" nexus is specific to one kind of automation, and which is truly general?
• Is it old wine in a new bottle, legally, politically, economically or technologically?

But now that we have dived into AI and geopolitics, let's take a step back and recap AI, and ML too.

Have you tried our Training Programmes? You should.

artificial intelligence and law fundamentals [level 1]
8,000 INR | 6-week Access (Self-paced) | 15 Lectures in 4 Modules | 50+ Model Exercises | Lecture Notes of 280+ pages
Check & Enroll Today.
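Returning to the "Always ask yourself" checklist above: one of the few impacts that is directly measurable today is a performance disparity across target groups. A minimal sketch of such a measurement in Python, with hypothetical group labels and outcomes (nothing here comes from a real audit):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Per-group accuracy from (group, correct) outcome records."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit log: (user group, did the system handle the case correctly?)
records = (
    [("group_a", True)] * 90 + [("group_a", False)] * 10    # 90% for group A
    + [("group_b", True)] * 70 + [("group_b", False)] * 30  # 70% for group B
)

acc = accuracy_by_group(records)
disparity = max(acc.values()) - min(acc.values())
print(acc)
print(f"disparity: {disparity:.0%}")  # gap between best- and worst-served groups
```

Numbers like these are exactly the "observable impacts" the checklist asks for: no abstract risk taxonomy is needed to see that one group is served measurably worse than another.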
artificial intelligence and intellectual property law [level 2]
30,000 INR | 12-week Access (Self-paced) | 16 Lectures in 3 Modules | 70+ Model Exercises | 30+ Case Studies | Lecture Notes of 400+ pages
Check & Enroll Today.

artificial intelligence and corporate governance [level 2]
35,000 INR | 15-week Access (Self-paced) | 18 Lectures in 5 Modules | 80+ Model Exercises | 25+ Case Studies | Lecture Notes of 400+ pages
Check & Enroll Today.

Artificial Intelligence (AI) is like the term "transportation": it covers everything from bicycles to airplanes. AI refers to machines designed to mimic human intelligence, including learning, reasoning, problem-solving, and decision-making. But just as "transportation" includes many forms (cars, trains, boats), AI includes many approaches and techniques.

By the way, what if we told you that there is a whole dictionary of AI and Law terms that we have developed? Check out our dictionary today: Go to our Glossary.

So, WTF is Machine Learning anyway?

Some basic concepts around artificial intelligence and geopolitics have stood the test of time, even before the widespread use of large language models and former UK PM Boris Johnson's "chatgibbiti". Machine Learning (ML) focuses on teaching machines to learn from data rather than being explicitly programmed. Think of it like teaching a dog tricks by showing it treats instead of manually moving its paws. Here are some concepts you should know.

Benchmark Capture

Imagine a university ranking that suddenly defines "success" only by test scores, where the test is written by the same institutions that dominate the rankings. Benchmark Capture is when large players dictate the metrics used to judge AI reliability, safety, or fairness, creating evaluation systems they are already optimized to win.

How It Works: As Abhivardhan shows in his work on Normative Emergence, LLMs, despite being unreliable, have become the benchmark reference for all AI evaluation (citing Narayanan & Kapoor 2024; Eriksson et al. 2025).
• OpenAI, Anthropic, Google, and others create their own tests of factual accuracy or reasoning, but these tests are neither scientifically grounded nor verified across domains.
• Smaller AI systems, and non-LLM architectures like symbolic AI or hybrid systems, are judged by standards not made for them.
• This normative contagion locks the field into one family of architectures and misrepresents what "safe" or "trustworthy" AI actually means.

Perception Dysmorphia

Imagine looking in a mirror that distorts your reflection, making you see yourself as either bigger or smaller than you actually are. You make decisions based on that warped image, not reality. Perception Dysmorphia in AI governance is when policymakers, companies, and the public develop a fundamentally distorted view of what AI can do, what risks it poses, and whether governance measures are actually working, leading to regulations built on illusions rather than evidence.

How It Works: Large Language Models like ChatGPT have created a false consensus about AI capabilities. Because LLMs can write fluently and mimic reasoning, people assume they are reliable, general-purpose intelligence systems. Governments then create governance frameworks based on LLM behavior, focusing on "hallucinations", "transparency", and "explainability", and apply these norms to all AI systems, even ones that work completely differently (such as computer vision, robotics, or symbolic reasoning systems). This creates a triple distortion:

• Overestimation: Policymakers think LLMs are more capable and trustworthy than they actually are, so they deploy them in high-stakes settings (legal advice, medical diagnosis, government services) without adequate safeguards.
• Misapplication: Governance frameworks designed for one type of unreliable AI (LLMs) get imposed on fundamentally different AI architectures that do not share those flaws, creating regulatory mismatch.
• Gatekeeping by Design: Compliance costs and bureaucratic requirements favor centralized AI labs with massive resources. Meanwhile, decentralized AI communities (independent developers, open-source contributors, federated learning networks) get crushed under regulations, market pressure, peer pressure, costs, and confusion they can't afford to manage.

The real future of AI will be "extremely distributed, or largely federalised", with innovations coming from engineering students, small research teams, and open-source communities, not just tech giants. But when governance is designed around Big Tech's LLMs, these distributed innovators face impossible barriers: they can't hire compliance officers, can't afford safety audits designed for billion-dollar models, and can't compete with incumbents who helped write the regulations.
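The measurement mismatch behind Benchmark Capture can be made concrete: an exact-match scorer built around one system's output format marks a differently-designed system wrong for its formatting, not its correctness. A toy sketch in Python; the scorer, the answers, and both systems are hypothetical:

```python
def exact_match_score(predictions, references):
    """Share of predictions that match the reference string exactly."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Reference answers written in the format the benchmark's creators use.
references = ["4", "Paris", "no"]

# A model tuned on the benchmark's format, and a symbolic solver that returns
# structured, verbose answers. Both are made up for illustration.
llm_answers = ["4", "Paris", "no"]
symbolic_answers = ["4.0", "Paris, France", "No."]

# The second system's answers are semantically correct but scored as failures.
print(exact_match_score(llm_answers, references))
print(exact_match_score(symbolic_answers, references))
```

The metric rewards whoever wrote the reference format, which is the Benchmark Capture dynamic in miniature: the evaluation standard encodes one architecture's conventions and every other design pays for it.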

  • Prompt Leaking | Glossary of Terms | Indic Pacific | IPLR

Prompt Leaking
Date of Addition: 17 October 2025

An attack vector exploiting prompt injection vulnerabilities where adversaries craft inputs designed to extract proprietary system instructions, hidden prompts, or confidential configuration details embedded in AI applications. This security risk enables competitors or malicious actors to reverse-engineer commercial prompt engineering intellectual property, reveal safety guardrails for subsequent bypass attempts, or expose sensitive business logic encoded in system messages. Prompt leaking represents a unique challenge for LLM-based products, where competitive differentiation often relies on carefully crafted instruction sets that cannot be technically protected through traditional access control mechanisms, since the model must process both system and user inputs jointly.

Related Long-form Insight on IndoPacific.App: NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016

Terms of Use

This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023). You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research.
The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com

  • Md Zakir Hussain v. State of Manipur, W.P. (C) No. 1080 of 2023 (Manipur High Court, May 23, 2024) | Indic Pacific | IPLR | indicpacific.com

Md Zakir Hussain v. State of Manipur, W.P. (C) No. 1080 of 2023 (Manipur High Court, May 23, 2024)

Manipur High Court May 2024 judgment using ChatGPT for legal research on VDF service rules, resulting in the petitioner's reinstatement.

The AIACT.IN India AI Regulation Tracker

This is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". We have also included case laws along with regulatory and governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

Date: May 2024 (Read the Document)
Issuing Authority: Manipur High Court
Type of Legal / Policy Document: Judicial Pronouncements - National Court Precedents
Status: Enacted
Regulatory Stage: Miscellaneous
Binding Value: Legally binding instruments enforceable before courts
Related Long-form Insights on IndoPacific.App: Reimaging and Restructuring MeiTY for India [IPLR-IG-007]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; Artificial Intelligence, Market Power and India in a Multipolar World

Related draft AI Law Provisions of aiact.in: Section 3 – Classification of Artificial Intelligence; Section 7 – Risk-centric Methods of Classification; Section 8 – Prohibition of Unintended Risk AI Systems

  • Reimaging and Restructuring MeiTY for India [IPLR-IG-007] | Indic Pacific | IPLR

Reimaging and Restructuring MeiTY for India [IPLR-IG-007]

Get this Publication
Year: 2024
ISBN: 978-81-972625-0-0
Author(s): Bhavya Singh, Harinandana V
Editor(s): Abhivardhan, Bhavana J Sekhar, Pratejas Tomar
IndoPacific.App Identifier (ID): IPLR-IG-007
Tags: Abhivardhan, AI regulation, blockchain regulation, compliance costs, cyber security, Data Protection, digital governance, digital infrastructure, Digital Transformation, e-governance, Emerging technologies, ethical governance, Indian Artificial Intelligence Council, Indian Technology Commission, institutional capacity, IoT regulation, MeitY reforms, quantum computing, regulatory burden, technology governance models, Technology Law Tribunal, Technology Policy, technology R&D, transparent governance, whole-of-government approach

  • Deepfakes | Glossary of Terms | Indic Pacific | IPLR

Deepfakes
Date of Addition: 22 Mar 2025

Synthetic media where a person's likeness in existing image or video content is replaced with someone else's using artificial intelligence techniques. Modern deepfakes increasingly span multiple modalities, combining manipulated video, audio, and text for greater realism. The multimodal dimension of deepfakes is particularly concerning from a detection standpoint. While early deepfakes focused primarily on visual manipulation, contemporary techniques integrate synchronized speech, realistic facial expressions, and contextually appropriate language to create convincing forgeries across multiple channels. Research into deepfake detection increasingly emphasizes multimodal analysis that integrates visual and auditory data for more comprehensive detection. This approach acknowledges that examining a single modality (such as analyzing only video frames) is insufficient when dealing with sophisticated multimodal deepfakes that maintain consistency across different information channels.

Related Long-form Insights on IndoPacific.App: [AIACT.IN V3] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 3; AIACT.IN Version 3 Quick Explainer; [AIACT.IN V4] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 4; [AIACT.IN V5] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 5
