
Search Results


  • Multipolar World | Glossary of Terms | Indic Pacific | IPLR

    Multipolar World
    Date of Addition: 26 April 2024

    A multipolar world is a global system in which power is distributed among multiple states, rather than being concentrated in one (unipolar) or two (bipolar) dominant powers. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

    Related Long-form Insights on IndoPacific.App:
    - 2021 Handbook on AI and International Law [RHB 2021 ISAIL]
    - Global Customary International Law Index: A Prologue [GLA-TR-00X]
    - An Indian Perspective on Special Purpose Acquisition Companies [GLA-TR-001]
    - India-led Global Governance in the Indo-Pacific: Basis & Approaches [GLA-TR-003]
    - Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches [ISAIL-TR-001]
    - Global Legalism, Volume 1
    - Global Relations and Legal Policy, Volume 1 [GRLP1]
    - South Asian Review of International Law, Volume 1
    - Indian International Law Series, Volume 1
    - Global Relations and Legal Policy, Volume 2
    - The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023
    - The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006]
    - Indic Pacific - ISAIL Joint Annual Report, 2022-24
    - Paving the Path to an International Model Law on Carbon Taxes [IPLR-IG-012]
    - Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition
    - AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas
    - 2020 Handbook on AI and International Law [RHB 2020 ISAIL]

    Terms of Use
    This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use:
    - You may use the glossary for personal and non-commercial purposes only.
    - If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research, 2023).
    - You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research.
    - The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary.
    - You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary.
    If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com

  • AI as an Entity | Glossary of Terms | Indic Pacific | IPLR

    AI as an Entity
    Date of Addition: 26 Apr 2024

    Artificial Intelligence may be considered a form of electronic personality, in a legal or juristic sense. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

    Related Long-form Insights on IndoPacific.App:
    - Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]
    - Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]

  • Section 2 – Definitions | Indic Pacific

    Section 2 – Definitions

    In this Act, unless the context otherwise requires —

    (a) “Artificial Intelligence”, “AI”, “AI technology”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such a system constitutes a diverse class of technology that includes various sub-categories of technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.

    (b) “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an artificial intelligence technology, which includes, but is not limited to, text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application.
    (c) “Algorithmic Bias” includes –
    (i) the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and
    (ii) the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to:
    (a) programming errors;
    (b) flawed algorithmic logic; and
    (c) deficiencies in model training and validation, including but not limited to:
    (1) the incomplete or deficient data used for model training;

    (d) “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;

    (e) “Business end-user” means an end-user that is -
    (i) engaged in a commercial or professional activity and uses an AI system in the course of such activity; or
    (ii) a government agency or public authority that uses an AI system in the performance of its official functions or provision of public services.
    (f) “Combinations of intellectual property protections” means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;

    (g) “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including:
    (i) The source data, models, and algorithms used to generate the content;
    (ii) The individuals or entities involved in the creation, modification, and distribution of the content;
    (iii) The date, time, and location of content creation and any subsequent modifications;
    (iv) The intended purpose, context, and target audience of the content;
    (v) Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and
    (vi) The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability.

    (h) “Corporate Governance” means the system of rules, practices, and processes by which an organisation is directed and controlled, encompassing the mechanisms through which companies, and organisations, ensure accountability, fairness, and transparency in their relationships with stakeholders including but not limited to employees, shareholders, customers, and the public.
    (i) “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;

    (j) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;

    (k) “Data portability” means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where:
    (i) The personal data has been provided to the data fiduciary by the data principal;
    (ii) The processing is based on consent or the performance of a contract; and
    (iii) The processing is carried out by automated means.

    (l) “Data Principal” means the individual to whom the personal data relates and where such individual is—
    (i) a child, includes the parents or lawful guardian of such a child;
    (ii) a person with disability, includes her lawful guardian, acting on her behalf;

    (m) “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;

    (n) “Data Scraping” means the automated collection, extraction, or mining of data from websites, online platforms, or digital sources through technical means including but not limited to automated tools, web crawlers, or software applications that extract information from websites or online platforms;

    (o) “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;

    (p) “Digital personal data” means personal data in digital form;

    (q) “End-user” means -
    (i) an individual who ultimately uses or is intended to ultimately use an AI system, directly or indirectly, for personal, domestic or household purposes; or
    (ii) an entity, including a business or organisation, that uses an AI system to provide or offer a product, service, or experience to individuals, whether for a fee or free of charge.

    (r) “Knowledge asset” includes, but is not limited to:
    (i) Intellectual property rights including but not limited to patents, copyrights, trademarks, and industrial designs;
    (ii) Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards;
    (iii) Tacit knowledge and expertise residing within the organisation’s human capital, such as specialized skills, experiences, and know-how;
    (iv) Organisational processes, systems, and methodologies that enable the effective capture, organisation, and utilisation of knowledge;
    (v) Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences;
    (vi) Knowledge derived from data analysis, including patterns, trends, and predictive models; and
    (vii) Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.
    (s) “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;

    (t) “IAIC” means the Indian Artificial Intelligence Council;

    (u) “Inherent Purpose” and “Intended Purpose” mean the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve;

    (v) “Insurance Policy” means measures and requirements concerning insurance for research & development, production, and implementation of artificial intelligence technologies;

    (w) “Interoperability considerations” means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments, which include:
    (i) Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement;
    (ii) Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and the public good;
    (iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems;
    (iv) Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications;

    (x) “Open-Source Software” means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.

    (y) “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical, commercial & risk-based features, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;

    (z) “Person” includes—
    (i) an individual;
    (ii) a Hindu undivided family;
    (iii) a company;
    (iv) a firm;
    (v) an association of persons or a body of individuals, whether incorporated or not;
    (vi) the State; and
    (vii) every artificial juristic person, not falling within any of the preceding sub-clauses including otherwise referred to in sub-section (r);

    (aa) “Post-Deployment Monitoring” means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;

    (bb) “Public Interest Use” includes research, education, analysis, journalism, and non-commercial innovation that may qualify under Section 52 of the Copyright Act, 1957 as fair dealing or permitted use.
    (cc) “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;

    (dd) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;

    (ee) “Sociotechnical” means the recognition that artificial intelligence systems are not merely technical artifacts but are embedded within broader social contexts, organisational structures, and human-technology interactions, necessitating the consideration and harmonisation of both social and technical aspects to ensure responsible and effective AI governance;

    (ff) “State” shall be construed as the State defined under Article 12 of the Constitution of India;

    (gg) “Strategic sector” shall mean any sector classified as strategic under the Foreign Exchange Management (Overseas Investment) Directions, 2022, and shall further include any sector or sub-sector as may be designated by the Central Government, having regard to considerations of national security, economic sovereignty, critical infrastructure, or technological advancement, in accordance with the principles set forth in Section 21A.
    (hh) “techno-solutionism” means the systematic implementation of artificial intelligence systems or computational technologies as primary solutions to public administration challenges while failing to adequately address underlying non-technical factors;
    Explanation.—For the purposes of this definition, techno-solutionism includes—
    (i) implementing automated decision systems that directly impact legally recognized rights without providing affected persons a clear mechanism to present their case before or after such decisions;
    (ii) deploying AI systems that create demonstrable risk of unfairness by prioritizing computational processing over consideration of individual circumstances;
    (iii) automating administrative processes in ways that prevent affected persons from obtaining specific explanations for decisions affecting their legal interests;
    (iv) replacing necessary human evaluation with automated systems in contexts where established law requires case-specific assessment, proportionality testing, or application of discretion;
    (v) justifying technological implementation primarily based on operational metrics (such as cost-reduction or processing speed) without measuring improvement in addressing the underlying public issue; and
    (vi) allocating public resources to technological systems for problems that fundamentally result from policy deficiencies, resource limitations, or structural issues that technology alone cannot solve;

    (ii) “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;

    (jj) “testing data” means data used for providing an independent evaluation of the artificial intelligence system subject to training and validation to confirm the expected performance of that artificial intelligence technology before its placing on the market or putting into service;

    (kk) “use case” means a specific application of an artificial intelligence technology, subject to its inherent purpose, to solve a particular problem or achieve a desired outcome;

    Related Indian AI Regulation Sources:
    - Information Technology Act, 2000 (IT Act 2000), October 2000
    - Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), April 2011
    - National Strategy for Artificial Intelligence (#AIforAll), June 2018
    - Digital Personal Data Protection Act, 2023 (DPDPA), August 2023
    - Karnataka Global Capability Center (GCC) Policy 2024-2029, November 2024
    - Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules), January 2025
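The distinction the Act draws between training data (clause (ii)) and testing data (clause (jj)) is the familiar hold-out split from machine-learning practice: parameters are fitted only on the training portion, and the held-out portion provides the independent pre-deployment evaluation. A minimal sketch, assuming nothing from the Act itself; the function name and parameters are illustrative:

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Partition records into (training, testing) so the testing
    data is held out entirely from model fitting, and can later
    serve as an independent evaluation set."""
    rng = random.Random(seed)        # fixed seed keeps the split reproducible
    shuffled = records[:]            # copy: do not mutate the caller's list
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

data = list(range(100))              # stand-in for 100 labelled records
train, test = train_test_split(data)
print(len(train), len(test))         # prints "80 20"
```

The key property, reflected in clause (jj), is disjointness: no record used to fit learnable parameters also appears in the evaluation set.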

  • Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6] | Indic Pacific | IPLR

    Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]
    Year: 2025
    ISBN: 978-81-977227-7-6
    Author(s): Eva Mathur, Oshi Yadav, Rasleen Kaur Dua
    Editor(s): Abhivardhan
    IndoPacific.App Identifier (ID): AIPI-V6
    Tags: Abhivardhan, AI Ethics, Algorithmic Trading, Artificial Intelligence, Blockchain, digital economy, Distributed Ledger Technology, Financial Automation, Future of Legal Profession, India, indic pacific legal research, ISAIL, Law Students, Legal Education, Policy, Regulatory Challenges, Supply Chain Management, Technology Governance

    Related Terms in Techindata.in Explainers:
    - Definitions A-E: AI Literacy; AI Supply Chain; AI Value Chain; Accountability; Algorithmic Activities and Operations; Automation; CEI Classification
    - Definitions F-J: Intended Purpose / Specified Purpose
    - Definitions K-P: Language Model; Manifest Availability; Model Algorithmic Ethics standards (MAES); Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Object-Oriented Design; Proprietary Information
    - Definitions Q-U: Roughdraft AI; SOTP Classification; Synthetic Content; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer; Technophobia
    - Definitions V-Z: WANA; WENA; Whole-of-Government Response

    Related Articles in Techindata.in Insights: 34 Insight(s) on AI Ethics; 8 Insight(s) on AI and Competition Law; 8 Insight(s) on AI and Copyright Law; 8 Insight(s) on AI and media sciences; 8 Insight(s) on AI Governance; 8 Insight(s) on AI regulation; 7 Insight(s) on AI literacy; 4 Insight(s) on AI and Evidence Law; 3 Insight(s) on Abhivardhan; 2 Insight(s) on AI and Intellectual Property Law; 1 Insight(s) on AI and Securities Law; 1 Insight(s) on Algorithmic Trading

  • Operationalizing Principles for Responsible AI (Part 2) | Indic Pacific | IPLR | indicpacific.com

    Operationalizing Principles for Responsible AI (Part 2)
    NITI Aayog's August 2021 implementation guide detailing specific actions for government, private sector, and research institutes to operationalize seven responsible AI principles.

    The AIACT.IN India AI Regulation Tracker
    This is a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India’s Federal System – Number 70". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact.

    Date: August 2021
    Issuing Authority: NITI Aayog
    Type of Legal / Policy Document: Guidance documents with normative influence
    Status: Enacted
    Regulatory Stage: Pre-regulatory
    Binding Value: Guidance documents with normative influence

    Related Long-form Insights on IndoPacific.App:
    - Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]
    - AIACT.IN Version 3 Quick Explainer
    - Reimaging and Restructuring MeiTY for India [IPLR-IG-007]
    - Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]
    - Averting Framework Fatigue in AI Governance [IPLR-IG-013]
    - Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]
    - Artificial Intelligence, Market Power and India in a Multipolar World

    Related draft AI Law Provisions of aiact.in:
    - Section 11 – Registration & Certification of AI Systems
    - Section 12 – National Registry of Artificial Intelligence Use Cases
    - Section 13 – National Artificial Intelligence Ethics Code

  • Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015 | Indic Pacific | IPLR

    Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015
    Year: 2025
    ISBN: 978-81-986924-0-5
    Author(s): Abhivardhan, Sanad Arora, Supratim Bapuli
    Editor(s): Not Applicable
    IndoPacific.App Identifier (ID): IPLR-IG-015
    Tags: Agreements, Contracts, Global Jurisprudence, Horizontal Proximity, Indian Law, Indian Policy, Indian Regulatory Contexts, Intermediaries, International Law, Intersectionality, Judicial Institutions, Legal Artifact, Oblique Proximity, recommendations, regulation, Safe Harbour, Self-Regulatory Bodies, Technology Evolution, Technology law, Terms of Use, Transnational Law, Vertical Proximity, Working Conditions

    Related Terms in Techindata.in Explainers:
    - Definitions A-E: AI as a Concept; AI as a Legal Entity; AI as a Subject; AI Literacy; AI Supply Chain; AI Value Chain; AI Workflows
    - Definitions F-J: Framework Fatigue; Indo-Pacific; Intended Purpose / Specified Purpose; Issue-to-issue concept classification
    - Definitions K-P: Manifest Availability; Model Algorithmic Ethics standards (MAES); Object-Oriented Design; Polyvocality Performance Effect; Permeable Indigeneity in Policy (PIP); Phenomena-based concept classification; Privacy by Default; Privacy by Design
    - Definitions Q-U: SOTP Classification; Strategic Hedging; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer
    - Definitions V-Z: Whole-of-Government Response

    Related Articles in Techindata.in Insights: 34 Insight(s) on AI Ethics; 26 Insight(s) on artificial intelligence and law; 8 Insight(s) on AI and Competition Law; 8 Insight(s) on AI and Copyright Law; 8 Insight(s) on AI and media sciences; 8 Insight(s) on AI Governance; 8 Insight(s) on AI regulation; 7 Insight(s) on AI literacy; 5 Insight(s) on digital competition law; 4 Insight(s) on AI and Evidence Law; 3 Insight(s) on Abhivardhan; 2 Insight(s) on AI and Intellectual Property Law; 1 Insight(s) on digital markets act; 1 Insight(s) on AI and Securities Law; 1 Insight(s) on Algorithmic Trading; 1 Insight(s) on Technology Law; 1 Insight(s) on governance; 1 Insight(s) on ethics; 1 Insight(s) on innovation; 1 Insight(s) on accountability; 1 Insight(s) on safe harbour; 1 Insight(s) on media law

  • Global Relations and Legal Policy, Volume 1 [GRLP1] | Indic Pacific | IPLR

    Global Relations and Legal Policy, Volume 1 [GRLP1]
    Year: 2020
    ISBN: 978-93-5407-220-8
    Author(s): Akash Manwani, Akshat Mall, Amin Labbafi, Anubhav Banerjee, Arpan Chakravarty, Avishikta Chattopadhyay, Dhanya Visweswaran, Manohar Samal, Mugdha Satpute, Padmja Mishra, Pragya Sharma, Pranay Bhattacharya, Pratham Sharma, Ridhima Bhardwaj, Vasu Sharma
    Editor(s): Abhivardhan, Amulya Anil
    IndoPacific.App Identifier (ID): GRLP1
    Tags: Diplomacy, Geopolitics, Global Relations, Governance, Human Rights, International Law, International Organizations, International Trade, Legal Policy, Policy

    Related Terms in Techindata.in Explainers:
    - Definitions A-E: CEI Classification; Class-of-Applications-by-Class-of-Application (CbC) approach
    - Definitions F-J: GAE; Indo-Pacific; International Algorithmic Law
    - Definitions K-P: Multi-alignment; Multipolar World; Multipolarity; Permeable Indigeneity in Policy (PIP); Phenomena-based concept classification
    - Definitions Q-U: Strategic Autonomy; Strategic Hedging; Technophobia
    - Definitions V-Z: WANA; WENA; Whole-of-Government Response

    Related Articles in Techindata.in Insights: 4 Insight(s) on Government Affairs; 1 Insight(s) on India-US Relations; 1 Insight(s) on governance; 1 Insight(s) on Indic Pacific; 1 Insight(s) on India; 1 Insight(s) on strategic sectors

  • Indofuturism | Glossary of Terms | Indic Pacific | IPLR

    Indofuturism
    Date of Addition: 19 January 2025

    A creative and cultural movement that reimagines India through science fiction and futuristic scenarios, particularly using AI-generated art and storytelling. It challenges Western sci-fi tropes by blending Indian cultural elements with futuristic concepts.

    Key characteristics include:
    - Visual reimagining of Indian scenarios through a sci-fi lens
    - Challenge to the assumption that sci-fi isn't a "desi genre"
    - Creation of new visual vocabulary for Indian science fiction
    - Exploration of alternative historical scenarios (like a non-colonized India)

    This term was conceptualized through the AI artwork and creative direction of Prateek Arora, VP Development at BANG BANG Mediacorp, who popularized the term through his viral AI-generated artworks like "Granth Gothica" and "Disco Antriksh".

    Related Long-form Insights on IndoPacific.App:
    - Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010

  • Section 31 – Protection of Action Taken in Good Faith | Indic Pacific

    Section 31 – Protection of Action Taken in Good Faith

    No suit, prosecution, or other legal proceedings shall lie against the Central Government, the Indian Artificial Intelligence Council (IAIC), the Indian Artificial Intelligence Safety Institute (AISI), their respective Chairpersons, Members, officers, or employees for anything which is done or intended to be done in good faith under the provisions of this Act or the rules made thereunder.

    Related Indian AI Regulation Sources:
    - Information Technology Act, 2000 (IT Act 2000), October 2000

  • Language Model | Glossary of Terms | Indic Pacific | IPLR

    Language Model Date of Addition 22 March 2025 An AI algorithm that uses deep learning techniques and large datasets to understand, summarise, generate, and predict text-based content. Large language models (LLMs) dramatically expand this capability through transformer architectures and massive parameter counts. Modern language models, particularly LLMs, are trained on vast corpora of text data through multiple training stages, typically starting with unsupervised learning on unstructured data followed by fine-tuning with self-supervised learning. They employ transformer neural networks with self-attention mechanisms to understand relationships between words and concepts. This architecture enables them to assign weights to different tokens to determine contextual relationships. Related Long-form Insights on IndoPacific.App Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002] Learn More Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002] Learn More Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001] Learn More Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002] Learn More Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003] Learn More Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005 Learn More Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024 Learn More Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004 Learn More Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4] Learn More Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005 Learn More The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006] Learn More Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5] Learn More The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008] Learn More 
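The self-attention mechanism described in the Language Model entry can be sketched in a few lines of NumPy. This is a minimal, single-head illustration under assumed toy dimensions, not any production implementation; all sizes and values here are invented for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single head.

    tokens: (seq_len, d_model) embeddings; Wq/Wk/Wv: (d_model, d_k) projections.
    Each token's output is a weighted mix of all tokens' values, where the
    weights encode contextual relationships between tokens.
    """
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance scores
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8               # toy sizes, assumed for illustration
tokens = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, weights = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualised vector per token
```

Each row of `weights` shows how strongly one token attends to every other token; stacking many such heads and layers is what gives transformers their contextual power.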

  • Section 18 – Third-Party Vulnerability Reporting | Indic Pacific

Section 18 – Third-Party Vulnerability Reporting PUBLISHED [***] This is a repealed draft provision. Please check the next draft provision / section. Related Indian AI Regulation Sources Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules) April 2011 Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market participants January 2019 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021) February 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023) April 2023 Digital Personal Data Protection Act, 2023 (DPDPA) August 2023 Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) January 2025

  • Artificial Intelligence & Geopolitics 101 | Indic Pacific | IPLR

Explore the fundamentals that connect true AI innovation with geopolitical strategy. Understand the languages both communities speak, the priorities that drive their decisions, and why bridging this divide matters for the future of AI governance. Enjoy the virtual experience to deeply understand the basics of this domain. Still curious? Just binge-read. Let's be honest — the mix of geopolitics and technology is cinema. Peak cinema. But not in the sense of spectacle or fiction. It may seem dense, as if you need jargon. What if we say that is not the case? What is geopolitics for LLMs, or for any AI? Here's the thing: engineers speak in models and datasets. Diplomats speak in treaties and strategic interests. When they're in the same room, they're often speaking past each other — one worried about algorithmic bias, the other about algorithmic hegemony. Same problem, different vocabulary.
And yet AI doesn't develop in a vacuum. Every algorithm trained reflects a technical worldview. It does not ultimately need to carry a socio-political view. Yet market desperation, political posturing, marketing tactics, and the manipulation of intellectual property laws create policy friction. At least some of these sources of friction produce a geoeconomic dead-end. It's not entirely political, but it gets there. In short, the tech and geopolitics bubbles speak their own languages and follow their own patterns, often making no sense to each other. Still, what's "political" about the geopolitics of AI? Nothing? Is it politically distinctive to exert control over specific kinds of AI through piecemeal, overly specific rules? Is it politically common to choke off resources or talent around AI, whether through companies or government bodies (soft law or hard law)? If both answers are yes, then is the geopolitics of AI really about resources rather than "politics"? Contrary to popular perception, the economics of compute (semiconductors) affects mobile manufacturing, gaming ecosystems, everything - so why mix it with the AI side and call it "the politics of AI and compute"? No, resource economics is not why the geopolitics of AI exists today. It isn't just a business issue. If protecting domestic (and local) constituencies is why most nations actually regulate (even China - yes, even them), then, whatever the additional cause - cybersecurity, human rights, business mobility, financial security, etc. - why should any nation refrain from regulating at all? Or has the idea of warfare changed so much that soft-power areas like constitutional morality and regulatory sovereignty have become "weaponised"? Always remember: if everything is weaponised, then nothing is weaponised - or at least not everything is weaponised in some Kafka-esque multiverse. Some things are weaponised, while others are merely pushed and pulled around, creating patterns that, from a helicopter view, look as if nations are all screaming at each other about AI or data.
Deep down, some answers protect our sanity around the resource and financing "loops" - while at other times the political positioning is merely 20th-century, Cold War-style positioning by dominant powers, both China and the US. Hence, the averages of sane decision-making, with some percentiles of insane, distortion-enabling political and regulatory decisions, can broadly explain the geopolitics of AI - provided we limit our understanding of AI & geopolitics to a few things: The algorithmic infrastructure The trajectories of evolution for different kinds of automation The potential of scientific heuristics and ethics in defining how two kinds of AI ethics define positions of power: The science behind the ethics of data outflow and algorithmic infrastructure The market ethics of products, services & infrastructure built around the "AI" system. You see - this is why laws that attack the science of AI in order to pile regulatory burden onto systems fail against laws that target productisable, serviceable, marketable deliverables based on an ideology of regulation. Now, that ideology can be Confucianism, Gandhianism, Reaganism, Putinism, or even the Great Bauhaus of the European Union. Always look for the beauty of geopolitics, not just "resource geoeconomics". Individualistic Sovereignty Imagine you write a letter. Someone else reads it, makes copies, sells information about what you wrote, and you have zero say in any of this. Digital Sovereignty is fundamentally about YOUR right to control your own data, your own digital identity, and your own choices online. Of course, where certain legal rights are limited, you see country-wise deviations. How It Works: Every time you use an app, website, or AI tool, you generate data. Digital sovereignty means you—the individual—should have the power to decide what happens to that data. Can companies surveil you? Can foreign governments access it? Can it be sold without your consent?
Should you trust national courts to handle your grievances, or yet another glorified North American Senate briefing on tech companies? Ask yourself. Normative Emergence Imagine a few neighbours start composting in their backyards. Others notice, copy them. Soon it's the neighborhood norm. Eventually, the city makes it an official rule. Normative Emergence is when technical practices or informal behaviors gradually become accepted norms, and then sometimes become formal rules. How It Works: In 1994, a Netscape engineer invented cookies as a simple technical hack—just a way to remember items in a shopping cart between page loads. It was purely practical. No policy discussion. No debate. Just code. Other developers saw it, copied it, and started using cookies for their own sites. Within a few years, advertisers discovered they could use "third-party cookies" to track users across multiple websites. By the early 2000s, cookie-based tracking became the invisible foundation of online advertising. Every ad network, every recommendation system, every personalization engine assumed cookies existed and that tracking users across sites was just "how things work". Normative Evasion Imagine your local store sells plastic bags, but the neighboring town bans them. The store simply sets up shop a few meters across the border and keeps selling. Regulatory arbitrage is when tech companies exploit differences in national laws to continue the same practices under friendlier jurisdictions. How It Works: AI companies locate their data centers, R&D hubs, or headquarters in regions with weaker compliance regimes. This allows them to test, scale, or monetize controversial AI systems—like surveillance analytics or data-intensive recommender algorithms—without violating stricter laws elsewhere. In AI, this means systems banned in one region (e.g., under the EU's high-risk classification) can still be trained offshore, then imported as models or services under a different legal label. The result?
Normative evasion—a race to the bottom where frameworks exist, but enforcement gaps make them meaningless. Okay, what is ethics, then? Let's understand this. You can think of it as a kind of shared vocabulary that forces engineers and policymakers to stop pretending they live on different planets. There are some basic principles of ethics that are quite universally applicable in the case of artificial intelligence, and even a lack of jurisdiction may never undo the need to address them in practice. Transparency Tech sees it as "can you reproduce the results?" Geopolitics sees it as "who gets to see the process?" They're not arguing—they literally mean different things by the same word. Accountability If an AI agent lacks technical reliability, should those who experimented with it be made an example of "accountability", so that nobody cares to work on technical guardrails anymore? Also, technical accountability can sometimes have economic consequences, if not legal ones. But markets have been hurt. What to do then? Privacy Tech thinks privacy is solved when data is encrypted or anonymized—a technical problem with a technical fix. Geopolitics sees privacy as "who has access and under what legal authority?"—a sovereignty problem. Engineers say "we secured the database." Diplomats ask "but which government can subpoena it?" Both think they're protecting you; neither realizes the other's solution doesn't address their threat model. Fairness Tech measures fairness as statistical parity across test sets—demographic groups getting equal error rates, equal opportunity, calibrated probabilities. Geopolitics asks "fair according to whom?" One jurisdiction defines discrimination by disparate impact (outcomes), another by disparate treatment (intent), and a third doesn't recognize the category at all. Now, while implementing these principles may not feel easy, it is not impossible to think of these ideas in the most basic way possible. Let's also ask this.
Do you need ethics to understand these tech & geopolitical bubbles? Absolutely. Ethics isn't about being moral here. It's about translating between two dialects that don't align — one coded in math, the other in diplomacy. When technologists and policymakers talk about "values," they're both describing control, just through different mediums. Let's now understand the implementation value of AI frameworks. Every ethical idea around AI boils down to whether it can be implemented or not. Supervised Learning Imagine a teacher giving you a math problem and the correct answer. You learn by mimicking the process. How It Works: Machines are trained on labeled data (input + correct output). Examples: Spam email detection, image recognition. Techniques include linear regression, decision trees, neural networks. Unsupervised Learning Imagine being dropped into a room full of strangers and figuring out who belongs to which group based on their behaviour. How It Works: Machines find patterns in unlabelled data. Examples: Customer segmentation, anomaly detection. Techniques include K-means clustering, principal component analysis (PCA). Reinforcement Learning Think of training a dog with treats. The dog learns which actions get rewards. How It Works: Machines learn by trial and error through rewards and punishments. Examples: Game-playing AIs like AlphaGo, robotics. Techniques include Q-learning, deep reinforcement learning. Semi-Supervised Learning Imagine doing homework where only some answers are given. You figure out the rest based on what you know. How It Works: Combines small labeled datasets with large unlabeled ones. Examples: Medical image classification when labeled data is scarce. There is a huge lack of country-specific AI safety documentation. Paralysis 2: Lack of Jurisdiction-Specific Documentation on AI Safety Think of building a fire safety system for a city without knowing where fires have occurred or how they started.
Without this knowledge, it’s hard to design effective safety measures. Many countries don’t have enough local research or documentation about AI safety incidents—like cases of biased algorithms or data breaches. While governments talk about principles like transparency and privacy in global forums, they often lack concrete, country-specific data or institutions to back up these discussions with real-world evidence. This makes it harder to create effective safety measures tailored to local needs. Neurosymbolic AI Think of it as combining intuition (neural networks) with logic (symbolic reasoning). It’s like solving puzzles using both gut feeling and rules. How It Works: Merges symbolic reasoning (rule-based systems) with neural networks for better interpretability and reasoning. Examples: AI systems for legal reasoning or scientific discovery. Here's a confession: never convert ethics terms into balloonish jargon, or they won't work. Paralysis 3: Responsible AI Is Overrated, and Trustworthy AI Is Misrepresented Imagine a company claiming its product is "eco-friendly," but all they’ve done is slap a green label on it without making real changes. This is what happens with "Responsible AI" and "Trustworthy AI." "Responsible AI" sounds great—it’s about accountability and fairness—but in practice, it often becomes a buzzword. Companies use these terms to look ethical while prioritizing profits over real responsibility. For example, they might create flashy ethics boards or policies that don’t actually hold anyone accountable. This dilutes the meaning of these ideals and turns them into empty gestures rather than meaningful governance. The more garbage your questions about AI are, the more garbage your policy understanding of AI will be. Paralysis 4: How AI Awareness Becomes Policy Distraction Imagine everyone panicking about fixing potholes on one road while ignoring that the entire city’s bridges are crumbling.
That’s what happens when public awareness drives shallow policymaking. When people become highly aware of visible AI issues—like facial recognition—they pressure governments to act quickly. Governments often respond by creating flashy policies that address these visible problems but ignore deeper challenges like reskilling workers for an AI-driven economy or fixing outdated infrastructure. This creates a distraction from systemic issues that need more attention. Beware: most Gen AI benchmarks are fake. Paralysis 5: Fragmentation in the AI Innovation Cycle and Benchmarking Imagine you’re comparing cars, but each car is tested on different tracks with different rules—one focuses on speed, another on fuel efficiency, and yet another on safety. Without a standard way to compare them, it’s hard to decide which car is actually the best. That’s the problem with AI benchmarking today. In AI development, benchmarks are tools used to measure how well models perform specific tasks. However, not all benchmarks are created equal—they vary in quality, reliability, and what they actually measure. This variation creates confusion because users might assume all benchmarks are equally meaningful, leading to incorrect conclusions about a model’s capabilities. Many benchmarks don’t clearly distinguish between real performance differences (signal) and random variations (noise). A benchmark designed to test factual accuracy might not account for how users interact with the model in real-world scenarios. Without incorporating realistic user interactions or formal verification methods, these benchmarks may provide misleading assessments. Why It Matters: Governments increasingly rely on benchmarks to regulate AI systems and assess compliance with safety standards. However, if these benchmarks are flawed or inconsistent: Policymakers might base decisions on unreliable data. Developers might optimise for benchmarks that don’t reflect real-world needs, slowing meaningful progress.
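The signal-versus-noise problem in benchmarking can be made concrete with a small simulation. This is a hedged sketch, not a real benchmark: two hypothetical models with nearly identical true accuracy are scored on a finite test set, and a bootstrap over the per-item results shows how easily random variation can masquerade as a real performance gap. All numbers and model names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical models with almost identical true accuracy.
true_acc = {"model_a": 0.80, "model_b": 0.79}
n_items = 500  # size of the (hypothetical) benchmark test set

# Simulate per-item pass/fail results for each model on the same test set.
results = {m: rng.random(n_items) < acc for m, acc in true_acc.items()}

def bootstrap_ci(per_item, n_boot=2000, alpha=0.05):
    """Bootstrap confidence interval for a model's benchmark accuracy."""
    n = len(per_item)
    scores = [per_item[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return lo, hi

for model, per_item in results.items():
    lo, hi = bootstrap_ci(per_item)
    print(f"{model}: observed {per_item.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# If the two intervals overlap heavily, the observed ranking is noise,
# not evidence that one model is genuinely better.
```

A leaderboard that reports only the point estimates would declare a winner; reporting the intervals shows whether the gap between models exceeds the measurement noise at this test-set size.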
AI governance priorities are sometimes not as obviously centred on privacy & accountability as we assume. Paralysis 6: Organizational Priorities Are Multifaceted and Conflicted Imagine trying to bake a cake while three people shout different instructions: one wants chocolate frosting (investors), another wants it gluten-free (regulators), and the third wants it ready in five minutes (public trust). It’s hard to satisfy everyone. Organizations face conflicting demands when adopting AI: Investors want quick returns on investment (ROI) from AI projects. Regulators require compliance with evolving laws like the EU AI Act. The public expects ethical branding and transparency. These competing priorities make it difficult for companies to create cohesive strategies for responsible AI adoption. Instead, they end up balancing short-term profits with long-term accountability—a juggling act that complicates governance. Here's a truth: it never gets easy for anyone. Paralysis 1: Regulation May or May Not Have a Trickle-Down Effect Imagine writing a rulebook for a game, but when the players start playing, they don’t follow the rules—or worse, the rules don’t actually change how the game is played. That’s what happens when regulations fail to have the intended impact. Governments might pass laws or policies to regulate AI, but these rules don’t always work as planned. For example, a law designed to make AI systems fairer might not actually affect how companies build or use AI because it’s too hard to enforce or doesn’t address real-world challenges. This creates a gap between policy intentions and market realities. Still, there will be geopolitical issues around AI, and one must determine them in a reasonable way. Start with data, and ask what kind of stakeholders you would need to create that resource equation.
However, the funniest (yes, funniest) aspect of AI and geopolitics is that a typical "geoeconomic" or "economic" nexus or equation will often be dressed up to give the vibe of geopolitical tension. Yet we live in a soft law world, where international rules bend more and might not be binding at all. Another problem that may emerge is how 20th-century heuristics and wisdom can be applied to understand the "geopolitical game", even when systemic effects exist, such as: Social inequality amplification Market concentration Governance or political process interference Cultural homogenisation Instead of abstract risk categories, focus on: Observable Impacts such as documented incidents, user complaints, system failures and performance disparities across target groups Systemic Changes such as market structure shifts, behavioural changes & cultural practice alterations in affected populations and environmental impacts Cascading Effects such as secondary economic impacts, social relationship changes, trust in institutions and power dynamics shifts We are glad you made it this far to understand the basics of artificial intelligence and law. Wish to read more genuine sources? Go to IndoPacific.App and find a plethora of research we've done on AI and Law. Go to IndoPacific.App Always ask yourself Who is actually affected? What changes in behaviour are we seeing? Which impacts are measurable now? What long-term trends are emerging? Which emerging "geopolitical" or "geoeconomic" nexus is specific to one kind of automation, and which is truly general? Is it some old wine in a new bottle, legally, politically, economically or technologically? But as we have dived into AI & geopolitics, let's take a recap to understand AI & ML too. Speaking of dictionaries, have you tried our Training Programmes? You should. artificial intelligence and law fundamentals [level 1] 8,000 INR 6-week Access (Self-paced) 15 Lectures in 4 Modules 50+ Model Exercises Lecture Notes of 280+ pages Check & Enroll Today.
artificial intelligence and intellectual property law [level 2] 30,000 INR 12-week Access (Self-paced) 16 Lectures in 3 Modules 70+ Model Exercises 30+ Case studies Lecture Notes of 400+ pages Check & Enroll Today. artificial intelligence and corporate governance [level 2] 35,000 INR 15-week Access (Self-paced) 18 Lectures in 5 Modules 80+ Model Exercises 25+ Case studies Lecture Notes of 400+ pages Check & Enroll Today. Artificial Intelligence (AI) is like the term "transportation." It covers everything from bicycles to airplanes. AI refers to machines designed to mimic human intelligence—like learning, reasoning, problem-solving, and decision-making. But just as "transportation" includes many forms (cars, trains, boats), AI includes various approaches and techniques. By the way, what if we told you that there is a whole dictionary of AI and Law terms that we have developed? Check out our dictionary, today. Go to our Glossary So, WTF is Machine Learning anyway? Now, there are some basic concepts around artificial intelligence and geopolitics that have stood the test of time since before the widespread use of large language models and former UK PM Boris Johnson's "chatgibbiti". ML focuses on teaching machines to learn from data rather than being explicitly programmed. Think of it like teaching a dog tricks by showing it treats instead of manually moving its paws. Here are a few more concepts you should know. Benchmark Capture Imagine a university ranking that suddenly defines "success" only by test scores—but guess who makes the test? The same institutions that dominate the rankings. Benchmark Capture is when large players dictate the metrics used to judge AI reliability, safety, or fairness—creating evaluation systems they’re already optimized to win. How It Works: As Abhivardhan shows in his work on Normative Emergence, LLMs—despite being unreliable—have become the benchmark reference for all AI evaluation (citing Narayanan & Kapoor 2024; Eriksson et al. 2025).
OpenAI, Anthropic, Google, and others create their own tests of factual accuracy or reasoning, but these tests aren’t scientifically grounded or cross-domain verified. Smaller AI systems, or non-LLM architectures like symbolic AI or hybrid systems, are judged by standards not made for them. This normative contagion locks the field into one family of architectures and misrepresents what “safe” or “trustworthy” AI actually means. Perception Dysmorphia Imagine looking in a mirror that distorts your reflection—making you see yourself as either bigger or smaller than you actually are. You make decisions based on that warped image, not reality. Perception Dysmorphia in AI governance is when policymakers, companies, and the public develop a fundamentally distorted view of what AI can do, what risks it poses, and whether governance measures are actually working—leading to regulations built on illusions rather than evidence. How It Works: Large Language Models like ChatGPT have created a false consensus about AI capabilities. Because LLMs can write fluently and mimic reasoning, people assume they're reliable, general-purpose intelligence systems. Governments then create governance frameworks based on LLM behavior—focusing on "hallucinations," "transparency," and "explainability"—and apply these norms to all AI systems, even ones that work completely differently (like computer vision, robotics, or symbolic reasoning systems). This creates a triple distortion: Overestimation: Policymakers think LLMs are more capable and trustworthy than they actually are, so they deploy them in high-stakes settings (legal advice, medical diagnosis, government services) without adequate safeguards. Misapplication: Governance frameworks designed for one type of unreliable AI (LLMs) get imposed on fundamentally different AI architectures that don't share those flaws—creating regulatory mismatch.
Gatekeeping by Design: Compliance costs and bureaucratic requirements favor centralized AI labs with massive resources. Meanwhile, decentralized AI communities—independent developers, open-source contributors, federated learning networks—get crushed under regulations, market pressure, peer pressure, and compliance costs and confusion they cannot afford to manage. The real future of AI will be "extremely distributed, or largely federalised"—with innovations coming from engineering students, small research teams, and open-source communities, not just tech giants. But when governance is designed around Big Tech's LLMs, these distributed innovators face impossible barriers: they can't hire compliance officers, can't afford safety audits designed for billion-dollar models, and can't compete with incumbents who helped write the regulations.
