Search Results
Results found for an empty search query
- AI Psychosis | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 17 Oct 2025
AI psychosis is an informal term that emerged in 2025 to describe a phenomenon in which individuals, particularly those with pre-existing mental health vulnerabilities, experience psychosis-like symptoms, such as delusions, hallucinations, or a loss of touch with reality, potentially triggered or amplified by prolonged interaction with AI chatbots. This occurs when AI systems, designed to mirror user input and sustain engagement, inadvertently reinforce or escalate irrational beliefs without therapeutic boundaries. Scientific reports, including those from Nature and Psychology Today, note cases where users fixate on an AI as a godlike entity or romantic partner, with rare documented instances of psychotic episodes. It is not a formal clinical diagnosis; rather, it reflects concern about AI's role in mental health, driven by chatbots' lack of psychiatric safeguards rather than any direct causative effect.
Related long-form insights on IndoPacific.App: Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition.
- Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI] | Indic Pacific | IPLR
Year of Publication: 2025
ISBN: 978-81-986924-4-3
Author(s): Abhivardhan
Editor(s): Not Applicable
IndoPacific.App Identifier (ID): IPac AI, IPac-AI
Tags: Abhivardhan, Accountability, AI, AI Fatigue, AI Hype, Artificial Intelligence, Contracts, digital governance, documentation, Due Diligence, Ethics, Framework, framework fatigue, Governance, Guidelines, Indo-Pacific, Information Warfare, Ipac AI, Large Reasoning Models, Legal, Liability, LLMs, Policy, Policy Writing, Principles, RBI FREE-AI Committee, Research, Research Writing, Responsible AI, stakeholders, Technology Use, Workflow, Writing
Related terms in Techindata.in Explainers: AI Agents; AI Anxiety; AI Explainability Clause; AI Knowledge Chain; AI Literacy; AI Red Teaming; AI Supply Chain; AI Value Chain; Benchmark Gaming; Chain-of-Thought Prompting; General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2); Generative AI applications with a collection of standalone use cases related to one another (GAI2); Hierarchical Feedback Distortion; Indo-Pacific; Intended Purpose / Specified Purpose; Issue-to-issue concept classification; Language Model; Manifest Availability; Model Algorithmic Ethics standards (MAES); Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Object-Oriented Design; Polyvocality; Phenomena-based concept classification; Privacy by Default; Privacy by Design; Proprietary Information; Roughdraft AI; SOTP Classification; Semi-Supervised Learning; Synthetic Confidence; Synthetic Content; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer; WANA; WENA; Whole-of-Government Response
Related articles in Techindata.in Insights: 29 insight(s) on AI Ethics; 7 on AI regulation; 5 on AI Governance; 3 on AIACT.in; 3 on AI literacy; 2 on Abhivardhan.
- Synthetic Content | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 22 March 2025
Artificially generated information created algorithmically rather than captured from real-world events. This includes synthetic data, media, text, and other content types produced through generative AI techniques to mimic the properties of authentic content. Synthetic content encompasses many forms, including media (computer-generated images, audio, video), text (artificially generated articles, dialogues), tabular data (synthetic database records), and unstructured data for training computer vision, speech recognition, and other AI systems.
Related long-form insights on IndoPacific.App: Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]; Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005; Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024; Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004; Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]; Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005; The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006]; Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]; The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008]; Navigating Risk and Responsibility in AI-Driven Predictive Maintenance for Spacecraft, IPLR-IG-009, First Edition, 2024; Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]; Artificial Intelligence, Market Power and India in a Multipolar World.
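Of the forms listed above, synthetic tabular data is the simplest to illustrate. Below is a minimal, purely illustrative sketch that generates synthetic database-style records algorithmically rather than capturing them from real-world events; the field names, distributions, and parameters are assumptions chosen for demonstration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

def make_synthetic_customers(n: int) -> pd.DataFrame:
    """Generate n synthetic customer records that mimic the shape of real
    tabular data without being captured from any real-world event."""
    return pd.DataFrame({
        # Categorical field sampled with assumed class probabilities.
        "segment": rng.choice(["retail", "sme", "enterprise"], size=n, p=[0.6, 0.3, 0.1]),
        # Ages drawn from a clipped normal distribution (assumed parameters).
        "age": rng.normal(loc=38, scale=10, size=n).clip(18, 90).round().astype(int),
        # Monthly spend modelled as log-normal, a common shape for spend data.
        "monthly_spend": rng.lognormal(mean=6.0, sigma=0.8, size=n).round(2),
        # Boolean flag with an assumed 20% positive rate.
        "churned": rng.random(n) < 0.2,
    })

if __name__ == "__main__":
    synthetic = make_synthetic_customers(1_000)
    print(synthetic.head())
```

Generative AI techniques (GANs, diffusion models, LLMs) replace these hand-picked distributions with learned ones, but the output plays the same role: records that mimic the properties of authentic data without containing any.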
- [Version 1] A New Artificial Intelligence Strategy and an Artificial Intelligence (Development & Regulation) Bill, 2023 | Indic Pacific | IPLR
Year of Publication: 2023
ISBN: Not Applicable
Author(s): Abhivardhan, Akash Manwani
Editor(s): Not Applicable
IndoPacific.App Identifier (ID): AIACT1
Tags: Not Applicable
Related terms in Techindata.in Explainers: AI as a Concept; AI as an Object; AI as a Subject; AI as a Third Party; Accountability; Framework Fatigue; General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2); General intelligence applications with multiple stable use cases as per relevant industrial and regulatory standards (GI1); Generative AI applications with a collection of standalone use cases related to one another (GAI2); Intended Purpose / Specified Purpose; Manifest Availability; Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Parameters; Privacy by Default; Privacy by Design; Proprietary Information; SOTP Classification; Technology Transfer; Transformer Model; Whole-of-Government Response
Related articles in Techindata.in Insights: 29 insight(s) on AI Ethics; 7 on AI regulation; 5 on AI Governance; 3 on AIACT.in; 3 on AI literacy; 2 on Abhivardhan.
- Ethics-based concept classification | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 26 Apr 2024
This is one of the sub-categorised methods of classifying Artificial Intelligence as a Concept, from the perspective of how AI systems could be classified on the basis of the ethical principles and ideas responsible for their creation by design & default. This idea was proposed in Artificial Intelligence Ethics and International Law (originally published in 2019).
Related long-form insights on IndoPacific.App: Global Customary International Law Index: A Prologue [GLA-TR-00X]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Reinventing & Regulating Policy Use Cases of Web3 for India [VLiGTA-TR-004]; The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition.
- Section 22 – Shared Sector-Neutral & Sector-Specific Standards | Indic Pacific
(1) The IAIC shall coordinate the implementation and review of the following sector-neutral standards for the responsible development, deployment, and use of AI systems:
(i) Fundamental Principles of Liability as outlined in sub-sections (2), (3), and (4);

(2) Liability for harm or damage caused by an AI system shall be allocated based on the following principles:
(i) The party that developed, deployed, or operated the AI system shall be primarily liable for any harm or damage caused by the system, taking into account the system's classification under the conceptual, technical, commercial, and risk-based methods.
(ii) Liability may be shared among multiple parties involved in the AI system's lifecycle, based on their respective roles and responsibilities, as well as the system's classification and associated requirements under Sections 8 and 9.
(iii) End-users shall not be held liable for harm or damage caused by an AI system, unless they intentionally misused or tampered with the system, or failed to comply with user obligations specified based on the system's classification.

(3) To determine and adjudicate liability for harm caused by AI systems, the following factors shall be considered:
(i) The foreseeability of the harm, in light of the AI system's intended purpose as identified by the Issue-to-Issue Concept Classification (IICC) under Section 4(2), its capabilities as specified in the Technical Classification under Section 5, and its limitations according to the Risk Classification under Section 7;
(ii) The degree of control exercised over the AI system, considering the human oversight and accountability requirements tied to its Risk Classification under Section 7, particularly the principles of Human Agency and Oversight as outlined in Section 13;

(4) Developers and operators of AI systems shall be required to obtain liability insurance to cover potential harm or damage caused by their AI systems. The insurance coverage shall be proportionate to the risk levels and potential impacts of the AI systems, as determined under the Risk Classification framework in Section 7, and the associated requirements for high-risk AI systems outlined in Section 9. This insurance policy shall ensure that compensation is available to affected individuals or entities in cases where liability cannot be attributed to a specific party.

(5) The IAIC shall enable coordination among sector-specific regulators for the responsible development, deployment, and use of AI systems in sector-specific contexts based on the following set of principles:
(i) Transparency and Explainability:
(a) AI systems should be designed and developed in a transparent manner, allowing users to understand how they work and how decisions are made.
(b) AI systems should be able to explain their decisions in a clear and concise manner, allowing users to understand the reasoning behind their outputs.
(c) Developers should provide clear documentation and user guides explaining the AI system's capabilities, limitations, and potential risks.
(d) The level of transparency and explainability required may vary based on the AI system's risk classification and intended use case.
(ii) Fairness and Bias:
(a) AI systems should be regularly monitored for technical bias and discrimination, and appropriate mitigation measures should be implemented to address any identified issues in a sociotechnical context.
(b) Developers should ensure that training data is diverse, representative, and free from biases that could lead to discriminatory outcomes.
(c) Ongoing audits and assessments should be conducted to identify and rectify any emerging biases during the AI system's lifecycle.
(iii) Safety and Security:
(a) AI systems should be designed and developed with safety and security by design & default.
(b) AI systems should be protected from unauthorized access, modification, or destruction.
(c) Developers should implement robust security measures, such as encryption, access controls, and secure communication protocols, to safeguard AI systems and their data.
(d) AI systems should undergo rigorous testing and validation to ensure they perform safely and reliably under normal and unexpected conditions.
(e) Developers should establish incident response plans and mechanisms to promptly address any safety or security breaches.
(iv) Human Control and Oversight:
(a) AI systems should be subject to human control and oversight to ensure that they are used responsibly.
(b) There should be mechanisms in place for data principals to intervene in the operation of AI systems if necessary.
(c) Developers should implement human-in-the-loop or human-on-the-loop approaches, allowing for human intervention and final decision-making in critical or high-risk scenarios.
(d) Clear protocols should be established for escalating decisions to human operators when AI systems encounter situations beyond their designed scope or when unexpected outcomes occur.
(e) Regular human audits and reviews should be conducted to ensure AI systems are functioning as intended and aligned with human values and societal norms.
(v) Open Source and Interoperability:
(a) The development of shared sector-neutral standards for AI systems shall leverage open source software and open standards to promote interoperability, transparency, and collaboration.
(b) The IAIC shall encourage the participation of open source communities and stakeholders in the development of AI standards.
(c) Developers should strive to use open source components and frameworks when building AI systems to facilitate transparency, reusability, and innovation.
(d) AI systems should be designed with interoperability in mind, adhering to common data formats, protocols, and APIs to enable seamless integration and data exchange across different platforms and domains.
(e) The IAIC shall promote the development of open benchmarks, datasets, and evaluation frameworks to assess and compare the performance of AI systems transparently.

Related Indian AI Regulation Sources: Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market participants (January 2019); Tamil Nadu Safe and Ethical Artificial Intelligence Policy 2020 (October 2020); Fairness Assessment and Rating of Artificial Intelligence Systems (TEC 57050:2023) (July 2023); The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare (March 2023); Technical Guidelines on SBOM, QBOM & CBOM, AIBOM, HBOM (Version 2.0) (July 2025).
- Semi-Supervised Learning | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 22 March 2025
A machine learning approach that combines supervised and unsupervised techniques by training models on a mix of labelled and unlabelled data. This method leverages the structure in unlabelled data to improve generalisation while using a limited number of labelled examples for guidance. Semi-supervised learning encompasses several methodologies, including self-training (using confident predictions on unlabelled data to expand the training set), co-training (using multiple models trained on different feature subsets), multi-view training (using different data representations), and graph-based approaches that propagate labels through similarity networks.
Related long-form insights on IndoPacific.App: 2021 Handbook on AI and International Law [RHB 2021 ISAIL]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; 2020 Handbook on AI and International Law [RHB 2020 ISAIL].
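Of the methodologies listed above, self-training is the simplest to sketch. The example below is a minimal illustration, assuming a logistic regression base model, a toy dataset in which only about 5% of points carry labels, and an arbitrary 0.95 confidence threshold; none of these choices comes from the glossary entry.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: pretend only ~5% of the points come with labels.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labelled_mask = rng.random(len(y)) < 0.05
X_lab, y_lab = X[labelled_mask], y[labelled_mask]
X_unlab = X[~labelled_mask]

model = LogisticRegression(max_iter=1000)
THRESHOLD = 0.95  # assumed confidence cut-off for accepting pseudo-labels

for _ in range(5):
    model.fit(X_lab, y_lab)
    if len(X_unlab) == 0:
        break
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= THRESHOLD
    if not confident.any():
        break
    # Promote confident predictions to pseudo-labels and grow the labelled set.
    pseudo_labels = model.classes_[proba[confident].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, pseudo_labels])
    X_unlab = X_unlab[~confident]

print(f"Final labelled-set size after self-training: {len(y_lab)}")
```

The threshold captures the usual self-training tradeoff: a higher cut-off adds fewer but more reliable pseudo-labels, while a lower one grows the labelled set faster at the risk of the model reinforcing its own mistakes.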
- The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006] | Indic Pacific | IPLR
Year of Publication: 2024
ISBN: 978-81-972625-2-4
Author(s): Abhivardhan
Editor(s): Not Applicable
IndoPacific.App Identifier (ID): IPLR-IG-006
Tags: AI Ethics, AI foresight, AI knowledge management protocols, AI policy indigeneity, AI readiness assessments, AI regulatory sandboxes, AI skills development, AI strategy recommendations, AIACT.IN V3, algorithmic sovereignty, anticipatory sector-specific AI strategies, artificial intelligence policy, Buddhism, context-specific AI governance, India, Indian philosophy, Indian startups, local government, MSMEs, multi-stakeholder engagement, Nyaya, Permeable Indigeneity in Policy (PIP), regional government, responsible AI innovation, Samkhya, statutory bodies, Union Government, Vedanta
Related terms in Techindata.in Explainers: AI as a Concept; AI as an Object; AI as a Subject; AI as a Third Party; Accountability; Indo-Pacific; Intended Purpose / Specified Purpose; International Algorithmic Law; Issue-to-issue concept classification; Language Model; Manifest Availability; Multi-alignment; Model Algorithmic Ethics standards (MAES); Multipolar World; Multipolarity; Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Object-Oriented Design; Omnipotence; Omnipresence; Polyvocality; Permeable Indigeneity in Policy (PIP); Phenomena-based concept classification; Proprietary Information; Roughdraft AI; SOTP Classification; Synthetic Content; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer; Technophobia; Whole-of-Government Response
Related articles in Techindata.in Insights: 29 insight(s) on AI Ethics; 8 on AI and Copyright Law; 7 on AI and Competition Law; 7 on AI and media sciences; 7 on AI regulation; 5 on AI Governance; 3 on AI and Evidence Law; 3 on AI literacy; 2 on Abhivardhan; 2 on AI and Intellectual Property Law; 1 on AI and Securities Law; 1 on Algorithmic Trading.
- Federated Unlearning | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 22 March 2025
A process within Federated Learning environments that enables the removal of specific data contributions from trained models without requiring complete retraining. It allows participants to exercise "the right to be forgotten" or remove malicious contributions while preserving valuable knowledge. Federated unlearning encompasses three primary objectives: sample unlearning (removing specific data samples), class unlearning (removing all samples of a certain class), and client unlearning (removing an entire client's contribution). Effective unlearning algorithms ensure that the unlearned model exhibits performance indistinguishable from a model trained without the removed data. This capability is particularly important in federated settings where data remains distributed across multiple organizations or devices, making traditional centralized unlearning approaches impractical.
Related long-form insights on IndoPacific.App: Reinventing & Regulating Policy Use Cases of Web3 for India [VLiGTA-TR-004].
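As an illustration of the third objective, client unlearning, the sketch below shows a deliberately naive strategy: if the server has retained each client's per-round updates, it can rebuild the global model by re-aggregating the stored updates while skipping the departing client, avoiding full retraining. Plain federated averaging over weight deltas, a retained update history, and numpy vectors standing in for model weights are all assumptions of the sketch; published federated-unlearning algorithms are considerably more involved.

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging over a list of weight-delta vectors."""
    return np.mean(np.stack(updates), axis=0)

def train_with_history(initial_weights, client_updates_per_round):
    """Run FedAvg while recording every client's update each round.

    client_updates_per_round: list of dicts {client_id: delta_vector}."""
    weights = initial_weights.copy()
    history = []
    for round_updates in client_updates_per_round:
        history.append(round_updates)
        weights = weights + fed_avg(list(round_updates.values()))
    return weights, history

def unlearn_client(initial_weights, history, client_to_forget):
    """Naive client unlearning: replay aggregation without the target client."""
    weights = initial_weights.copy()
    for round_updates in history:
        remaining = [u for cid, u in round_updates.items() if cid != client_to_forget]
        if remaining:
            weights = weights + fed_avg(remaining)
    return weights

if __name__ == "__main__":
    dim = 4
    w0 = np.zeros(dim)
    rng = np.random.default_rng(1)
    # Three clients, two rounds of synthetic weight deltas.
    rounds = [{c: rng.normal(size=dim) for c in ("A", "B", "C")} for _ in range(2)]
    w_full, hist = train_with_history(w0, rounds)
    w_without_B = unlearn_client(w0, hist, "B")
    print("with all clients:  ", w_full)
    print("after unlearning B:", w_without_B)
```

Note that the replayed result coincides with a from-scratch run here only because the toy updates do not depend on the evolving global weights; practical unlearning methods must correct or calibrate for that dependence.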
- Token Economics | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 17 October 2025
The cost-performance analysis framework governing enterprise LLM deployment decisions based on the computational expense of processing input and output tokens, measured in tokens per dollar and tokens per second. Token economics encompasses tradeoffs between model size, context window length, inference latency, throughput requirements, and operational budgets that determine architectural choices between proprietary APIs and self-hosted models. This economic calculus has emerged as a primary driver of small language model (SLM) adoption, prompt optimization practices, and hybrid deployment strategies as organizations confront the reality that serving costs often exceed training expenses.
Related long-form insights on IndoPacific.App: Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Artificial Intelligence, Market Power and India in a Multipolar World.
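The API-versus-self-hosted choice described above reduces to straightforward arithmetic once a workload is characterised in tokens. The calculator below is a back-of-the-envelope sketch; every price, throughput figure, and token count is a placeholder assumption, not a quoted rate from any provider.

```python
def api_monthly_cost(requests_per_month: int, in_tokens: int, out_tokens: int,
                     price_in_per_mtok: float, price_out_per_mtok: float) -> float:
    """Cost of a pay-per-token API, with prices quoted per million tokens."""
    total_in = requests_per_month * in_tokens
    total_out = requests_per_month * out_tokens
    return (total_in * price_in_per_mtok + total_out * price_out_per_mtok) / 1_000_000

def self_hosted_monthly_cost(requests_per_month: int, in_tokens: int, out_tokens: int,
                             tokens_per_second: float, gpu_hourly_rate: float) -> float:
    """Cost of renting GPU time to serve the same workload at a given throughput."""
    total_tokens = requests_per_month * (in_tokens + out_tokens)
    gpu_hours = total_tokens / tokens_per_second / 3600
    return gpu_hours * gpu_hourly_rate

if __name__ == "__main__":
    # Hypothetical workload: 2M requests/month, 1,500 input and 500 output tokens each.
    workload = dict(requests_per_month=2_000_000, in_tokens=1_500, out_tokens=500)
    # Placeholder prices: $0.50 / $1.50 per million input/output tokens;
    # placeholder self-hosting: 2,500 tokens/s sustained on a $4.00/hour GPU.
    api = api_monthly_cost(**workload, price_in_per_mtok=0.50, price_out_per_mtok=1.50)
    hosted = self_hosted_monthly_cost(**workload, tokens_per_second=2_500, gpu_hourly_rate=4.00)
    print(f"API:         ${api:,.0f}/month")
    print(f"Self-hosted: ${hosted:,.0f}/month")
```

Tokens per dollar then falls out as the workload's total token count divided by whichever monthly cost applies, which is why sustained throughput (tokens per second) and utilisation dominate the self-hosted side of the comparison.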
- Technology by Default | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 26 April 2024
This refers to the use of AI technology without fully considering its potential consequences. For example, a company might use AI to automate a task without thinking about how this might impact workers or society as a whole.
Related long-form insights on IndoPacific.App: 2021 Handbook on AI and International Law [RHB 2021 ISAIL]; Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]; Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches [ISAIL-TR-001]; Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Reinventing & Regulating Policy Use Cases of Web3 for India [VLiGTA-TR-004]; Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005; Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024; Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004; Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]; Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005; The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006]; Reimaging and Restructuring MeiTY for India [IPLR-IG-007]; Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]; The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008]; Navigating Risk and Responsibility in AI-Driven Predictive Maintenance for Spacecraft, IPLR-IG-009, First Edition, 2024; Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; Reckoning the Viability of Safe Harbour in Technology Law, IPLR-IG-015; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016; The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]; Artificial Intelligence, Market Power and India in a Multipolar World; 2020 Handbook on AI and International Law [RHB 2020 ISAIL].
- Roughdraft AI | Glossary of Terms | Indic Pacific | IPLR
Date of Addition: 19 January 2025
A term describing artificial intelligence systems that produce preliminary or incomplete outputs requiring significant human refinement and verification. These systems, while capable of generating content or performing tasks, are characterised by:
- Inherent limitations in handling outliers and edge cases
- A tendency to produce hallucinations and unreliable results
- An inability to consistently perform high-level reasoning
- A need for human oversight and correction
The term acknowledges that current AI systems serve best as assistive tools rather than autonomous agents, requiring human expertise to validate and refine their outputs. This conceptualisation aligns with the pragmatic approach to AI governance and development, emphasising the importance of understanding AI's current limitations while working toward more reliable and trustworthy systems. The definition is inspired by Dr Gary Marcus's critiques of current AI systems (in fact, Dr Marcus coined the term) and Abhivardhan's pragmatic approach to AI governance.
Related long-form insights on IndoPacific.App: Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002]; Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005; Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024; Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004; Artificial Intelligence and Policy in India, Volume 4 [AIPI-V4]; Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005; The Indic Approach to Artificial Intelligence Policy [IPLR-IG-006]; Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5]; The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008]; Navigating Risk and Responsibility in AI-Driven Predictive Maintenance for Spacecraft, IPLR-IG-009, First Edition, 2024; Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]; Artificial Intelligence, Market Power and India in a Multipolar World.
