This is a glossary of technical terms related to law, artificial intelligence, policy, and digital technologies.
We use these terms in our technical reports and key publications.
A B C D E
AI Agents
An autonomous program designed to perform non-deterministic tasks that require adaptive decision-making and independent action. AI agents are capable of handling unpredictable scenarios, making decisions without predefined rules, and adapting to new variables or changing environments. They often learn from interactions and experiences but may produce less predictable outcomes compared to simpler systems.
This definition was inspired by inputs shared by Alexandre Kantjas @akantjas on X.
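A minimal sketch, in Python, of the control loop that distinguishes an agent from a fixed workflow. The call_llm helper and the single "search" tool are hypothetical placeholders, not a real API; this is an illustration of the idea, not an implementation from the source.

```python
# Minimal agent-loop sketch. `call_llm` and the tool registry are
# hypothetical placeholders, not a real API.
def call_llm(prompt: str) -> str:
    """Stand-in for any LLM API call; returns the model's next decision."""
    raise NotImplementedError

TOOLS = {"search": lambda query: f"results for {query}"}  # hypothetical tool

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model decides the next action itself: no predefined rule path.
        decision = call_llm(
            "\n".join(history) + "\nNext action ('tool: input' or 'finish: answer')?"
        )
        tool, _, arg = decision.partition(":")
        if tool.strip() == "finish":
            return arg.strip()          # independent decision to stop
        observation = TOOLS[tool.strip()](arg.strip())
        history.append(f"Action: {decision}\nObservation: {observation}")
    return "stopped: step budget exhausted"
```

The loop, rather than a fixed sequence of steps, is what makes outcomes less predictable than those of the automations and AI workflows defined later in this glossary.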
AI Anxiety
A psychological state characterized by apprehension and stress about artificial intelligence's increasing presence and influence in various aspects of life. It manifests as concerns about job displacement, ethical implications, privacy issues, and the broader societal impact of AI technologies. The anxiety can range from mild unease to severe distress and often stems from uncertainty about AI's capabilities, fear of obsolescence, and concerns about AI's responsible use.
AI as a Component
It means Artificial Intelligence can exist as a component or constituent of any digital or physical product, service, or system offered via electronic means. The AI-related features present in that system indicate whether AI as a component exists by design or by default.
AI as a Concept
It means Artificial Intelligence itself could be understood as a concept or defined in a conceptual framework.
The definition is provided in the 2020 Handbook on AI and International Law (2021):
As a concept, AI contributes in developing the field of international technology law prominently, considering the integral nature of the concept with the field of technology sciences. We also know that scholarly research is in course with regards to acknowledging and ascertaining how AI is relatable and connected to fields like international intellectual property law, international privacy law, international human rights law & international cyber law. Thus, as a concept, it is clear to infer that AI has to be accepted in the best possible ways, which serves better checks and balances, and concept of jurisdiction, whether international or transnational, is suitably established and encouraged.
AI as a concept could be further classified in these following ways:
Technical concept classification
Issue-to-issue concept classification
Ethics-based concept classification
Phenomena-based concept classification
Anthropomorphism-based concept classification
AI as an Entity
It means Artificial Intelligence may be considered as a form of electronic personality, in a legal or juristic sense. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
AI as an Industry
It means Artificial Intelligence may be considered as a sector or industry or industry segment (howsoever it is termed) in terms of its economic and social utility. This idea was proposed in the 2020 Handbook on AI and International Law (2021):
As an industry, the economic and social utility of AI has to be in consensus with the three factors: (1) state consequentialism or state interests; (2) industrial motives and interests; and (3) the explainability and reasonability behind the industrial products and services central or related to AI.
AI as a Juristic Entity
It means Artificial Intelligence may be recognised in a specific context, space, or any other frame of reference, such as time, through the legal and administrative machineries of a legitimate government. This idea was proposed in the 2020 Handbook on AI and International Law (2021). Section 2 (13) (g) of the Digital Personal Data Protection Act, 2023 also refers to "every artificial juristic person", a definition that could be read as extending specific juristic recognition to artificial intelligence in a personalised sense.
AI as a Legal Entity
It means Artificial Intelligence may be recognised, in a statutory or regulatory sense, as a legal entity, with its own caveats, features and limits as prescribed by law. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
AI as an Object
It means Artificial Intelligence may be considered as the inhibitor and enabler of an electronic or digital environment to which a human being is subjected. This classification is the inverse of the idea of 'AI as a Subject': while human environments and natural environments do affect AI processing & outputs, the design and interface of any AI system could equally affect a human being as a data subject (as per the GDPR) / data principal (as per the DPDPA). This idea was proposed in the 2020 Handbook on AI and International Law (2021).
AI as a Subject
It means Artificial Intelligence may be legally prescribed or interpreted to be treated as subject to the human environment, inputs and actions. The simplest example is a Generative AI system subjected to human prompting, be it text, visual, sound or any other form of human input, to generate output of a proprietary nature. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
AI as a Third Party
It means Artificial Intelligence may have a limited sense of autonomy, enough to behave as a Third Party in a dispute, problem or issue raised. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
AI Explainability Clause
A binding requirement that mandates AI system providers and deployers to ensure that significant decisions made or supported by AI systems can be explained in terms comprehensible to affected parties. This includes disclosure of the system's purpose, capabilities, limitations, data sources, decision criteria, potential biases, and the specific roles of human and automated components in the decision-making process. The explainability standard scales with the potential impact of decisions, requiring greater transparency for systems affecting fundamental rights, safety, or significant economic interests.
A sample explainability clause is set out below:
The AI system provider/deployer ("Provider") shall ensure that all significant decisions made or substantially influenced by the AI system ("System") are explainable to affected parties in clear, non-technical language. This explanation shall include, at minimum:
The specific purpose and intended use of the System;
The types and sources of data used by the System;
The key factors or criteria considered in reaching the decision;
Any known limitations or potential biases in the System;
The respective roles of human oversight and automated processes in the final decision;
The potential impact of the decision on the affected party;
Available options for contesting or seeking review of the decision.
The level of detail provided in the explanation shall be proportionate to the potential impact of the decision on fundamental rights, safety, or significant economic interests of the affected party. The Provider shall maintain documentation of the System's decision-making processes sufficient to generate these explanations upon request.
This clause shall be binding and enforceable, with non-compliance potentially resulting in suspension of the System's use until adequate explainability is demonstrated.
AI Knowledge Chain
A structured sequence of information transformation processes that enable AI systems to convert raw data into actionable insights through interconnected stages of knowledge acquisition, representation, reasoning, and application. Knowledge chains encompass both the technical pathways within AI systems and the human-AI information exchanges that facilitate meaningful interpretation of AI outputs. Robust knowledge chains maintain logical coherence between information elements while providing transparent connections between premises and conclusions.
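A rough sketch of the acquisition-representation-reasoning-application sequence described above, assuming each stage can be modelled as a simple function; the stage internals are illustrative placeholders, not from the source.

```python
# Illustrative knowledge-chain sketch: raw data flows through a fixed
# sequence of stages, each preserving a traceable link from premises
# to conclusions. Stage bodies are toy placeholders.
from typing import Any, Callable

def acquire(raw: str) -> list[str]:
    return raw.split(".")                       # knowledge acquisition

def represent(facts: list[str]) -> dict:
    return {i: f.strip() for i, f in enumerate(facts) if f.strip()}

def reason(knowledge: dict) -> str:
    return f"{len(knowledge)} linked premises"  # toy 'reasoning' step

def apply_insight(conclusion: str) -> str:
    return f"Insight: {conclusion}"             # knowledge application

CHAIN: list[Callable[[Any], Any]] = [acquire, represent, reason, apply_insight]

def run_chain(data: Any) -> Any:
    for stage in CHAIN:         # logical coherence: each stage consumes
        data = stage(data)      # exactly what the previous stage produced
    return data

print(run_chain("Data is raw. Knowledge is structured. Insight is applied."))
```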
AI Literacy
The ability to distinguish between actual AI capabilities and market hyperbole while understanding the complete lifecycle of AI systems from development through deployment. This includes comprehending one's position within AI value chains, critically evaluating AI outputs and claims, recognising the practices involved in governing AI systems, and making informed decisions about AI engagement across personal and professional contexts. True AI literacy enables individuals to differentiate between substantive AI innovation and superficial technological rebranding.
AI Supply Chain
The end-to-end network of resources, technologies, infrastructures, and services required to create, train, deploy, and maintain AI systems. This includes hardware components (processing units, memory, sensors), computational resources (cloud services, data centres), data resources (datasets, knowledge bases), algorithmic frameworks, and human expertise. The AI supply chain encompasses both tangible and intangible assets across global networks of providers that collectively enable AI capabilities for end-users and organisations.
AI Value Chain
The structured network of entities and their associated responsibilities in the development, distribution, and deployment of AI systems. This includes providers who develop AI models, importers who bring systems into regulatory jurisdictions, distributors who make systems commercially available, and deployers who implement systems in specific contexts. Each entity bears distinct legal and ethical responsibilities for risk assessment, documentation, monitoring, and governance appropriate to their position in the chain.
AI Workflows
A structured automation process that integrates Artificial Intelligence, such as Large Language Models (LLMs) like ChatGPT, into specific steps via APIs. AI workflows are ideal for deterministic tasks requiring flexibility, pattern recognition, or the handling of complex rules. They combine traditional automation with AI-enhanced decision-making to address more dynamic needs.
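A minimal sketch of an AI workflow, assuming a hypothetical call_llm placeholder for any provider's API: the sequence of steps is fixed in advance, and only one step delegates a judgment call to the model. The ticket-routing scenario is an illustrative assumption, not from the source.

```python
# AI-workflow sketch: deterministic steps around a single AI-enhanced step.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an HTTP call to an LLM API

def classify_ticket(ticket_text: str) -> str:
    # AI-enhanced step: pattern recognition the rule-based steps cannot do.
    return call_llm(
        f"Classify this support ticket as billing/tech/other:\n{ticket_text}"
    )

def route_ticket(category: str) -> str:
    # Deterministic step: plain automation around the AI step.
    queues = {"billing": "finance-queue", "tech": "engineering-queue"}
    return queues.get(category.strip().lower(), "general-queue")

def workflow(ticket_text: str) -> str:
    # Unlike an agent, the path is fixed; only one step's output varies.
    return route_ticket(classify_ticket(ticket_text))
```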
AI-based Anthropomorphization
AI-based anthropomorphization is the process of giving AI systems human-like qualities or characteristics. This can be done in a variety of ways, such as giving the AI system a human-like name, appearance, or personality. It can also be done by giving the AI system the ability to communicate in a human-like way, or by giving it the ability to understand and respond to human emotions. This idea was discussed in the 2020 Handbook on AI and International Law (2021), Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and Promoting Economy of Innovation through Explainable AI, VLiGTA-TR-003 (2023).
Algorithmic Activities and Operations
It means the algorithms of any AI system or machine-learning-based system are capable of performing two kinds of tasks, in a procedural sense of law: normal and ordinary tasks, which could be referred to as 'activities', and methodical, context-specific or technology-specific tasks, called 'operations'. This idea was proposed in Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022).
All-Comprehensive Approach
This means a system whose approach covers every aspect of its purpose, risks and impact, with broad coverage.
Anthropomorphism-based concept classification
This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, with a perspective of how AI systems could lead to human attribution and enculturation. This idea was proposed in Artificial Intelligence Ethics and International Law (originally published in 2019).
Artificial Intelligence Hype Cycle
An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario such that a class of Artificial Intelligence technology as a product / service is used in a participatory or preparatory sense to influence or generate the hype cycle. This definition was proposed in Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022).
Automation
A system designed to execute predefined, rule-based tasks automatically without human intervention. Automations excel at deterministic tasks, delivering reliable and consistent outcomes within clearly programmed parameters. They are fast, efficient, and predictable but lack adaptability to new or unforeseen scenarios.
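For contrast with the agent and AI-workflow sketches above, a plain automation can be as small as a single deterministic rule; the invoice example below is an illustrative assumption, not from the source.

```python
# Plain automation sketch: predefined, rule-based, fully deterministic.
# No model call, no adaptation: identical output for identical input.
def auto_approve_invoice(amount: float, vendor_verified: bool) -> bool:
    # Fixed rules programmed in advance; fails on anything unforeseen.
    return vendor_verified and amount <= 500.0
```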
CEI Classification
This is one of the two Classification Methods in which Artificial Intelligence could be recognised as a Concept, an Entity, or an Industry. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
Class-of-Applications-by-Class-of-Application (CbC) approach
The Class-of-Applications-by-Class-of-Application (CbC) approach is a method for developing and managing AI systems that focuses on the specific applications for which the systems will be used. The CbC approach is based on the idea that different applications have different requirements, and that AI systems should be designed and developed to meet those specific requirements. This was originally discussed in Andrea Bertolini's work on ‘Artificial Intelligence and Civil Liability’ published by the European Parliament in 2020. We have analysed this idea in Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022).
Derivative Generative AI Applications, the Generative AI products and services which are derivatives of the main generative AI applications, by virtue of reliance (DGAI)
This is an ontological sub-category of Generative AI applications which implies that a Generative AI application could be built on the basis of a training model, any API or any commercial or technical component of another AI or Generative AI application. Such an application could be called a Derivative Generative AI Application. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Distributed Ledger
A distributed ledger (also called a shared ledger or distributed ledger technology or DLT) is the consensus of replicated, shared, and synchronized digital data that is geographically spread (distributed) across many sites, countries, or institutions.
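A toy sketch of the idea: each site holds a full replica of the ledger, and a simple majority vote stands in for a real consensus protocol. The site names and voting logic are illustrative assumptions, not a description of any particular DLT.

```python
# Toy distributed-ledger sketch: replicated, shared, synchronized data
# across sites, committed only on (toy) majority consensus.
class Site:
    def __init__(self, name: str):
        self.name = name
        self.ledger: list[str] = []   # full local replica

    def vote(self, entry: str) -> bool:
        return True                   # toy rule: every honest site accepts

def commit(sites: list[Site], entry: str) -> bool:
    votes = sum(site.vote(entry) for site in sites)
    if votes * 2 > len(sites):        # simple majority "consensus"
        for site in sites:            # synchronise all replicas
            site.ledger.append(entry)
        return True
    return False

sites = [Site("Delhi"), Site("Geneva"), Site("Singapore")]
commit(sites, "asset transfer #1")
assert all(s.ledger == ["asset transfer #1"] for s in sites)
```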
Ethics-based concept classification
This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, with a perspective of how AI systems could be classified on the basis of the ethical principles and ideas responsible for their creation by design & default. This idea was proposed in Artificial Intelligence Ethics and International Law (originally published in 2019).
F G H I J
Framework Fatigue
The mental exhaustion and reduced decision-making capacity experienced when confronted with an overwhelming number of methodological frameworks, guidelines, and standards in a field.
Note: This phenomenon has gained particular significance in the artificial intelligence sector, where the rapid emergence of multiple frameworks for AI governance, ethics, and development has created challenges for effective implementation and compliance across industries.
GAE
GAE is an acronym that stands for "Global American Empire," a term used to describe the worldwide political, economic, military, and cultural influence of the United States beyond its territorial boundaries. This concept characterises America's position as a global hegemon whose influence spans across continents through various mechanisms of power projection rather than through direct colonial control.
The term GAE (Global American Empire) encapsulates a critical perspective on America's position as the dominant global power through its far-reaching military, economic, cultural, and political influence. While not officially acknowledged by the United States government, which "has never officially identified itself and its territorial possessions as an empire", this concept provides a framework for understanding American global hegemony that extends beyond traditional colonial models of empire.
The creation of this term is attributed to Alexei Arora on Substack and X.com.
General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2)
This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have many test cases and use cases, which are either useful only in the short run or of unclear value as per industrial and regulatory standards. ChatGPT could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
General intelligence applications with multiple stable use cases as per relevant industrial and regulatory standards (GI1)
This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have many test cases and use cases, which are useful and considered stable as per relevant industrial and regulatory standards. ChatGPT could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Generative AI applications with one standalone use case (GAI1)
This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have a single standalone use case of value. Midjourney could be considered a standalone use case, for example. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Generative AI applications with a collection of standalone use cases related to one another (GAI2)
This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have more than one standalone use case, and these use cases are related to one another. The best example of such a Generative AI Application is GPT-4's recent update, which can create text and images based on human prompts and modify them as per requirements. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Hierarchical Feedback Distortion
The Hierarchical Feedback Distortion Principle operates through a specific mechanism wherein state and central governments respond dramatically to negative feedback, often through public statements, high-profile investigations, or policy announcements. These responses, while highly visible, frequently fail to address the underlying structural issues that enable corruption or administrative failures at the local level. The resulting dynamic creates what can be described as "accountability gaps" – spaces within the governance system where certain actors can operate with relative impunity despite the appearance of oversight.
These accountability gaps form through several interconnected processes. First, the distance between higher levels of government and local administration creates information asymmetries, where central authorities lack detailed knowledge of ground-level operations. Second, the emphasis on negative feedback creates incentives for performative responses that satisfy public demand for action without necessarily changing administrative practices. Third, the hierarchical nature of bureaucratic systems often shields lower-level officials from direct accountability to citizens, instead making them primarily answerable to superiors within the bureaucracy.
In the Indian context, these dynamics are particularly pronounced due to the country's complex multi-level governance structure, which includes central, state, district, and local administrative tiers. Each level operates with different incentives, capacities, and relationships to citizens, creating multiple opportunities for accountability mechanisms to break down. The resulting system can inadvertently create protected spaces where corruption can flourish despite the appearance of active governance and oversight from above.
This principle was inspired by posts from Pseudokanada (@hestmatematik) on X.
In-context Learning
In-context learning for generative AI is the ability of a generative AI model to learn and adapt to new information based on the context in which it is used. This allows the model to generate more accurate and relevant results, even if it has not been specifically trained on the specific task or topic at hand. For example, an in-context learning generative AI model could be used to generate a poem about a specific topic, such as "love" or "nature." The model would be provided with a few examples of poems about the selected topic, which it would then use to understand the context of the task. The model would then generate a new poem about the topic that is consistent with the context. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
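A minimal sketch of how the poem example above could be framed as a few-shot prompt; call_llm is a hypothetical placeholder for any LLM API, and the prompt wording is an illustrative assumption.

```python
# In-context learning via few-shot prompting: the model is not retrained;
# the examples placed in the prompt supply the context it adapts to.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any LLM API

def few_shot_poem(topic: str, example_poems: list[str]) -> str:
    examples = "\n\n".join(f"Example poem:\n{p}" for p in example_poems)
    prompt = (
        f"{examples}\n\n"
        f"Following the style and form of the examples above, "
        f"write a new poem about {topic}."
    )
    return call_llm(prompt)
```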
Indofuturism
A creative and cultural movement that reimagines India through science fiction and futuristic scenarios, particularly using AI-generated art and storytelling. It challenges Western sci-fi tropes by blending Indian cultural elements with futuristic concepts. Key characteristics include:
Visual reimagining of Indian scenarios through a sci-fi lens
Challenge to the assumption that sci-fi isn't a "desi genre"
Creation of new visual vocabulary for Indian science fiction
Exploration of alternative historical scenarios (like non-colonized India)
This term was conceptualized through the AI artwork and creative direction of Prateek Arora, VP Development at BANG BANG Mediacorp, who popularized the term through his viral AI-generated artworks like "Granth Gothica" and "Disco Antriksh".
Indo-Pacific
A concept relating to the countries and geographies of the Indian Ocean Region and the Pacific Ocean Region, popularised by the former Prime Minister of Japan, Shinzo Abe. The Ministry of External Affairs, Government of India prefers this term as a clear replacement for the term Asia-Pacific, in the context of the South Asian region (or the Indian Subcontinent), the South-East Asian region, East Africa, the Pacific Islands region, Australia, Oceania, and the Far East.
International Algorithmic Law
A newer concept of international law, proposed by Abhivardhan, the Founder of Indic Pacific Legal Research & the Indian Society of Artificial Intelligence and Law in 2020, in his paper entitled 'International Algorithmic Law: Emergence and the Indications of Jus Cogens Framework and Politics', originally published in Artificial Intelligence and Policy in India, Volume 2 (2021).
The definition in the paper is stated as follows:
The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.
Issue-to-issue concept classification
This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, in which the conceptual framework or basis of an AI system may be recognised on an issue-to-issue basis, with unique contexts and realities. This was proposed in Artificial Intelligence Ethics and International Law (originally published in 2019).
K L M N O P
Manifest Availability
The manifest availability doctrine refers to the concept that AI's presence or existence is evident and apparent, either as a standalone entity or integrated into products and services. This term emphasizes that AI is not just an abstract concept but is tangibly observable and accessible in various forms in real-world applications. By understanding how AI is manifested in a given context, one can determine its role and involvement, which leads to a legal interpretation of AI's status as a legal or juristic entity. This is a principle or doctrine, which was proposed in the 2020 Handbook on AI and International Law (2021), and was further explained in the 2021 Handbook on AI and International Law (2022). References to this concept can also be found in Artificial Intelligence Ethics and International Law (originally published in 2019).
Here is a definition of the concept as per the 2020 Handbook on AI and International Law:
So, AI is again conceptually abstract despite having its different definitions and concepts. Also, there are different kinds of products and services, where AI can be present or manifestly available either as a Subject, an Object or that manifest availability is convincing enough to prove that AI resembles or at least vicariously or principally represents itself as a Third Party. Therefore, you need that SOTP classification initially to test the manifest availability of AI (you can do it through analyzing the systemic features of the product/service simply or the ML project), which is then followed by a generic legal interpretation to decide it would be a Subject/an Object/a Third Party (meaning using the SOTP classification again to decide the legal recourse of the AI as a legal/juristic entity).
Multi-alignment
Multi-alignment in foreign policy is a strategy in which a country maintains close ties with multiple major powers, rather than aligning itself with a single power bloc across regions, industry sectors, continents and power centers. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).
Model Algorithmic Ethics Standards (MAES)
A concept proposed for private sector stakeholders in the AI business, such as start-ups, MSMEs and freelancing professionals, to promote market-friendly AI ethics standards for their AI-based or AI-enabled products & services, and to create adaptive model standards whose feasibility can be tested at various stages of implementation. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Multipolar World
A multipolar world is a global system in which power is distributed among multiple states, rather than being concentrated in one (unipolar) or two (bipolar) dominant powers. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).
Multipolarity
Multipolarity is a global system in which power is distributed among multiple states, with no single state having a dominant position, be it any sector, geography or level of sovereignty. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).
Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI
Generative AI is a form of artificial intelligence capable of generating new content, including text, images, and music. It has the potential to bring about significant transformations across various industries and sectors. Nevertheless, its emergence also presents a range of legal and ethical dilemmas.
Here is an excerpt from Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023):
First, for a product, service, use case or test case to be considered multivariant, it must have a multi-sector impact. The multi-sector impact could be disruption of jobs, work opportunities, technical & industrial standards and certain negative implications, such as human manipulation.
Second, for a product, service, use case or test case to be considered fungible, it must transform its core purpose by changing its sectoral priorities (like for example, a generative AI product may have been useful for the FMCG sector, but could also be used by companies in the pharmaceutical sector for some reasons). Relevant legal concerns could be whether the shift disrupts the previous sector, or is causing collusion or is disrupting the new sector with negative implications.
Third, for a product, service, use case or test case to be disruptive, it must affect the status quo of certain industrial and market practices of a sector. For example, maybe a generative AI tool could be capable of creating certain work opportunities or rendering them dysfunctional for human employees or freelancers. Even otherwise, the generative AI tool could be capable in shaping work and ethical standards due to its intervention.
This phrasing was proposed for Generative AI use cases and test cases in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Object-Oriented Design
Object-oriented design (OOD) is a software design methodology that organizes software around data, or objects, rather than functions and logic. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
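A minimal illustration of the idea (a generic textbook-style example, not from the report): the data and the behaviour that operates on it are organised together in an object.

```python
# Object-oriented design in miniature: software organised around objects
# (data plus behaviour) rather than free-standing functions and logic.
class BankAccount:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner          # data ...
        self.balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount      # ... and behaviour, encapsulated together
```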
Omnipotence
In the context of Artificial Intelligence, this implies that any AI system, due to its inherent yet limited features of processing and generating outputs, could be effective in shaping multiple sectors, eventualities and legal dilemmas. In short, any omnipotent AI system could have first, second & third order effects due to its actions. This was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019), Regulatory Sovereignty in India: Indigenizing Competition- Technology Approaches, ISAIL-TR-001 (2021), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and many key publications by ISAIL & VLiGTA.
Omnipresence
In the context of Artificial Intelligence, this implies that any AI system, due to its inherent yet limited features of processing and generating outputs, could be present or relevant in multiple frames of reference such as sectors, timelines, geographies, realities, levels of sovereignty, and many other factors. This was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019), Regulatory Sovereignty in India: Indigenizing Competition- Technology Approaches, ISAIL-TR-001 (2021), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and many key publications by ISAIL & VLiGTA.
Permeable Indigeneity in Policy (PIP)
This concept, simply means, in proposition [...] that whatsoever legal and policy changes happen, they must be reflective, and largely circumscribing of the policy realities of the country. PIP cannot be a set of predetermined cases of indigeneity in a puritan or reductionist fashion, because in both of such cases, the nuance of being manifestly unique from the very churning of policy analysis, deconstruction & understanding, is irrevocably (and maybe in some cases, not irrevocably) lost. This was proposed in Regulatory Sovereignty in India: Indigenizing Competition- Technology Approaches, ISAIL-TR-001 (2021).
Phenomena-based concept classification
This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, in which, beyond technical and ethical questions, it is possible that AI systems may render purpose based on natural and human-related phenomena. This idea was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019).
Privacy by Default
Privacy by Default means that once a product or service has been released to the public, the strictest privacy settings should apply by default, without any manual input from the end user. This was largely proposed in Article 25 of the General Data Protection Regulation of the European Union.
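A minimal configuration sketch of what "strictest settings by default" can look like in code; the field names and values are illustrative assumptions, not drawn from the GDPR.

```python
# Privacy by default: the strictest settings are the ones a user gets
# with no action at all. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    share_profile_publicly: bool = False   # strictest value is the default
    allow_ad_tracking: bool = False
    data_retention_days: int = 30          # minimal retention by default

settings = PrivacySettings()               # no manual input from the user
assert not settings.allow_ad_tracking
```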
Privacy by Design
Privacy by Design states that any action a company undertakes that involves processing personal data must be done with data protection and privacy in mind at every step. This was largely proposed in Article 25 of the General Data Protection Regulation of the European Union.
Proprietary Information
Proprietary information in the context of generative AI applications is any information that is not publicly known and that gives a company or individual a competitive advantage. This can include information about the generative AI model itself, such as its training data, architecture, and parameters. It can also include information about the specific applications for the generative AI model, such as the products or services that it is used to create. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Q R S T U
Roughdraft AI
A term describing artificial intelligence systems that produce preliminary or incomplete outputs requiring significant human refinement and verification. These systems, while capable of generating content or performing tasks, are characterised by:
Inherent limitations in handling outliers and edge cases
Tendency to produce hallucinations and unreliable results
Inability to consistently perform high-level reasoning
Need for human oversight and correction
The term acknowledges that current AI systems serve best as assistive tools rather than autonomous agents, requiring human expertise to validate and refine their outputs. This conceptualization aligns with the pragmatic approach to AI governance and development, emphasizing the importance of understanding AI's current limitations while working toward more reliable and trustworthy systems.
The definition is inspired by Dr Gary Marcus's critiques of current AI systems (Dr Marcus in fact coined the term) and Abhivardhan's pragmatic approach to AI governance.
Rule Engine
A rule engine is a type of software program that aids in automating decision-making processes by applying a predefined set of rules to a given dataset. It is commonly employed alongside generative AI tools to enhance the overall quality and consistency of the generated output. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
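A minimal sketch of the idea, with illustrative rules for post-checking generated text; the rule contents are assumptions for demonstration, not drawn from the report.

```python
# Minimal rule-engine sketch: rules are (condition, action) pairs applied
# to a record, here used to post-check generative AI output.
from typing import Callable

Rule = tuple[Callable[[dict], bool], Callable[[dict], dict]]

RULES: list[Rule] = [
    # If generated text is too long, truncate it (illustrative rule).
    (lambda r: len(r["text"]) > 100,
     lambda r: {**r, "text": r["text"][:100]}),
    # If a required disclaimer is missing, append it (illustrative rule).
    (lambda r: "disclaimer" not in r["text"].lower(),
     lambda r: {**r, "text": r["text"] + "\n[Disclaimer: AI-generated]"}),
]

def apply_rules(record: dict) -> dict:
    for condition, action in RULES:
        if condition(record):
            record = action(record)
    return record

print(apply_rules({"text": "Draft clause generated by a model."}))
```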
SOTP Classification
This is one of the two Classification Methods in which Artificial Intelligence could be recognised as a Subject, an Object or a Third Party in a legal issue or dispute. This idea was proposed in the 2020 Handbook on AI and International Law (2021).
Strategic Autonomy
Strategic autonomy in Indian foreign policy is the ability of India to pursue its national interests and adopt its preferred foreign policy without being beholden to any other country. This means that India should be able to make its own decisions about foreign policy, even if those decisions are unpopular with other countries. India should also be able to maintain its own security and economic interests, without having to rely on other countries for help. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).
Strategic Hedging
Strategic hedging means a state spreads its risk by pursuing two opposite policies towards other countries via balancing and engagement, to prepare for all best and worst case scenarios, with a calculated combination of its soft power & hard power. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).
Technical concept classification
This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, as this method covers all technical features of Artificial Intelligence that have evolved in the history of computer science. Such a classification approach is helpful in estimating legal and policy risks associated with technical use cases of AI systems at a conceptual level. This idea was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019).
Technology by Default
This refers to the use of AI technology without fully considering its potential consequences. For example, a company might use AI to automate a task without thinking about how this might impact workers or society as a whole.
Technology by Design
This refers to the deliberate use of AI technology to achieve specific goals. For example, a company might design an AI system to help them identify and recruit the best candidates for a job.
Technology Distancing
This refers to the process of creating AI systems that are more transparent, accountable, and equitable. This can be done by involving stakeholders in the design and development of AI systems, and by making sure that AI systems are aligned with human values.
Technology Transfer
This refers to the process of sharing AI knowledge and technology between different organizations or individuals. This can be done through formal channels such as research collaborations or licensing agreements, or through informal channels such as conferences and online communities.
Technophobia
An irrational or disproportionate fear, aversion, or resistance to advanced technologies, technological change, and digital innovation. Manifests as psychological and physiological responses ranging from mild anxiety to severe distress when interacting with or contemplating technological systems. Often characterised by:
Cognitive resistance to learning new technological skills
Physical symptoms when forced to use technology
Avoidance behaviours toward digital tools and platforms
Distinguished from rational technology criticism by its emotional rather than analytical basis. Particularly relevant in contexts of rapid technological transformation, AI adoption, and digital transformation initiatives.
V W X Y Z
WANA
WANA is an official term used by the Government of India to refer to the West Asia and North Africa region. The Ministry of External Affairs (MEA) of India has a dedicated WANA Division that handles "all matters relating to Algeria, Djibouti, Egypt, Israel, Libya, Lebanon, Morocco, Syria, Palestine, Sudan, South Sudan, Somalia, Jordan and Tunisia".
India's Ministry of Commerce and Industry also has a WANA Division that deals with India's trade relations with 19 countries in this region, including Bahrain, Kuwait, Oman, Qatar, Iraq, UAE, Saudi Arabia, Egypt, Sudan, Algeria, Morocco, Tunisia, Syria, Jordan, Israel, Lebanon, Yemen, Libya and South Sudan.
The term is formally recognized in diplomatic contexts, as evidenced by the first India-France Consultations on West Asia and North Africa Region held on April 12, 2022, where Dr. Pradeep Singh Rajpurohit, Joint Secretary (WANA), MEA, represented India.
According to the Indian foreign policy framework, the WANA region encompasses all Arab nations as well as South Sudan, with North Africa being considered a direct extension of the Middle East.
WENA
An acronym for Western Europe and Northern America, referring to the geographically and economically developed regions that include countries in Western and Central Europe, the United Kingdom, the United States, and Canada. Coined by satirist Karl Sharro, and popularised by Indian journalist Nirmalya Dutta, WENA is used to satirically critique the analytical frameworks often applied to different global regions, particularly in comparison to the Middle East and North Africa (MENA).
This term challenges the notion of Western exceptionalism by advocating for the same rigorous scrutiny of social, political, and cultural issues in WENA that is commonly directed towards MENA, promoting a more balanced and equitable examination of diverse regions.
Whole-of-Government Response
A whole-of-government response under the (proposed) Digital India Act is a coordinated approach to the governance of digital technologies and issues. It involves the participation of all relevant government ministries and agencies, as well as other stakeholders such as industry and academia. The goal of a whole-of-government response is to ensure that digital technologies are used in a way that is beneficial to society, while also mitigating any potential risks. This may involve developing new policies and regulations, investing in research and development, and raising awareness of digital issues. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
Zero Knowledge Systems
Zero-knowledge systems (ZKSs) are cryptographic protocols that allow one party (the prover) to prove to another party (the verifier) that a statement is true, without revealing any information about the statement itself or how it is proven. ZKSs are based on the idea that it is possible to prove the possession of knowledge without revealing the knowledge itself. This was discussed in Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004 (2023).
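As a toy illustration (not drawn from VLiGTA-TR-004), the Schnorr identification protocol is a classic zero-knowledge proof of knowledge: the prover demonstrates knowledge of a secret exponent x, where y = g^x mod p, without revealing x. The tiny parameters below are insecure and for demonstration only.

```python
# Toy Schnorr identification protocol: proves knowledge of x with
# y = g^x mod p without revealing x. Insecure toy parameters.
import random

p, q, g = 23, 11, 2        # g has prime order q modulo p

x = 7                      # prover's secret
y = pow(g, x, p)           # public key

# 1. Commitment: prover picks random r, sends t = g^r mod p.
r = random.randrange(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends random c.
c = random.randrange(q)

# 3. Response: prover sends s = r + c*x mod q; the fresh random r
#    masks x, so the transcript reveals nothing about the secret.
s = (r + c * x) % q

# 4. Verification: g^s == t * y^c (mod p) holds iff the prover knows x,
#    since g^(r + c*x) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```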
Zero Knowledge Taxes
Zero-knowledge taxes (ZKTs) are a hypothetical type of tax that could be implemented using ZKSs. ZKTs would allow taxpayers to prove to the government that they have paid their taxes without revealing their income or other financial information. This was discussed in Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004 (2023).
Terms of Use
This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use:
You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary.
Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example):
Indic Pacific Legal Research LLP, 'The Indic Pacific Glossary of Terms' (Indic Pacific Legal Research, 2023) <https://www.indicpacific.com/glossary>
You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research.
The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary.
You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary.
If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.
Law & Policy 101
This section offers free & basic explainers on certain concepts of Law and Policy for general understanding.