
This is a glossary of technical terms related to law, artificial intelligence, policy and digital technologies.

We use these terms in our technical reports and key publications.

A B C D E

AI as a Component

It means Artificial Intelligence can exist as a component or constituent of any digital or physical product, service or system offered via electronic means, in any way possible. The AI-related features present in that system indicate whether AI as a component exists by design or by default.

AI as a Concept

It means Artificial Intelligence itself could be understood as a concept or defined in a conceptual framework.  

The definition is provided in the 2020 Handbook on AI and International Law (2021):

As a concept, AI contributes in developing the field of international technology law prominently, considering the integral nature of the concept with the field of technology sciences. We also know that scholarly research is in course with regards to acknowledging and ascertaining how AI is relatable and connected to fields like international intellectual property law, international privacy law, international human rights law & international cyber law. Thus, as a concept, it is clear to infer that AI has to be accepted in the best possible ways, which serves better checks and balances, and concept of jurisdiction, whether international or transnational, is suitably established and encouraged.

AI as a concept could be further classified in these following ways:

  1. Technical concept classification

  2. Issue-to-issue concept classification

  3. Ethics-based concept classification

  4. Phenomena-based concept classification

  5. Anthropomorphism-based concept classification

AI as an Entity

It means Artificial Intelligence may be considered as a form of electronic personality, in a legal or juristic sense. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

AI as an Industry

It means Artificial Intelligence may be considered as a sector or industry or industry segment (howsoever it is termed) in terms of its economic and social utility. This idea was proposed in the 2020 Handbook on AI and International Law (2021):

As an industry, the economic and social utility of AI has to be in consensus with the three factors: (1) state consequentialism or state interests; (2) industrial motives and interests; and (3) the explainability and reasonability behind the industrial products and services central or related to AI.

AI as a Juristic Entity

It means Artificial Intelligence may be recognised in a specific context, space, or any other frame of reference, such as time, through the legal and administrative machineries of a legitimate government. This idea was proposed in the 2020 Handbook on AI and International Law (2021). Even Section 2(13)(g) of the Digital Personal Data Protection Act, 2023 contains the phrase "every artificial juristic person", which could be read as extending specific juristic recognition to artificial intelligence in a personalised sense.

AI as a Legal Entity

It means Artificial Intelligence may be recognised, in a statutory or regulatory sense, as a legal entity, with its own caveats, features and limits as prescribed by law. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

AI as an Object

It means Artificial Intelligence may be considered as the inhibitor and enabler of an electronic or digital environment to which a human being is subjected. This classification is the inverse of the idea of 'AI as a Subject': while human environments and natural environments do affect AI processing & outputs, the design and interface of any AI system could likewise affect a human being as a data subject (as per the GDPR) / data principal (as per the DPDPA). This idea was proposed in the 2020 Handbook on AI and International Law (2021).

AI as a Subject

It means Artificial Intelligence may be legally prescribed or interpreted to be treated as subject to human environment, inputs and actions. The simplest example could be that of a Generative AI system being subjected to human prompting, be it text, visual, sound or any other form of human input, to generate outputs of a proprietary nature. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

AI as a Third Party

It means Artificial Intelligence may have a limited sense of autonomy, enough to behave as a Third Party in a dispute, problem or issue raised. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

AI-based Anthropomorphization

AI-based anthropomorphization is the process of giving AI systems human-like qualities or characteristics. This can be done in a variety of ways, such as giving the AI system a human-like name, appearance, or personality. It can also be done by giving the AI system the ability to communicate in a human-like way, or by giving it the ability to understand and respond to human emotions. This idea was discussed in the 2020 Handbook on AI and International Law (2021), Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and Promoting Economy of Innovation through Explainable AI, VLiGTA-TR-003 (2023).

Algorithmic Activities and Operations

It means the algorithms of any AI system or machine-learning-based system are capable of performing two kinds of tasks, in a procedural sense of law: normal and ordinary tasks, which could be referred to as 'activities', and methodical, context-specific or technology-specific tasks, called 'operations'. This idea was proposed in Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022).

All-Comprehensive Approach

This means a system adopts an approach that covers every aspect of its purpose, risks and impact, with broad coverage.

F G H I J

General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2)

This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have many test cases and use cases which are either useful in the short run or have unclear value as per industrial and regulatory standards. ChatGPT could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

General intelligence applications with multiple stable use cases as per relevant industrial and regulatory standards (GI1)

This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have many test cases and use cases which are useful, and considered to be stable as per relevant industrial and regulatory standards. ChatGPT could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

Generative AI applications with one standalone use case (GAI1)

This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have a single standalone use case of value. Midjourney could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

Generative AI applications with a collection of standalone use cases related to one another (GAI2)

This is an ontological sub-category of Generative AI applications. Such Generative AI Applications have more than one standalone use case, and these use cases are related to one another. The best example of such a Generative AI Application is GPT-4's recent update, which can create text and images based on human prompts and modify them as per requirements. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

In-context Learning

In-context learning for generative AI is the ability of a generative AI model to learn and adapt to new information based on the context in which it is used. This allows the model to generate more accurate and relevant results, even if it has not been specifically trained on the specific task or topic at hand. For example, an in-context learning generative AI model could be used to generate a poem about a specific topic, such as "love" or "nature." The model would be provided with a few examples of poems about the selected topic, which it would then use to understand the context of the task. The model would then generate a new poem about the topic that is consistent with the context. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
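The poem example above can be sketched in code. The snippet below is an illustrative assumption, not drawn from the cited report: it only shows how a few-shot prompt might be assembled so that a model can infer the task from context, and makes no actual model call.

```python
# Minimal sketch of in-context (few-shot) learning: worked examples are
# placed in the prompt so the model can infer the task without retraining.
# The task, examples and query below are purely illustrative.

def build_few_shot_prompt(task, examples, query):
    """Concatenate worked examples with a new query into a single prompt."""
    lines = [f"Task: {task}", ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")    # the demonstration input
        lines.append(f"Output: {out}")   # the demonstration output
        lines.append("")
    lines.append(f"Input: {query}")      # the new case to be completed
    lines.append("Output:")              # the model continues from here
    return "\n".join(lines)

examples = [
    ("The sea at dusk", "Silver waves fold the last light into sand."),
    ("An old oak tree", "Rooted in seasons, it counts the rain."),
]
prompt = build_few_shot_prompt(
    "Write a one-line poem about the input.", examples, "love"
)
```

The prompt ends at "Output:", which is where a generative model would continue; the examples preceding it supply the context the entry describes.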

Indo-Pacific

A concept relating to the countries and geographies in the Indian Ocean Region and the Pacific Ocean Region, popularised by the former Prime Minister of Japan, Shinzo Abe. The Ministry of External Affairs, Government of India prefers to use this term as a clear replacement for the term Asia-Pacific, in the context of the South Asian region (or the Indian Subcontinent), the South-East Asian region, East Africa, the Pacific Islands region, Australia, Oceania, and the Far East.

International Algorithmic Law

A newer concept of international law, proposed by Abhivardhan, the Founder of Indic Pacific Legal Research & the Indian Society of Artificial Intelligence and Law in 2020, in his paper entitled 'International Algorithmic Law: Emergence and the Indications of Jus Cogens Framework and Politics', originally published in Artificial Intelligence and Policy in India, Volume 2 (2021).

The definition in the paper is stated as follows:

The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.

Issue-to-issue concept classification

This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, in which the conceptual framework or basis of an AI system may be recognised on an issue-to-issue basis, with unique contexts and realities. This was proposed in Artificial Intelligence Ethics and International Law (originally published in 2019).

K L M N O P

Manifest Availability

The manifest availability doctrine refers to the concept that AI's presence or existence is evident and apparent, either as a standalone entity or integrated into products and services. This term emphasizes that AI is not just an abstract concept but is tangibly observable and accessible in various forms in real-world applications. By understanding how AI is manifested in a given context, one can determine its role and involvement, which leads to a legal interpretation of AI's status as a legal or juristic entity. This is a principle or doctrine, which was proposed in the 2020 Handbook on AI and International Law (2021), and was further explained in the 2021 Handbook on AI and International Law (2022). References of this concept could also be found in Artificial Intelligence Ethics and International Law (originally published in 2019).

Here is a definition of the concept as per the 2020 Handbook on AI and International Law:

So, AI is again conceptually abstract despite having its different definitions and concepts. Also, there are different kinds of products and services, where AI can be present or manifestly available either as a Subject, an Object or that manifest availability is convincing enough to prove that AI resembles or at least vicariously or principally represents itself as a Third Party. Therefore, you need that SOTP classification initially to test the manifest availability of AI (you can do it through analyzing the systemic features of the product/service simply or the ML project), which is then followed by a generic legal interpretation to decide it would be a Subject/an Object/a Third Party (meaning using the SOTP classification again to decide the legal recourse of the AI as a legal/juristic entity).

Multi-alignment

Multi-alignment in foreign policy is a strategy in which a country maintains close ties with multiple major powers across regions, industry sectors, continents and power centers, rather than aligning itself with a single power bloc. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Model Algorithmic Ethics standards (MAES)

A concept proposed for private sector stakeholders in the AI business, such as start-ups, MSMEs and freelancing professionals, to promote market-friendly AI ethics standards for their AI-based or AI-enabled products & services: adaptive model standards whose feasibility can be checked at the various stages at which they could be implemented. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

Multipolar World

A multipolar world is a global system in which power is distributed among multiple states, rather than being concentrated in one (unipolar) or two (bipolar) dominant powers. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Multipolarity

Multipolarity is a global system in which power is distributed among multiple states, with no single state having a dominant position in any sector, geography or level of sovereignty. This was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI

Generative AI, a form of artificial intelligence, possesses the capability to generate fresh content, encompassing text, images, and music. It harbors the potential to bring about significant transformations across various industries and sectors. Nevertheless, its emergence also presents a range of legal and ethical dilemmas.

Here is an excerpt from Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023):

  • First, for a product, service, use case or test case to be considered multivariant, it must have a multi-sector impact. The multi-sector impact could be disruption of jobs, work opportunities, technical & industrial standards and certain negative implications, such as human manipulation.

  • Second, for a product, service, use case or test case to be considered fungible, it must transform its core purpose by changing its sectoral priorities (like for example, a generative AI product may have been useful for the FMCG sector, but could also be used by companies in the pharmaceutical sector for some reasons). Relevant legal concerns could be whether the shift disrupts the previous sector, or is causing collusion or is disrupting the new sector with negative implications.

  • Third, for a product, service, use case or test case to be disruptive, it must affect the status quo of certain industrial and market practices of a sector. For example, maybe a generative AI tool could be capable of creating certain work opportunities or rendering them dysfunctional for human employees or freelancers. Even otherwise, the generative AI tool could be capable in shaping work and ethical standards due to its intervention.

This phrase was proposed in the case of Generative AI use cases and test cases in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

Object-Oriented Design

Object-oriented design (OOD) is a software design methodology that organizes software around data, or objects, rather than functions and logic. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
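A brief sketch may help illustrate the definition above. The classes below are hypothetical and not drawn from the cited report; they simply show software organised around objects that bundle data with the behaviour that operates on it, rather than around free-standing functions.

```python
# Illustrative sketch of object-oriented design: state (the text, the title)
# and behaviour (counting words, summarising) live together in objects.

class Document:
    """A base object holding text and the operations on that text."""
    def __init__(self, text):
        self.text = text

    def word_count(self):
        return len(self.text.split())

class Report(Document):
    """A specialised document that reuses Document's behaviour."""
    def __init__(self, text, title):
        super().__init__(text)
        self.title = title

    def summary(self):
        return f"{self.title}: {self.word_count()} words"

r = Report("Generative AI raises novel regulatory questions.", "TR-002 note")
# r.summary() -> "TR-002 note: 6 words"
```

Here the data (text, title) is never passed around loosely; each object exposes the operations relevant to it, which is the organising principle the entry describes.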

Omnipotence

In the context of Artificial Intelligence, this implies that any AI system, due to its inherent yet limited features of processing and generating outputs, could be effective in shaping multiple sectors, eventualities and legal dilemmas. In short, any omnipotent AI system could have first, second & third order effects due to its actions. This was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019), Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001 (2021), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and many key publications by ISAIL & VLiGTA.

Omnipresence

In the context of Artificial Intelligence, this implies that any AI system, due to its inherent yet limited features of processing and generating outputs, could be present or relevant in multiple frames of reference such as sectors, timelines, geographies, realities, levels of sovereignty, and many other factors. This was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019), Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001 (2021), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and many key publications by ISAIL & VLiGTA.

Permeable Indigeneity in Policy (PIP)

This concept, simply means, in proposition [...] that whatsoever legal and policy changes happen, they must be reflective, and largely circumscribing of the policy realities of the country. PIP cannot be a set of predetermined cases of indigeneity in a puritan or reductionist fashion, because in both of such cases, the nuance of being manifestly unique from the very churning of policy analysis, deconstruction & understanding, is irrevocably (and maybe in some cases, not irrevocably) lost. This was proposed in Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001 (2021).

Phenomena-based concept classification

This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, in which, beyond technical and ethical questions, it is possible that AI systems may render purpose based on natural and human-related phenomena. This idea was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019).

Privacy by Default

Privacy by Default means that once a product or service has been released to the public, the strictest privacy settings should apply by default, without any manual input from the end user. This was largely proposed in Article 25 of the General Data Protection Regulation of the European Union.
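The principle can be made concrete with a small configuration sketch. The settings and field names below are hypothetical illustrations, not drawn from the GDPR text: every data-sharing option starts in its most restrictive state, and only an explicit user action can relax it.

```python
# Illustrative privacy-by-default configuration: a new user receives the
# strictest settings automatically; nothing is shared unless they opt in.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    share_analytics: bool = False   # off unless the user opts in
    public_profile: bool = False    # profile hidden by default
    personalised_ads: bool = False  # no ad profiling by default

settings = PrivacySettings()        # defaults apply with no manual input
```

The design point is that the defaults themselves encode the legal obligation: the service is compliant for a user who never opens the settings page at all.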

Q R S T U

Rule Engine

A rule engine is a type of software program that aids in automating decision-making processes by applying a predefined set of rules to a given dataset. It is commonly employed alongside generative AI tools to enhance the overall quality and consistency of the generated output. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
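A minimal sketch of the idea follows. The rules and record fields are hypothetical examples of post-processing checks one might apply to generated output; they are not taken from the cited report.

```python
# Minimal rule engine sketch: each rule pairs a condition (a predicate on a
# record) with an action name, and the engine returns the actions of every
# rule whose condition the record satisfies.

def apply_rules(record, rules):
    """Return the actions of all rules matched by the record, in order."""
    return [action for condition, action in rules if condition(record)]

# Hypothetical rules for vetting a generated output record.
rules = [
    (lambda r: len(r["output"]) > 280, "truncate"),
    (lambda r: r.get("contains_pii", False), "redact"),
    (lambda r: r["language"] != "en", "translate"),
]

record = {"output": "x" * 300, "contains_pii": True, "language": "en"}
actions = apply_rules(record, rules)   # -> ["truncate", "redact"]
```

Because the rules are data rather than hard-coded branches, they can be reviewed, versioned and changed without touching the engine itself, which is what makes the pattern useful for enforcing consistency on generative output.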

SOTP Classification

This is one of the two Classification Methods in which Artificial Intelligence could be recognised as a Subject, an Object or a Third Party in a legal issue or dispute. This idea was proposed in the 2020 Handbook on AI and International Law (2021).

Strategic Autonomy

Strategic autonomy in Indian foreign policy is the ability of India to pursue its national interests and adopt its preferred foreign policy without being beholden to any other country. This means that India should be able to make its own decisions about foreign policy, even if those decisions are unpopular with other countries. India should also be able to maintain its own security and economic interests, without having to rely on other countries for help. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Strategic Hedging

Strategic hedging means a state spreads its risk by pursuing two opposite policies towards other countries via balancing and engagement, to prepare for all best and worst case scenarios, with a calculated combination of its soft power & hard power. This idea was discussed in India-led Global Governance in the Indo-Pacific: Basis & Approaches, GLA-TR-003 (2022).

Technical concept classification

This is one of the sub-categorised methods to classify Artificial Intelligence as a Concept, as this method covers all technical features of Artificial Intelligence that have evolved in the history of computer science. Such a classification approach is helpful in estimating legal and policy risks associated with technical use cases of AI systems at a conceptual level. This idea was discussed in Artificial Intelligence Ethics and International Law (originally published in 2019).

Technology by Default

This refers to the use of AI technology without fully considering its potential consequences. For example, a company might use AI to automate a task without thinking about how this might impact workers or society as a whole.

Technology by Design

This refers to the deliberate use of AI technology to achieve specific goals. For example, a company might design an AI system to help them identify and recruit the best candidates for a job.

Technology Distancing

This refers to the process of creating AI systems that are more transparent, accountable, and equitable. This can be done by involving stakeholders in the design and development of AI systems, and by making sure that AI systems are aligned with human values.

Technology Transfer

This refers to the process of sharing AI knowledge and technology between different organizations or individuals. This can be done through formal channels such as research collaborations or licensing agreements, or through informal channels such as conferences and online communities.

V W X Y Z

Whole-of-Government Response

A whole-of-government response under the (proposed) Digital India Act is a coordinated approach to the governance of digital technologies and issues. It involves the participation of all relevant government ministries and agencies, as well as other stakeholders such as industry and academia. The goal of a whole-of-government response is to ensure that digital technologies are used in a way that is beneficial to society, while also mitigating any potential risks. This may involve developing new policies and regulations, investing in research and development, and raising awareness of digital issues. This was discussed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).

Zero Knowledge Systems

Zero-knowledge systems (ZKSs) are cryptographic protocols that allow one party (the prover) to prove to another party (the verifier) that a statement is true, without revealing anything beyond the fact that the statement is true. ZKSs are based on the idea that it is possible to prove possession of knowledge without revealing the knowledge itself. This was discussed in Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004 (2023).
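The idea can be illustrated with a toy, deliberately insecure sketch of a Schnorr-style identification protocol. The tiny parameters below (p = 23, g = 5) are assumptions chosen only for readability; real systems use groups of cryptographic size and often non-interactive variants. The prover convinces the verifier it knows the secret x behind the public value y = g^x mod p without ever transmitting x.

```python
# Toy Schnorr-style zero-knowledge identification (insecure parameters).
import random

p, g, q = 23, 5, 22          # tiny toy group; g has order q modulo p
x = 7                         # the prover's secret
y = pow(g, x, p)              # the public key: y = g^x mod p

def prove(challenge_fn):
    """One round: commit, receive a challenge, respond."""
    r = random.randrange(q)
    t = pow(g, r, p)          # commitment hides the random nonce r
    c = challenge_fn(t)       # verifier picks a random challenge
    s = (r + c * x) % q       # response blends x with r; x stays hidden
    return t, c, s

def verify(t, c, s):
    """Check g^s == t * y^c (mod p), which holds iff the response is valid."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, c, s = prove(lambda t: random.randrange(q))
assert verify(t, c, s)
```

The verifier only ever sees (t, c, s); each is either random or blended with randomness, so no round leaks the secret x, yet the algebraic check passes only for a prover who actually knows it.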

Zero Knowledge Taxes

Zero-knowledge taxes (ZKTs) are a hypothetical type of tax that could be implemented using ZKSs. ZKTs would allow taxpayers to prove to the government that they have paid their taxes without revealing their income or other financial information. This was discussed in Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004 (2023).

terms of use

This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use:

  • You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary.

  • Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example):

Indic Pacific Legal Research LLP, 'The Indic Pacific Glossary of Terms' (Indic Pacific Legal Research, 2023) <https://www.indicpacific.com/glossary>

  • You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research.

  • The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary.

  • You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary.

 

If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com.
