
Search Results


  • Indofuturism | Glossary of Terms | Indic Pacific | IPLR

    Indofuturism
    Date of Addition: 19 January 2025
    A creative and cultural movement that reimagines India through science fiction and futuristic scenarios, particularly using AI-generated art and storytelling. It challenges Western sci-fi tropes by blending Indian cultural elements with futuristic concepts. Key characteristics include:
      • Visual reimagining of Indian scenarios through a sci-fi lens
      • A challenge to the assumption that sci-fi isn't a "desi genre"
      • Creation of a new visual vocabulary for Indian science fiction
      • Exploration of alternative historical scenarios (such as a non-colonized India)
    This term was conceptualized through the AI artwork and creative direction of Prateek Arora, VP Development at BANG BANG Mediacorp, who popularized the term through his viral AI-generated artworks "Granth Gothica" and "Disco Antriksh".
    Related Long-form Insights on IndoPacific.App: Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010

  • Averting Framework Fatigue in AI Governance [IPLR-IG-013] | Indic Pacific | IPLR

    Averting Framework Fatigue in AI Governance [IPLR-IG-013]
    Year: 2025
    ISBN: 978-81-977227-5-2
    Author(s): Abhivardhan
    Editor(s): Not Applicable
    IndoPacific.App Identifier (ID): IPLR-IG-013
    Tags: Abhivardhan, AI Ethics, AI ethics training, AI governance, AI governance compliance, AI governance models, AI governance tools, AI governance training, AI policy developments, AI policy timeline (2023-2025), Algorithmic accountability, collaborative policymaking, compliance challenges, compliance roadmaps, corporate governance, decision paralysis, dynamic governance, ethical AI frameworks, ethical compliance, ethical decision-making, ethical governance strategies, ethical guidelines, framework fatigue, framework integration, framework optimization, framework proliferation, framework simplification, governance best practices, governance challenges, governance clarity, governance exercises, governance frameworks, governance priorities, governance roadmaps, governance strategies, Hyderabad workshop 2025, industry standards, legal frameworks, multijurisdictional frameworks, operational alignment, policy fragmentation, policy harmonization, policy implementation, policy overload, prioritization models, RBI FREE-AI Committee, regulatory alignment, regulatory compliance, risk mitigation, stakeholder alignment, stakeholder engagement, strategic prioritization, technical standards
    Related Terms in Techindata.in Explainers:
      • Definitions A-E: AI as an Industry; AI Literacy; Benchmark Gaming; Data as Noise; Distributed Ledger
      • Definitions F-J: Framework Fatigue; General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2); Generative AI applications with a collection of standalone use cases related to one another (GAI2); Intended Purpose / Specified Purpose; International Algorithmic Law; Issue-to-issue concept classification
      • Definitions K-P: Klarna Effect; Language Model; Model Algorithmic Ethics standards (MAES); Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Neurosymbolic AI; Object-Oriented Design; Omnipotence; Omnipresence; Parameters; Performance Effect; Phenomena-based concept classification
      • Definitions Q-U: Roughdraft AI; SOTP Classification; Semi-Supervised Learning; Synthetic Confidence; Technical concept classification; Technology by Default; Technology by Design; Technology Distancing; Technology Transfer; Technophobia
      • Definitions V-Z: WENA; Whole-of-Government Response
    Related Articles in Techindata.in Insights: 26 Insight(s) on artificial intelligence and law; 13 on artificial intelligence ethics; 9 on artificial intelligence hype; 7 on RBI FREE-AI Committee; 5 on AI Governance; 3 on AI literacy; 1 on deepfakes; 1 on IndiaAI; 1 on responsibility; 0 on RoughDraft AI

  • Draft Digital Competition Bill, 2024 for India: Feedback Report [IPLR-IG-003] | Indic Pacific | IPLR

    Draft Digital Competition Bill, 2024 for India: Feedback Report [IPLR-IG-003]
    Year: 2024
    ISBN: 978-81-970837-0-9
    Author(s): Abhivardhan, Krati Singh Bhadouriya, Shresh Kiran Narang, Vaishnavi Singh
    Editor(s): Not Applicable
    IndoPacific.App Identifier (ID): IPLR-IG-003
    Tags: 2024, Abhivardhan, ai accountability, Antitrust Legislation, Competition law, competition policy, Consumer Protection, digital economy, Digital Platforms, Draft Digital Competition Bill, Economic Regulation, feedback report, Government Legislation, India, Industry Feedback, Legal Reform, Market Dynamics, Market Regulation, Policy Development, Regulatory Framework, Technology Sector
    Related Terms in Techindata.in Explainers:
      • Definitions A-E: AI Supply Chain; AI Value Chain; Compute
      • Definitions F-J: Framework Fatigue; Indo-Pacific
      • Definitions K-P: Object-Oriented Design; Omnipotence; Omnipresence; Polyvocality; Performance Effect; Phenomena-based concept classification
      • Definitions Q-U: SOTP Classification; Technology Transfer; Transformer Model
      • Definitions V-Z: Whole-of-Government Response
    Related Articles in Techindata.in Insights: 29 Insight(s) on AI Ethics; 8 on AI and Copyright Law; 7 on AI and Competition Law; 7 on AI and media sciences; 7 on AI regulation; 5 on AI Governance; 3 on AI and Evidence Law; 3 on AI literacy; 2 on Abhivardhan; 2 on AI and Intellectual Property Law; 1 on AI and Securities Law; 1 on Algorithmic Trading

  • Hierarchical Feedback Distortion | Glossary of Terms | Indic Pacific | IPLR

    Hierarchical Feedback Distortion
    Date of Addition: 5 March 2025
    The Hierarchical Feedback Distortion Principle operates through a specific mechanism wherein state and central governments respond dramatically to negative feedback, often through public statements, high-profile investigations, or policy announcements. These responses, while highly visible, frequently fail to address the underlying structural issues that enable corruption or administrative failures at the local level. The resulting dynamic creates what can be described as "accountability gaps" – spaces within the governance system where certain actors can operate with relative impunity despite the appearance of oversight.
    These accountability gaps form through several interconnected processes. First, the distance between higher levels of government and local administration creates information asymmetries, where central authorities lack detailed knowledge of ground-level operations. Second, the emphasis on negative feedback creates incentives for performative responses that satisfy public demand for action without necessarily changing administrative practices. Third, the hierarchical nature of bureaucratic systems often shields lower-level officials from direct accountability to citizens, instead making them primarily answerable to superiors within the bureaucracy.
    In the Indian context, these dynamics are particularly pronounced due to the country's complex multi-level governance structure, which includes central, state, district, and local administrative tiers. Each level operates with different incentives, capacities, and relationships to citizens, creating multiple opportunities for accountability mechanisms to break down. The resulting system can inadvertently create protected spaces where corruption can flourish despite the appearance of active governance and oversight from above.
    This principle was inspired by some of the posts of Pseudokanada, i.e., @hestmatematik on X.
    Related Long-form Insights on IndoPacific.App: Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas

  • KMG Wires Private Limited vs. National Faceless Assessment Centre & Others, Writ Petition (L) No. 24366 of 2025, Bombay High Court, Order dated October 6, 2025 | Indic Pacific | IPLR | indicpacific.com

    Bombay High Court, October 2025: landmark judgment quashing the National Faceless Assessment Centre's ₹27.91 crore tax assessment against KMG Wires for relying on non-existent, AI-generated case laws and violating natural justice principles in faceless assessment proceedings.
    India AI Regulation Landscape 101: a simple regulatory tracker consisting of all information on how India is regulating artificial intelligence as a technology, inspired by a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled "Government with Algorithms: Managing AI in India's Federal System – Number 70". Case laws are included alongside regulatory / governance documents, while industry documents and policy papers that do not reflect any direct or implicit legal impact are excluded.
    Date: October 2025
    Issuing Authority: Bombay High Court
    Type of Legal / Policy Document: Judicial Pronouncements - National Court Precedents
    Status: Enacted
    Regulatory Stage: Miscellaneous
    Binding Value: Legally binding instruments enforceable before courts
    Related Long-form Insights on IndoPacific.App: Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Decoding the AI Competency Triad for Public Officials [IPLR-IG-014]; Artificial Intelligence and Policy in India, Volume 6 [AIPI-V6]
    Related draft AI Law Provisions of aiact.in: Section 15 – Guidance Principles for AI-related Agreements; Section 16 – Guidance Principles for AI-related Corporate Governance

  • Section 2 – Definitions | Indic Pacific

    Section 2 – Definitions
    In this Act, unless the context otherwise requires —
    (a) “Artificial Intelligence”, “AI”, “AI technology”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such a system constitutes a diverse class of technology that includes various sub-categories of technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.
    (b) “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an artificial intelligence technology, which includes, but is not limited to, text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application.
    (c) “Algorithmic Bias” includes – (i) the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and (ii) the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to: (a) programming errors; (b) flawed algorithmic logic; and (c) deficiencies in model training and validation, including but not limited to: (1) the incomplete or deficient data used for model training;
    (d) “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;
    (e) “Business end-user” means an end-user that is - (i) engaged in a commercial or professional activity and uses an AI system in the course of such activity; or (ii) a government agency or public authority that uses an AI system in the performance of its official functions or provision of public services.
    (f) “Combinations of intellectual property protections” means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;
    (g) “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including: (i) The source data, models, and algorithms used to generate the content; (ii) The individuals or entities involved in the creation, modification, and distribution of the content; (iii) The date, time, and location of content creation and any subsequent modifications; (iv) The intended purpose, context, and target audience of the content; (v) Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and (vi) The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability (a minimal illustrative sketch of such a record appears after this Section).
    (h) “Corporate Governance” means the system of rules, practices, and processes by which an organisation is directed and controlled, encompassing the mechanisms through which companies and organisations ensure accountability, fairness, and transparency in their relationships with stakeholders, including but not limited to employees, shareholders, customers, and the public.
    (i) “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;
    (j) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;
    (k) “Data portability” means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where: (i) The personal data has been provided to the data fiduciary by the data principal; (ii) The processing is based on consent or the performance of a contract; and (iii) The processing is carried out by automated means.
    (l) “Data Principal” means the individual to whom the personal data relates and where such individual is — (i) a child, includes the parents or lawful guardian of such a child; (ii) a person with disability, includes her lawful guardian, acting on her behalf;
    (m) “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;
    (n) “Data Scraping” means the automated collection, extraction, or mining of data from websites, online platforms, or digital sources through technical means, including but not limited to automated tools, web crawlers, or software applications that extract information from websites or online platforms;
    (o) “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;
    (p) “Digital personal data” means personal data in digital form;
    (q) “End-user” means - (i) an individual who ultimately uses or is intended to ultimately use an AI system, directly or indirectly, for personal, domestic or household purposes; or (ii) an entity, including a business or organisation, that uses an AI system to provide or offer a product, service, or experience to individuals, whether for a fee or free of charge.
    (r) “Knowledge asset” includes, but is not limited to: (i) Intellectual property rights, including but not limited to patents, copyrights, trademarks, and industrial designs; (ii) Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards; (iii) Tacit knowledge and expertise residing within the organisation’s human capital, such as specialized skills, experiences, and know-how; (iv) Organisational processes, systems, and methodologies that enable the effective capture, organisation, and utilisation of knowledge; (v) Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences; (vi) Knowledge derived from data analysis, including patterns, trends, and predictive models; and (vii) Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.
    (s) “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;
    (t) “IAIC” means Indian Artificial Intelligence Council;
    (u) “Inherent Purpose” and “Intended Purpose” mean the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve;
    (v) “Insurance Policy” means measures and requirements concerning insurance for research & development, production, and implementation of artificial intelligence technologies;
    (w) “Interoperability considerations” means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments, which include: (i) Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement; (ii) Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and public good; (iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems; (iv) Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications;
    (x) “Open-Source Software” means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.
    (y) “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical, commercial & risk-based features, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;
    (z) “Person” includes — (i) an individual; (ii) a Hindu undivided family; (iii) a company; (iv) a firm; (v) an association of persons or a body of individuals, whether incorporated or not; (vi) the State; and (vii) every artificial juristic person not falling within any of the preceding sub-clauses, including those otherwise referred to in sub-section (r);
    (aa) “Post-Deployment Monitoring” means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;
    (bb) “Public Interest Use” includes research, education, analysis, journalism, and non-commercial innovation that may qualify under Section 52 of the Copyright Act, 1957 as fair dealing or permitted use.
    (cc) “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;
    (dd) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;
    (ee) “Sociotechnical” means the recognition that artificial intelligence systems are not merely technical artifacts but are embedded within broader social contexts, organisational structures, and human-technology interactions, necessitating the consideration and harmonisation of both social and technical aspects to ensure responsible and effective AI governance;
    (ff) “State” shall be construed as the State defined under Article 12 of the Constitution of India;
    (gg) “Strategic sector” shall mean any sector classified as strategic under the Foreign Exchange Management (Overseas Investment) Directions, 2022, and shall further include any sector or sub-sector as may be designated by the Central Government, having regard to considerations of national security, economic sovereignty, critical infrastructure, or technological advancement, in accordance with the principles set forth in Section 21A.
    (hh) “techno-solutionism” means the systematic implementation of artificial intelligence systems or computational technologies as primary solutions to public administration challenges while failing to adequately address underlying non-technical factors;
    Explanation. — For the purposes of this definition, techno-solutionism includes — (i) implementing automated decision systems that directly impact legally recognized rights without providing affected persons a clear mechanism to present their case before or after such decisions; (ii) deploying AI systems that create demonstrable risk of unfairness by prioritizing computational processing over consideration of individual circumstances; (iii) automating administrative processes in ways that prevent affected persons from obtaining specific explanations for decisions affecting their legal interests; (iv) replacing necessary human evaluation with automated systems in contexts where established law requires case-specific assessment, proportionality testing, or application of discretion; (v) justifying technological implementation primarily based on operational metrics (such as cost-reduction or processing speed) without measuring improvement in addressing the underlying public issue; and (vi) allocating public resources to technological systems for problems that fundamentally result from policy deficiencies, resource limitations, or structural issues that technology alone cannot solve;
    (ii) “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;
    (jj) “testing data” means data used for providing an independent evaluation of the artificial intelligence system subject to training and validation, to confirm the expected performance of that artificial intelligence technology before its placing on the market or putting into service;
    (kk) “use case” means a specific application of an artificial intelligence technology, subject to its inherent purpose, to solve a particular problem or achieve a desired outcome;
    Related Indian AI Regulation Sources:
      • Information Technology Act, 2000 (IT Act 2000) – October 2000
      • Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules) – April 2011
      • National Strategy for Artificial Intelligence (#AIforAll) – June 2018
      • Digital Personal Data Protection Act, 2023 (DPDPA) – August 2023
      • Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) – January 2025
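    For readers who find clause (g) easier to follow as a data structure, here is a minimal sketch in Python of a content-provenance record covering the six elements the clause lists. The "ProvenanceRecord" and "ProvenanceEntry" names, and every field in them, are illustrative assumptions for this sketch only, not anything prescribed by the draft Act.

    # Minimal, illustrative sketch of a content-provenance record along the
    # lines of Section 2(g). All class and field names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class ProvenanceEntry:
        """One link in the chain of custody for AI-generated content (clause (g)(vi))."""
        actor: str                      # individual or entity involved (clause (g)(ii))
        action: str                     # e.g. "created", "modified", "distributed"
        timestamp: datetime             # date and time of the action (clause (g)(iii))
        location: Optional[str] = None  # location of the action, where known

    @dataclass
    class ProvenanceRecord:
        """Hypothetical traceability record for one piece of AI-generated content."""
        content_id: str           # identifier or watermark for the content
        source_models: List[str]  # models and algorithms used (clause (g)(i))
        source_data: List[str]    # source datasets used (clause (g)(i))
        intended_purpose: str     # purpose, context, and audience (clause (g)(iv))
        external_references: List[str] = field(default_factory=list)          # clause (g)(v)
        chain_of_custody: List[ProvenanceEntry] = field(default_factory=list)  # clause (g)(vi)

        def record_event(self, entry: ProvenanceEntry) -> None:
            """Append a transformation or distribution event, preserving traceability."""
            self.chain_of_custody.append(entry)

    A record of this shape lets each transformation of the content be appended without losing the original sources, which is one way to read the "content and citation/reference loop" the clause describes.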

  • [AIACT.IN V5] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 5 | Indic Pacific | IPLR

    [AIACT.IN V5] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 5
    Year: 2025
    ISBN: Not Applicable
    Author(s): Abhivardhan
    Editor(s): Not Applicable
    IndoPacific.App Identifier (ID): AIACT5
    Tags: Abhivardhan, Accountability, AI applications, AI Development, AI Education, AI Ethics, AI Future, AI governance, AI Impact, AI Industry, AI Innovations, AI literacy, AI regulation, AI Research, AI Resources, AI Solutions, AI Technology, AI Tools, AI Training, AI Trends, aiact.in, AIACT.IN V4, Artificial Intelligence, Compliance, content provenance, Data ethics, Employment, ethics code, Governance, high-risk AI, India, Indic Pacific, Legal Framework, Machine Learning, National Ethics Code, penalties, risk classification, Strategic Sectors, Transparency, Version 5.0, watermarking
    Related Terms in Techindata.in Explainers:
      • Definitions A-E: AI as a Concept; AI as an Object; AI as a Subject; AI as a Third Party; Accountability; Deepfakes
      • Definitions F-J: General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2); General intelligence applications with multiple stable use cases as per relevant industrial and regulatory standards (GI1); Generative AI applications with one standalone use case (GAI1); Intended Purpose / Specified Purpose
      • Definitions K-P: Manifest Availability; Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Parameters; Privacy by Default; Privacy by Design; Proprietary Information
      • Definitions Q-U: SOTP Classification; Technology Transfer; Transformer Model
      • Definitions V-Z: Whole-of-Government Response
    Related Articles in Techindata.in Insights: 29 Insight(s) on AI Ethics; 7 on AI regulation; 5 on AI Governance; 3 on AIACT.in; 3 on AI literacy; 2 on Abhivardhan

  • Section 13 – National Artificial Intelligence Ethics Code | Indic Pacific

    Section 13 – National Artificial Intelligence Ethics Code
    (1) A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilisation of artificial intelligence technologies;
    (2) The NAIEC shall be based on the following core ethical principles:
    (i) AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.
    (ii) AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes, including caste and class.
    (iii) AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system’s outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.
    (iv) AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.
    (v) AI systems should be designed and operated with a focus on safety and robustness, minimizing the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.
    (vi) AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented;
    (vii) AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.
    (viii) AI systems that are developed and deployed using frugal prompt engineering practices should optimize efficiency, cost-effectiveness, and resource utilisation while maintaining high standards of performance, safety, and ethical compliance, in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.
    (3) The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific & sector-neutral laws and regulations.
    To this end:
    (i) The use of OSS shall be guided by a clear understanding of the open source development model, its scope, constraints, and the varying implementation approaches across different socio-economic and organisational contexts.
    (ii) AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licenses, fostering transparency and enabling public scrutiny, while also ensuring that sensitive components and intellectual property are adequately protected.
    (iii) AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems, and shall implement appropriate governance, quality assurance, and risk management processes.
    (4) The Ethics Code shall provide guidance on intellectual property and ownership considerations related to AI-generated content. To this end:
    (i) Specific considerations shall include recognizing the role of human involvement in developing and deploying the AI systems, establishing guidelines on copyrightability and patentability of AI-generated works and inventions, addressing scenarios where AI builds upon existing protected works, safeguarding trade secrets and data privacy, balancing incentives for AI innovation with disclosure and access principles, and continuously updating policies as AI capabilities evolve.
    (ii) The Ethics Code shall encourage transparency and responsible practices in managing intellectual property aspects of AI-generated content across domains such as text, images, audio, video and others.
    (iii) In examining IP and ownership issues related to AI-generated content, the Ethics Code shall be guided by the conceptual classification methods outlined in Section 4, particularly the Anthropomorphism-Based Concept Classification, to evaluate scenarios where AI replicates or emulates human creativity and invention.
    (iv) The technical classification methods described in Section 5, such as the scale, inherent purpose, technical features, and limitations of the AI system, shall inform the assessment of IP and ownership considerations for AI-generated content.
    (v) The commercial classification factors specified in sub-section (1) of Section 6, including the user base, market influence, data integration, and revenue generation of the AI system, shall also be taken into account when determining IP and ownership rights over AI-generated content.
    (5) The Ethics Code shall provide guidance on frugal prompt engineering practices for the development of AI systems, ensuring efficiency, accessibility, and the equitable advancement of artificial intelligence, as follows:
    (i) Encourage the use of concise and well-structured prompts that specify desired outputs and constraints, minimizing unnecessary complexity in AI interactions;
    (ii) Recommend the adoption of transfer learning and pre-trained models to reduce the need for extensive fine-tuning, thereby conserving computational resources;
    (iii) Promote the use of data-efficient techniques, such as few-shot learning or active learning, to decrease the volume of training data required for effective model performance;
    (iv) Suggest the implementation of early stopping mechanisms to prevent overfitting and enhance model generalisation, ensuring robust performance with minimal training;
    (v) Advocate for the use of techniques such as model compression, quantisation, or distillation to reduce computational complexity and resource demands, making AI development more sustainable;
    (vi) Require the documentation and maintenance of records on prompt engineering practices, detailing the techniques used, performance metrics achieved, and any trade-offs between efficiency and effectiveness, to ensure transparency and accountability (a minimal illustrative record sketch appears after this Section);
    (vii) Declare that prompt engineering, as a fundamental practice for optimizing AI systems, constitutes a global commons and a shared resource for the benefit of all humanity, and as such: (a) shall not be monetized, commercialized, or subject to proprietary claims, ensuring that the knowledge and techniques of prompt engineering remain freely accessible to all; (b) shall be treated as a universal public good, akin to principles established in international agreements governing shared resources, to foster global collaboration and innovation in AI development & education.
    (6) The Ethics Code shall provide guidance on ensuring fair access rights for all stakeholders involved in the AI value and supply chain, including:
    (i) All stakeholders should have fair and transparent access to datasets necessary for training and developing AI systems. This includes promoting equitable data-sharing practices that ensure smaller entities or research institutions are not unfairly disadvantaged in accessing critical datasets.
    (ii) Ethical use of computational resources should be promoted by ensuring that all stakeholders have transparent access to these resources. Special consideration should be given to smaller entities or research institutions that may require preferential access or pricing models to support innovation.
    (iii) Ethical guidelines should ensure that ownership rights over trained models, derived outputs, and intellectual property are clearly defined and respected. Stakeholders involved in the development process must have a clear understanding of their rights and obligations regarding the usage and commercialization of AI technologies.
    (iv) The benefits derived from AI technologies should be distributed in a manner that ensures smaller players contributing critical resources, such as proprietary datasets or specialized algorithms, are fairly compensated.
    (7) Adherence to the NAIEC shall be voluntary for all AI systems, as well as for those exempted under sub-section (3) of Section 11.
    (8) Strategic Sector Safeguards: AI systems deployed in strategic sectors, particularly those classified as high-risk under Section 7, shall adhere to heightened ethical standards that prioritize:
    (i) Safety Imperative: Developers and operators of AI systems shall design, implement, and maintain robust safety measures that minimize potential harm to individuals, property, society, and the environment throughout the system's lifecycle;
    (ii) Security by Design: AI systems shall incorporate security measures from the earliest stages of development to protect against unauthorized access, manipulation, or misuse, with particular emphasis on safeguarding data integrity and system confidentiality;
    (iii) Reliability and Resilience: All AI systems shall demonstrate consistent, accurate, and dependable performance through rigorous testing, validation, and continuous monitoring, with enhanced requirements for systems in critical infrastructure or essential services;
    (iv) Transparent Operations: AI systems shall implement mechanisms that enable appropriate stakeholder understanding of underlying algorithms, data sources, and decision-making processes, adhering to disclosure needs in line with intellectual property protections;
    (v) Accountable Governance: Clear lines of responsibility shall be established for AI system outcomes, with specified channels for redress and remediation in cases of adverse impacts, particularly for systems affecting fundamental rights or public welfare;
    (vi) Legitimate Purpose Alignment: AI systems shall be developed and deployed exclusively for purposes that comply with the legitimate uses framework established under Section 7 of the Digital Personal Data Protection Act, 2023, and shall not be repurposed for unauthorized applications without appropriate review.
    Related Indian AI Regulation Sources:
      • Principles for Responsible AI (Part 1) – February 2021
      • Operationalizing Principles for Responsible AI (Part 2) – August 2021
      • Fairness Assessment and Rating of Artificial Intelligence Systems (TEC 57050:2023) – July 2023
      • The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare – March 2023
      • Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report – August 2025
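    Sub-section (5)(vi) above requires documented records of prompt-engineering practices, the metrics achieved, and any efficiency/effectiveness trade-offs. As a minimal sketch only, such a record might be kept as below; every class name, field, and value is an invented placeholder for illustration, not a format the draft Act prescribes.

    # Hypothetical documentation record in the spirit of Section 13(5)(vi).
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class PromptEngineeringRecord:
        """Illustrative record of a frugal prompt-engineering practice."""
        prompt_template: str                   # the concise, well-structured prompt, per (5)(i)
        techniques_used: List[str]             # e.g. transfer learning, few-shot learning, per (5)(ii)-(iii)
        performance_metrics: Dict[str, float]  # metrics achieved, per (5)(vi)
        tradeoffs: str                         # efficiency vs. effectiveness notes, per (5)(vi)

    # Placeholder example values, purely illustrative:
    record = PromptEngineeringRecord(
        prompt_template="Summarise the order below in at most 200 words, citing paragraph numbers.",
        techniques_used=["few-shot learning", "early stopping"],
        performance_metrics={"task_accuracy": 0.87, "tokens_per_query": 450.0},
        tradeoffs="Shorter prompt reduced token usage with a minor loss in coverage.",
    )

    Keeping such records as structured data rather than free text makes the transparency obligation in (5)(vi) auditable: techniques and metrics can be compared across versions of the same prompt.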

  • Sections 4-9, AiACT.IN V4 Infographic Explainers | Indic Pacific | IPLR

    Sections 4-9, AiACT.IN V4 Infographic Explainers
    Year: 2024
    ISBN: Not Applicable
    Author(s): Abhivardhan
    Editor(s): Not Applicable
    IndoPacific.App Identifier (ID): AIACTINV4S4-9
    Tags: ai accountability, AI Act 2023, AI Classification, AI Development, AI Ethics, AI governance, AI Impact, AI policy, AI regulation, AI Risk, ai transparency, aiact.in, Commercial Classification, Conceptual Classification, High-Risk AI Systems, India, Risk-based Classification, Strategic Sectors, Technical Classification, Version 4
    Related Terms in Techindata.in Explainers:
      • Definitions A-E: AI as a Concept; AI as an Object; AI as a Subject; AI as a Third Party; Accountability
      • Definitions F-J: Framework Fatigue; General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2); General intelligence applications with multiple stable use cases as per relevant industrial and regulatory standards (GI1); Generative AI applications with a collection of standalone use cases related to one another (GAI2); Intended Purpose / Specified Purpose
      • Definitions K-P: Manifest Availability; Multivariant, Fungible & Disruptive Use Cases & Test Cases of Generative AI; Parameters; Privacy by Default; Privacy by Design; Proprietary Information
      • Definitions Q-U: SOTP Classification; Technology Transfer; Transformer Model
      • Definitions V-Z: Whole-of-Government Response
    Related Articles in Techindata.in Insights: 29 Insight(s) on AI Ethics; 8 on AI and Copyright Law; 7 on AI and Competition Law; 7 on AI and media sciences; 7 on AI regulation; 5 on AI Governance; 3 on AI and Evidence Law; 3 on AI literacy; 2 on Abhivardhan; 2 on AI and Intellectual Property Law; 1 on AI and Securities Law; 1 on Algorithmic Trading

  • Skirmish Propaganda Capacity Destruction | Glossary of Terms | Indic Pacific | IPLR

    Skirmish Propaganda Capacity Destruction
    Date of Addition: 11 May 2025
    "Skirmish propaganda capacity destruction" refers to the significant weakening or dismantling of a group's ability to spread manipulative narratives following a brief, localized conflict. This disruption often occurs through the exposure of coordinated networks, such as social media influencers or content creators, that were used to shape public perception, coupled with actions like increased scrutiny, discrediting of sources, or platform bans, ultimately reducing their influence over narratives in the conflict's aftermath.
    This definition is derived from the context and implications of the phrase as used by Kushal Mehra in his X post on May 11, 2025.
    Related Long-form Insights on IndoPacific.App: The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023

  • Grounded AI Safety | Glossary of Terms | Indic Pacific | IPLR

    Grounded AI Safety
    Date of Addition: 18 May 2025
    Grounded AI Safety is a principle-driven approach to managing risks in AI systems, adopted by the Indian Society of Artificial Intelligence and Law for The Bharat Pacific Stack, rooted in the fundamental understanding that current AI, such as large language models, functions as statistical pattern-matchers without true comprehension or reasoning ability. This approach:
      • Anchors in Observable Limitations: Risk mitigation begins with empirical evidence of AI's inherent constraints, such as struggles with tasks requiring conceptual understanding, like misinterpreting time differences across regions or failing to follow rules in strategic games, focusing on these measurable shortcomings rather than assumed capabilities.
      • Centers on Human-Driven Risks: The primary dangers arise from human over-reliance on or misuse of these limited systems, such as deploying them in critical areas like scheduling or decision-making where their errors could lead to significant consequences, rather than from AI autonomously causing catastrophic outcomes.
      • Rejects Speculative Existential Narratives: AI safety must exclude unproven predictions of AI-driven doomsday scenarios that lack evidence and inflate AI's potential, as these narratives misguide priorities and empower those who might exploit fear for profit, influence, or excessive control.
      • Prioritises Evidence-Based Safeguards: Solutions involve systematic testing to identify and address specific failure modes, like errors in visual representations or logical reasoning, paired with transparent improvements, ensuring AI systems are used responsibly within their known boundaries (a minimal test-suite sketch follows below).
    This definition is inspired by a post by Dr Gary Marcus on X.
    Related Long-form Insights on IndoPacific.App: NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016
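    The fourth point above calls for systematic, repeatable testing against specific, observable failure modes; the entry's own example is misread time differences. The sketch below shows one way such a regression suite could be organised. "query_model" is a hypothetical stand-in for whatever model interface is actually under test, and the single test case is only an illustration.

    # Minimal sketch of an evidence-based failure-mode regression suite,
    # in the spirit of the "Grounded AI Safety" entry. All names hypothetical.
    from typing import Callable, Dict, List

    def query_model(prompt: str) -> str:
        # Hypothetical stand-in; wire this to the actual model being evaluated.
        raise NotImplementedError("Replace with a real model call.")

    # Each known failure mode is pinned to an observable, checkable behaviour
    # (here: time-zone arithmetic, a category the entry itself mentions).
    FAILURE_MODE_CASES: List[Dict[str, str]] = [
        {
            "name": "time-zone arithmetic",
            "prompt": "If it is 09:00 in New Delhi (UTC+5:30), what time is it in UTC?",
            "must_contain": "03:30",
        },
    ]

    def run_failure_mode_suite(model: Callable[[str], str]) -> List[str]:
        """Return the names of the failure modes the model still exhibits."""
        still_failing = []
        for case in FAILURE_MODE_CASES:
            if case["must_contain"] not in model(case["prompt"]):
                still_failing.append(case["name"])
        return still_failing

    # Usage (illustrative): run_failure_mode_suite(query_model)

    Anchoring each case to a concrete expected output keeps the testing empirical, which is exactly the contrast the entry draws with speculative, capability-inflating safety narratives.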

  • General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2) | Glossary of Terms | Indic Pacific | IPLR

    General intelligence applications with multiple short-run or unclear use cases as per industrial and regulatory standards (GI2)
    Date of Addition: 26 April 2024
    This is an ontological sub-category of Generative AI applications. Such Generative AI applications are those with many test cases and use cases that are either useful only in the short run or have unclear value as per industrial and regulatory standards. ChatGPT could be considered an example of this sub-category. This idea was proposed in Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023).
    Related Long-form Insights on IndoPacific.App: 2021 Handbook on AI and International Law [RHB 2021 ISAIL]; Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India [ISAIL-TR-002]; Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001]; Deciphering Regulative Methods for Generative AI [VLiGTA-TR-002]; Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]; Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005; [Version 1] A New Artificial Intelligence Strategy and an Artificial Intelligence (Development & Regulation) Bill, 2023; [Version 2] Draft Artificial Intelligence (Development & Regulation) Act, 2023; [AIACT.IN V3] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 3; AIACT.IN Version 3 Quick Explainer; The Legal and Ethical Implications of Monosemanticity in LLMs [IPLR-IG-008]; Navigating Risk and Responsibility in AI-Driven Predictive Maintenance for Spacecraft, IPLR-IG-009, First Edition, 2024; Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010; Sections 4-9, AiACT.IN V4 Infographic Explainers; Averting Framework Fatigue in AI Governance [IPLR-IG-013]; [AIACT.IN V4] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 4; [AIACT.IN V5] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 5; Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI]; Normative Emergence in Cyber Geographies: International Algorithmic Law in a Multipolar Technological Order, First Edition; AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas; 2020 Handbook on AI and International Law [RHB 2020 ISAIL]
