
Ready to check your search results? You can also find our industry-conscious insights on law, technology & policy at indopacific.app. Give it a try today.

Search Results

125 results found

  • Indo-Pacific Research Principles on Use of Large Reasoning Models & 2 Years of IndoPacific.App

    The firm is delighted to announce the launch of the Indo-Pacific Research Principles on Use of Large Reasoning Models, and to mark two successful years of the IndoPacific App, our legal and policy literature archive running since 2023. The first section covers the firm's reasoning for introducing these principles on large reasoning models.

The Research Principles: Their Purpose

Large Reasoning Models (LRMs) developed by AI companies across the globe, with examples such as Deeper Search and Deep Research by xAI (Grok), Deep Research by Perplexity, Gemini's Deep Research, and OpenAI's own deep research tool, are supposed to mimic the reasoning abilities of human beings. The development of LRMs emerged from the recognition that standard LLMs often struggle with complex reasoning tasks despite their impressive language generation capabilities. Researchers observed, or rather supposed, that prompting LLMs to "think step by step" or to break down problems into smaller components often improved performance on mathematical, logical, and algorithmic challenges. Models like DeepSeek R1, Claude 3.7, and GPT-4 are frequently cited examples that incorporate reasoning-focused architectures or training methodologies. These models are trained to produce intermediate steps that are supposed to 'resemble' human reasoning before arriving at final answers, often displaying their work through "reasoning traces" or step-by-step explanations of their thought processes. However, recent research has begun to question these claims and to identify significant limitations in how these models actually reason. While LRMs have shown improved performance on certain benchmarks, researchers have found substantial evidence suggesting that what appears to be reasoning may actually be sophisticated pattern matching rather than genuine logical processing.

The Anthropomorphisation Trap

A critical issue in evaluating LRMs is what researchers call the "anthropomorphisation trap": the tendency to interpret model outputs as reflecting human-like reasoning processes simply because they superficially resemble human thought patterns. The inclusion of phrases like "hmm...", "aha...", or "let me think step by step..." may create the impression of deliberative thinking, but these are more likely stylistic imitations of human reasoning patterns present in training data rather than evidence of actual reasoning. This trap is particularly concerning because it can lead researchers and users to overestimate the models' capabilities. When LRMs produce human-like reasoning traces that appear thoughtful and deliberate, we may incorrectly attribute to them sophisticated reasoning abilities that do not actually exist.

Here is an overview of the limitations associated with large reasoning models:

Lack of True Understanding: LRMs operate by predicting the next token based on patterns learned during training, but they fundamentally lack a deep understanding of the environment and concepts they discuss. This limitation becomes apparent in complex reasoning tasks that demand true comprehension rather than pattern recognition.

Contextual and Planning Limitations: Although modern language models excel at grasping short contexts, they often struggle to maintain coherence over extended conversations or larger text segments. This can result in reasoning errors when the model must connect information from various parts of a dialogue or text. Additionally, LRMs frequently demonstrate an inability to perform effective planning for multi-step problems.

Deductive vs. Inductive Reasoning: Research indicates that LRMs particularly struggle with deductive reasoning, which requires deriving specific conclusions from general principles with a high degree of certainty and logical consistency. Their probabilistic nature makes achieving true deductive closure difficult, creating significant limitations for applications requiring absolute certainty.

The paper co-authored by Prof. Subbarao Kambhampati, entitled "A Systematic Evaluation of the Planning and Scheduling Abilities of the Reasoning Model o1", directly addresses critical themes from our earlier discussion about Large Reasoning Models. Here are some quotes from this paper: While o1-mini achieves 68% accuracy on IPC domains compared to GPT-4's 42%, its traces show non-monotonic plan construction patterns inconsistent with human problem-solving [...] At equivalent price points, iterative LLM refinement matches o1's performance, questioning the need for specialized LRM architectures. [...] Vendor claims about LRM capabilities appear disconnected from measurable reasoning improvements.

The Indo-Pacific Research Principles on Use of Large Reasoning Models

Based on the evidence we have collected and the insights received, we propose the following research principles on the use of large reasoning models at Indic Pacific Legal Research.

Principle 1: Emphasise Formal Verification. Train LRMs to produce verifiable reasoning traces, like A* dynamics or SoS, for rigorous evaluation.
Principle 2: Be Cautious with Intermediate Traces. Recognise that traces may be misleading; do not rely solely on them for trust or understanding.
Principle 3: Avoid Anthropomorphisation. Focus on functional reasoning, not human-like traces, to prevent false confidence.
Principle 4: Evaluate Both Process and Outcome. Assess both final answer accuracy and reasoning process validity in benchmarks.
Principle 5: Transparency in Training Data. Be clear about training data, especially human-like traces, to understand model behaviour.
Principle 6: Modular Design. Use modular components for flexibility in reasoning structures and strategies.
Principle 7: Diverse Reasoning Structures. Experiment with chains, trees and graphs for task suitability, balancing cost and effectiveness.
Principle 8: Operator-Based Reasoning. Implement operators (generate, refine, prune) to manipulate and refine reasoning processes.
Principle 9: Balanced Training. Use SFT and RL in two-phase training for foundation and refinement.
Principle 10: Process-Based Evaluation. Evaluate the entire reasoning process for correctness and feedback, not just outcomes (a minimal illustration follows after this list).
Principle 11: Integration with Symbolic AI. Combine LRMs with constraint solvers or planning algorithms for enhanced reasoning.
Principle 12: Interactive Reasoning. Design LRMs for environmental interaction, using feedback to refine reasoning in sequential tasks.

Please note that all principles are purely consultative and constitute no binding value on the members of Indic Pacific Legal Research. We also permit the use of these principles, provided that we are well cited and referenced, for strictly non-commercial use.
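To make Principle 10 (and the process-versus-outcome distinction in Principle 4) more concrete, here is a minimal, hypothetical Python sketch of process-based evaluation of a reasoning trace. It is not tied to any particular LRM vendor or API: the trace format, the arithmetic-only verification check, and the sample question are all assumptions made purely for illustration.

```python
import re
from dataclasses import dataclass

@dataclass
class TraceEvaluation:
    outcome_correct: bool    # Principle 4: was the final answer right?
    verified_steps: int      # Principle 10: steps that passed a formal check
    failed_steps: int        # steps whose checkable claim was wrong
    unverifiable_steps: int  # steps with no checkable claim (treat cautiously, per Principle 2)

# A deliberately narrow verifier: it only checks simple arithmetic claims of the
# form "a <op> b = c" inside a step. Anything else is marked unverifiable.
ARITHMETIC_CLAIM = re.compile(r"(-?\d+)\s*([+\-*])\s*(-?\d+)\s*=\s*(-?\d+)")

def verify_step(step: str) -> bool | None:
    match = ARITHMETIC_CLAIM.search(step)
    if match is None:
        return None  # no formally checkable claim in this step
    a, op, b, claimed = match.groups()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y, "*": lambda x, y: x * y}
    return ops[op](int(a), int(b)) == int(claimed)

def evaluate_trace(steps: list[str], final_answer: str, expected: str) -> TraceEvaluation:
    """Score the reasoning process and the outcome separately, not just the outcome."""
    checks = [verify_step(s) for s in steps]
    return TraceEvaluation(
        outcome_correct=final_answer.strip() == expected.strip(),
        verified_steps=sum(1 for c in checks if c is True),
        failed_steps=sum(1 for c in checks if c is False),
        unverifiable_steps=sum(1 for c in checks if c is None),
    )

if __name__ == "__main__":
    # Hypothetical trace an LRM might return for the question "What is 17 * 3 + 5?"
    trace = ["Hmm, let me think step by step.", "17 * 3 = 51", "51 + 5 = 56"]
    print(evaluate_trace(trace, final_answer="56", expected="56"))
    # -> TraceEvaluation(outcome_correct=True, verified_steps=2, failed_steps=0, unverifiable_steps=1)
```

In a real evaluation pipeline, the narrow verify_step check would be replaced or supplemented by stronger verifiers, for instance constraint solvers or planners in the spirit of Principle 11, but the shape of the assessment stays the same: score the steps as well as the answer.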
IndoPacific.App Celebrates its Glorious 2 Years

The IndoPacific App, launched under Abhivardhan's leadership in 2023, was a systemic reform at Indic Pacific Legal Research to better document our research publications and contributions. Under our partnership with the Indian Society of Artificial Intelligence and Law, ISAIL.IN's publications and documentation are also registered on the IndoPacific App, under the AiStandard.io Alliance and otherwise. As this archive of (mostly) legal and policy literature completes two years of existence under Indic Pacific's VLiGTA research & innovation division, we are glad to share some statistics, updated as of April 12, 2025, and verified manually after our use of Generative AI tools, which means the figures are double-checked: We host publications and documentation from exactly 238 original authors (1 error removed). Our founder, Mr Abhivardhan's own publications constitute around 10% (approx.) of these. The number of publications on the IndoPacific App stands at 85; however, if we count the chapters, contributory sections and articles within research collections, the number of research contributions stands at 304 unique contributions, which is a historic figure. If we attribute these 304 unique contributions to each author (they take the form of chapters in a research collection or handbook, a report, or a brief, for instance), the number of individual author credits will cross 300 as per our approximate estimates. This means something simple, honest and obvious. The IndoPacific.App, started by our Founder, Abhivardhan, is the biggest technology law archive of mostly Indian authors, with around 238 original authors documented in this archive and 304 unique published contributions featured. There is no law firm, consultancy, think tank or institution with such a large, independently supported technology law archive, and we are proud to have achieved this feat within the five-year existence of both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law. Thank you for becoming a part of this research community, whether through Indic Pacific Legal Research or the Indian Society of Artificial Intelligence and Law. It is our honour and duty to safeguard this archive for all, and it is 99% free (except the handbooks). So don't wait: go and download some amazing works from the IndoPacific.app today.

  • Crafting the Future: Gratitude to DNLU Jabalpur and the Pivotal Role of aiact.in in Shaping AI Governance

    At Indic Pacific Legal Research LLP, we are thrilled to extend our heartfelt gratitude to Dharmashastra National Law University (DNLU), Jabalpur, for their remarkable initiative in hosting a Legislative Drafting Competition centered on the "Artificial Intelligence (Development and Regulation) Act." It’s a moment of pride and affirmation for us to witness a leading Indian law school engage with the critical intersection of AI and law—a space we’ve been passionately shaping since our inception in 2019. DNLU’s efforts to nurture innovative legal thinking align beautifully with our mission to foster responsible AI development and governance in India, and we wish them resounding success in this endeavour. This moment also shines a light on the journey of aiact.in —our flagship project, the Artificial Intelligence (Development & Regulation) Act, 2023, spearheaded by our founder, Abhivardhan. Launched in November 2023 with no grand expectations, this privately proposed AI bill has grown into a pivotal resource, inspiring conversations like the one at DNLU. What began as a vision to craft an India-centric framework for AI regulation has, in just over a year, garnered appreciation from developers, judges, and technologists alike. Its strength lies in its feedback-driven approach—offering a practical, adaptable blueprint that stakeholders can refine and build upon. Seeing it spark a legislative drafting competition at DNLU is a testament to its relevance and potential to influence India’s AI policy landscape. For us at Indic Pacific, aiact.in  is more than a draft—it’s a cornerstone of our commitment to pioneering technology law solutions with an Indo-Pacific lens. Despite early skepticism (including a dismissive encounter with a law firm that overlooked its originality), this initiative has proven its worth by amplifying Indian perspectives in a global discourse often dominated by Western frameworks. It embodies our ethos of salience, persistence, and adaptivity, driving dialogue among startups, MSMEs, and policymakers. Through our Research & Innovation Division, VLiGTA®, we’ve ensured aiact.in  remains a dynamic tool—evolving with insights from industry and academia, as evidenced by its recognition in DNLU’s competition. We’re deeply grateful to DNLU Jabalpur for not only embracing this theme but also acknowledging our efforts in shaping AI governance. Your competition is a powerful step toward building a future where AI is harnessed responsibly, and we at Indic Pacific are honoured to be part of this narrative. Here’s to continued collaboration and innovation—may DNLU’s students and faculty inspire the next wave of legal brilliance!

  • The Version 5 of Artificial Intelligence (Development & Regulation) Act, 2023 is Launched

    Indic Pacific Legal Research, under the stewardship of Abhivardhan, proudly presents Version 5.0 of the Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.in). This iteration introduces pivotal amendments, with Section 23 leading as a freshly revised cornerstone, alongside updates to Section 7, Section 9, Section 13, Section 20A, and the newly introduced Section 24-A. These changes underscore Indic Pacific’s commitment to ethical, transparent, and inclusive AI regulation in India. Section 23: Content Provenance and Identification (Key Highlight) Indic Pacific has reimagined Section 23 to set a gold standard for AI-generated content. The amendment mandates watermarking with detailed metadata—covering scraping methods, data origins, and licensing—while enforcing ethical data practices limited to consented or public sources. Developers of high-impact systems must secure insurance up to ₹50 crores, ensuring accountability and curbing misuse. This positions Indic Pacific at the forefront of content integrity. Section 7: Strengthened Risk Classification Indic Pacific refines AI risk tiers—Narrow, Medium, High, and Unintended—banning the latter and intensifying scrutiny on High-Risk systems. This amendment safeguards against unpredictable technologies, reinforcing public trust and security. Section 9: Oversight in Strategic Sectors High-risk AI in designated strategic sectors now falls under tailored regulations, with this Act prevailing over conflicting rules. Indic Pacific ensures robust governance where it matters most. Section 13: Enhanced National AI Ethics Code The updated ethics code prioritizes transparency, fairness, and human oversight, offering a clear roadmap for responsible AI. Indic Pacific champions ethical innovation with this refresh. Section 20A: Transparency in Public AI Initiatives Government and partnership AI projects must now disclose objectives, funding, and algorithms, backed by audits and public explanations. Indic Pacific drives accountability in the public sphere. Section 24-A: Right to AI Literacy Introduced A landmark addition, this section grants every individual access to AI literacy—covering concepts, impacts, and recourse options. Indic Pacific empowers citizens for an AI-driven future. These amendments, with Section 23 as the flagship, exemplify Indic Pacific’s vision for a balanced, responsible AI ecosystem. Please give your feedback on this version of the bill at vligta@indicpacific.com.

  • Decoding the AI Competency Triad for Public Officials: A Deep Dive into India’s Strategic Framework

    The Ministry of Electronics and Information Technology (MeitY) recently launched its AI Competency Framework, aiming to equip public officials with the skills needed to responsibly integrate artificial intelligence into governance processes. Our latest report, "Decoding the AI Competency Triad for Public Officials" (IPLR-IG-014), provides an in-depth analysis of this framework and its implications for India’s public sector. This report is authored by Abhivardhan, Founder & Managing Partner, and interns at the Indian Society of Artificial Intelligence and Law, Yashita Parashar, Sneha Binu, and Gargi Mundotia. 📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/ Why This Framework Matters India is at a pivotal moment in its AI journey, with initiatives like the IndiaAI Mission positioning the country as a global leader in ethical and inclusive AI adoption. The competency framework identifies three core skill areas—behavioral, functional, and domain-specific—that are essential for public officials navigating the complexities of AI governance. Key Highlights from the Report Behavioral Competencies Focuses on systems thinking, ethical governance, and innovative leadership to address complex societal challenges through AI. Functional Competencies Covers practical skills like risk assessment, procurement oversight, and data governance necessary for effective implementation of AI projects. Domain-Specific Competencies Tailored to high-impact sectors like healthcare, education, agriculture, urban mobility, and environmental management. Strategic Recommendations The report also provides actionable insights across three critical legal-policy dimensions: Data Policy Alignment: Ensuring privacy-by-design principles are embedded in every stage of AI deployment. Intellectual Property Management: Addressing gaps in knowledge sharing while safeguarding innovation rights. Accountability & Transparency: Establishing robust oversight mechanisms to ensure ethical use of AI technologies. Who Should Read This? This report is designed for policymakers, entrepreneurs, public officials, and citizens who want to understand how India is building capacity for responsible AI integration while addressing global challenges like bias mitigation and data privacy. 📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/

  • CIArb Guideline on the Use of AI in Arbitration (2025), Explained

    This insight is co-authored by Vishwam Jindal, Chief Executive Officer, WebNyay. The Chartered Institute of Arbitrators (CIArb) guideline on the use of AI in arbitration, published in 2025, provides a detailed framework for integrating AI into arbitration proceedings. This analysis covers every chapter, highlighting what each includes and identifying potential gaps. Below, we break down the key sections for clarity, followed by a detailed survey note for a deeper understanding. Chapter-by-Chapter Analysis Part I: Benefits and Risks: Details AI's advantages (e.g., legal research, data analysis) and risks (e.g., confidentiality, bias), providing a broad overview. Part II: General Recommendations: Advises on due diligence, risk-benefit analysis, legal compliance, and maintaining accountability for AI use. Part III: Parties’ Use of AI: Covers arbitrators' powers to regulate AI, party autonomy in agreeing on its use, and disclosure requirements for transparency. Part IV: Use of AI by Arbitrators: Allows discretionary AI use for efficiency, prohibits decision delegation, and emphasizes transparency through party consultation. Appendices: Includes templates for AI use agreements and procedural orders, aiding practical implementation. Definitions: Provides clear definitions for terms like AI, hallucination, and tribunal, based on industry standards. On definitions, it would have been better had CIArb adopted AI-related definitions from third-party technical bodies such as IEEE, ISO or Creative Commons, instead of IBM. Part I: Benefits and Risks Part I provides a balanced view of AI's potential benefits and risks in arbitration. The benefits section (1.1-1.10) highlights efficiency gains through legal research enhancement, data analysis capabilities, text generation assistance, evidence collection streamlining, and translation/transcription improvements. Notably, section 1.10 acknowledges AI's potential to remedy "inequality of arms" by providing affordable resources to under-resourced parties. The risks section (2.1-2.9) addresses significant concerns including confidentiality breaches when using third-party AI tools, data integrity and cybersecurity vulnerabilities, impartiality issues arising from algorithmic bias, due process risks, the "black box" problem of AI opacity, enforceability risks for arbitral awards, and environmental impacts of energy-intensive AI systems. Benefits AI offers transformative potential in arbitration by enhancing efficiency and quality across various stages of the process: Legal Research: AI-powered tools outperform traditional search engines with their adaptability and predictive capabilities, enabling faster and more precise research. Data Analysis: AI tools can process large datasets to identify patterns, correlations, and inconsistencies, aiding in case preparation. Text Generation: Tools can draft, summarize, and refine documents while ensuring grammatical accuracy and coherence. Translation and Transcription: AI facilitates multilingual arbitration by translating documents and transcribing hearings at lower costs. Case Analysis: Predictive analytics provide insights into case outcomes and procedural strategies. Evidence Collection: AI streamlines evidence gathering and verification, including detecting deep fakes or fabricated evidence. Risks Despite its advantages, AI introduces several risks: Confidentiality: Inputting sensitive data into third-party AI tools raises concerns about data security and misuse. 
Bias : Algorithmic bias can compromise impartiality if datasets or algorithms are flawed. Due Process : Over-reliance on AI tools may undermine parties' ability to present their cases fully. "Black Box" Problem : The opaque nature of some AI algorithms can hinder transparency and accountability. Enforceability : The use of banned or restricted AI tools in certain jurisdictions could jeopardise the validity of arbitral awards. Limitations in Part 1 Part I exhibits several significant limitations that undermine its comprehensiveness: Incomplete treatment of risks : While identifying key risk categories, the guidelines lack depth in addressing bias detection and mitigation strategies, transparency mechanisms, and AI explainability challenges. Gaps in benefits coverage : The incomplete presentation of sections 1.5-1.9 suggests missing analysis of potential benefits such as evidence gathering and authentication applications. Absence of risk assessment framework : No structured methodology is provided for quantitatively evaluating the likelihood and severity of identified risks, leaving arbitrators without clear guidance on risk prioritisation. Limited forward-looking analysis : The section focuses primarily on current AI capabilities without adequately addressing how rapidly evolving AI technologies might create new benefits or risks in the near future. Part II: General Recommendations The CIArb guidelines emphasise a cautious yet proactive approach to AI use: Due Diligence : Arbitrators and parties should thoroughly understand any AI tool's functionality, risks, and legal implications before using it. Balancing Benefits and Risks : Users must weigh efficiency gains against potential threats to due process, confidentiality, or fairness. Accountability : The use of AI should not diminish the responsibility or accountability of parties or arbitrators. In summary, Part II establishes broad principles for AI adoption in arbitration. It encourages participants to conduct reasonable inquiries about AI tools' technology and function (3.1), weigh benefits against risks (3.2), investigate applicable AI regulations (3.3), and maintain responsibility despite AI use (3.4). The section addresses critical issues like AI "hallucinations" (factually incorrect outputs) and prohibits arbitrators from delegating decision-making responsibilities to AI systems. Part II provides general advice on due diligence, risk assessment, legal compliance, and accountability for AI use. However, it has notable gaps: Lack of Specific Implementation Guidance: The recommendations, such as conducting inquiries into AI tools (3.1), are broad and lack practical tools like checklists or frameworks. For example, it could include a step-by-step guide for evaluating AI tool security or a risk-benefit analysis template, aiding users in application. Insufficient technical implementation guidance : The recommendations remain abstract without providing specific technical protocols for different types of AI tools or use cases. No Examples or Hypothetical / Real Case Studies: Without real-world scenarios or even comparable hypothetical scenarios, such as how a party assessed an AI tool for confidentiality risks, practitioners may struggle to apply the recommendations. Hypothetical examples could bridge this gap, enhancing understanding. Absence of AI literacy standards : No baseline competency requirements are established for arbitration participants using AI tools, creating potential disparities in understanding and application. 
Missing protocols for AI transparency : The guidelines don't specify concrete mechanisms to make AI processes comprehensible to all parties, particularly important given the "black box" problem acknowledged elsewhere. No Mechanism for Periodic Review: Similar to Part I, there is no provision for regularly updating the recommendations, such as a biennial review process, which is critical given AI's rapid evolution, like the advent of generative AI models. Lack of Input from Technology Experts: The guideline does not indicate consultation with AI specialists or technologists, such as input from organizations like the IEEE ( IEEE AI Ethics ), which could ensure the recommendations reflect current industry practices and technological realities. Part III: Parties’ Use of AI Arbitrators’ Powers Arbitrators have broad authority to regulate parties' use of AI: They may issue procedural orders requiring disclosure of AI use if it impacts evidence or proceedings. Arbitrators can appoint experts to assess specific AI tools or their implications for a case. Party Autonomy Parties retain significant autonomy to agree on the permissible scope of AI use in arbitration. Arbitrators are encouraged to facilitate discussions about potential risks and benefits during case management conferences. Disclosure Requirements Parties may be required to disclose their use of AI tools to preserve procedural integrity. Non-compliance with disclosure obligations could lead to adverse inferences or cost penalties. In summary, Part III establishes a framework for regulating parties' AI use. Section 4 outlines arbitrators' powers to direct and regulate AI use, including appointing AI experts (4.2), preserving procedural integrity (4.3), requiring disclosure (4.4), and enforcing compliance (4.7). Section 5 respects party autonomy in AI decisions while encouraging proactive discussion of AI parameters. Sections 6 and 7 address rulings on AI admissibility and disclosure requirements respectively. Part III contains several problematic gaps: Ambiguity in Private vs. Procedural AI Use: Section 4.5 states arbitrators cannot regulate private use unless it interferes with proceedings, but the boundary is vague. For example, using AI for internal strategy could blur lines, and clearer definitions are needed. Inadequate dispute resolution mechanisms : Despite acknowledging potential disagreements over AI use, the guidelines lack specific procedures for efficiently resolving such disputes. Disclosure framework tensions : The optional nature of disclosure creates uncertainty about when transparency should prevail over party discretion, potentially undermining procedural fairness. Absence of cost allocation guidance : The guidelines don't address how costs related to AI tools or AI-related disputes should be allocated between parties. Limited cross-border regulatory guidance : Insufficient attention is paid to navigating conflicts between different jurisdictions' AI regulations, a critical issue in international arbitration. Potential Issues with Over-Reliance on Party Consent: The emphasis on party agreement (Section 5) might limit arbitrators’ ability to act decisively if parties disagree, especially if one party lacks technical expertise, potentially undermining procedural integrity. Need for Detailed Criteria for Selecting AI Experts: While arbitrators can appoint AI experts, there are no specific criteria, such as qualifications in AI ethics or experience in arbitration, which could ensure expert suitability and consistency. 
Part IV: Use of AI by Arbitrators Discretionary Use Arbitrators may leverage AI tools to enhance efficiency but must ensure: Independent judgment is maintained. Tasks such as legal analysis or decision-making are not delegated entirely to AI. Transparency Arbitrators are encouraged to consult parties before using any AI tool. If parties object, arbitrators should refrain from using that tool unless all concerns are addressed. Responsibility Regardless of AI involvement, arbitrators remain fully accountable for all decisions and awards issued. In summary, Part IV addresses arbitrators' AI usage, establishing that arbitrators may employ AI to enhance efficiency (8.1) but must not relinquish decision-making authority (8.2), must verify AI outputs independently (8.3), and must assume full responsibility for awards regardless of AI assistance (8.4). Section 9 emphasises transparency through consultation with parties (9.1) and other tribunal members (9.2). Part IV exhibits several notable limitations: Inadequate technical implementation guidance: The section provides general principles without specific technical protocols for different AI applications in arbitrator decision-making. Missing AI literacy standards for arbitrators: No baseline competency requirements are established to ensure arbitrators sufficiently understand the AI tools they employ. Insufficient documentation requirements: The guidelines don't specify how arbitrators should document AI influence on their decision-making process in awards or orders. Absence of practical examples: Without concrete illustrations of appropriate versus inappropriate AI use by arbitrators, the guidance remains abstract and difficult to apply. Underdeveloped bias mitigation framework: While acknowledging potential confirmation bias, the guidelines lack specific strategies for detecting and counteracting such biases. Appendix A: Agreement on the Use of AI in Arbitration Appendix A provides a template agreement for parties to formalize AI use parameters, including sections on permitted AI tools, authorized uses, disclosure obligations, confidentiality preservation, and tribunal AI use. Critical Deficiencies Appendix A falls short in several areas: Excessive generality: The template may be too generic for complex or specialised AI applications, potentially failing to address nuanced requirements of different arbitration contexts. Limited customisation guidance: No framework is provided for adapting the template to different types of arbitration or technological capabilities of the parties. Poor institutional integration: The template doesn't adequately address how it interfaces with various institutional arbitration rules that may have their own technology provisions. Static nature: No provisions exist for updating the agreement as AI capabilities evolve during potentially lengthy proceedings. Insufficient technical validation mechanisms: The template lacks provisions for verifying technical compliance with agreed AI parameters. Appendix B: Procedural Order on the Use of AI in Arbitration Appendix B provides both short-form and long-form templates for arbitrators to issue procedural orders on AI use, introducing the concept of "High Risk AI Use" requiring mandatory disclosure, establishing procedural steps for transparency, and enabling parties to comment on proposed AI applications. 
Critical Deficiencies Appendix B contains several notable gaps: Technology adaptation limitations : The templates lack mechanisms for addressing emerging AI technologies that may develop during proceedings. Enforcement uncertainty : Limited guidance is provided on monitoring and enforcing compliance with AI-related orders. Insufficient technical validation : The templates don't establish concrete mechanisms for verifying adherence to AI usage restrictions. Absence of update protocols : No provisions exist for modifying orders as AI capabilities evolve during proceedings. Limited remedial options : Beyond adverse inferences and costs, few specific remedies are provided for addressing non-compliance. Conclusion: Actionable Recommendations for Enhancement The CIArb AI Guideline represents a significant first step toward establishing a framework for AI integration in arbitration, demonstrating awareness of both benefits and risks while respecting party autonomy. However, to transform this preliminary framework into a robust and practical tool, several enhancements are necessary: Technical Implementation Framework : Develop supplementary technical guidelines with specific protocols for AI verification, validation, and explainability across different arbitration contexts and AI applications. AI Literacy Standards : Establish minimum competency requirements and educational resources for arbitrators and practitioners to ensure informed decision-making about AI tools. Adaptability Mechanisms : Implement a formal revision process with specific timelines for guideline updates to address rapidly evolving AI capabilities. Transparency Protocols : Create more detailed transparency requirements with clearer thresholds for mandatory disclosure to balance flexibility with procedural fairness. Risk Assessment Methodology : Develop a quantitative framework for systematically evaluating AI risks in different arbitration contexts. Practical Examples Library : Supplement each section with concrete case studies illustrating appropriate and inappropriate AI applications in arbitration. Institutional Integration Guidance : Provide specific recommendations for aligning these guidelines with existing institutional arbitration rules.

  • Does Art Law in India Require Regulation? Maybe.

    India, a country with an unparalleled artistic heritage, faces unique legal challenges in regulating its growing art market. While existing laws protect antiquities and govern intellectual property, the lack of a dedicated regulatory body for art has led to gaps in dispute resolution, authentication, taxation, and trade compliance. Moreover, the rise of digital art and NFTs has introduced complexities that Indian laws are yet to fully address. Without proper oversight, artists, collectors, and investors navigate a market that is often ambiguous and vulnerable to exploitation. This article highlights these pressing issues and the crucial role of arbitration, mediation, and regulatory reforms in shaping a more structured and secure art ecosystem.

India's Art Industry at Loggerheads?

The Indian art industry remains largely unregulated, leading to issues such as forgery, misrepresentation, and an unclear dispute resolution mechanism. Without a formal authentication authority, buyers and collectors often struggle to verify the provenance of artworks, increasing the risk of fraud and duplicate artworks. This lack of oversight has allowed counterfeit artworks to flood the market, eroding trust and making transactions riskier for both buyers and sellers. Adding to these concerns is the absence of regulated pricing and taxation policies, making it difficult for artists and buyers to navigate legal obligations. Unlike other industries that benefit from structured oversight, art transactions in India remain fragmented, leading to inconsistent taxation and compliance challenges. Many deals occur in informal markets, where tax evasion and opaque pricing structures prevail. Without a dedicated Art Regulatory Authority, buyers rely on informal networks for provenance verification, and disputes often escalate into prolonged litigation. The lack of streamlined governance and regulation in the art market highlights the need for a structured regulatory framework that can ensure transparency, fairness, and accountability in all aspects of art trade and ownership. In India, however, art arbitration involves an interplay between intellectual property rights and arbitration law. Under the Arbitration and Conciliation Act, awards are unenforceable if they arise out of an "in-arbitrable" dispute. Art disputes involve issues of ownership, authenticity, copyright infringement, succession, and testamentary matters, and are therefore often contested as being in-arbitrable. Art disputes also frequently involve complex issues, including authorship claims, forgery allegations, and breach of contractual terms. Given the time-consuming nature of traditional litigation, arbitration and mediation have become preferred modes of dispute resolution in the global art market. These mechanisms provide a faster, more cost-effective, and confidential approach to resolving conflicts without jeopardizing artistic or commercial relationships. Mediation allows parties to reach a mutually acceptable settlement while preserving professional relationships. This is particularly useful in cases involving artist-gallery disputes, copyright infringements, and ownership claims. A mediated resolution ensures that creative partnerships remain intact, preventing long legal battles from hindering artistic growth. Arbitration, on the other hand, ensures confidential, specialised, and enforceable decisions, making it ideal for high-value transactions. 
Art-related disputes often involve international parties, and arbitration provides a neutral forum for resolution. Institutions such as the Delhi International Arbitration Centre (DIAC) and the Mumbai Centre for International Arbitration (MCIA) have begun handling art-related disputes, yet India still lacks dedicated arbitral rules for art transactions. By integrating alternative dispute resolution mechanisms into the art industry, India can ensure faster dispute resolution and stronger legal safeguards for artists and collectors. With the rise of blockchain technology, digital art and NFTs (Non-Fungible Tokens) have opened new avenues for artists to monetise their work. However, Indian law remains silent on key aspects, leading to challenges in ownership rights, resale royalties, and tax implications. One of the biggest concerns is ownership rights: who holds the copyright for an NFT, the artist or the buyer? Traditional art markets recognize artists' rights over their works, but in the digital space the legal standing of NFT ownership is still unregulated. Moreover, there is ambiguity surrounding resale royalties, and artists often receive no reward or compensation when their NFTs are resold in the secondary market. In the absence of clear legal provisions, artists are often at the mercy of marketplace policies. Tax implications also remain unsettled. Are NFTs classified as goods, securities, or digital assets under Indian law? The lack of proper classification results in taxation challenges, leaving buyers and sellers in a legal gray area. Without a defined legal framework, NFT transactions could potentially fall under multiple tax regulations, leading to confusion and unintended liabilities. A lack of regulation has led to instances of digital art theft, plagiarism, and unauthorized commercial use, leaving artists vulnerable. The rise of AI-generated art and digital manipulation further complicates the legal landscape. The international art trade is heavily regulated, and India has multiple laws governing the import and export of artworks. However, enforcement gaps have led to a thriving underground market where valuable artifacts bypass legal scrutiny. The Foreign Exchange Management Act (FEMA), 1999 governs cross-border transactions. Restrictions on foreign direct investment (FDI) in the art sector limit global collaborations, while compliance with Reserve Bank of India (RBI) regulations is mandatory. The Goods and Services Tax (GST) applies to artworks: original paintings and sculptures attract 12% GST, while prints and reproductions are taxed at 18% GST (Ministry of Finance, Government of India). High taxes encourage informal trade and underreporting, impacting transparency. The Consumer Protection Act, 2019 protects buyers from misrepresentation and fraud, particularly in online sales (Department of Consumer Affairs, India). However, the lack of a formal certification authority makes enforcement difficult. The Customs Tariff Act, 1975 governs import duties and requires special permits for antique exports (Central Board of Indirect Taxes and Customs). Stronger inter-agency collaboration is needed to curb illegal art trafficking and reclaim stolen heritage.

Conclusion

Art law in India is at a crossroads, requiring urgent regulatory intervention to balance cultural preservation with modern commercial needs. 
By establishing a dedicated regulatory body, modernizing legal frameworks, and integrating alternative dispute resolution mechanisms, India can create a more structured and globally competitive art market.

  • Excited to share: The Indo-Pacific Principles of Legal & Policy Writing - Our Blueprint for Tech Law Excellence! ✨📊🤖

    At Indic Pacific Legal Research, we're thrilled to present a set of guiding writing standards crafted to elevate legal and policy communication in the tech space. Get the writing guidelines at indicpacific.com/guidelines . In today's complex AI governance landscape, clear communication isn't just nice-to-have - it's essential. 🔍 Why these principles matter now more than ever: 1️⃣ Precision Over Prolixity ⚡ - As India develops its AI regulatory framework (like MeitY's recent report on AI Governance Guidelines), our work requires communication that cuts through complexity. Every word must earn its place! 2️⃣ Nuance and Novelty Matter 💡 - Our projects demonstrate our commitment to original thinking over redundant reviews. 3️⃣ Be Unassailable 🛡️ - Our consultancy work demands arguments sharp enough to cut through noise yet grounded in reality - essential when advising on AI governance frameworks. 4️⃣ Clarity Is Authority 📣 - We've learned that complex tech law ideas demand simple expressions. If readers struggle, we haven't mastered our craft! 5️⃣ Visuals Amplify Words 📊 - Our "Graphics-powered Insights" service exemplifies how diagrams and visuals can enhance understanding of complex AI governance issues. 6️⃣ Always Anchor in Relevance 🎯 - Our approach to "whole-of-government" AI regulation demonstrates how every idea must drive home a purpose. 7️⃣ Respect the Reader's Time ⏱️ - We prioritise purposeful precision that both informs and engages. These principles guide our advisory work with tech companies and government stakeholders as we navigate India's evolving AI ecosystem. They're not just writing rules - they're the foundation of responsible tech governance! 🌐 As our founder Abhivardhan  says: "Complex ideas demand simple expressions." This philosophy powers our work in technology law, AI governance, and policy development across India and beyond. What principles guide YOUR communication in the tech policy space? Share below! 👇

  • New Publication: Artificial Intelligence and Policy in India, Volume 6

    Proud to announce our latest publication: "Artificial Intelligence and Policy in India, Volume 6," edited by Abhivardhan! 🎉📘 This research collection represents our continued commitment to exploring the frontier of AI governance and implementation in India. 🇮🇳🤖 Read this collection at https://indopacific.app/product/artificial-intelligence-and-policy-in-india-volume-6-aipi-v6/ In collaboration with the Indian Society of Artificial Intelligence and Law (ISAIL), we've brought together four exceptional papers from talented ISAIL interns: 🔹 Rasleen Kaur Dua tackles ethical and regulatory challenges in AI-driven supply chains 🔹 Parvathy Arun explores how algorithms are revolutionizing financial trading 🔹 Oshi Yadav investigates blockchain's transformative role in our digital economy 🔹 Eva Mathur examines how legal education must evolve in the age of technology This volume is essential reading for anyone interested in understanding how AI is reshaping India's policy landscape across multiple sectors. 📊⚖️💡 Available now! Tag someone who needs this resource in their professional library.

  • New Report: Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001

    We are eager to release the Vidhitsa Law Institute's first technical report, on artificial intelligence hype and its legal-economic risks. Bhavana J Sekhar, Principal Researcher, and Poulomi Chatterjee, Contributing Researcher, have co-authored this report with me. In this work, we have addressed in detail the issue of hype cycles caused by artificial intelligence technologies. This report is an initial research contribution developed by the team of the Vidhitsa Law Institute of Global and Technology Affairs (VLiGTA) as part of the efforts of the Artificial Intelligence Resilience department. We have continued the work we started at the Indian Society of Artificial Intelligence and Law (ISAIL) in 2021 on formalising ethics research on the trend of Artificial Intelligence hype. In my discussions and consultations with Dr Jeffrey Funk, a former faculty member at the National University of Singapore, Bogdan Grigorescu, a tech industry expert and ISAIL alumnus, and Dr Richard Self from the University of Derby, I realised that it is necessary to capture the scope and extent of Artificial Intelligence hype beyond the competition policy and data privacy issues that many developed countries in the D9 group have already faced. Many technology companies inflate their valuations and use Artificial Intelligence to hype the value of their products and services. This can be done by influencing stocks, distorting perceptions, misdirecting demand, raising credibility concerns, and through other methods. The exploitative nature of AI hype, as we know it, rests on the interconnectedness of the information and digital economy and on how even minuscule economic and ethical innovations in AI as a technology can be abused. Bhavana’s market analysis is succinct and focuses on the points of convergence, and Poulomi’s evaluation of the ethics of Artificial Intelligence is appreciated. I express my special regards to Sanad Arora from the Vidhitsa Law Institute and Ayush Kumar Rathore from Indic Pacific’s Technology Team for their moral support. Some of the key aspects discussed in the report concern the perpetuation of hype cycles and their formalisation in the legal rubric for regulators. We have also taken a soft-law perspective to address larger economic and technical issues and offered recommendations. Based on our research, we have formulated seven working conditions to determine artificial intelligence hype, which are based on a set of stages:

Stage 1: Influence or Generation Determination. An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario such that a class of Artificial Intelligence technology, as a product or service, is used in a participatory or preparatory sense to influence or generate the hype cycle.

Stage 2: Influencing or Generating Market Perceptions & Conditions. The hype cycle may be continuous or erratic, but the real-time impact on market perceptions affecting the market of the products or services involving Artificial Intelligence technologies is estimated from a standardised, regulatory, judicial or statutory point of view. The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices. Beyond the real-time impact on market perceptions, the consecutive effects of that impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern. 
Stage 3: Uninformed or Disinformed Markets. The features of the product or service subject to the hype cycle are uninformed or disinformed to the market. It may be stated that misinforming the market may be construed as keeping the market merely uninformed, except not in mutually exclusive cases.

Stage 4: Misdirected Perceptions in the Information & Digital Economy. The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype cycle around a product or service may not clarify certain specifics and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.

Stage 5: Estimation of the Hype Cycle through Risk Determination. Even if preliminary clarifications or assessments are provided to the market, a lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology forming part of the product or service calls for assessing the hype cycle with a risk-centric approach.

Further interpretation and explanations are provided in the report.

Recommendations in this Report

Companies must be clear to regulatory bodies about the investment in, and ethical design of, products and services that involve narrow AI and high-intensive AI technologies. Maintaining efficient knowledge management systems catering to IP issues is important. It is essential that the economic and ethical repercussions of the by-products of knowledge management are addressed carefully, given that many Artificial Intelligence technologies would still remain inexplicable for reasons including ethical ambiguity. If Artificial Intelligence technologies are included in any managerial-level groups, departments or divisions, including the board of directors, for consultative, reliance or any other tangible purpose, then regardless of their attribution to the knowledge management systems maintained by the company itself, including concerns on intellectual property, a risk-oriented practice of maintaining legitimate and viable transparency on issues around data protection & privacy and algorithmic activities & operations must be adopted. Regulators can opt for self-regulatory directives or solutions. Where regulatory sandboxes need to be used, there must be separate guidelines (since they are not products or services) for such kinds of technologies by virtue of their use case in the realm of corporate governance. The transboundary flow of data, based on some commonalities of ethical and quality assessment, can be agreed amongst various countries subject to their data localisation and quality policies. When it comes to Artificial Intelligence technologies, to reduce or detect the impact and aftermath of Artificial Intelligence hype cycles, governments must negotiate an ethical free flow of data and map certain algorithmic activities & operations that affect public welfare on a case-to-case basis. We propose that the Working Conditions to Determine Artificial Intelligence Hype can be regarded, in a consultative sense, as a framework to intermix competition policy and technology governance concerns by various stakeholders. We are open to consultation, feedback and alternate opinions. 
We also propose that the Model Algorithmic Ethics Standards (MAES) be put into use, so that some estimations can be made at a preliminary level as regulatory sandboxes are subject to procurement. The Report is available here. Price: 200 INR

  • The UK Government Brief on AI and Copyright Law (2024), Explained

    The author of this insight was a Research Intern at the Indian Society of Artificial Intelligence and Law. Made via Luma AI.

The UK economy is driven by many creative industries, including TV and film, advertising, performing arts, music publishing and video games, contributing nearly £124.8 billion in gross value added (GVA) to the economy annually. The rapid development of AI over recent years has sparked a debate globally and within the UK about the various challenges and opportunities it brings. It has led to massive concerns within the creative and media industries about their work being used to train AI without permission, and about media organizations being unable to secure remuneration through licensing agreements. There has also been a lack of transparency from AI developers about the content being used to train models, while these firms raise their own concerns about the lack of clarity over how they can legally access data for training. These concerns are hindering AI adoption, stunting innovation, and holding back the UK from fully utilizing the potential AI holds. The UK government consultation document highlights the need to work in partnership with both the AI sector and the media sector, ensuring greater transparency from AI developers, to build trust between developers and the creative industry.

Focus Areas of the Consultation

The key pillars of the UK government's approach to copyright and AI policy include transparency, technical standards, contracts and licensing, labelling, computer-generated works, digital replicas and emerging issues. The government aims to tackle the copyright challenges posed by AI by ensuring that AI developers are transparent about the use of training data for their models. The government seeks views on the level of transparency required to build trust between AI companies and organisations in the creative industry. Establishing technical standards will help improve and standardise the tools, making it easier for creators to reserve their rights and for developers to respect those reservations. Moreover, licensing frameworks need to be strengthened to ensure that creators receive fair remuneration while AI developers get access to the necessary training material. Labelling measures help distinguish AI-generated content from human-created work, fostering clarity for consumers. Additionally, the protection of computer-generated works needs to align with modern AI capabilities so that fairness is ensured. Finally, addressing digital replicas, such as deepfakes, is essential to protect individuals' identity from misuse.

Figure 1: Key pillars of Copyright and AI policy

Overcoming Challenges in AI Training and Copyright Protection

The government's consultation document looks at the problem of using copyrighted works to train AI models. AI developers use large amounts of data, including copyrighted works, to train their models, but many creators do not get paid for the use of their work. The consultation highlights the issue of transparency, as creators often do not know whether their work is in the AI training datasets. The government acknowledges the conflict between copyright law and AI development, especially when AI outputs reproduce substantial parts of copyrighted works without permission, which could be copyright infringement. The Getty Images v Stability AI case is still being litigated and may take years to resolve. 
The government is looking at legislation to clarify the rules around AI training and outputs to get the balance right between creators and AI developers.

Figure 2: A Venn Diagram discussing intersectional aspects around AI Training & Data Mining and Copyright Ownership & Creator Rights

Exceptions with rights reservation: key features and scope

The data mining exception and rights reservation package under consideration would have features pertaining to increased transparency by AI firms in the use of training data, ensuring right holders receive fair payment when their work is used by AI firms, and addressing the need for licensing. The proposed solutions aim to regulate data mining activities, ensuring lawful access to data and building trust and partnership between AI firms and media and creative organisations.

Figure 3: Proposed exceptions to Data Mining and its Scope

Addressing Challenges in Developing and Implementing Technical Standards

There is a growing need for standardisation around copyright and AI so that publishers of content on the internet can reserve their rights while AI developers have access to training data that does not infringe those rights. Regulation is needed to support the adoption of such standards, which will ensure that protocols are recognised and complied with. There are multiple generative AI web crawlers that can flag to the developer which data is unavailable for training. Many firms and dataset owners also keep themselves open to being notified directly by organisations that do not want their work used to train an AI model. However, even the most widely adopted standard, robots.txt, cannot provide the granular control over the use of works that right holders seek: content crawled for search indexing or language training may not be recognised separately from content used for generative AI (a minimal illustrative robots.txt appears further below). The consultation therefore proposes standardisation that ensures developers have legal access to training data and that protocols protecting the data privacy of content are met.

Figure 4: Key focus areas to achieve technical standardisation

Contracts and licensing

Contracts and licensing for AI training often involve creators licensing their works through collective management organizations (CMOs) or directly to developers, but creators sometimes lack control over how their work is used. Broad or vague contractual terms and industry expectations can make it challenging for creators to protect their rights. CMOs play a crucial role in efficiently licensing large collections of works, ensuring fair remuneration for creators while simplifying access for AI developers. However, new structures may be needed to aggregate and license data for AI training. The government aims to support good licensing practices, fair remuneration, and mechanisms like text and data mining (TDM) exceptions to balance the needs of right holders and AI developers. Additionally, copyright and AI in education require consideration to protect pupils' intellectual property while avoiding undue burdens on educators.

Ensuring Transparency: Tackling Challenges in Openness and Accountability

Transparency is crucial for building trust in AI and copyright frameworks. Right holders face challenges in determining whether their works are used for AI training, as some developers do not disclose, or provide only limited information about, training data sources. 
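To make the rights-reservation and crawler-disclosure mechanics discussed above more concrete, here is a minimal, illustrative robots.txt sketch. The crawler tokens shown (GPTBot, Google-Extended, CCBot) are publicly documented AI-related user agents, but the exact list a publisher should include is an assumption here and changes over time; the example is not drawn from the consultation document itself.

```
# Illustrative robots.txt for a publisher reserving rights against AI training.
# Assumption: the crawler tokens below are honoured by their respective operators.

User-agent: GPTBot            # OpenAI's crawler used to gather training data
Disallow: /

User-agent: Google-Extended   # token Google uses to control AI-training access
Disallow: /

User-agent: CCBot             # Common Crawl, whose corpus is widely used for training
Disallow: /

# Everything else, including ordinary search-engine indexing, remains allowed.
User-agent: *
Allow: /
```

Note the coarse granularity the consultation worries about: each crawler is either admitted or blocked for the whole site, and robots.txt cannot express an instruction like "index this work for search but do not train on it" unless the crawler operator happens to publish a separate training-only token.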
Contracts and licensing

Contracts and licensing for AI training often involve creators licensing their works through collective management organizations (CMOs) or directly to developers, but creators sometimes lack control over how their work is used. Broad or vague contractual terms and industry expectations can make it challenging for creators to protect their rights. CMOs play a crucial role in efficiently licensing large collections of works, ensuring fair remuneration for creators while simplifying access for AI developers. However, new structures may be needed to aggregate and license data for AI training. The government aims to support good licensing practices, fair remuneration, and mechanisms like text and data mining (TDM) exceptions to balance the needs of right holders and AI developers. Additionally, copyright and AI in education require consideration to protect pupils' intellectual property while avoiding undue burdens on educators.

Ensuring Transparency: Tackling Challenges in Openness and Accountability

Transparency is crucial for building trust in AI and copyright frameworks. Right holders face challenges in determining whether their works are used for AI training, as some developers do not disclose, or provide only limited information about, training data sources. Greater transparency can help enforce copyright law, assess legal liabilities, and foster consumer confidence in AI systems. Potential measures include requiring AI firms to disclose datasets, web crawler details, and compliance with rights reservations. However, transparency must be balanced with practical challenges, trade secret protections, and proportionality. International approaches, such as the EU's AI Act and California's AB 2013, offer insights into implementing effective transparency standards, which the UK will consider for global alignment.

Enhancing Accountability Through Effective AI Output Labelling Standards

Labelling AI-generated outputs enhances transparency and benefits copyright owners, service providers, and consumers by providing clear attribution and informed choices. Industry initiatives like Meta's 'AI info' label exemplify current efforts, but consistent regulation may be needed to ensure uniformity and effectiveness. Challenges include defining the threshold for labelling, scalability, and preventing manipulation or removal of labels. International developments, such as the EU AI Act's rules for machine-readable labels, offer valuable insights. The UK government will explore supporting research and development for robust labelling tools to promote transparency and facilitate copyright compliance.

Figure 5: AI Labelling, depicted.

Navigating Challenges in Regulating Digital Replicas

The use of AI to create "digital replicas" of actors and singers—realistic images, videos, and audio replicating their voice or appearance—has raised significant concerns within the creative industries. These replicas are often made without consent, using AI tools trained on an individual's likeness or voice. Existing protections in the UK, such as intellectual property rights, performers' rights under the CDPA 1988, and data protection laws, offer some control over the misuse of personal data or unauthorized reproductions. However, concerns remain about AI's ability to imitate performances or create synthetic reproductions, prompting calls for stronger legal protections, such as the introduction of personality rights. The government acknowledges these concerns and is exploring whether the current legal framework adequately protects individuals' control over their personality and likeness, while monitoring international developments, such as proposed federal laws in the US.

Policy Analysis and The Way Ahead

The UK government's Copyright and AI consultation is a critical moment for policy to strike a balance between technological innovation and the protection of the creative industries. Broadly, the proposal aims to resolve a complicated thicket of legal issues around AI model training: it would allow AI developers access to copyrighted works unless rights holders specifically opt out, addressing the considerable grey areas of uncertainty that still hang over AI development. The consultation accepts that rapid technological development no longer fits well with the existing copyright framework, putting the UK at risk of losing its edge in global AI innovation. An opt-out mechanism for copyright would give policymakers, who otherwise could not be sure how to protect intellectual property while keeping an environment conducive to technological improvement, a clearer starting point. Creative industries express grave concerns that unlicensed use of their works by new AI firms, justified by a notion of fair use protections, will undermine personal ownership.
AI companies counter that without broad access to training data, whether through licensing or exceptions, they cannot continue building sophisticated machine learning models. The intention of the consultation is to find common ground: a solution that simultaneously ensures AI's continued development and gives content creators some control and possible remuneration, which would help de-escalate the conflict between the two groups. Taking a longer-term view, the consultation represents the beginning of an attempt to get ahead of the curve in shaping copyright law, technology development, and IP issues in an increasingly AI-governed world.

References

UK Government. (2024, December 17). Copyright and artificial intelligence. GOV.UK. Retrieved December 25, 2024, from https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence

  • Decoding the Second Trump Admin AI & Law Approach 101: the Jan 23 Executive Order

On January 23, 2025, US President Donald Trump signed an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," marking a significant shift in U.S. AI policy. This order revokes and replaces key elements of the Biden administration's approach to AI governance. Here's a simple and comprehensive breakdown of the executive order's main provisions in this quick insight.

Core Framework and Purpose

President Trump signed the executive order "Removing Barriers to American Leadership in Artificial Intelligence" with the fundamental aim of maintaining U.S. leadership in AI innovation through free markets, research institutions, and entrepreneurial spirit. The order explicitly establishes a policy directive to enhance America's global AI dominance to promote human flourishing, economic competitiveness, and national security.

Definition of Artificial Intelligence Relied Upon

The order refers to a specific definition of artificial intelligence based on 15 USC § 9401(3):

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to— (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.

The definition specifies that AI systems must use both machine and human-based inputs to perform three key functions:

Perceive real and virtual environments
Abstract these perceptions into models through automated analysis
Use model inference to formulate options for information or action

Significance and Context of the AI Definition

This definition choice is notable for several reasons. The order maintains continuity with existing legal frameworks by using the established definition from the National Artificial Intelligence Initiative Act of 2020. However, the definition has limitations: it may not clearly encompass generative AI unless broadly interpreted, yet such a broad interpretation could include basic computer systems not typically considered AI. The definition's scope is particularly relevant because it serves as the foundation for all provisions and implementations under the new executive order, including the development of the mandated AI action plan within 180 days.

Implementation Structure

The order creates an implementation framework centered around key officials, including David Sacks as Special Advisor for AI and Crypto, working alongside the Assistant to the President for Science and Technology and the Assistant to the President for National Security Affairs. Within 180 days, these officials must develop a comprehensive AI action plan, coordinating with economic policy advisors, domestic policy advisors, the OMB Director, and relevant agency heads.

Regulatory Changes and Review Process

A significant aspect of the order is its systematic dismantling of previous AI governance structures. The Office of Management and Budget has been given 60 days to revise Memoranda M-24-10 and M-24-18, which were cornerstone AI policy documents under the Biden administration. Agency heads are directed to suspend, revise or rescind actions that conflict with the new policy direction.
Nik Marda, who formerly worked in the Office of Science and Technology Policy in the Biden administration, remarked: "That's not only bad policy given the many real bias and discrimination risks from AI, but it's also a significant departure from both the bipartisan law that mandates this guidance and Trump's own 2020 executive order on AI — both of which name civil rights as an objective for federal use of AI."

Now, for some context: the OMB Memoranda M-24-10 and M-24-18 were two critical policy documents issued in 2024 under the Biden administration that established the federal government's AI governance framework.

M-24-10: Core Governance Framework

Released in March 2024, Memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence," established fundamental AI governance structures across federal agencies. Key elements included:

Creation of Chief AI Officer (CAIO) positions in federal agencies
Establishment of the Chief AI Officer Council (CAIOC) for interagency coordination
Requirements for agencies to develop compliance plans for AI governance
A framework for identifying and managing "rights-impacting" and "safety-impacting" AI systems

Notably, M-24-10 did not cover common commercial products with relevant embedded AI functionality.

M-24-18: AI Acquisition Guidelines

Issued on October 3, 2024, Memorandum M-24-18, "Advancing the Responsible Acquisition of Artificial Intelligence in Government," focused specifically on federal AI procurement. The memorandum:

Set requirements for contracts awarded after March 23, 2025
Required agencies to disclose if AI systems were rights-impacting or safety-impacting
Mandated 72-hour reporting of serious AI incidents
Established contractor requirements for AI system documentation and training data transparency

Implementation Status of the Memoranda

Before Trump's executive order, agencies were actively implementing these memoranda. For example:

The USDA had developed and published its compliance plan in September 2024
Federal agencies were required to identify contracts involving rights-impacting AI by November 1, 2024
The framework was being integrated with national security considerations through the October 2024 National Security Memorandum on AI

Trump's January 23, 2025 executive order now requires OMB to revise both memoranda within 60 days. This most probably marks a significant shift away from Biden's emphasis on AI safety and governance toward reduced regulatory oversight.

Ideological Shift in AI Development

The order marks a stark departure from previous approaches by emphasizing AI systems "free from ideological bias or engineered social agendas". This represents a fundamental shift from the Biden administration's focus on AI safety and ethical considerations. However, the ideological shift may also signal that this administration is not interested in DEI-related and over-regulatory Responsible AI ideas, which were under consideration in the previous administration. It must be noted that the previous administration's obsession with having a few companies merely focus on AI innovation & AI safety was also counterproductive, despite the fact that the Biden administration's main executive order keenly focused on capacity building in relevant responsible AI measures for its own government personnel.

Energy Infrastructure Expansion

Trump declared a national energy emergency to fast-track the approval of energy projects critical for AI development.
During his address at the World Economic Forum in Davos, Switzerland, he emphasized that the U.S. must double its current energy capacity to meet the demands of AI technologies. This includes constructing power plants specifically dedicated to supporting AI data centres. These facilities will not be subject to traditional climate objectives, with Trump allowing them to use any fuel source, including coal as a backup. He highlighted the reliability of coal, describing it as "a great backup" due to its resilience against weather and other disruptions. The president also proposed a co-location model, where power plants are built directly adjacent to AI data centers. This approach bypasses reliance on the traditional electricity grid, which Trump described as "old and vulnerable." By connecting power plants directly to data centers, companies can ensure an uninterrupted energy supply for their operations.

The Stargate Initiative

In conjunction with these measures, Trump announced the Stargate Initiative, a joint venture between OpenAI, Oracle, and SoftBank, aimed at investing up to $500 billion over four years in building AI infrastructure. The initiative begins with a $100 billion investment in data centers, starting with a flagship facility in Texas. The project is expected to create over 100,000 jobs and significantly enhance U.S. computing capabilities for AI.

Industry Impact

The executive order also introduces sweeping changes aimed at reducing regulatory oversight of AI companies, which Trump has characterised as "unnecessarily burdensome" under Biden's administration.

Deregulation

Trump's order repealed Biden's 2023 executive order that mandated stricter oversight of AI development. Key provisions eliminated include:

Requirements for AI developers to submit safety testing data before deploying high-risk technologies.
Obligations for federal agencies to assess and mitigate potential harms caused by government use of AI tools.
Mandatory transparency measures for companies building powerful AI models.

By removing these guardrails, the Trump administration seeks to foster innovation by reducing compliance costs and accelerating deployment timelines. This deregulatory approach aligns with Trump's broader economic agenda of empowering private sector-led growth in technology industries.

What will happen to the AI Diffusion Export Control Rule?

Reevaluation of Licensing Requirements

The AI Diffusion Export Control Rule currently imposes global licensing requirements for exporting advanced computing integrated circuits (ICs) and AI model weights trained with more than 10^26 computational operations. These restrictions are designed to prevent U.S. adversaries, particularly China, from accessing cutting-edge AI technologies while allowing controlled exports to trusted allies under specific license exceptions. Trump's executive order, which emphasises reducing regulatory barriers and fostering U.S. innovation, may lead to:

Relaxation of Licensing Rules: The administration could ease licensing requirements for U.S. companies exporting AI technologies to middle-tier countries (e.g., India, Brazil) to boost international market competitiveness.
Focus on Deregulation: There is a strong likelihood that Trump will reduce compliance burdens for U.S. firms by eliminating or simplifying reporting and monitoring obligations tied to export licenses.
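For a sense of scale, the sketch below applies a commonly cited rule of thumb, that training a dense model takes roughly 6 × parameters × training tokens operations, to the 10^26 threshold mentioned above. The model sizes and token counts are hypothetical placeholders, not figures from the rule or the executive order.

```python
# Back-of-the-envelope comparison of hypothetical training runs against the
# 1e26-operation licensing threshold described above. The 6 * N * D rule of
# thumb (operations ~= 6 x parameters x training tokens) is a common
# approximation; the parameter and token counts are illustrative only.
THRESHOLD_OPS = 1e26

def approx_training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

examples = {
    "hypothetical 70B-parameter model, 15T tokens": approx_training_ops(70e9, 15e12),
    "hypothetical 1T-parameter model, 30T tokens": approx_training_ops(1e12, 30e12),
}

for name, ops in examples.items():
    flag = "above" if ops > THRESHOLD_OPS else "below"
    print(f"{name}: ~{ops:.2e} ops ({flag} the 1e26 threshold)")
```

Under this rough approximation, only frontier-scale training runs cross the threshold, which is why the rule mainly bites on the weights of the very largest models.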
Strategic Adjustments to Country Tiers

The Biden administration's framework divides countries into three tiers:

Top-tier allies (e.g., NATO members, Japan, South Korea) enjoy unrestricted access.
Middle-tier nations (e.g., India, Saudi Arabia) face quotas on computing power imports.
Adversarial nations (e.g., China, Russia) remain blocked from accessing advanced U.S. AI technologies.

Trump's transactional foreign policy approach suggests potential adjustments:

Reclassification of Countries: Middle-tier nations like India and Israel could be moved into the top tier as part of bilateral negotiations or geopolitical alliances.
Use as Diplomatic Leverage: Trump may leverage access to U.S. AI technologies as a bargaining chip in trade agreements or security partnerships.

Private Sector Benefits

The rollback of regulations has been welcomed by industry leaders who argue that excessive oversight stifles innovation and hinders competitiveness. For example, OpenAI CEO Sam Altman has praised the administration's focus on enabling rapid development of computing infrastructure through initiatives like Stargate. Trump's administration has framed these changes as essential for maintaining U.S. leadership in AI while countering international competitors like China. The emphasis on deregulation reflects a shift toward prioritizing economic competitiveness over ethical or safety considerations.

Conclusion

The January 23 executive order demonstrates President Trump's aggressive push for U.S. dominance in artificial intelligence by combining massive infrastructure investments with significant deregulation of the industry. While these measures promise economic growth and technological advancement, they come with potential risks related to environmental impact and reduced oversight. If you have any questions or need assistance regarding this policy or its implications, feel free to contact us at vligta@indicpacific.com.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Beyond the AI Garage: India's New Foundational Path for AI Innovation + Governance in 2025

This is quite a long read. India's artificial intelligence landscape stands at a pivotal moment, where critical decisions about model training capabilities and research directions will shape its technological future. The discourse was recently energised by Aravind Srinivas, CEO of Perplexity AI, who highlighted two crucial perspectives that challenge India's current AI trajectory.

The Wake-up Call

Figure 1: The two posts on X.com on Strategic Perspectives, by Aravind Srinivas.

Srinivas emphasises that India's AI community faces a critical choice: either develop model training capabilities or risk becoming perpetually dependent on others' models. His observation that "Indians cannot afford to ignore model training" stems from a deeper understanding of the AI value chain. The ability to train models represents not just technical capability, but technological sovereignty. A significant revelation comes from DeepSeek's recent achievement. Their success in training competitive models with just 2,048 GPUs challenges the widespread belief that model development requires astronomical resources. This demonstrates that, with strategic resource allocation and expertise, Indian organisations can realistically pursue model training initiatives. India's AI ecosystem currently focuses heavily on application development and use cases. While this approach has yielded short-term benefits, it potentially undermines long-term technological independence. The emphasis on building applications atop existing models, while important, shouldn't overshadow the need for fundamental research and development capabilities.

In short, through his posts, Srinivas highlights three key issues in the larger tech development and application layer debate in India:

Limited hardware infrastructure for AI model training
Concentration of model training expertise in select global companies
Over-reliance on foreign AI models and frameworks

This insight focuses on legal and policy perspectives around building the capabilities needed to innovate in core AI models, while also building use case capitals in India, including in Bengaluru and other places. In addition, this long insight covers recommendations to the Ministry of Electronics and Information Technology, Government of India on the Report on AI Governance Guidelines Development, in the concluding sections.

The Policy Imperative: Balancing Use Cases and Foundational AI Development

What Aravind Srinivas points out about India's AI development avenues is also backed by policy and industry realities. The recent repeal of the former Biden administration's Executive Order on Artificial Intelligence by the Trump administration hours ago demonstrated that the US Government's focus has pivoted to hard resource considerations around AI development, such as data centres, semiconductors, and talent. India has no choice but to pursue both ideas at the same time: building use case capitals in India, and focusing on foundational AI research alternatives.

Moving Beyond Use-Case Capitalism

India's current AI strategy has been heavily skewed toward application development—leveraging existing foundational models like those from OpenAI or Google to build domain-specific use cases. While this approach has yielded quick wins in sectors like healthcare, agriculture, and governance, it risks creating a long-term dependency on foreign technologies.
This "use-case capitalism" prioritises short-term gains over the strategic imperative of building indigenous capabilities.

Technological Dependence: Relying on pre-trained foundational models developed abroad limits India's ability to innovate independently and negotiate favourable terms in the global AI ecosystem.
Economic Vulnerability: By focusing on applications rather than foundational research, India risks being relegated to a secondary role in the AI value chain, capturing only a fraction of the economic value generated by AI technologies.
Missed Opportunities for Sovereignty: Foundational models are critical for ensuring control over data, algorithms, and intellectual property. Without them, India remains vulnerable to external control over critical AI infrastructure.

Building Indigenous Model Training Capabilities

The ability to train foundational models domestically is essential for achieving technological independence. Foundational models—large-scale pre-trained systems—are the backbone of generative AI applications. Building such capabilities in India requires addressing key gaps in infrastructure, talent, and data availability.

Key Challenges

Infrastructure Deficit: Training large-scale models requires significant computational resources (e.g., GPUs or TPUs). India's current infrastructure lags behind global leaders like the US and China. Initiatives like CDAC's AIRAWAT supercomputer are steps in the right direction but need scaling up.
Talent Shortage: While India has a large tech workforce (420,000 professionals), expertise in training large language models (LLMs) remains concentrated in a few institutions like the IITs and IISc. Collaboration with global experts and targeted upskilling programs are necessary to bridge this gap.
Data Limitations: High-quality datasets for Indian languages are scarce, limiting the ability to train effective multilingual models. Efforts like Bhashini have made progress but need expansion to include diverse domains such as agriculture, healthcare, and governance.

From Lack of Policy Clarity to AI Safety Diplomacy

The Indian government has positioned itself as a participant in the global conversation on artificial intelligence (AI) regulation, emphasising its leadership on equity issues relevant to the Global South while proposing governance frameworks for AI. However, these initiatives often appear inconsistent and lack coherence. For instance, in April 2023, the former Minister of State for Electronics & Information Technology, Rajeev Chandrasekhar, asserted that India would not regulate AI, aiming to foster a pro-innovation environment. Yet, by June of the same year, he shifted his stance, advocating for regulations to mitigate potential harms to users. India's approach to AI regulation is thereby fragmented and overreactive, with overlapping initiatives and strategies across various government entities.

Now, we have to understand two important issues here. The AI-related approaches and documents adopted by statutory bodies, regulators, constitutional authorities under Article 324 of the Indian Constitution and even non-tech ministries have some specificity and focus, as compared to MeitY. For example, the Election Commission of India came up with an advisory on content provenance for any synthetic content/deepfakes used during election campaigning.
MeitY, on the other hand, had been rushing its AI governance initiatives, thanks to the Advisory it published on March 1, 2024, which was replaced by a subsequent advisory in the second week of March of the same year. By October 2024, however, the Government of India had pivoted its approach towards AI regulation in two facets: (1) the Principal Scientific Adviser's Office took over major facets of AI regulation policy conundrums; and (2) MeitY narrowed down its goals for AI regulation to AI Safety by considering options to develop an Artificial Intelligence Safety Institute. Unlike many law firms, chambers and think tanks that might have been deceptive in their discourse on India's AI regulation and data protection landscape, the reality is simply that the Government of India keeps publishing key AI industry and policy/governance guidelines without showing any clear intent to regulate AI per se. There is no push towards over-regulation or regulatory capture in the field of artificial intelligence. In fact, the Government has taken a patient approach by stating that some recognition of key liability issues around certain mainstream AI applications (like deepfakes) can be addressed by tweaking existing criminal law instruments. Here's what was quoted from S Krishnan's statement at the Global AI Conclave of end-November 2024:

Deepfakes is not a regulatory issue, it is a technology issue. However we may need few tweaks in regulations, as opposed to a complete overhaul.

In fact, this statement by MeitY Secretary S Krishnan is truly worth appreciating:

We need to allow some maturity to come into the market so that the industry doesn't get spooked that the government is stepping in and doing all kinds of things. It has to be need based. We are currently working on a voluntary code of conduct through the AI Safety Institute.

Intellectual Property (IP) Challenges: Wrappers and Foundational Models

The reliance on "wrappers"—deliverables or applications built on top of existing foundational AI models—raises significant intellectual property (IP) concerns. These challenges are particularly pronounced in the Indian context, where businesses and end-users face risks associated with copyright, trade secrets, and patentability.

Copyright Issues with AI-Generated Content

AI-generated content, often used in wrappers, presents a fundamental challenge to traditional copyright frameworks. Lack of Ownership Clarity: Determining ownership of AI-generated content is contentious. For example, does the copyright belong to the developer of the foundational model, the user providing input prompts, or the organisation deploying the wrapper? This ambiguity undermines enforceability. Attribution Gaps: Foundational models often use vast datasets without proper attribution during training. This creates potential liabilities for businesses using wrappers built on these models if outputs closely resemble copyrighted material. These uncertainties make it difficult for Indian businesses to claim exclusive rights over wrapper-based deliverables, exposing them to potential legal disputes and economic risks.

Trade Secrets and Proprietary Model Risks

Trade secrets are critical for protecting proprietary algorithms, datasets, and other confidential information embedded within foundational models. However, wrappers built on these models face unique vulnerabilities: Reverse Engineering: Competitors can potentially reverse-engineer wrappers to uncover proprietary algorithms or techniques from the foundational models they rely on.
This compromises the confidentiality essential for trade secret protection. Data Security Threats : Foundational models often retain input data for further training or optimization. Wrappers that interface with such models risk exposing sensitive business data to unauthorised access or misuse. Algorithmic Biases : Biases embedded in foundational models can inadvertently compromise trade secret protections by revealing patterns or vulnerabilities during audits or legal disputes. Insider Threats : Employees with access to wrappers and underlying foundational models might misuse confidential information, especially in industries with high turnover rates. These risks are exacerbated by India's lack of a dedicated trade secret law, relying instead on common law principles and contractual agreements like non-disclosure agreements (NDAs). Patentability Challenges The patent system in India poses significant hurdles for innovations involving foundational models and their wrappers due to restrictive interpretations under Section 3(k) of the Patents Act, 1970. Key challenges include: Subject Matter Exclusions : Algorithms and computational methods integral to foundational models are excluded from patent eligibility unless they demonstrate a "technical effect." This limits protections for innovations derived from these models. Inventorship Dilemmas : Indian patent law requires inventors to be natural persons. This creates a legal vacuum when foundational models autonomously generate novel solutions integrated into wrappers. Global Disparities : While jurisdictions like the U.S. and EU have begun adapting their patent frameworks for AI-related inventions, India's outdated approach discourages investment in foundational model R&D. Economic Risks : Without clear patent protections, Indian businesses may struggle to attract funding for wrapper-based innovations that rely on foundational model advancements. These challenges highlight the systemic barriers preventing Indian innovators from fully leveraging foundational models while protecting their intellectual property. Negotiation Leverage Over Foundational Models Indian businesses relying on foreign-owned foundational models face additional risks tied to access rights and licensing terms: Restrictive Licensing Agreements : Multinational corporations (MNCs) controlling foundational models often impose restrictive terms that limit customization or repurposing by Indian businesses. Data Ownership Conflicts : Foundational models trained on Indian datasets may not grant reciprocal rights over outputs generated using those datasets, creating an asymmetry in value capture. Supply Chain Dependencies : Dependence on global digital value chains exposes Indian businesses to geopolitical risks, price hikes, or service disruptions that could impact access to critical AI infrastructure. These legal-policy issues are critical and cannot be ignored by the Government of India, nor by major Indian companies, emerging AI companies, as well as research labs & other market cum technical stakeholders. The Case for a CERN-Like Body for AI Research: Moving Beyond the "AI Garage" India's current positioning as an " AI Garage " for developing and emerging economies, as outlined in its AI strategy of 2018, emphasizes leveraging AI to solve practical, localized problems. While this approach has merit in addressing immediate societal challenges, it risks limiting India's role to that of an application developer rather than a leader in foundational AI research. 
To truly establish itself as a global AI powerhouse, India must advocate for and participate in the creation of a CERN-like body for artificial intelligence research. The Limitations of the "AI Garage" Approach The "AI Garage" concept, promoted by NITI Aayog, envisions India as a hub for scalable and inclusive AI solutions tailored to the needs of emerging economies. While this aligns with India's socio-economic priorities, it inherently focuses on downstream applications rather than upstream foundational research. This approach creates several limitations: Dependence on Foreign Models : By focusing on adapting existing foundational models (developed by global tech giants like OpenAI or Google), India remains dependent on external technologies and infrastructure. Missed Opportunities for Leadership : The lack of investment in foundational R&D prevents India from contributing to groundbreaking advancements in AI, relegating it to a secondary role in the global AI value chain. Limited Global Influence : Without leadership in foundational research, India's ability to shape global AI norms, standards, and governance frameworks is diminished. The Vision for a CERN-Like Body A CERN-like body for AI research offers an alternative vision—one that emphasizes international collaboration and foundational R&D. Gary Marcus , a prominent AI researcher and critic of current industry practices, has long advocated for such an institution since 2017 . He argues that many of AI's most pressing challenges—such as safety, ethics, and generalization—are too complex for individual labs or profit-driven corporations to address effectively. A collaborative body modeled after CERN (the European Organization for Nuclear Research) could tackle these challenges by pooling resources, expertise, and data across nations. Key features of such a body include: Interdisciplinary Collaboration : Bringing together experts from diverse fields such as computer science, neuroscience, ethics, and sociology to address multifaceted AI challenges. Open Research : Ensuring that research outputs, datasets, and foundational architectures are publicly accessible to promote transparency and equitable benefits. Focus on Public Good : Prioritising projects that address global challenges—such as climate change, healthcare disparities, and education gaps—rather than narrow commercial interests. Why India Needs to Lead or Participate India is uniquely positioned to champion the establishment of a CERN-like body for AI due to its growing digital economy, vast talent pool, and leadership in multilateral initiatives like the Global Partnership on Artificial Intelligence (GPAI). However, if the United States remains reluctant to pursue such an initiative on a multilateral basis, India must explore partnerships with other nations like the UAE or Singapore. Strategic Benefits for India : Anchor for Foundational Research : A CERN-like institution would provide India with access to cutting-edge research infrastructure and expertise. Trust-Based Partnerships : Collaborative research fosters trust among participating nations, creating opportunities for equitable technology sharing. Global Influence : By playing a central role in such an initiative, India can shape global AI governance frameworks and standards. Why UAE or Singapore Could Be Viable Partners : The UAE has already demonstrated its commitment to becoming an AI leader through initiatives like its National Artificial Intelligence Strategy 2031. 
Collaborating with India would align with its policy goals while providing access to India's talent pool. Singapore's focus on innovation-driven growth makes it another strong candidate for partnership. Its robust digital infrastructure complements India's strengths in data and software development.

The Need for Large-Scale Collaboration

As Gary Marcus has pointed out, current approaches to AI research are fragmented and often driven by secrecy and competition among private corporations. This model is ill-suited for addressing fundamental questions about AI safety, ethics, and generalization. A CERN-like body would enable large-scale collaboration that no single nation or corporation could achieve alone. For example:

AI Safety: Developing frameworks to ensure that advanced AI systems operate reliably and ethically across diverse contexts.
Generalization: Moving beyond narrow task-specific models toward systems capable of reasoning across domains.
Equitable Access: Ensuring that advancements in AI benefit all nations rather than being concentrated in a few tech hubs.

India's current "AI Garage" approach is insufficient if the country aims to transition from being a consumer of foundational models to a creator of them. Establishing or participating in a CERN-like body for AI research represents a transformative opportunity—not just for India but also for the broader Global South.

The Case Against an All-Comprehensive AI Regulation in India

The author of this insight has also developed India's first privately proposed AI regulation, aiact.in, and the experience and feedback from stakeholders has been that a sweeping, all-encompassing AI Act is not the right course of action for India at this moment. While the rapid advancements in AI demand regulatory attention, rushing into a comprehensive framework could lead to unintended consequences that stifle innovation and create more confusion than clarity.

Why an AI Act is Premature

India's AI ecosystem is still in its formative stages, with significant gaps in foundational research, infrastructure, and policy coherence. Introducing a broad AI Act now risks overregulating an industry that requires flexibility and room for growth. Moreover:

Second-Order Effects: Feedback from my work on AI regulation highlights how poorly designed laws can have ripple effects on innovation, investment, and adoption. For example, overly stringent rules could discourage startups and SMEs from experimenting with AI solutions.
Sectoral Complexity: The diverse applications of AI—ranging from healthcare to finance—demand sector-specific approaches rather than a one-size-fits-all regulation.

Recommendations on the Report on AI Governance Guidelines Development by IndiaAI of January 2025

Part 1: Feedback on the AI Governance Principles, as proposed and stated to align with OECD, NITI and NASSCOM's efforts.

Transparency
Principle: AI systems should be accompanied with meaningful information on their development, processes, capabilities & limitations, and should be interpretable and explainable, as appropriate. Users should know when they are dealing with AI.
Feedback: The focus on interpretability & explainability, with a sense of appropriate considerations, is appreciated.

Accountability
Principle: Developers and deployers should take responsibility for the functioning and outcomes of AI systems and for the respect of user rights, the rule of law, & the above principles. Mechanisms should be in place to clarify accountability.
Feedback: The principle remains clear, with a pivotal focus on the rule of law & the respect of user rights, and is therefore appreciated.

Safety, reliability & robustness
Principle: AI systems should be developed, deployed & used in a safe, reliable, and robust way so that they are resilient to risks, errors, or inconsistencies, the scope for misuse and inappropriate use is reduced, and unintended or unexpected adverse outcomes are identified and mitigated. AI systems should be regularly monitored to ensure that they operate in accordance with their specifications and perform their intended functions.
Feedback: The acknowledgement of the link between an AI system's intended functions (or intended purpose) and specifications, and its safety, reliability and robustness considerations, is appreciated.

Privacy & security
Principle: AI systems should be developed, deployed & used in compliance with applicable data protection laws and in ways that respect users' privacy. Mechanisms should be in place to ensure data quality, data integrity, and 'security-by-design'.
Feedback: The indirect reference, via the term 'security-by-design', is to the security safeguards under the National Data Management Office framework in the IndiaAI Expert Group Report of October 2023, and to the Digital Personal Data Protection Act, 2023, in spirit. This is also appreciated.

Fairness & non-discrimination
Principle: AI systems should be developed, deployed, & used in ways that are fair and inclusive to and for all and that do not discriminate or perpetuate biases or prejudices against, or preferences in favour of, individuals, communities, or groups.
Feedback: The principle's wording seems fine, but it could also have emphasised technical biases, and not just biases causing socio-economic or socio-technical disenfranchisement or partiality. Otherwise, the wording is pretty decent & normal.

Human-centred values & 'do no harm'
Principle: AI systems should be subject to human oversight, judgment, and intervention, as appropriate, to prevent undue reliance on AI systems, and address complex ethical dilemmas that such systems may encounter. Mechanisms should be in place to respect the rule of law and mitigate adverse outcomes on society.
Feedback: The reference to the phrase 'do no harm' is the cornerstone of this principle, in the context of what appropriate human oversight, judgment and intervention may be feasible. Since this is a human-centric AI principle, the reference to 'adverse outcomes on society' was expected, and is appreciated.

Inclusive & sustainable innovation
Principle: The development and deployment of AI systems should look to distribute the benefits of innovation equitably. AI systems should be used to pursue beneficial outcomes for all and to deliver on sustainable development goals.
Feedback: The distributive aspect of innovation benefits in the development and deployment of AI systems may be agreed upon as a generic international law principle to promote AI Safety Research, and evidence-based policy making & diplomacy.

Digital by design governance
Principle: The governance of AI systems should leverage digital technologies to rethink and re-engineer systems and processes for governance, regulation, and compliance to adopt appropriate technological and techno-legal measures, as may be necessary, to effectively operationalise these principles and to enable compliance with applicable law.
Feedback: This principle suggests that the infusion of AI governance by default should be driven by digital technologies by virtue of leverage.
It is therefore recommended that the reference to "technological and techno-legal measures" is not viewed as a solution-obsessed AI governance measure. Compliance technologies cannot replace technology law adjudication and human autonomy, which is why, despite the use of the word 'appropriate', the reference to the 'measures' seems solution-obsessed: obsessing over solution-centricity does not address issues of technology maintenance and efficiency by default. Perhaps the intent to promote 'leverage' and 'rethink' should be considered a consultative aspect of the principle, and not a fundamental one.

Part B: Feedback on Considerations to operationalise the principles

Examining AI systems using a lifecycle approach

While governance efforts should keep the lifecycle approach as an initial consideration, the ecosystem view of AI actors should take precedence over all considerations to operationalise the principles. The best reference to the lifecycle approach would be technical and management-centric, about best practices, and not about policy. As stated in the document, the lifecycle approach should merely be considered 'useful', and nothing else. The justification of 'Diffusion' as a stage is very unclear, since "examining the implications of multiple AI systems being widely deployed and used across multiple sectors and domains" should by default be specific to the intended purpose of AI systems and technologies. Some tools or applications might not have a multi-sectoral effect. Therefore, 'Diffusion', by virtue of its meaning, should be considered an add-on stage. In fact, the reference to Diffusion would only suit General Purpose AI systems, if we take the OECD and European Union definitions of General Purpose AI systems into consideration. This should either be an add-on phase, or should be taken in direct correlation with how the intended purpose of AI systems is relied upon. Otherwise, the reference to this phase may justify regulatory capture.

Taking an ecosystem-view of AI actors

The listing of actors as a matter of exemplification in the context of foundation models seems appropriate. However, in the phrase "distribution of responsibilities and liabilities", the term distribution should be replaced with apportionment, and the phrase "responsibilities and liabilities" should be replaced with accountability and responsibility only. Here is the expanded legal reasoning: Precision in Terminology: The term distribution implies an equal or arbitrary allocation of roles and duties, which may not accurately reflect the nuanced and context-specific nature of governance within an AI ecosystem. Apportionment, on the other hand, suggests a deliberate and proportional assignment of accountability and responsibility based on the roles, actions, and influence of each actor within the ecosystem. This distinction is critical in ensuring that governance frameworks are equitable and just. Legal Clarity on Liability: The inclusion of liabilities in the phrase "responsibilities and liabilities" introduces ambiguity, as liability is not something that can be arbitrarily allocated or negotiated among actors. Liability regimes are determined by statutory provisions and case law precedents, which rely on existing legal frameworks rather than ecosystem-specific governance agreements. Therefore, retaining accountability and responsibility—terms that are more flexible and operational within governance frameworks—avoids conflating governance mechanisms with legal adjudication.
Role of Statutory Law : Liability pertains to legal consequences that arise from breaches or harms, which are adjudicated based on statutory laws or judicial interpretations. Governance frameworks cannot override or preempt these legal determinations but can only clarify roles to ensure compliance with existing laws. By focusing on accountability and responsibility , the framework aligns itself with operational clarity without encroaching upon the domain of statutory liability. Ecosystem-Specific Context : Foundation models involve multiple actors—developers, deployers, users, regulators—each with distinct roles. Using terms like apportionment emphasizes a tailored approach where accountability is assigned proportionally based on each actor's influence over outcomes, rather than a blanket distribution that risks oversimplification. Avoiding Overreach : Governance frameworks should aim to clarify operational responsibilities without overstepping into areas governed by statutory liability regimes. This ensures that such frameworks remain practical tools for managing accountability while respecting the boundaries set by law. Leveraging technology for governance The conceptual approach outlined in the provided content, while ambitious and forward-looking, suffers from several critical weaknesses that undermine its practicality and coherence: Overly Broad and Linguistically Ambiguous: The conceptual approach is excessively verbose and broad, which dilutes its clarity and focus. The lack of precision in defining key terms, such as "techno-legal approach," creates interpretative ambiguity. This undermines its utility as a concrete governance framework and risks being dismissed as overly theoretical or impractical. Misplaced Assumption About Techno-Legal Governance: The assertion that a "complex ecosystem of AI models, systems, and actors" necessitates a techno-legal approach is unsubstantiated. The government must clarify whether this approach is a solution-obsessed strategy focused on leveraging compliance technology as a panacea for governance challenges. If so, this is deeply flawed because: It assumes technology alone can address systemic issues without adequately considering the socio-political and institutional capacities required for effective implementation. It risks prioritizing tools over outcomes, leading to a governance framework that is reactive rather than adaptive. Counterproductive Supplementation of Legal Regimes: Merely supplementing legal and regulatory regimes with "appropriate technology layers" is insufficient to oversee or promote growth in a rapidly expanding AI ecosystem. This mirrors the whole-of-government approach proposed under the Digital India Act (of March 2023) but lacks the necessary emphasis on capacity-building among legal and regulatory institutions. Without consistent capacity-building efforts: The techno-legal approach remains an abstract concept with no actionable pathway. It risks becoming verbal gibberish that fails to translate into meaningful governance outcomes. Practical Flaws in Risk Mitigation and Compliance Automation: While using technology to mitigate risks, scale, and automate compliance appears conceptually appealing, it is fraught with practical challenges: Risk estimation, scale estimation, and compliance objectives are not standardised across sectors or jurisdictions in India. Most government agencies, regulatory bodies, and judicial authorities lack the technical expertise to understand or implement automation effectively. 
There is no settled legal or technical framework to support AI safety measures in a techno-legal manner. This undermines evidence-based policymaking and contradicts the ecosystem view of AI actors by failing to account for their diverse roles and responsibilities. As a result, this aspect of the conceptual approach is far-fetched and impractical in India's current regulatory landscape. Unrealistic Assumptions About Liability Allocation: The proposal to use technology for allocating regulatory obligations and apportioning liability across the AI value chain is premature and lacks empirical support. According to the Ada Lovelace Institute, assigning responsibilities and liabilities in AI supply chains is particularly challenging due to the novelty, complexity, and rapid evolution of AI systems. To add further: No country has successfully implemented such a system, not even technologically advanced nations like the US, China, or UAE. The availability of sufficient data and content to process such allocations remains uncertain. Terms like "lightweight but gradually scalable" are misleading in the context of India's complex socio-economic fabric. Without robust evidence-based policymaking or AI safety research, this proposal risks being overly optimistic and disconnected from ground realities. Overreach in Proposing Tech Artefacts for Liability Chains: The suggestion to create tech artefacts akin to consent artefacts for establishing liability chains is problematic on multiple fronts: No jurisdiction has achieved such standardisation or automation of liability allocation. Automating or augmenting liability apportionment without clear standards violates principles of natural justice by potentially assigning liability arbitrarily. The idea of "spreading and distributing liability" among participants oversimplifies complex legal doctrines and undermines rule-of-law principles. Liability cannot be treated as a quantifiable commodity subject to technological augmentation. This aspect reflects an overreach that fails to consider the legal complexities involved in enforcing liability chains. Assumptions About Objective Technical Assistance: The proposal for extending "appropriate technical assistance" assumes that the techno-legal approach is inherently objective. However: The lack of standardisation in techniques undermines this assumption. Without clear benchmarks or criteria for what constitutes "appropriate" assistance, this aspect becomes self-referential and overly broad. While there may be merit in leveraging technical assistance for governance purposes, it requires acknowledgment of the need for standardisation before implementation.

Part C: Feedback on Gap Analysis

The need to enable effective compliance and enforcement of existing laws

On deepfakes, the report is accurate to mention that existing laws should be effectively enforced. In the case of deepfakes, while the justification to adopt technological measures is reasonable, the overreliance on watermarking techniques might not necessarily be feasible: Watermarking Vulnerabilities: As highlighted in the 2024 Seoul AI Summit Scientific Report, watermarks can be removed or tampered with by malicious actors, rendering them unreliable as a sole mechanism for deepfake detection. This limits their effectiveness in ensuring accountability across the lifecycle of AI-generated content.
Scalability Issues: Implementing immutable identities for all participants in a global AI ecosystem would require unprecedented levels of standardisation and cooperation. Such a system is unlikely to function effectively without robust international agreements, which currently remain fragmented.

Making deepfake detection methods open-source, and continually improving the human and technical forensic methods used to detect AI-generated content, whether multi-modal or not, should be the first step. This approach helps improve the methods relied upon for content provenance.

Cyber security

The reference to cybersecurity law obligations in the report is accurate. However, the report does not place enough emphasis on cybersecurity measures, which is disappointing.

Intellectual property rights

Training models on copyrighted data and liability in case of infringement: the report fails to provide any definitive understanding of the copyright aspects, and leaves them to mere questions, which is again disappointing.

AI led bias and discrimination

The wording on the 'notion of bias' is commendable for its clarity and precision in addressing the nuanced issue of bias in decision-making. By emphasizing that only biases that are legally or socially prohibited require protection, it avoids the unrealistic expectation of eliminating all forms of bias, which may not always be harmful or relevant.

Part D: Feedback on Recommendations

To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance

The proposed empowered mechanism of a Committee or a Group must take into consideration that, without enough capacity-building measures, a steering committee-like approach to enabling a whole-of-government approach will only be solution-obsessed and not helpful. Thus, unless the capacity-building mandate to anticipate, address and understand a limited form of AI governance for important priority areas, like incident response, liability issues, or accountability, is grounded in sufficiently settled legal, administrative and policy positions, the Committee/Group would not be productive or helpful enough to promote a whole-of-government approach.

To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes

The approach rightly notes that AI incidents extend beyond cybersecurity issues, encompassing discriminatory outcomes, system failures, and emergent behaviors. However, this broad scope risks making the database unwieldy without a clear taxonomy or prioritisation of incident types. Learning from the AIID, which encountered challenges in indexing 750+ incidents due to structural ambiguities and epistemic uncertainty, it is crucial to define clear categories and reporting triggers to avoid overwhelming the system with disparate data.
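To illustrate what a clear taxonomy and explicit reporting triggers could look like in practice, here is a minimal sketch of a structured incident record. The field names, categories and the mandatory/voluntary track are illustrative assumptions for discussion, not the schema of the proposed database or of the AIID; the track field anticipates the hybrid reporting models discussed next.

```python
# Illustrative sketch only: one way a structured AI incident record with a
# fixed taxonomy and an explicit reporting track could look. Field names and
# categories are assumptions for discussion, not an existing schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentCategory(Enum):
    DISCRIMINATORY_OUTCOME = "discriminatory_outcome"
    SYSTEM_FAILURE = "system_failure"
    SECURITY_BREACH = "security_breach"
    EMERGENT_BEHAVIOUR = "emergent_behaviour"

class ReportingTrack(Enum):
    MANDATORY_HIGH_RISK = "mandatory_high_risk"   # e.g. regulated sectors
    VOLUNTARY = "voluntary"

@dataclass
class IncidentRecord:
    reported_on: date
    sector: str
    category: IncidentCategory
    track: ReportingTrack
    summary: str
    affected_groups: list[str] = field(default_factory=list)
    confidential: bool = True   # sensitive details withheld from public view

# Example entry a deployer might file voluntarily:
record = IncidentRecord(
    reported_on=date(2025, 2, 1),
    sector="lending",
    category=IncidentCategory.DISCRIMINATORY_OUTCOME,
    track=ReportingTrack.VOLUNTARY,
    summary="Credit-scoring model systematically under-scored applicants from one region.",
    affected_groups=["applicants from region X"],
)
print(record.category.value, record.track.value)
```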
Encouraging voluntary reporting by private entities is a positive step but may face low participation due to concerns about confidentiality and reputational risks. The OECD emphasises that reporting systems must ensure secure handling of sensitive information while minimising transaction costs for contributors. The proposal could benefit from hybrid reporting models (mandatory for high-risk sectors and voluntary for others), as suggested by the OECD's multi-tiered approach. The proposal assumes that public sector organizations can effectively report incidents during initial stages. However, as noted in multiple studies, most organisations lack the technical expertise or resources to identify and document AI-specific failures comprehensively. Capacity-building initiatives for both public and private stakeholders should be prioritised before expecting meaningful contributions to the database. The proposal correctly identifies the need for an evidence base to inform governance initiatives. Incorporating automated analysis tools (e.g., taxonomy-based clustering or root cause analysis) could enhance the database's utility for identifying patterns and informing policy decisions.

Conclusion

While the pursuit of becoming a "use case capital" has merit in addressing immediate societal challenges and fostering innovation, it should not overshadow the imperative of developing indigenous AI capabilities.

The Dual Imperative

India must maintain its momentum in developing AI applications while simultaneously building capacity for foundational model training and research.

Immediate societal needs are addressed through practical AI applications
Long-term technological sovereignty is preserved through indigenous development
Research capabilities evolve beyond mere application development

Beyond the Hype

The discourse around AI in India must shift from sensationalism to substantive discussions about research, development, and ethical considerations. As emphasized in the Durgapur AI Principles, this includes:

Promoting evidence-based discussions over speculative claims
Fostering collaboration between academia, industry, and government
Emphasizing responsible innovation that considers societal impact
Building trust-based partnerships both domestically and internationally

The future of AI in India depends not on chasing headlines or following global trends blindly, but on cultivating a robust ecosystem that balances practical applications with fundamental research.
