
  • New Research Decodes NIST Adversarial ML Standards for Indian Enterprises

    Indic Pacific Legal Research LLP today released "NIST Adversarial Machine Learning Taxonomies: Decoded" (IPLR-IG-016, First Edition 2025), a comprehensive analysis of emerging AI cybersecurity threats and mitigation strategies. Download: https://indopacific.app/product/nist-adversarial-machine-learning-taxonomies-decoded-iplr-ig-016/

    📈 Research Highlights: The publication, authored by cybersecurity researchers Gargi Mundotia, Yashita Parashar, and Sneha Binu, translates the complex NIST AI 100-2 E2025 standards into actionable intelligence for Indian organizations across critical sectors.

    🏦 Sector-Specific Focus: The research addresses unique vulnerabilities in Banking & Financial Services (regulatory compliance in AI fraud detection), Telecommunications (deepfake attacks), and Digital Public Infrastructure (citizen data governance concerns).

    ⚡ Dual-Use Technology Challenge: As AI enhances cyber defense capabilities, the same technology enables sophisticated attacks, including adversarial AI and data poisoning, creating an evolving threat landscape that requires specialized countermeasures.

    "This research fills a critical gap by making international cybersecurity standards on adversarial machine learning practically applicable for Indian enterprises," said Abhivardhan, our Founder. "Understanding adversarial ML isn't optional anymore; it's essential for digital resilience." The publication advocates zero-trust frameworks, advanced encryption, and sector-specific approaches to contain AI-powered risks.
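    To make the adversarial ML threat concrete, here is a minimal, illustrative sketch of an FGSM-style evasion attack on a toy linear classifier. This example is our own (it is not drawn from the publication, and the model and numbers are invented for illustration); it shows how a small, bounded perturbation of an input can flip a model's decision.

```python
import numpy as np

# Toy linear classifier: score = w . x; a positive score means class 1.
w = np.array([1.0, -2.0, 0.5])          # model weights (illustrative)
x = np.array([0.4, -0.3, 1.2])          # an input the model classifies as 1
assert w @ x > 0                         # sanity check: class 1

# FGSM-style evasion: move x against the gradient of the score.
# For a linear model, the gradient of (w . x) with respect to x is just w,
# so the attack perturbs each feature by a fixed step in the direction -sign(w).
eps = 0.9
x_adv = x - eps * np.sign(w)             # bounded per-feature perturbation

print(w @ x)                             # positive: original classified as 1
print(w @ x_adv)                         # negative: adversarial input misclassified
```

    The same idea, scaled up to deep models and crafted inputs, is what the NIST taxonomy groups under evasion attacks; data poisoning instead corrupts the training set before the model is ever fit.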

  • Announcing our Partnership with Future Shift Labs

    We are delighted to announce our partnership with Future Shift Labs, in line with a memorandum of understanding agreed upon some time ago. We genuinely appreciate FSL's work in improving AI-enabled social sector deliverables and accessibility initiatives, such as Yashoda AI by the National Commission for Women - India, in which team FSL played a significant role. Team FSL also recently conducted a successful event on Indian DPI use cases and their implementation in African countries. We believe in the convictions of FSL's leadership, led by Nitin Narang and Sagar Vishnoi, and in the zeal and focused approach of the team, including Pranav Dwivedi, Bhairabi Kashyap Deka and Aastha Naresh Kohli. There is a long road ahead, and we hope to continue this partnership for the better.

  • Abhivardhan quoted on GOI Advisory against Pakistan-origin Content

    Our Founder and Managing Partner, Abhivardhan, was quoted by Hindustan Times on the Government of India's non-binding advisory of May 9, 2025 to OTT platforms, streaming platforms and digital intermediaries against carrying content originating from Pakistan. Read the complete feature at https://www.hindustantimes.com/india-news/no-pakistani-films-shows-or-music-on-indian-otts-govt-101746731485524.html

  • Examining the Perplexity Position on Antitrust Issues associated with Google

    Recently, Aravind Srinivas, the CEO of Perplexity.AI, announced that his company had been asked to testify in the antitrust proceedings brought by the US Department of Justice against Google. Perplexity AI's position on the Google antitrust case reveals surprising parallels with the Trump Administration's April 3, 2025 AI memorandums, though significant tensions exist in how their approach to intellectual property protection and competition would impact Indo-Pacific digital sovereignty. Let's understand this further in this brief input.

  • The Position of Second Trump Admin on AI and Intellectual Property Laws: The April 2025 Memorandums

    The Trump Administration recently released two significant memorandums, M-25-22 ("Driving Efficient Acquisition of Artificial Intelligence in Government") and M-25-21 ("Accelerating Federal Use of AI through Innovation, Governance, and Public Trust"), providing guidelines on federal use and procurement of artificial intelligence (AI). This policy brief analyzes these memorandums specifically regarding intellectual property (IP) protections in AI development, contrasting them with recent advocacy by certain tech companies for weakened copyright protections. From an Indo-Pacific perspective, these memorandums signal a continued U.S. commitment to IP protection while simultaneously promoting AI innovation in global technology competition, particularly with China. This position has significant implications for Indo-Pacific nations navigating their own AI governance frameworks and IP protection regimes, especially as they position themselves within the U.S.-China technological rivalry.

    The Call to Undo TRIPS and IP Laws using Fair Use Justifications
    Several prominent tech leaders and companies have actively lobbied for relaxed IP protections for AI development. OpenAI's March 13, 2025, submission to the White House Office of Science and Technology Policy explicitly called for fundamental changes to U.S. copyright law that would allow AI companies to use copyrighted works without permission or compensation to rightsholders.

  • US Government Accountability Office’s Testimony on Data Quality and AI, Explained

    The Government Accountability Office (GAO) testimony before the Joint Economic Committee highlights a critical challenge facing the federal government: how to leverage artificial intelligence to combat fraud and improper payments while ensuring data quality and workforce readiness. This analysis examines the intricate relationship between data quality, skilled personnel, and AI implementation in government settings, drawing insights from the GAO's extensive research and recommendations.

    The Magnitude of the Problem: Fraud and Improper Payments
    The federal government faces staggering financial losses due to fraud and improper payments. According to GAO estimates, fraud costs taxpayers between $233 billion and $521 billion annually, based on fiscal year 2018-2022 data. Since fiscal year 2003, cumulative improper payment estimates by executive branch agencies have totaled approximately $2.8 trillion. The scale of this problem demonstrates why innovative solutions like AI are being considered.

  • Indo-Pacific Research Principles on Use of Large Reasoning Models & 2 Years of IndoPacific.App

    The firm is delighted to announce the launch of the Indo-Pacific Research Principles on Large Reasoning Models, and to mark two successful years of the IndoPacific App, our legal and policy literature archive since 2023. The first section covers the firm's reasoning for introducing these principles on large reasoning models.

    The Research Principles: Their Purpose
    Large Reasoning Models (LRMs), as developed by AI companies across the globe, whose examples include Deeper Search & Deep Research by xAI (Grok), Deep Research by Perplexity, Gemini's Deep Research and OpenAI's own Deep Research tool, are supposed to mimic the reasoning abilities of human beings. The development of LRMs emerged from the recognition that standard LLMs often struggle with complex reasoning tasks despite their impressive language generation capabilities. Researchers observed, or rather supposed, that prompting LLMs to "think step by step" or to break down problems into smaller components often improved performance on mathematical, logical, and algorithmic challenges. Models like DeepSeek R1, Claude, and GPT-4 are frequently cited examples that incorporate reasoning-focused architectures or training methodologies. These models are trained to produce intermediate steps that are supposed to 'resemble' human reasoning processes before arriving at final answers. Such systems, including Claude 3.7, DeepSeek R1, GPT-4, and others, claim to exhibit reasoning capabilities that mimic human thought processes, often displaying their work through "reasoning traces" or step-by-step explanations. However, recent research has begun to question these claims and to identify significant limitations in how these models actually reason. While LRMs have shown promising performance on certain benchmarks, researchers have found substantial evidence suggesting that what appears to be reasoning may actually be sophisticated pattern matching rather than genuine logical processing.
    The Anthropomorphisation Trap
    A critical issue in evaluating LRMs is what researchers call the "anthropomorphisation trap": the tendency to interpret model outputs as reflecting human-like reasoning processes simply because they superficially resemble human thought patterns. The inclusion of phrases like "hmm...", "aha...", or "let me think step by step..." may create the impression of deliberative thinking, but these are more likely stylistic imitations of human reasoning patterns present in training data than evidence of actual reasoning. This trap is particularly concerning because it can lead researchers and users to overestimate the models' capabilities. When LRMs produce human-like reasoning traces that appear thoughtful and deliberate, we may incorrectly attribute sophisticated reasoning abilities to them that don't actually exist.

    Here is an overview of the limitations associated with large reasoning models:

    Lack of True Understanding: LRMs operate by predicting the next token based on patterns learned during training, but they fundamentally lack a deep understanding of the environment and concepts they discuss. This limitation becomes apparent in complex reasoning tasks that demand true comprehension rather than pattern recognition.

    Contextual and Planning Limitations: Although modern language models excel at grasping short contexts, they often struggle to maintain coherence over extended conversations or larger text segments. This can result in reasoning errors when the model must connect information from various parts of a dialogue or text. Additionally, LRMs frequently demonstrate an inability to perform effective planning for multi-step problems.

    Deductive vs. Inductive Reasoning: Research indicates that LRMs particularly struggle with deductive reasoning, which requires deriving specific conclusions from general principles with a high degree of certainty and logical consistency. Their probabilistic nature makes achieving true deductive closure difficult, creating significant limitations for applications requiring absolute certainty.

    A paper co-authored by Prof. Subbarao Kambhampati, entitled "A Systematic Evaluation of the Planning and Scheduling Abilities of the Reasoning Model o1", directly addresses critical themes from our earlier discussion of Large Reasoning Models (LRMs). Here are some quotes from this paper: "While o1-mini achieves 68% accuracy on IPC domains compared to GPT-4's 42%, its traces show non-monotonic plan construction patterns inconsistent with human problem-solving [...] At equivalent price points, iterative LLM refinement matches o1's performance, questioning the need for specialized LRM architectures. [...] Vendor claims about LRM capabilities appear disconnected from measurable reasoning improvements."

    The Indo-Pacific Research Principles on Use of Large Reasoning Models
    Based on the evidence we have collected and the insights received, Indic Pacific Legal Research proposes the following research principles on the use of large reasoning models.

    Principle 1: Emphasise Formal Verification. Train LRMs to produce verifiable reasoning traces, like A* dynamics or SoS, for rigorous evaluation.

    Principle 2: Be Cautious with Intermediate Traces. Recognise traces may be misleading; do not rely solely on them for trust or understanding.

    Principle 3: Avoid Anthropomorphisation. Focus on functional reasoning, not human-like traces, to prevent false confidence.

    Principle 4: Evaluate Both Process and Outcome. Assess both final answer accuracy and reasoning process validity in benchmarks.
    Principle 5: Transparency in Training Data. Be clear about training data, especially human-like traces, to understand model behaviour.

    Principle 6: Modular Design. Use modular components for flexibility in reasoning structures and strategies.

    Principle 7: Diverse Reasoning Structures. Experiment with chains, trees, and graphs for task suitability, balancing cost and effectiveness.

    Principle 8: Operator-Based Reasoning. Implement operators (generate, refine, prune) to manipulate and refine reasoning processes.

    Principle 9: Balanced Training. Use SFT and RL in two-phase training for foundation and refinement.

    Principle 10: Process-Based Evaluation. Evaluate the entire reasoning process for correctness and feedback, not just outcomes.

    Principle 11: Integration with Symbolic AI. Combine LRMs with constraint solvers or planning algorithms for enhanced reasoning.

    Principle 12: Interactive Reasoning. Design LRMs for environmental interaction, using feedback to refine reasoning in sequential tasks.

    Please note that all principles are purely consultative and have no binding value on the members of Indic Pacific Legal Research. We permit the use of these principles for strictly non-commercial purposes, provided that we are properly cited and referenced.

    IndoPacific.App Celebrates its Glorious 2 Years
    The IndoPacific App, launched under Abhivardhan's leadership in 2023, was a systemic reform undertaken at Indic Pacific Legal Research to better document our research publications and contributions. Through our partnership with the Indian Society of Artificial Intelligence and Law, ISAIL.IN's publications and documentation are also registered on the IndoPacific App, under the AiStandard.io Alliance and otherwise.
    As this archive of (mostly) legal and policy literature completes two years of existence under Indic Pacific's VLiGTA research & innovation division, we are glad to share some statistics, updated as of April 12, 2025. The figures were compiled with the help of Generative AI tools and then verified manually, so they are double-checked:

    We host publications and documentation from exactly 238 original authors (1 error removed). Our founder, Mr Abhivardhan's own publications constitute around 10% (approx.) of these.

    The number of publications on the IndoPacific App stands at 85; however, if we count chapters, contributory sections and articles within research collections, the number of research contributions stands at 304 unique contributions, a historic figure.

    Now, if we attribute these 304 unique contributions to each author (in the form of chapters in a research collection or handbook, a report, or a brief, for instance), the number of individual author credits will cross 300, as per our approximate estimates.

    This means something simple, honest and obvious. The IndoPacific.App, started by our Founder, Abhivardhan, is the biggest technology law archive of mostly Indian authors, with around 238 original authors documented in this archive and 304 unique published contributions featured. There is no law firm, consultancy, think tank or institution with such a large technology law archive built with independent support, and we are proud to have achieved this feat within the five-year existence of both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law.

    Thank you for becoming a part of this research community, whether through Indic Pacific Legal Research or the Indian Society of Artificial Intelligence and Law. It is our honour and duty to safeguard this archive for all; 99% of it (everything except the handbooks) is free. So don't wait: go and download some amazing works from the IndoPacific.app today.
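    Principle 10's process-based evaluation can be illustrated with a small sketch. This example is our own illustration, not part of the principles; the arithmetic trace format ("+ n = m" / "- n = m") and the checker are assumptions made for the example. The point is that a benchmark should score every intermediate step of a model's reasoning trace, not just the final answer.

```python
# Illustrative process-based evaluation (cf. Principle 10): verify every
# intermediate step of an arithmetic reasoning trace, not only the outcome.

def check_step(prev, step):
    """Return the step's claimed value if it follows from prev, else None."""
    op, n, _, m = step.split()          # e.g. "+ 3 = 8" -> ("+", "3", "=", "8")
    n, m = int(n), int(m)
    expected = prev + n if op == "+" else prev - n
    return m if m == expected else None

def evaluate(trace, start, expected_answer):
    value, valid_steps = start, 0
    for step in trace:
        result = check_step(value, step)
        if result is None:
            break                       # invalid step: stop crediting the trace
        value, valid_steps = result, valid_steps + 1
    return {
        "outcome_correct": valid_steps == len(trace) and value == expected_answer,
        "process_validity": valid_steps / len(trace),
    }

# A fully valid trace passes both checks...
good = ["+ 3 = 8", "+ 4 = 12", "+ 2 = 14"]
print(evaluate(good, start=5, expected_answer=14))   # outcome_correct: True

# ...while a trace with an invalid middle step fails process evaluation,
# even though its written final answer happens to match the target.
bad = ["+ 3 = 8", "+ 4 = 13", "+ 1 = 14"]
print(evaluate(bad, start=5, expected_answer=14))    # outcome_correct: False
```

    An outcome-only benchmark would have no way to distinguish the second trace from a sound one; scoring the process surfaces exactly the kind of pattern-matched, non-monotonic traces the research above describes.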

  • Crafting the Future: Gratitude to DNLU Jabalpur and the Pivotal Role of aiact.in in Shaping AI Governance

    At Indic Pacific Legal Research LLP, we are thrilled to extend our heartfelt gratitude to Dharmashastra National Law University (DNLU), Jabalpur, for their remarkable initiative in hosting a Legislative Drafting Competition centered on the "Artificial Intelligence (Development and Regulation) Act." It's a moment of pride and affirmation for us to witness a leading Indian law school engage with the critical intersection of AI and law, a space we've been passionately shaping since our inception in 2019. DNLU's efforts to nurture innovative legal thinking align beautifully with our mission to foster responsible AI development and governance in India, and we wish them resounding success in this endeavour.

    This moment also shines a light on the journey of aiact.in, our flagship project: the Artificial Intelligence (Development & Regulation) Act, 2023, spearheaded by our founder, Abhivardhan. Launched in November 2023 with no grand expectations, this privately proposed AI bill has grown into a pivotal resource, inspiring conversations like the one at DNLU. What began as a vision to craft an India-centric framework for AI regulation has, in just over a year, garnered appreciation from developers, judges, and technologists alike. Its strength lies in its feedback-driven approach, offering a practical, adaptable blueprint that stakeholders can refine and build upon. Seeing it spark a legislative drafting competition at DNLU is a testament to its relevance and potential to influence India's AI policy landscape.

    For us at Indic Pacific, aiact.in is more than a draft: it's a cornerstone of our commitment to pioneering technology law solutions with an Indo-Pacific lens. Despite early skepticism (including a dismissive encounter with a law firm that overlooked its originality), this initiative has proven its worth by amplifying Indian perspectives in a global discourse often dominated by Western frameworks. It embodies our ethos of salience, persistence, and adaptivity, driving dialogue among startups, MSMEs, and policymakers. Through our Research & Innovation Division, VLiGTA®, we've ensured aiact.in remains a dynamic tool, evolving with insights from industry and academia, as evidenced by its recognition in DNLU's competition.

    We're deeply grateful to DNLU Jabalpur for not only embracing this theme but also acknowledging our efforts in shaping AI governance. Your competition is a powerful step toward building a future where AI is harnessed responsibly, and we at Indic Pacific are honoured to be part of this narrative. Here's to continued collaboration and innovation; may DNLU's students and faculty inspire the next wave of legal brilliance!

  • The Version 5 of Artificial Intelligence (Development & Regulation) Act, 2023 is Launched

    Indic Pacific Legal Research, under the stewardship of Abhivardhan, proudly presents Version 5.0 of the Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.in). This iteration introduces pivotal amendments, with Section 23 leading as a freshly revised cornerstone, alongside updates to Section 7, Section 9, Section 13, Section 20A, and the newly introduced Section 24-A. These changes underscore Indic Pacific's commitment to ethical, transparent, and inclusive AI regulation in India.

    Section 23: Content Provenance and Identification (Key Highlight)
    Indic Pacific has reimagined Section 23 to set a gold standard for AI-generated content. The amendment mandates watermarking with detailed metadata (covering scraping methods, data origins, and licensing) while enforcing ethical data practices limited to consented or public sources. Developers of high-impact systems must secure insurance up to ₹50 crores, ensuring accountability and curbing misuse. This positions Indic Pacific at the forefront of content integrity.

    Section 7: Strengthened Risk Classification
    Indic Pacific refines AI risk tiers (Narrow, Medium, High, and Unintended), banning the latter and intensifying scrutiny of High-Risk systems. This amendment safeguards against unpredictable technologies, reinforcing public trust and security.

    Section 9: Oversight in Strategic Sectors
    High-risk AI in designated strategic sectors now falls under tailored regulations, with this Act prevailing over conflicting rules. Indic Pacific ensures robust governance where it matters most.

    Section 13: Enhanced National AI Ethics Code
    The updated ethics code prioritizes transparency, fairness, and human oversight, offering a clear roadmap for responsible AI. Indic Pacific champions ethical innovation with this refresh.

    Section 20A: Transparency in Public AI Initiatives
    Government and partnership AI projects must now disclose objectives, funding, and algorithms, backed by audits and public explanations. Indic Pacific drives accountability in the public sphere.

    Section 24-A: Right to AI Literacy Introduced
    A landmark addition, this section grants every individual access to AI literacy, covering concepts, impacts, and recourse options. Indic Pacific empowers citizens for an AI-driven future.

    These amendments, with Section 23 as the flagship, exemplify Indic Pacific's vision for a balanced, responsible AI ecosystem. Please give your feedback on this version of the bill at vligta@indicpacific.com.
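    As a rough illustration of the kind of provenance record Section 23's watermarking mandate implies (the field names below are our own assumptions for the sketch, not the bill's text), metadata covering scraping methods, data origins, and licensing could travel with AI-generated content like this:

```python
import hashlib
import json

# Hypothetical Section 23-style provenance record for one piece of
# AI-generated content; every field name here is illustrative only.
record = {
    "content_sha256": hashlib.sha256(b"<generated content bytes>").hexdigest(),
    "generator": "example-model-v1",           # hypothetical model identifier
    "scraping_method": "none",                 # e.g. "api", "crawler", "none"
    "data_origins": ["licensed-corpus-A"],     # consented or public sources only
    "license": "CC-BY-4.0",
    "generated_at": "2025-01-01T00:00:00Z",
}

# Serialised deterministically, the record can ride along as a JSON sidecar
# or be embedded in a watermark; hashing it gives a tamper-evidence digest.
payload = json.dumps(record, sort_keys=True)
digest = hashlib.sha256(payload.encode()).hexdigest()
print(digest[:16])
```

    The design point is simply that provenance metadata must be both machine-readable and tamper-evident for a watermarking mandate to be auditable.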

  • Decoding the AI Competency Triad for Public Officials: A Deep Dive into India’s Strategic Framework

    The Ministry of Electronics and Information Technology (MeitY) recently launched its AI Competency Framework, aiming to equip public officials with the skills needed to responsibly integrate artificial intelligence into governance processes. Our latest report, "Decoding the AI Competency Triad for Public Officials" (IPLR-IG-014), provides an in-depth analysis of this framework and its implications for India's public sector. This report is authored by Abhivardhan, Founder & Managing Partner, and interns at the Indian Society of Artificial Intelligence and Law: Yashita Parashar, Sneha Binu, and Gargi Mundotia.

    📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/

    Why This Framework Matters
    India is at a pivotal moment in its AI journey, with initiatives like the IndiaAI Mission positioning the country as a global leader in ethical and inclusive AI adoption. The competency framework identifies three core skill areas (behavioral, functional, and domain-specific) that are essential for public officials navigating the complexities of AI governance.

    Key Highlights from the Report
    Behavioral Competencies: Focuses on systems thinking, ethical governance, and innovative leadership to address complex societal challenges through AI.
    Functional Competencies: Covers practical skills like risk assessment, procurement oversight, and data governance necessary for effective implementation of AI projects.
    Domain-Specific Competencies: Tailored to high-impact sectors like healthcare, education, agriculture, urban mobility, and environmental management.

    Strategic Recommendations
    The report also provides actionable insights across three critical legal-policy dimensions:
    Data Policy Alignment: Ensuring privacy-by-design principles are embedded in every stage of AI deployment.
    Intellectual Property Management: Addressing gaps in knowledge sharing while safeguarding innovation rights.
    Accountability & Transparency: Establishing robust oversight mechanisms to ensure ethical use of AI technologies.

    Who Should Read This?
    This report is designed for policymakers, entrepreneurs, public officials, and citizens who want to understand how India is building capacity for responsible AI integration while addressing global challenges like bias mitigation and data privacy.

    📖 Access the full report here: https://indopacific.app/product/iplr-ig-014/

  • CIArb Guideline on the Use of AI in Arbitration (2025), Explained

    This insight is co-authored by Vishwam Jindal, Chief Executive Officer, WebNyay.

    The Chartered Institute of Arbitrators (CIArb) guideline on the use of AI in arbitration, published in 2025, provides a detailed framework for integrating AI into arbitration proceedings. This analysis covers every chapter, highlighting what each includes and identifying potential gaps. Below, we break down the key sections for clarity, followed by a detailed survey note for a deeper understanding.

    Chapter-by-Chapter Analysis
    Part I: Benefits and Risks: Details AI's advantages (e.g., legal research, data analysis) and risks (e.g., confidentiality, bias), providing a broad overview.
    Part II: General Recommendations: Advises on due diligence, risk-benefit analysis, legal compliance, and maintaining accountability for AI use.
    Part III: Parties' Use of AI: Covers arbitrators' powers to regulate AI, party autonomy in agreeing on its use, and disclosure requirements for transparency.
    Part IV: Use of AI by Arbitrators: Allows discretionary AI use for efficiency, prohibits decision delegation, and emphasizes transparency through party consultation.
    Appendices: Includes templates for AI use agreements and procedural orders, aiding practical implementation.
    Definitions: Provides clear definitions for terms like AI, hallucination, and tribunal, based on industry standards. On definitions, CIArb could have done better by adopting AI-related definitions from third-party technical bodies such as IEEE, Creative Commons or ISO, instead of IBM.

    Part I: Benefits and Risks
    Part I provides a balanced view of AI's potential benefits and risks in arbitration. The benefits section (1.1-1.10) highlights efficiency gains through legal research enhancement, data analysis capabilities, text generation assistance, evidence collection streamlining, and translation/transcription improvements.
    Notably, section 1.10 acknowledges AI's potential to remedy "inequality of arms" by providing affordable resources to under-resourced parties. The risks section (2.1-2.9) addresses significant concerns including confidentiality breaches when using third-party AI tools, data integrity and cybersecurity vulnerabilities, impartiality issues arising from algorithmic bias, due process risks, the "black box" problem of AI opacity, enforceability risks for arbitral awards, and environmental impacts of energy-intensive AI systems.

    Benefits
    AI offers transformative potential in arbitration by enhancing efficiency and quality across various stages of the process:
    Legal Research: AI-powered tools outperform traditional search engines with their adaptability and predictive capabilities, enabling faster and more precise research.
    Data Analysis: AI tools can process large datasets to identify patterns, correlations, and inconsistencies, aiding in case preparation.
    Text Generation: Tools can draft, summarize, and refine documents while ensuring grammatical accuracy and coherence.
    Translation and Transcription: AI facilitates multilingual arbitration by translating documents and transcribing hearings at lower costs.
    Case Analysis: Predictive analytics provide insights into case outcomes and procedural strategies.
    Evidence Collection: AI streamlines evidence gathering and verification, including detecting deep fakes or fabricated evidence.

    Risks
    Despite its advantages, AI introduces several risks:
    Confidentiality: Inputting sensitive data into third-party AI tools raises concerns about data security and misuse.
    Bias: Algorithmic bias can compromise impartiality if datasets or algorithms are flawed.
    Due Process: Over-reliance on AI tools may undermine parties' ability to present their cases fully.
    "Black Box" Problem: The opaque nature of some AI algorithms can hinder transparency and accountability.
    Enforceability: The use of banned or restricted AI tools in certain jurisdictions could jeopardise the validity of arbitral awards.

    Limitations in Part I
    Part I exhibits several significant limitations that undermine its comprehensiveness:
    Incomplete treatment of risks: While identifying key risk categories, the guidelines lack depth in addressing bias detection and mitigation strategies, transparency mechanisms, and AI explainability challenges.
    Gaps in benefits coverage: The incomplete presentation of sections 1.5-1.9 suggests missing analysis of potential benefits such as evidence gathering and authentication applications.
    Absence of risk assessment framework: No structured methodology is provided for quantitatively evaluating the likelihood and severity of identified risks, leaving arbitrators without clear guidance on risk prioritisation.
    Limited forward-looking analysis: The section focuses primarily on current AI capabilities without adequately addressing how rapidly evolving AI technologies might create new benefits or risks in the near future.

    Part II: General Recommendations
    The CIArb guidelines emphasise a cautious yet proactive approach to AI use:
    Due Diligence: Arbitrators and parties should thoroughly understand any AI tool's functionality, risks, and legal implications before using it.
    Balancing Benefits and Risks: Users must weigh efficiency gains against potential threats to due process, confidentiality, or fairness.
    Accountability: The use of AI should not diminish the responsibility or accountability of parties or arbitrators.

    In summary, Part II establishes broad principles for AI adoption in arbitration. It encourages participants to conduct reasonable inquiries about AI tools' technology and function (3.1), weigh benefits against risks (3.2), investigate applicable AI regulations (3.3), and maintain responsibility despite AI use (3.4).
    The section addresses critical issues like AI "hallucinations" (factually incorrect outputs) and prohibits arbitrators from delegating decision-making responsibilities to AI systems. Part II provides general advice on due diligence, risk assessment, legal compliance, and accountability for AI use. However, it has notable gaps:
    Lack of specific implementation guidance: The recommendations, such as conducting inquiries into AI tools (3.1), are broad and lack practical tools like checklists or frameworks. For example, Part II could include a step-by-step guide for evaluating AI tool security or a risk-benefit analysis template, aiding users in application.
    Insufficient technical implementation guidance: The recommendations remain abstract without providing specific technical protocols for different types of AI tools or use cases.
    No examples or hypothetical/real case studies: Without real-world scenarios, or even comparable hypothetical ones, such as how a party assessed an AI tool for confidentiality risks, practitioners may struggle to apply the recommendations. Hypothetical examples could bridge this gap, enhancing understanding.
    Absence of AI literacy standards: No baseline competency requirements are established for arbitration participants using AI tools, creating potential disparities in understanding and application.
    Missing protocols for AI transparency: The guidelines don't specify concrete mechanisms to make AI processes comprehensible to all parties, which is particularly important given the "black box" problem acknowledged elsewhere.
    No mechanism for periodic review: As with Part I, there is no provision for regularly updating the recommendations, such as a biennial review process, which is critical given AI's rapid evolution, like the advent of generative AI models.
    Lack of input from technology experts: The guideline does not indicate consultation with AI specialists or technologists, such as input from organizations like the IEEE (IEEE AI Ethics), which could ensure the recommendations reflect current industry practices and technological realities.

    Part III: Parties' Use of AI

    Arbitrators' Powers
    Arbitrators have broad authority to regulate parties' use of AI:
    They may issue procedural orders requiring disclosure of AI use if it impacts evidence or proceedings.
    Arbitrators can appoint experts to assess specific AI tools or their implications for a case.

    Party Autonomy
    Parties retain significant autonomy to agree on the permissible scope of AI use in arbitration. Arbitrators are encouraged to facilitate discussions about potential risks and benefits during case management conferences.

    Disclosure Requirements
    Parties may be required to disclose their use of AI tools to preserve procedural integrity. Non-compliance with disclosure obligations could lead to adverse inferences or cost penalties.

    In summary, Part III establishes a framework for regulating parties' AI use. Section 4 outlines arbitrators' powers to direct and regulate AI use, including appointing AI experts (4.2), preserving procedural integrity (4.3), requiring disclosure (4.4), and enforcing compliance (4.7). Section 5 respects party autonomy in AI decisions while encouraging proactive discussion of AI parameters. Sections 6 and 7 address rulings on AI admissibility and disclosure requirements respectively.

    Part III contains several problematic gaps:
    Ambiguity in private vs. procedural AI use: Section 4.5 states arbitrators cannot regulate private use unless it interferes with proceedings, but the boundary is vague. For example, using AI for internal strategy could blur the lines, and clearer definitions are needed.
- Inadequate dispute resolution mechanisms: Despite acknowledging potential disagreements over AI use, the guidelines lack specific procedures for efficiently resolving such disputes.

- Disclosure framework tensions: The optional nature of disclosure creates uncertainty about when transparency should prevail over party discretion, potentially undermining procedural fairness.

- Absence of cost allocation guidance: The guidelines do not address how costs related to AI tools or AI-related disputes should be allocated between parties.

- Limited cross-border regulatory guidance: Insufficient attention is paid to navigating conflicts between different jurisdictions' AI regulations, a critical issue in international arbitration.

- Potential Issues with Over-Reliance on Party Consent: The emphasis on party agreement (Section 5) might limit arbitrators' ability to act decisively if parties disagree, especially if one party lacks technical expertise, potentially undermining procedural integrity.

- Need for Detailed Criteria for Selecting AI Experts: While arbitrators can appoint AI experts, there are no specific criteria, such as qualifications in AI ethics or experience in arbitration, to ensure expert suitability and consistency.

Part IV: Use of AI by Arbitrators

Discretionary Use
Arbitrators may leverage AI tools to enhance efficiency but must ensure:
- Independent judgment is maintained.
- Tasks such as legal analysis or decision-making are not delegated entirely to AI.

Transparency
- Arbitrators are encouraged to consult parties before using any AI tool.
- If parties object, arbitrators should refrain from using that tool unless all concerns are addressed.

Responsibility
Regardless of AI involvement, arbitrators remain fully accountable for all decisions and awards issued.
In summary, Part IV addresses arbitrators' AI usage, establishing that arbitrators may employ AI to enhance efficiency (8.1) but must not relinquish decision-making authority (8.2), must verify AI outputs independently (8.3), and must assume full responsibility for awards regardless of AI assistance (8.4). Section 9 emphasises transparency through consultation with parties (9.1) and other tribunal members (9.2). Part IV exhibits several notable limitations:

- Inadequate technical implementation guidance: The section provides general principles without specific technical protocols for different AI applications in arbitrator decision-making.

- Missing AI literacy standards for arbitrators: No baseline competency requirements are established to ensure arbitrators sufficiently understand the AI tools they employ.

- Insufficient documentation requirements: The guidelines do not specify how arbitrators should document AI influence on their decision-making process in awards or orders.

- Absence of practical examples: Without concrete illustrations of appropriate versus inappropriate AI use by arbitrators, the guidance remains abstract and difficult to apply.

- Underdeveloped bias mitigation framework: While acknowledging potential confirmation bias, the guidelines lack specific strategies for detecting and counteracting such biases.

Appendix A: Agreement on the Use of AI in Arbitration

Appendix A provides a template agreement for parties to formalize AI use parameters, including sections on permitted AI tools, authorized uses, disclosure obligations, confidentiality preservation, and tribunal AI use.

Critical Deficiencies
Appendix A falls short in several areas:

- Excessive generality: The template may be too generic for complex or specialised AI applications, potentially failing to address the nuanced requirements of different arbitration contexts.
- Limited customisation guidance: No framework is provided for adapting the template to different types of arbitration or to the technological capabilities of the parties.

- Poor institutional integration: The template does not adequately address how it interfaces with the various institutional arbitration rules that may have their own technology provisions.

- Static nature: No provisions exist for updating the agreement as AI capabilities evolve during potentially lengthy proceedings.

- Insufficient technical validation mechanisms: The template lacks provisions for verifying technical compliance with agreed AI parameters.

Appendix B: Procedural Order on the Use of AI in Arbitration

Appendix B provides both short-form and long-form templates for arbitrators to issue procedural orders on AI use, introducing the concept of "High Risk AI Use" requiring mandatory disclosure, establishing procedural steps for transparency, and enabling parties to comment on proposed AI applications.

Critical Deficiencies
Appendix B contains several notable gaps:

- Technology adaptation limitations: The templates lack mechanisms for addressing emerging AI technologies that may develop during proceedings.

- Enforcement uncertainty: Limited guidance is provided on monitoring and enforcing compliance with AI-related orders.

- Insufficient technical validation: The templates do not establish concrete mechanisms for verifying adherence to AI usage restrictions.

- Absence of update protocols: No provisions exist for modifying orders as AI capabilities evolve during proceedings.

- Limited remedial options: Beyond adverse inferences and costs, few specific remedies are provided for addressing non-compliance.

Conclusion: Actionable Recommendations for Enhancement

The CIArb AI Guideline represents a significant first step toward establishing a framework for AI integration in arbitration, demonstrating awareness of both benefits and risks while respecting party autonomy.
However, to transform this preliminary framework into a robust and practical tool, several enhancements are necessary:

- Technical Implementation Framework: Develop supplementary technical guidelines with specific protocols for AI verification, validation, and explainability across different arbitration contexts and AI applications.

- AI Literacy Standards: Establish minimum competency requirements and educational resources for arbitrators and practitioners to ensure informed decision-making about AI tools.

- Adaptability Mechanisms: Implement a formal revision process with specific timelines for guideline updates to address rapidly evolving AI capabilities.

- Transparency Protocols: Create more detailed transparency requirements with clearer thresholds for mandatory disclosure, balancing flexibility with procedural fairness.

- Risk Assessment Methodology: Develop a quantitative framework for systematically evaluating AI risks in different arbitration contexts.

- Practical Examples Library: Supplement each section with concrete case studies illustrating appropriate and inappropriate AI applications in arbitration.

- Institutional Integration Guidance: Provide specific recommendations for aligning these guidelines with existing institutional arbitration rules.

  • Does Art Law in India Require Regulation? Maybe.

India, a country with an unparalleled artistic heritage, faces unique legal challenges in regulating its growing art market. While existing laws protect antiquities and govern intellectual property, the lack of a dedicated regulatory body for art has left gaps in dispute resolution, authentication, taxation, and trade compliance. Moreover, the rise of digital art and NFTs has introduced complexities that Indian law is yet to fully address. Without proper oversight, artists, collectors, and investors navigate a market that is often ambiguous and vulnerable to exploitation. This article highlights these pressing issues and the crucial role of arbitration, mediation, and regulatory reform in shaping a more structured and secure art ecosystem.

India's Art Industry at Loggerheads?

The Indian art industry remains largely unregulated, leading to problems such as forgery, misrepresentation, and unclear dispute resolution mechanisms. Without a formal authentication authority, buyers and collectors often struggle to verify the provenance of artworks, increasing the risk of fraud and duplicate works. This lack of oversight has allowed counterfeit artworks to flood the market, eroding trust and making transactions riskier for both buyers and sellers.

Adding to these concerns is the absence of regulated pricing and taxation policies, making it difficult for artists and buyers to navigate their legal obligations. Unlike industries that benefit from structured oversight, art transactions in India remain fragmented, leading to inconsistent taxation and compliance challenges. Many deals occur in informal markets, where tax evasion and opaque pricing structures prevail. Without a dedicated Art Regulatory Authority, buyers rely on informal networks for provenance verification, and disputes often escalate into prolonged litigation.
The lack of streamlined governance in the art market highlights the need for a structured regulatory framework that can ensure transparency, fairness, and accountability in all aspects of art trade and ownership. In India, art arbitration involves an interplay between intellectual property rights and arbitration law. Under the Arbitration and Conciliation Act, awards are unenforceable if they arise out of a non-arbitrable dispute. Art disputes involve issues of ownership, authenticity, copyright infringement, succession, and testamentary matters, and are therefore often contested as being non-arbitrable.

Art disputes often involve complex issues, including authorship claims, forgery allegations, and breaches of contractual terms. Given the time-consuming nature of traditional litigation, arbitration and mediation have become preferred modes of dispute resolution in the global art market. These mechanisms provide a faster, more cost-effective, and confidential approach to resolving conflicts without jeopardizing artistic or commercial relationships.

Mediation allows parties to reach a mutually acceptable settlement while preserving professional relationships. This is particularly useful in cases involving artist-gallery disputes, copyright infringement, and ownership claims. A mediated resolution keeps creative partnerships intact, preventing long legal battles from hindering artistic growth. Arbitration, on the other hand, offers confidential, specialised, and enforceable decisions, making it ideal for high-value transactions. Art-related disputes often involve international parties, and arbitration provides a neutral forum for resolution. Institutions such as the Delhi International Arbitration Centre (DIAC) and the Mumbai Centre for International Arbitration (MCIA) have begun handling art-related disputes, yet India still lacks dedicated arbitral rules for art transactions.
By integrating alternative dispute resolution mechanisms into the art industry, India can ensure faster dispute resolution and stronger legal safeguards for artists and collectors.

With the rise of blockchain technology, digital art and NFTs (Non-Fungible Tokens) have opened new avenues for artists to monetise their work. However, Indian law remains silent on key aspects, leading to challenges in ownership rights, resale royalties, and tax implications. One of the biggest concerns is ownership: who holds the copyright for an NFT, the artist or the buyer? Traditional art markets recognize artists' rights to their works, but in the digital space the legal standing of NFT ownership remains unregulated. Moreover, there is ambiguity surrounding resale royalties, as artists often receive no compensation when their NFT is resold on the secondary market. In the absence of clear legal provisions, artists are often at the mercy of marketplace policies.

Tax treatment also remains unsettled. Are NFTs classified as goods, securities, or digital assets under Indian law? The lack of a clear classification creates taxation challenges, leaving buyers and sellers in a legal gray area. Without a defined legal framework, NFT transactions could potentially fall under multiple tax regimes, leading to confusion and unintended liabilities. The lack of regulation has also led to instances of digital art theft, plagiarism, and unauthorized commercial use, leaving artists vulnerable. The rise of AI-generated art and digital manipulation further complicates the legal landscape.

The international art trade is heavily regulated, and India has multiple laws governing the import and export of artworks. However, enforcement gaps have led to a thriving underground market in which valuable artifacts bypass legal scrutiny. The Foreign Exchange Management Act (FEMA), 1999 governs cross-border transactions.
Restrictions on foreign direct investment (FDI) in the art sector limit global collaborations, while compliance with Reserve Bank of India (RBI) regulations is mandatory. The Goods and Services Tax (GST) applies to artworks: original paintings and sculptures attract 12% GST, while prints and reproductions are taxed at 18% (Ministry of Finance, Government of India). High taxes encourage informal trade and underreporting, undermining transparency. The Consumer Protection Act, 2019 protects buyers from misrepresentation and fraud, particularly in online sales (Department of Consumer Affairs, India); however, the lack of a formal certification authority makes enforcement difficult. The Customs Tariff Act, 1975 governs import duties and requires special permits for antique exports (Central Board of Indirect Taxes and Customs). Stronger inter-agency collaboration is needed to curb illegal art trafficking and reclaim stolen heritage.

Conclusion

Art law in India is at a crossroads, requiring urgent regulatory intervention to balance cultural preservation with modern commercial needs. By establishing a dedicated regulatory body, modernizing legal frameworks, and integrating alternative dispute resolution mechanisms, India can create a more structured and globally competitive art market.

bottom of page