Search Results
- Understanding CERT-In's AIBOM: A Cybersecurity Tool, Not a New AI Regulation
Clearing the confusion around India's Artificial Intelligence Bill of Materials before the regulatory panic sets in.

What's All the Fuss About?

Let's take a step back and understand what CERT-In's Artificial Intelligence Bill of Materials (AIBOM) actually is from a cybersecurity perspective.
- Announcing our Partnership with Governance Consulting Group to Advance Tech Governance
Indic Pacific Legal Research, a premier technology law consultancy specialising in AI governance, today announced a strategic partnership with Governance Consulting Group (GCG), a leading advisory firm in corporate and policy governance. This collaboration will provide integrated solutions to help organisations navigate the complex regulatory and ethical landscape of emerging technologies. The partnership combines Indic Pacific Legal Research's research-backed expertise in AI law, data policy, and intellectual property with GCG's extensive experience in risk management, regulatory compliance, and corporate governance frameworks. The joint venture is poised to address the growing demand for holistic advisory services that align technological innovation with sound governance principles.

About Governance Consulting Group

Governance Consulting Group (GCG) is a business consulting firm that works at the intersection of enterprise strategy, public policy, and innovation. We offer business advisory, policy advisory, and program advisory services across high-impact sectors shaping the future economy. Through theme-based research and strategic foresight, we help business leaders, startups, and governments make informed decisions, build resilient systems, and implement innovation in tandem with long-term goals. From green manufacturing and AI to agri-tech, defense, space, and digital media, GCG supports organisations ready to lead, adapt, and grow in a fast-changing world.

About Indic Pacific Legal Research

Established in 2019, Indic Pacific Legal Research is an India-centric legal consultancy that delivers legal and policy solutions in technology and global governance. With a focus on artificial intelligence, intellectual property, and corporate innovation, the firm provides research-backed consulting, training, and publications to a diverse range of clients, including corporations, governments, and startups.
- AI Policy Aspects of the Karnataka Gig Workers (Social Security and Welfare) Ordinance, 2025
The Governor of the State of Karnataka has promulgated an ordinance entitled the Karnataka Gig Workers (Social Security and Welfare) Ordinance, 2025. The ordinance, by design, addresses labour law and employment-related health and safety issues associated with the ever-growing demographic of gig workers in the State of Karnataka. While the ordinance focuses on core labour law and policy issues, since gig workers may not receive the same benefits as "employees", its contents also raise AI policy questions that merit analysis. Since the ordinance appears to be in force (post-promulgation), this explainer addresses its core AI policy and industry aspects; being both comprehensive and unique, it is perhaps the first AI policy measure adopted by any Indian State at a legislative level.
- Indic Pacific Legal Research Launches India's First Global AI Inventorship Handbook, Setting New Standards for AI Patent Law Understanding
A unique handbook addresses critical questions around AI's role in invention and patent law, offering accessible insights for legal professionals, technologists, and policymakers.

Lucknow, India – June 20, 2025 – Indic Pacific Legal Research LLP, an India-based emerging AI-focused technology law consultancy, today announced the release of The Global AI Inventorship Handbook, First Edition (RHB-AI-INVENT-001-2025), India's first comprehensive guide to understanding AI inventorship in patent law. The groundbreaking handbook, developed through VLA.Digital, the Research & Innovation Division of Indic Pacific Legal Research, is led by distinguished legal experts Bhavana J Sekhar and Kailash Chauhan, Chair of the AI Development Committee at the Indian Society of Artificial Intelligence and Law (ISAIL). The publication features an insightful foreword by Vivek Doulatani and is now available exclusively through the IndoPacific App platform.

Addressing Critical Legal Questions in the AI Era

The handbook tackles one of the most pressing questions in contemporary intellectual property law: whether artificial intelligence can qualify as an inventor or significantly contribute to the invention process. As AI systems become increasingly sophisticated, the intersection of AI technology and patent law presents complex challenges that require nuanced understanding and clear guidance.

Read the Handbook at https://indopacific.app/product/the-global-ai-inventorship-handbook-first-edition-rhb-ai-invent-001-2025/
- The Illusion of Thinking: Apple's Groundbreaking Research Exposes Critical Limitations in AI Reasoning Models
Apple's recent research paper, "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity", has sent shockwaves through the artificial intelligence community, fundamentally challenging the prevailing narrative around Large Reasoning Models (LRMs) and their capacity for genuine reasoning. The study, led by senior researcher Mehrdad Farajtabar and his team, presents compelling evidence that current reasoning models fail catastrophically when faced with problems beyond a certain complexity threshold, raising profound questions about the path toward artificial general intelligence (AGI). The study focused on variants of classic algorithmic puzzles, including the Tower of Hanoi, which serves as an ideal test case because it requires precise algorithmic execution while allowing researchers to systematically increase complexity. This approach enabled analysis of not only final answers but also the internal reasoning traces, providing unprecedented insight into how LRMs actually "think".
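To make concrete why the Tower of Hanoi lets complexity be scaled systematically, here is a minimal Python sketch (our own illustration of the classic recursive solution, not Apple's actual evaluation harness): solving n discs requires exactly 2^n - 1 moves, so each added disc doubles the difficulty while the solution procedure stays the same.

```python
# A minimal Tower of Hanoi sketch (the classic recursive solution; purely
# illustrative, not Apple's test harness). Solving n discs takes exactly
# 2**n - 1 moves, so the disc count acts as a precise "complexity dial".

def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the full move sequence for n discs from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 discs on the spare peg
    moves.append((src, dst))             # move the largest disc directly
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 discs on top of it
    return moves

for n in range(1, 11):
    assert len(hanoi(n)) == 2**n - 1     # move count doubles with each disc
    print(f"{n} discs -> {len(hanoi(n))} moves")
```

Because the correct move sequence is fully determined, researchers can check every intermediate step of a model's reasoning trace against it, which is what makes the puzzle useful for studying how performance degrades as complexity rises.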
- New Research Decodes NIST Adversarial ML Standards for Indian Enterprises
Indic Pacific Legal Research LLP today released "NIST Adversarial Machine Learning Taxonomies: Decoded" (IPLR-IG-016, First Edition 2025), a comprehensive analysis of emerging AI cybersecurity threats and mitigation strategies.

📈 Research Highlights: The publication, authored by cybersecurity researchers Gargi Mundotia, Yashita Parashar, and Sneha Binu, translates the complex NIST AI 100-2 E2025 standards into actionable intelligence for Indian organizations across critical sectors.

🏦 Sector-Specific Focus: The research addresses unique vulnerabilities in Banking & Financial Services (regulatory compliance in AI fraud detection), Telecommunications (deepfake attacks), and Digital Public Infrastructure (citizen data governance concerns).

⚡ Dual-Use Technology Challenge: As AI enhances cyber defense capabilities, the same technology enables sophisticated attacks, including adversarial AI and data poisoning, creating an evolving threat landscape that requires specialized countermeasures.

"This research fills a critical gap by making international cybersecurity standards around adversarial machine learning practically applicable for Indian enterprises," said Abhivardhan, our Founder. "Understanding adversarial ML isn't optional anymore - it's essential for digital resilience."

The publication advocates for zero-trust frameworks, advanced encryption, and sector-specific approaches to contain AI-powered risks.

Download: https://indopacific.app/product/nist-adversarial-machine-learning-taxonomies-decoded-iplr-ig-016/
- Announcing our Partnership with Future Shift Labs
We are delighted to announce our partnership with Future Shift Labs, in line with a memorandum of understanding mutually agreed upon some time ago. We genuinely appreciate FSL's good work in improving AI-enabled social sector deliverables and accessibility initiatives, such as Yashoda AI by the National Commission for Women - India, in which team FSL played a major role. Team FSL also recently conducted a successful event on Indian DPI use cases and their implementation in African countries. We believe in the convictions of FSL's leadership under Nitin Narang and Sagar Vishnoi, and in the team members' zeal and focused approach, including Pranav Dwivedi, Bhairabi Kashyap Deka and Aastha Naresh Kohli. It is a long road ahead, and we hope to continue this partnership for the better.
- Abhivardhan quoted on GOI Advisory against Pakistan-origin Content
Our Founder and Managing Partner, Abhivardhan, was quoted by Hindustan Times on the Government of India's non-binding advisory of May 9, 2025, directing OTT platforms, streaming platforms and digital intermediaries against carrying content originating from Pakistan. Read the complete feature at https://www.hindustantimes.com/india-news/no-pakistani-films-shows-or-music-on-indian-otts-govt-101746731485524.html
- Examining the Perplexity Position on Antitrust Issues associated with Google
Recently, Aravind Srinivas, the CEO of Perplexity.AI, announced that his company was asked to testify before the United States Congress on the recent antitrust issues raised by the US FTC against Google. Perplexity AI's position on the Google antitrust case reveals surprising parallels with the Trump Administration's April 3, 2025 AI memorandums, though significant tensions exist in how its approach to intellectual property protection and competition would impact Indo-Pacific digital sovereignty. Let's understand this further in this brief note.
- The Position of Second Trump Admin on AI and Intellectual Property Laws: The April 2025 Memorandums
The Trump Administration recently released two significant memorandums—M-25-22 ("Driving Efficient Acquisition of Artificial Intelligence in Government") and M-25-21 ("Accelerating Federal Use of AI through Innovation, Governance, and Public Trust")—providing guidelines on federal use and procurement of artificial intelligence (AI). This policy brief analyzes these memorandums specifically regarding intellectual property (IP) protections in AI development, contrasting them with recent advocacy by certain tech companies for weakened copyright protections. From an Indo-Pacific perspective, these memorandums signal a continued U.S. commitment to IP protection while simultaneously promoting AI innovation in global technology competition, particularly with China. This position has significant implications for Indo-Pacific nations navigating their own AI governance frameworks and IP protection regimes, especially as they position themselves within the U.S.-China technological rivalry.

The Call to Undo TRIPS and IP Laws using Fair Use Justifications

Several prominent tech leaders and companies have actively lobbied for relaxed IP protections for AI development. OpenAI's March 13, 2025, submission to the White House Office of Science and Technology Policy explicitly called for fundamental changes to U.S. copyright law that would allow AI companies to use copyrighted works without permission or compensation to rightsholders.
- US Government Accountability Office’s Testimony on Data Quality and AI, Explained
The Government Accountability Office (GAO) testimony before the Joint Economic Committee highlights a critical challenge facing the federal government: how to leverage artificial intelligence to combat fraud and improper payments while ensuring data quality and workforce readiness. This analysis examines the intricate relationship between data quality, skilled personnel, and AI implementation in government settings, drawing insights from the GAO's extensive research and recommendations.

The Magnitude of the Problem: Fraud and Improper Payments

The federal government faces staggering financial losses due to fraud and improper payments. According to GAO estimates, fraud costs taxpayers between $233 billion and $521 billion annually, based on fiscal year 2018-2022 data. Since fiscal year 2003, cumulative improper payment estimates by executive branch agencies have totaled approximately $2.8 trillion. The scale of this problem demonstrates why innovative solutions like AI are being considered.
- Indo-Pacific Research Principles on Use of Large Reasoning Models & 2 Years of IndoPacific.App
The firm is delighted to announce the launch of the Indo-Pacific Research Principles on Large Reasoning Models, and to mark two successful years of the IndoPacific App, our legal and policy literature archive since 2023. The first section covers the firm's reasoning for introducing these principles on large reasoning models.

The Research Principles: Their Purpose

Large Reasoning Models (LRMs), as developed by AI companies across the globe, with examples including Deeper Search & Deep Research by xAI (Grok), Deep Research by Perplexity, Gemini's Deep Research, and OpenAI's own Deep Research tool, are supposed to mimic the reasoning abilities of human beings. The development of LRMs emerged from the recognition that standard LLMs often struggle with complex reasoning tasks despite their impressive language generation capabilities. Researchers observed, or rather supposed, that prompting LLMs to "think step by step" or to break down problems into smaller components often improved performance on mathematical, logical, and algorithmic challenges. Models like DeepSeek R1, Claude 3.7, and GPT-4 are frequently cited examples that incorporate reasoning-focused architectures or training methodologies. These models are trained to produce intermediate steps that are supposed to 'resemble' human reasoning processes before arriving at final answers, often displaying their work through "reasoning traces" or step-by-step explanations.

However, recent research has begun to question these claims and identify significant limitations in how these models actually reason. While LRMs have shown improved performance on certain benchmarks, researchers have found substantial evidence suggesting that what appears to be reasoning may actually be sophisticated pattern matching rather than genuine logical processing.

The Anthropomorphisation Trap

A critical issue in evaluating LRMs is what researchers call the "anthropomorphisation trap": the tendency to interpret model outputs as reflecting human-like reasoning processes simply because they superficially resemble human thought patterns. The inclusion of phrases like "hmm...", "aha...", or "let me think step by step..." may create the impression of deliberative thinking, but these are more likely stylistic imitations of human reasoning patterns present in training data rather than evidence of actual reasoning. This trap is particularly concerning because it can lead researchers and users to overestimate the models' capabilities. When LRMs produce human-like reasoning traces that appear thoughtful and deliberate, we may incorrectly attribute to them sophisticated reasoning abilities that don't actually exist.

Here is an overview of the limitations associated with large reasoning models:
- Lack of True Understanding: LRMs operate by predicting the next token based on patterns learned during training, but they fundamentally lack a deep understanding of the environment and concepts they discuss. This limitation becomes apparent in complex reasoning tasks that demand true comprehension rather than pattern recognition.
- Contextual and Planning Limitations: Although modern language models excel at grasping short contexts, they often struggle to maintain coherence over extended conversations or larger text segments. This can result in reasoning errors when the model must connect information from various parts of a dialogue or text. Additionally, LRMs frequently demonstrate an inability to perform effective planning for multi-step problems.
- Deductive vs. Inductive Reasoning: Research indicates that LRMs particularly struggle with deductive reasoning, which requires deriving specific conclusions from general principles with a high degree of certainty and logical consistency. Their probabilistic nature makes achieving true deductive closure difficult, creating significant limitations for applications requiring absolute certainty.

The paper co-authored by Prof. Subbarao Kambhampati, entitled "A Systematic Evaluation of the Planning and Scheduling Abilities of the Reasoning Model o1", directly addresses critical themes from this discussion of Large Reasoning Models. Here are some quotes from the paper: "While o1-mini achieves 68% accuracy on IPC domains compared to GPT-4's 42%, its traces show non-monotonic plan construction patterns inconsistent with human problem-solving [...] At equivalent price points, iterative LLM refinement matches o1's performance, questioning the need for specialized LRM architectures. [...] Vendor claims about LRM capabilities appear disconnected from measurable reasoning improvements."

The Indo-Pacific Research Principles on Use of Large Reasoning Models

Based on the evidence we have collected and the insights received, Indic Pacific Legal Research proposes the following research principles on the use of large reasoning models:
- Principle 1: Emphasise Formal Verification. Train LRMs to produce verifiable reasoning traces, like A* dynamics or SoS, for rigorous evaluation.
- Principle 2: Be Cautious with Intermediate Traces. Recognise that traces may be misleading; do not rely solely on them for trust or understanding.
- Principle 3: Avoid Anthropomorphisation. Focus on functional reasoning, not human-like traces, to prevent false confidence.
- Principle 4: Evaluate Both Process and Outcome. Assess both final answer accuracy and reasoning process validity in benchmarks.
- Principle 5: Transparency in Training Data. Be clear about training data, especially human-like traces, to understand model behaviour.
- Principle 6: Modular Design. Use modular components for flexibility in reasoning structures and strategies.
- Principle 7: Diverse Reasoning Structures. Experiment with chains, trees, and graphs for task suitability, balancing cost and effectiveness.
- Principle 8: Operator-Based Reasoning. Implement operators (generate, refine, prune) to manipulate and refine reasoning processes; see the sketch after these principles.
- Principle 9: Balanced Training. Use SFT and RL in two-phase training for foundation and refinement.
- Principle 10: Process-Based Evaluation. Evaluate the entire reasoning process for correctness and feedback, not just outcomes.
- Principle 11: Integration with Symbolic AI. Combine LRMs with constraint solvers or planning algorithms for enhanced reasoning.
- Principle 12: Interactive Reasoning. Design LRMs for environmental interaction, using feedback to refine reasoning in sequential tasks.

Please note that all principles are purely consultative and carry no binding value for the members of Indic Pacific Legal Research. We also permit the use of these principles for strictly non-commercial purposes, provided we are properly cited and referenced.
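To make Principles 7, 8 and 10 concrete, here is a minimal Python sketch of an operator-based reasoning loop over a tree of candidate steps. It is purely illustrative: the `generate`, `refine` and `prune` operators and their scoring are hypothetical stand-ins of our own, not part of any vendor's LRM API or of the principles' formal text.

```python
# Illustrative operator-based reasoning loop (Principles 7, 8 and 10):
# candidate reasoning steps form a tree; operators generate, refine and
# prune branches; the *process* is scored, not just the final answer.
# All functions here are hypothetical stubs for illustration only.
from dataclasses import dataclass

@dataclass
class Step:
    text: str      # one intermediate reasoning step
    score: float   # externally assessed validity in [0, 1]

def generate(step: Step) -> list[Step]:
    """Operator: propose candidate next steps (stubbed)."""
    return [Step(f"{step.text} -> option {i}", max(0.0, step.score - 0.2 * i))
            for i in range(3)]

def refine(step: Step) -> Step:
    """Operator: rewrite a weak step (stubbed as a small score bump)."""
    return Step(step.text + " (refined)", min(1.0, step.score + 0.1))

def prune(steps: list[Step], threshold: float = 0.5) -> list[Step]:
    """Operator: drop low-validity branches rather than trusting traces."""
    return [s for s in steps if s.score >= threshold]

def reason(root: Step, depth: int = 2) -> Step:
    """Expand the tree, refining weak branches and pruning bad ones."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for s in frontier for c in generate(s)]
        candidates = [refine(c) if c.score < 0.6 else c for c in candidates]
        frontier = prune(candidates) or candidates[:1]  # keep at least one
    return max(frontier, key=lambda s: s.score)

print(reason(Step("premise", 1.0)).text)
```

The design point is the one Principle 10 makes: every intermediate step carries its own validity score and can be pruned, so evaluation attaches to the reasoning process itself rather than only to the final answer.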
IndoPacific.App Celebrates its Glorious 2 Years

The IndoPacific App, launched under Abhivardhan's leadership in 2023, was a systemic reform at Indic Pacific Legal Research to better document our research publications and contributions. Under our partnership with the Indian Society of Artificial Intelligence and Law, ISAIL.IN's publications and documentation are also registered on the IndoPacific App, including under the AiStandard.io Alliance. As this archive of (mostly) legal and policy literature completes two years of existence under Indic Pacific's VLiGTA research & innovation division, we are glad to share some clear statistics, updated as of April 12, 2025. The figures were compiled with the help of Generative AI tools and then verified manually, so the statistics are double-checked:

- We host publications and documentation from exactly 238 original authors (one erroneous entry removed).
- Our founder, Mr Abhivardhan's, own publications constitute approximately 10% of the archive.
- The number of publications on the IndoPacific App stands at 85; however, counting the chapters, contributory sections and articles within research collections, the number of unique research contributions stands at 304, a historic figure.
- If we attribute these 304 unique contributions to individual authors (in the form of chapters to a research collection, handbook, report, or brief, for instance), the number of individual author credits will cross 300, as per our approximate estimates.

This means something simple, honest and obvious. The IndoPacific.App, started by our Founder, Abhivardhan, is the biggest technology law archive of mostly Indian authors, with around 238 original authors documented and 304 unique published contributions featured. There is no law firm, consultancy, think tank or institution with such a large, independently supported technology law archive, and we are proud to have achieved this feat within the five-year existence of both Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law.

Thank you for becoming a part of this research community, whether through Indic Pacific Legal Research or the Indian Society of Artificial Intelligence and Law. It is our honour and duty to safeguard this archive for all; it is 99% free (the handbooks being the exception). So don't wait: go and download some amazing works from the IndoPacific.app today.