Search Results
- The John Doe v. GitHub Case, Explained
This case analysis is co-authored by Sanvi Zadoo and Alisha Garg, along with Samyak Deshpande. The authors recently interned at the Indian Society of Artificial Intelligence and Law. In a world where artificial intelligence is redefining the way developers write code, Copilot, an AI-powered coding assistant developed by GitHub in collaboration with OpenAI, was launched in 2021. Copilot promised to revolutionize software development by generating code from a developer's input. However, this ‘revolution’ soon found itself in the midst of a legal storm. The now-famous GitHub Copilot case revolves around allegations that the AI-powered coding assistant reproduces copyrighted code from open-source repositories without proper credit. The lawsuit was initiated by programmer and attorney Matthew Butterick and joined by other developers. They claimed that Copilot's suggestions include exact code from public repositories without adhering to the licenses under which the code was published. Despite efforts by Microsoft, GitHub, and OpenAI to dismiss the lawsuit, the court allowed the case to proceed.
Timeline of the case
- June 2021: GitHub Copilot is publicly launched in a technical preview.
- November 2022: Plaintiffs file a lawsuit against GitHub and OpenAI, alleging DMCA violations and breach of contract.
- December 2022: The court dismisses several of the Plaintiffs' claims, including unjust enrichment, negligence, and unfair competition, with prejudice.
- March 2023: GitHub introduces new features for Copilot, including improved security measures and an AI-based vulnerability prevention system.
- June 2023: The court dismisses the DMCA claim with prejudice.
- July 2024: The California court affirms the dismissal of nearly all the claims.
- Supreme Court of Singapore’s Circular on Using RoughDraft AI, Explained
The Supreme Court of Singapore has issued an intriguing circular on the use of Generative AI, or "RoughDraft AI" (a term coined by AI expert Gary Marcus), by stakeholders in courts. The guidance in the circular merits a careful breakdown. To begin with, the circular itself shows that the Court does not treat GenAI tools as indispensable to court work; it positions them as mere productivity-enhancement tools, contrary to what many AI companies in India and abroad have claimed. This insight covers the circular in detail.
- CCI's Landmark Ruling on Meta's Privacy Practices
The Competition Commission of India's (CCI) recent press release announcing a substantial penalty of Rs. 213.14 crore on Meta marks a significant milestone in the regulation of digital platforms in India. This decision, centered on WhatsApp's 2021 Privacy Policy update, underscores the growing scrutiny of data practices and market dominance in the digital economy. The CCI's action reflects a proactive approach to addressing anti-competitive behaviours in the tech sector, particularly concerning data sharing and user privacy. This policy insight examines the implications of the CCI's decision, which goes beyond mere financial penalties to impose behavioural remedies aimed at reshaping Meta's data practices in India. The order's focus on user consent, data sharing restrictions, and transparency requirements signals a shift towards more stringent regulation of digital platforms. It also highlights the intersection of competition law with data protection concerns, setting a precedent that could influence regulatory approaches both in India and globally. With a draft Digital Competition Bill proposed in March 2024, this CCI action provides valuable insights into the regulator's perspective on digital market dynamics and its readiness to enforce competition laws in the digital sphere. The decision raises important questions about the balance between fostering innovation in the digital economy and protecting user rights and market competition.
Detailed Breakdown of the CCI Press Release
- India's Draft Digital Personal Data Protection Rules, 2025, Explained
Sanad Arora, Principal Researcher, is the co-author of this Insight. The Draft Digital Personal Data Protection (DPDP) Rules, released on January 3, 2025, represent an essential step towards simplifying personal data protection in the digital age. These rules aim to enhance the protection of personal data while addressing the challenges posed by emerging technologies, particularly artificial intelligence (AI). As AI continues to evolve and integrate into various sectors, ensuring that its deployment aligns with ethical standards and legal requirements is paramount. The DPDP Rules seek to create a balanced environment that fosters innovation while safeguarding individual privacy rights.
Figure 1: Draft DPDP Rules (January 3, 2025 version), explained and visualised. The chart was created by Abhivardhan and Sanad Arora as part of the explainer; download it below.
- Technology Law is NOT Legal-Tech: Why They’re Not the Same (and Why It Matters)
Created using Luma AI. Technology has changed the way we communicate, transact, and live our daily lives. It’s no wonder that the intersection of law and technology has produced two distinct but often confused areas: Tech-Legal and Legal-Tech. This quick explainer clears the air on what each term means, why mixing them up creates chaos, and how to get them right.
The Basic Difference
Tech-Legal focuses on the legal frameworks, policies, and governance of technology itself. It deals with questions like: How should AI be regulated? What legal boundaries apply to blockchain or cross-border data flows? Think of Tech-Legal as creating the rules of the game for emerging technologies. Legal-Tech, on the other hand, is about using technology to improve or automate legal services and processes. It addresses questions like: How do we efficiently manage legal cases online? Can we use AI to review contracts faster? Legal-Tech is essentially playing the game better by adopting new tools to streamline legal work.
Diving into Tech-Legal: What It Involves
- When AI Expertise Meets AI Embarrassment: A Stanford Professor's Costly Citation Affair
In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI. The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice. Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less." The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice. Hence, this legal-policy analysis delves into the incident and, since it is one of many similar incidents, what it teaches us about how we approach AI-related questions in evidence law.
- Beyond the AI Garage: India's New Foundational Path for AI Innovation + Governance in 2025
This is quite a long read. India's artificial intelligence landscape stands at a pivotal moment, where critical decisions about model training capabilities and research directions will shape its technological future. The discourse was recently energised by Aravind Srinivas, CEO of Perplexity AI, who highlighted two crucial perspectives that challenge India's current AI trajectory.
The Wake-up Call
Figure 1: The two posts on X.com on Strategic Perspectives, by Aravind Srinivas.
Srinivas emphasises that India's AI community faces a critical choice: either develop model training capabilities or risk becoming perpetually dependent on others' models. His observation that "Indians cannot afford to ignore model training" stems from a deeper understanding of the AI value chain. The ability to train models represents not just technical capability, but technological sovereignty. A significant revelation comes from DeepSeek's recent achievement. Their success in training competitive models with just 2,048 GPUs challenges the widespread belief that model development requires astronomical resources. This demonstrates that, with strategic resource allocation and expertise, Indian organisations can realistically pursue model training initiatives. India's AI ecosystem currently focuses heavily on application development and use cases. While this approach has yielded short-term benefits, it potentially undermines long-term technological independence. The emphasis on building applications atop existing models, while important, shouldn't overshadow the need for fundamental research and development capabilities. In short, Srinivas highlights three key issues, through his posts, in the larger debate over India's technology development and application layers:
- Limited hardware infrastructure for AI model training
- Concentration of model training expertise in select global companies
- Over-reliance on foreign AI models and frameworks
This insight focuses on legal and policy perspectives around building the capabilities needed to innovate in core AI models, while also building use-case capitals in India, including in Bengaluru and other places. In addition, this long insight covers recommendations to the Ministry of Electronics and Information Technology, Government of India on the Report on AI Governance Guidelines Development, in the concluding sections.
The Policy Imperative: Balancing Use Cases and Foundational AI Development
What Aravind Srinivas points out about AI development avenues in India's scenario is also backed by policy and industry realities. The recent repeal of the former Biden Administration's Executive Order on Artificial Intelligence by the Trump Administration, hours ago, demonstrated that the US Government's focus has pivoted to hard resource considerations around AI development, such as data centres, semiconductors, and talent. India has no choice but to pursue both ideas at the same time: building use-case capitals in India and focusing on foundational AI research alternatives.
- Decoding the Second Trump Admin AI & Law Approach 101: the Jan 23 Executive Order
On January 23, 2025, US President Donald Trump signed an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," marking a significant shift in U.S. AI policy. This order revokes and replaces key elements of the Biden administration's approach to AI governance. Here's a simple and comprehensive breakdown of the executive order's main provisions in this quick insight.
- The CCI Study on AI and Competition, Explained
Quite recently, the Competition Commission of India, in partnership with the Management Development Institute, Gurugram, released a Market Study on Artificial Intelligence and Competition. The study is extensive, and to that end, this explainer explores the most important assessments provided by India's antitrust regulator. The report is divided into six chapters and one annexure. However, this explainer intends to keep things specific: to examine the context set by the report's authors and to evaluate the initial relevance of their suggestions.
Defining the "AI Ecosystem"
Figure 1: The Artificial Intelligence (AI) Stack, reproduced from Figure 3 of the CCI and MDI, Gurugram Market Study on AI and Competition.
- Exploring India's First Open-Access AI Policy Knowledge Ecosystem
Artificial intelligence (AI) is redefining industries and transforming the way we live and work. However, the rise of AI raises essential questions about its legal governance, and the need for a solid legal framework is more critical now than ever. With this in mind, Indic Pacific Legal Research has undertaken an innovative journey to establish India's first open-access tech law knowledge ecosystem. After five years of dedicated research, they have created a platform intended to support legal professionals, policymakers, and businesses as they navigate the intricate world of AI governance. This blog post explores the different aspects of this ecosystem, emphasizing its importance and the valuable resources it provides to tech law professionals in India.
Key Features of the Ecosystem
Indic Pacific Legal Research offers various unique features within its tech law knowledge ecosystem, tailored to meet the specific needs of the AI governance community.
AIACT.IN Visualiser [ https://indicpacific.com/ai/regulationvisual ]
One of the core tools is the AIACT.IN Visualiser, the first interactive AI regulation tracker in India. This tool provides a real-time representation of the governance landscape from 2000 to 2025, allowing users to stay updated on the latest AI regulation developments. For instance, the Visualiser helps professionals spot trends, such as the increasing focus on data privacy regulations, which spiked by over 30% in the last two years.
AIACT.IN [ https://indicpacific.com/ai ]
The AIACT.IN platform features India's first privately proposed AI bill (V5.0, April 2025), created by Abhivardhan. This invaluable resource enables users to understand the potential impacts of AI regulations on various industries, from healthcare to finance. By examining the proposed bill, stakeholders can engage in informed debates about the future of AI governance in India, paving the way for responsible regulation. The AIACT.IN platform also hosts 40+ real-life AI regulation sources from Indian government institutions.
Tech in Data Insights [ https://indicpacific.com/data ]
The Tech in Data Insights feature delivers market-driven analyses merging legal, technological, and business insights. This resource is invaluable for AI leaders seeking to stay informed about emerging trends. For instance, recent insights have shown that 78% of Indian enterprises are considering AI adoption, highlighting an urgent need for legal guidance in aligning those strategies with compliance.
Tech in Data Explainers [ https://indicpacific.com/glossary ]
To make complex AI governance concepts easier to understand, the Tech in Data Explainers section provides over 90 glossary terms. This resource helps legal professionals and business leaders grasp intricate legal issues, fostering greater inclusivity in discussions around AI governance. By simplifying legal jargon, these explainers promote a wider understanding of and engagement with relevant topics.
Tech in Data Connect [ https://indicpacific.com/connect ]
The Tech in Data Connect feature is an interactive tool that lays out pathways for learning about AI & Law 101 and AI & Geopolitics 101. This resource is particularly helpful for newcomers, structuring learning about the integration of technology and law.
- AI & Law 101: https://indicpacific.com/ai101
- AI & Geopolitics 101: https://indicpacific.com/aigeo101
By guiding users through essential knowledge areas, Tech in Data Connect encourages proactive engagement with critical AI and legal topics.
IndoPacific.App [ https://indopacific.app ]
The IndoPacific.App stands as India’s largest tech law archive, housing over 300 contributions from 238 authors, with 99% of the content available for free. This rich collection is a vital resource for legal professionals keen to deepen their understanding of tech law. With such extensive access to quality information, IndoPacific.App democratizes knowledge and allows for active engagement with the legal implications of technology.
Why This Matters
The creation of this comprehensive tech law knowledge ecosystem holds profound significance for several reasons:
- Research-Backed Expertise: The resources offered by Indic Pacific Legal Research are underpinned by thorough research and substantial market insights, ensuring that legal practitioners can trust the information they receive and make more robust decisions.
- Focus on Technology Law: With an impressive 67% concentration on technology law, this ecosystem is home to the most extensive resources dedicated to this field in India, enabling in-depth exploration of AI governance opportunities and challenges.
- Industry-Conscious Solutions: Far from traditional academic rhetoric, this ecosystem provides industry-focused solutions. Its practical approach enhances a legal professional's ability to apply knowledge in real-world contexts.
- Tailored for Diverse Stakeholders: The ecosystem caters to a broad spectrum of stakeholders, including startups, MSMEs, legal experts, and policymakers, making the resources accessible and relevant to everyone engaged in AI governance.
Actionable Intelligence for Navigating AI Governance
Indic Pacific Legal Research offers more than just materials; its resources provide actionable intelligence for navigating India's evolving AI governance landscape. Equipping legal professionals with tools and insights helps them engage adequately with AI regulations and positively influence the landscape of technology law in India.
Final Thoughts
As AI continues to change industries and impact societies, the call for a comprehensive legal framework grows louder. Indic Pacific Legal Research's open-access tech law knowledge ecosystem marks a significant advance in fulfilling this requirement. Through a diverse array of resources, including interactive tools, educational materials, and an extensive archive, this ecosystem empowers legal professionals, policymakers, and businesses to adeptly navigate the complexities of AI governance. In today's rapidly changing environment, remaining informed and engaged is essential. The resources accessible through Indic Pacific Legal Research are designed to promote this connection, helping stakeholders effectively address the challenges and prospects that AI technologies present. Start exploring this invaluable ecosystem today at indicpacific.com.
- India's AI Regulatory Landscape: Introducing Our Comprehensive Tracker on aiact.in
India currently operates under 35 distinct regulatory sources governing artificial intelligence, a complex framework that reflects both the country's federal structure and the evolving nature of AI governance. Just go to aiact.in, or search for an Indian AI regulation document at indicpacific.com. At Indic Pacific, we recognize the challenges posed by this fragmented regulatory environment. Delhi-centric policymaking and competing interests have created uncertainty for stakeholders seeking clarity on AI compliance and governance. To address this gap, we are pleased to announce the launch of our India AI Regulation Tracker, available at aiact.in. This initiative builds on our earlier work, including the "India AI Regulation Landscape 101" and our AI law case repository at indicpacific.com. The tracker was developed following extensive research and was inspired by a July 2025 paper on AI and Federalism in India, authored by Abhivardhan and Deepanshu Singh for the Forum of Federations.
What the tracker provides
Key Features
1️⃣ Binding authority and legal status of each document
2️⃣ Document classification and categorization
3️⃣ Issuing authority identification
4️⃣ Direct access links to primary sources
We have curated references from legal media, case law repositories, and mainstream sources to ensure comprehensive coverage. In select instances, we provide analytical insights with corresponding source documentation. This resource is designed for legal practitioners, policymakers, researchers, and organizations navigating India's AI regulatory framework.
- A Tech Paper on AI Output Preferences validates our AIInventorship.com Handbook
I have said this to all my co-authors, including Kailash Chauhan and Bhavana J, when we co-authored one of the rarest AI inventorship handbooks from India, ever: "Striving for excellence matters. Stay humble, build the foundations, then let the river flow."
What happened exactly?
Tuhin Chakrabarty, P. Dhillion and Jane Ginsburg, from Stony Brook University, Columbia Law School and the University of Michigan, published a crucial paper on AI and market harms, from a copyright and AI angle (check the paper at https://www.arxiv.org/abs/2510.13939 ). And that paper validates AIinventorship.com, Indic Pacific's Global AI Inventorship Handbook. :D
How exactly? Are we hyping AIinventorship.com? No.
1️⃣ The speed and scale at which LLMs produce generative content create market risks. We all knew that, but the paper proposed it and proved it.
2️⃣ Great, but in what context? LLM-generated content, given the technical capabilities of these models, creates artificial competition for authors and dilutes market clarity and trust, and that is a competition policy problem.
3️⃣ Expert human writers were recruited for this research to emulate the style and voice of 50 diverse award-winning authors; their human-written excerpts were then paired with AI-written ones, and 150+ expert and lay readers evaluated them blindly. Both the human writers and the AI received the same in-context prompt, making it an apples-to-apples comparison.
4️⃣ However, fine-tuning #ChatGPT on individual authors’ complete works showed something: experts now favoured AI-generated text for stylistic fidelity (OR=8.16) and writing quality (OR=1.87), with lay readers showing similar shifts. (A short note on reading these odds ratios appears at the end of this excerpt.)
Why This Research Validates Our Handbook 😉
When we published The Global AI Inventorship Handbook (June 2025), we warned this was coming. Now we have empirical proof. Chapter 2, on the Convergence of Patent and Copyright Laws on AI, had estimated that LLMs would cause "market dilution" by "flooding the market and causing more competition for sales of an author's works." The research confirms this with an $81-per-author cost, a 99.7% reduction that makes displacement economically inevitable. Chapter 2's "Cross-Domain Legal Contamination" framework explained how companies face contradictory legal positions. The research validates this: fine-tuned outputs evade detection 97% of the time (a 3% detection rate versus 97% for in-context prompting), creating legally invisible competition, which is exactly the "artificial competition" problem we identified. The Introduction had noted that 54,000 GenAI patents have been filed while "legal infrastructure remains fundamentally misaligned." The research's guardrail recommendations echo our call for improved subject matter eligibility standards and data governance frameworks (Chapter 6).
Cross-Domain Legal Contamination Confirmed
The handbook introduced the concept of "Cross-Domain Legal Contamination," explaining how "patent and copyright jurisprudence are experiencing mutual contamination where legal theories and precedents bleed across domains." The authors warned that "companies developing AI systems cannot maintain contradictory legal positions where their patent applications claim AI innovations are novel and non-obvious" while "their copyright defenses argue AI training is merely transformative pattern recognition." The Chakrabarty research validates this framework through detection rates.
Fine-tuned outputs were flagged as AI-generated only 3% of the time, compared to 97% for in-context prompting, by state-of-the-art AI detectors like Pangram Labs. This creates what the handbook called "legally invisible competition": AI-generated content that evades both detection systems and traditional copyright frameworks. Bottom line: our handbook mapped the legal framework. This research proves it's happening at scale, right now.
Pattern Extraction and the Decompression Problem
Chapter 2's section on "Pattern Extraction and the Decompression Problem" argued that "AI systems convert expressive content into mathematical abstractions that capture statistical relationships rather than protected expression." The handbook explained this operates "at the precise boundary between copyrightable expression and uncopyrightable ideas, procedures, and systems." The research's mediation analysis confirms this mechanism: "fine-tuning eliminates detectable AI stylistic quirks (e.g., cliche density) that penalize in-context outputs." This validates the handbook's assertion that "when AI outputs resemble training data, courts must distinguish between statistical coincidence and expressive copying."
Trade Secrets as the Last Resort
Chapter 3 explored "Trade secrets as an alternative to patent protection," noting that "due to the conventional questions surrounding the human-inventor conundrum, a viable method of AI intellectual property protection is trade secret protection." The research validates this prediction: at $81 per author with a 99.7% cost reduction, the competitive advantage lies not in patented processes but in proprietary training datasets and fine-tuning methodologies that remain trade secrets.
Now go, read it. Get the Global AI Inventorship Handbook now, on indopacific.app.
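A brief note from us on reading the numbers cited above (our own illustration, not text from the Chakrabarty paper): an odds ratio compares the odds of an outcome, such as an evaluator preferring a given text, between two conditions, and the cited cost reduction implies a rough baseline figure.

% Illustrative only: a generic reminder of how an odds ratio (OR) is read; the
% paper's exact regression specification may differ.
% OR = 1 means no difference between the two conditions; OR > 1 means the outcome
% is more likely under the first condition.
\[
\mathrm{OR} \;=\; \frac{p_{1}/(1-p_{1})}{p_{2}/(1-p_{2})}
\]
% So OR = 8.16 for stylistic fidelity means the odds of experts preferring the
% fine-tuned output were roughly eight times the odds under the comparison condition.
%
% Likewise, if USD 81 per author is a 99.7% reduction, the implied baseline is
% (our arithmetic, assuming the percentage is measured against that baseline):
\[
\frac{81}{1-0.997} \approx 27{,}000 \ \text{USD per author}
\]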











