- The UK Government Brief on AI and Copyright Law (2024), Explained
The author of this insight was a Research Intern at the Indian Society of Artificial Intelligence and Law.

The UK economy is driven by its creative industries, including TV and film, advertising, performing arts, music publishing and video games, which together contribute nearly £124.8 billion in gross value added (GVA) annually. The rapid development of AI in recent years has sparked a debate, globally and within the UK, about the challenges and opportunities it brings. It has led to serious concerns within the creative and media industries about their work being used to train AI without permission, and about media organisations being unable to secure remuneration through licensing agreements. There has also been a lack of transparency from AI developers about the content used to train their models, while these firms raise their own concerns about the lack of clarity over how they can legally access training data. These concerns are hindering AI adoption, stunting innovation, and holding back the UK from fully realising the potential AI holds. The UK government's consultation document highlights the need to work in partnership with both the AI sector and the media sector, ensuring greater transparency from AI developers in order to build trust between developers and the creative industries.

Focus Areas of the Consultation

The key pillars of the UK government's approach to copyright and AI policy are transparency, technical standards, contracts and licensing, labelling, computer-generated works, digital replicas and emerging issues. The government aims to tackle the copyright challenges posed by AI by ensuring that AI developers are transparent about the training data used for their models, and it seeks views on the level of transparency required to build trust between AI companies and organisations in the creative industries. Establishing technical standards will help improve and standardise the available tools, making it easier for creators to reserve their rights and for developers to respect those reservations. Licensing frameworks also need to be strengthened so that creators receive fair remuneration while AI developers gain access to the training material they need. Labelling measures help distinguish AI-generated content from human-created work, fostering clarity for consumers. Additionally, the protection of computer-generated works needs to be aligned with modern AI capabilities so that fairness is ensured. Finally, addressing digital replicas, such as deepfakes, is essential to protect individuals' identities from misuse.

Figure 1: Key pillars of Copyright and AI policy

Overcoming Challenges in AI Training and Copyright Protection

The government's consultation document examines the problem of using copyrighted works to train AI models. AI developers use large amounts of data, including copyrighted works, to train their models, but many creators are not paid for the use of their work. The consultation highlights the issue of transparency, as creators often do not know whether their work is in AI training datasets. The government acknowledges the conflict between copyright law and AI development, especially when AI outputs reproduce substantial parts of copyrighted works without permission, which could amount to copyright infringement. The Getty Images v Stability AI case is being litigated, but it may take years to resolve.
The government is looking at legislation to clarify the rules around AI training and outputs, in order to strike the right balance between creators and AI developers.

Figure 2: A Venn diagram discussing intersectional aspects of AI Training & Data Mining and Copyright Ownership & Creator Rights

Exceptions with Rights Reservation: Key Features and Scope

The data mining exception and rights reservation package under consideration would require increased transparency from AI firms about the training data they use, ensure that right holders receive fair payment when their work is used by AI firms, and address the need for licensing. The proposed solutions aim to regulate data mining activities, ensuring lawful access to data and building trust and partnership between AI firms and media and creative organisations.

Figure 3: Proposed exceptions to data mining and their scope

Addressing Challenges in Developing and Implementing Technical Standards

There is a growing need for standardisation in copyright and AI so that publishers of content on the internet can reserve their rights while AI developers gain access to training data that does not infringe on those rights. Regulation is needed to support the adoption of such standards and to ensure that protocols are recognised and complied with. Several generative AI web crawlers allow publishers to flag content as unavailable for training, and many firms and dataset owners also accept direct notification from organisations that do not want their work used to train an AI model. However, even the most widely adopted standard, robots.txt, cannot provide the granular control over the use of works that right holders seek: it offers only coarse, site-level control and cannot reliably distinguish crawling for search indexing or language training from crawling for generative AI purposes. The consultation therefore proposes standardisation that ensures developers have legal access to training data and that protocols protecting the data privacy of content are met. (A minimal sketch of checking such robots.txt directives programmatically is included below.)

Figure 4: Key focus areas to achieve technical standardisation

Contracts and Licensing

Contracts and licensing for AI training often involve creators licensing their works through collective management organisations (CMOs) or directly to developers, but creators sometimes lack control over how their work is used. Broad or vague contractual terms and industry expectations can make it challenging for creators to protect their rights. CMOs play a crucial role in efficiently licensing large collections of works, ensuring fair remuneration for creators while simplifying access for AI developers. However, new structures may be needed to aggregate and license data for AI training. The government aims to support good licensing practices, fair remuneration, and mechanisms like text and data mining (TDM) exceptions to balance the needs of right holders and AI developers. Additionally, copyright and AI in education require consideration to protect pupils' intellectual property while avoiding undue burdens on educators.

Ensuring Transparency: Tackling Challenges in Openness and Accountability

Transparency is crucial for building trust in AI and copyright frameworks. Right holders face challenges in determining whether their works are used for AI training, as some developers do not disclose, or provide only limited information about, their training data sources.
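One modest, already-available aid for right holders is to inspect how crawlers are instructed by a site's robots.txt file. The sketch below, written in Python using only the standard library, checks whether a given crawler user agent is permitted to fetch a page; "GPTBot" is used purely as an illustrative user-agent string, and the function name is hypothetical rather than drawn from the consultation.

```python
# Minimal sketch: checking whether a site's robots.txt permits a given
# crawler to fetch its pages. This only reports what the publisher has
# requested; compliance by AI crawlers remains voluntary.
from urllib.robotparser import RobotFileParser

def crawler_allowed(site: str, user_agent: str, path: str = "/") -> bool:
    """Return True if robots.txt allows `user_agent` to fetch `path` on `site`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return parser.can_fetch(user_agent, f"{site.rstrip('/')}{path}")

if __name__ == "__main__":
    # Illustrative check against an example domain and an example AI crawler name.
    print(crawler_allowed("https://example.com", "GPTBot"))
```

As the consultation notes, this mechanism is coarse: it signals a site-wide preference only to crawlers that choose to honour it, which is precisely why stronger technical standards and transparency obligations are under discussion.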
Greater transparency can help enforce copyright law, assess legal liabilities, and foster consumer confidence in AI systems. Potential measures include requiring AI firms to disclose datasets, web crawler details, and compliance with rights reservations. However, transparency must be balanced against practical challenges, trade secret protections, and proportionality. International approaches, such as the EU's AI Act and California's AB 2013, offer insights into implementing effective transparency standards, which the UK will consider for global alignment.

Enhancing Accountability Through Effective AI Output Labelling Standards

Labelling AI-generated outputs enhances transparency and benefits copyright owners, service providers, and consumers by providing clear attribution and informed choices. Industry initiatives like Meta's 'AI info' label exemplify current efforts, but consistent regulation may be needed to ensure uniformity and effectiveness. Challenges include defining the threshold for labelling, scalability, and preventing manipulation or removal of labels. International developments, such as the EU AI Act's rules for machine-readable labels, offer valuable insights. The UK government will explore supporting research and development of robust labelling tools to promote transparency and facilitate copyright compliance.

Figure 5: AI labelling, depicted

Navigating Challenges in Regulating Digital Replicas

The use of AI to create "digital replicas" of actors and singers (realistic images, videos and audio replicating their voice or appearance) has raised significant concerns within the creative industries. These replicas are often made without consent, using AI tools trained on an individual's likeness or voice. Existing protections in the UK, such as intellectual property rights, performers' rights under the CDPA 1988, and data protection laws, offer some control over the misuse of personal data or unauthorised reproductions. However, concerns remain about AI's ability to imitate performances or create synthetic reproductions, prompting calls for stronger legal protections, such as the introduction of personality rights. The government acknowledges these concerns and is exploring whether the current legal framework adequately protects individuals' control over their personality and likeness, while monitoring international developments such as proposed federal laws in the US.

Policy Analysis and The Way Ahead

The UK government's Copyright and AI consultation is a critical moment for policy to strike a balance between technological innovation and the protection of the creative industries. Broadly, the proposal aims to untangle a thicket of legal issues around AI model training: it would allow AI developers to access copyrighted works unless rights holders specifically opt out, addressing significant grey areas of uncertainty that still hang over AI development. The consultation accepts that the pace of technological development no longer fits well with the existing copyright framework, putting the UK at risk of losing its edge in global AI innovation. A well-designed opt-out mechanism would also give policymakers a workable way to protect intellectual property while keeping the environment conducive to technological progress. Creative industries, for their part, express grave concerns that unlicensed use of their works by AI firms, justified by loose notions of fair use, will undermine their ownership of those works.
AI companies counter that without broad access to training data, whether through licensing or exceptions, they will be unable to continue building sophisticated machine learning models. The consultation's intent is to find common ground: a solution that ensures AI's continued development while providing content creators with some control and possible remuneration, helping to de-escalate the conflict between the two groups. Taking a longer-term view, the consultation represents the beginning of an attempt to get ahead of the curve in shaping copyright law, technology development and IP issues in an increasingly AI-governed world.

References

UK Government. (2024, December 17). Copyright and artificial intelligence. GOV.UK. Retrieved December 25, 2024, from https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence
- Decoding the Second Trump Admin AI & Law Approach 101: the Jan 23 Executive Order
On January 23, 2025, US President Donald Trump signed an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," marking a significant shift in U.S. AI policy. This order revokes and replaces key elements of the Biden administration's approach to AI governance. Here is a simple and comprehensive breakdown of the executive order's main provisions in this quick insight.

Core Framework and Purpose

President Trump signed the executive order "Removing Barriers to American Leadership in Artificial Intelligence" with the fundamental aim of maintaining U.S. leadership in AI innovation through free markets, research institutions, and entrepreneurial spirit. The order explicitly establishes a policy directive to enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.

Definition of Artificial Intelligence Relied Upon

The order refers to a specific definition of artificial intelligence based on 15 USC § 9401(3):

The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.

The definition specifies that AI systems must use both machine and human-based inputs to perform three key functions:

- Perceive real and virtual environments
- Abstract these perceptions into models through automated analysis
- Use model inference to formulate options for information or action

Significance and Context of the AI Definition

This definitional choice is notable for several reasons. The order maintains continuity with existing legal frameworks by using the established definition from the National Artificial Intelligence Initiative Act of 2020. However, this definition has limitations: it may not clearly encompass generative AI unless broadly interpreted, yet such a broad interpretation could sweep in basic computer systems not typically considered AI. The definition's scope matters because it serves as the foundation for all provisions and implementations under the new executive order, including the development of the mandated AI action plan within 180 days.

Implementation Structure

The order creates an implementation framework centered around key officials, including David Sacks as Special Advisor for AI and Crypto, working alongside the Assistant to the President for Science and Technology and the Assistant to the President for National Security Affairs. Within 180 days, these officials must develop a comprehensive AI action plan, coordinating with economic policy advisors, domestic policy advisors, the OMB Director, and relevant agency heads.

Regulatory Changes and Review Process

A significant aspect of the order is its systematic dismantling of previous AI governance structures. The Office of Management and Budget has been given 60 days to revise Memoranda M-24-10 and M-24-18, which were cornerstone AI policy documents under the Biden administration. Agency heads are directed to suspend, revise or rescind actions that conflict with the new policy direction.
Nik Marda, who formerly worked in the Office of Science and Technology Policy under the Biden administration, remarked: "That's not only bad policy given the many real bias and discrimination risks from AI, but it's also a significant departure from both the bipartisan law that mandates this guidance and Trump's own 2020 executive order on AI — both of which name civil rights as an objective for federal use of AI."

Now, for some context: OMB Memoranda M-24-10 and M-24-18 were two critical policy documents issued in 2024 under the Biden administration that established the federal government's AI governance framework.

M-24-10: Core Governance Framework

Released in March 2024, Memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence," established fundamental AI governance structures across federal agencies. Key elements included:

- Creation of Chief AI Officer (CAIO) positions in federal agencies
- Establishment of the Chief AI Officer Council (CAIOC) for interagency coordination
- Requirements for agencies to develop compliance plans for AI governance
- A framework for identifying and managing "rights-impacting" and "safety-impacting" AI systems

Notably, M-24-10 did not cover common commercial products with embedded AI functionality.

M-24-18: AI Acquisition Guidelines

Issued on October 3, 2024, Memorandum M-24-18, "Advancing the Responsible Acquisition of Artificial Intelligence in Government," focused specifically on federal AI procurement. The memorandum:

- Set requirements for contracts awarded after March 23, 2025
- Required agencies to disclose whether AI systems were rights-impacting or safety-impacting
- Mandated 72-hour reporting of serious AI incidents
- Established contractor requirements for AI system documentation and training data transparency

Implementation Status of the Memoranda

Before Trump's executive order, agencies were actively implementing these memoranda. For example:

- The USDA had developed and published its compliance plan in September 2024
- Federal agencies were required to identify contracts involving rights-impacting AI by November 1, 2024
- The framework was being integrated with national security considerations through the October 2024 National Security Memorandum on AI

Trump's January 23, 2025 executive order now requires OMB to revise both memoranda within 60 days. This most likely marks a significant shift away from Biden's emphasis on AI safety and governance toward reduced regulatory oversight.

Ideological Shift in AI Development

The order marks a stark departure from previous approaches by emphasising AI systems "free from ideological bias or engineered social agendas". This represents a fundamental shift from the Biden administration's focus on AI safety and ethical considerations. The ideological shift may also signal that this administration is not interested in the DEI-related and over-regulatory Responsible AI ideas that were under consideration in the previous administration. It must be noted that the previous administration's fixation on a few companies as the locus of AI innovation and AI safety was also counterproductive, even though its main executive order did focus on building government personnel's capacity on relevant responsible AI measures.

Energy Infrastructure Expansion

Trump declared a national energy emergency to fast-track the approval of energy projects critical for AI development.
During his address at the World Economic Forum in Davos, Switzerland, he emphasised that the U.S. must double its current energy capacity to meet the demands of AI technologies. This includes constructing power plants specifically dedicated to supporting AI data centres. These facilities will not be subject to traditional climate objectives, with Trump allowing them to use any fuel source, including coal as a backup. He highlighted the reliability of coal, describing it as "a great backup" due to its resilience against weather and other disruptions. The president also proposed a co-location model, in which power plants are built directly adjacent to AI data centres. This approach bypasses reliance on the traditional electricity grid, which Trump described as "old and vulnerable." By connecting power plants directly to data centres, companies can ensure an uninterrupted energy supply for their operations.

The Stargate Initiative

In conjunction with these measures, Trump announced the Stargate Initiative, a joint venture between OpenAI, Oracle, and SoftBank aimed at investing up to $500 billion over four years in AI infrastructure. The initiative begins with a $100 billion investment in data centres, starting with a flagship facility in Texas. The project is expected to create over 100,000 jobs and significantly enhance U.S. computing capabilities for AI.

Industry Impact

The executive order also introduces sweeping changes aimed at reducing regulatory oversight of AI companies, oversight that Trump has characterised as "unnecessarily burdensome" under Biden's administration.

Deregulation

Trump's order repealed Biden's 2023 executive order that mandated stricter oversight of AI development. Key provisions eliminated include:

- Requirements for AI developers to submit safety testing data before deploying high-risk technologies
- Obligations for federal agencies to assess and mitigate potential harms caused by government use of AI tools
- Mandatory transparency measures for companies building powerful AI models

By removing these guardrails, the Trump administration seeks to foster innovation by reducing compliance costs and accelerating deployment timelines. This deregulatory approach aligns with Trump's broader economic agenda of empowering private sector-led growth in technology industries.

What will happen to the AI Diffusion Export Control Rule?

Reevaluation of Licensing Requirements

The AI Diffusion Export Control Rule currently imposes global licensing requirements for exporting advanced computing integrated circuits (ICs) and AI model weights trained using more than 10^26 computational operations. These restrictions are designed to prevent U.S. adversaries, particularly China, from accessing cutting-edge AI technologies while allowing controlled exports to trusted allies under specific license exceptions. Trump's executive order, which emphasises reducing regulatory barriers and fostering U.S. innovation, may lead to:

- Relaxation of Licensing Rules: The administration could ease licensing requirements for U.S. companies exporting AI technologies to middle-tier countries (e.g., India, Brazil) to boost international market competitiveness.
- Focus on Deregulation: There is a strong likelihood that Trump will reduce compliance burdens for U.S. firms by eliminating or simplifying the reporting and monitoring obligations tied to export licenses.
Strategic Adjustments to Country Tiers

The Biden administration's framework divides countries into three tiers:

- Top-tier allies (e.g., NATO members, Japan, South Korea) enjoy unrestricted access.
- Middle-tier nations (e.g., India, Saudi Arabia) face quotas on computing power imports.
- Adversarial nations (e.g., China, Russia) remain blocked from accessing advanced U.S. AI technologies.

Trump's transactional foreign policy approach suggests potential adjustments:

- Reclassification of Countries: Middle-tier nations like India and Israel could be moved into the top tier as part of bilateral negotiations or geopolitical alliances.
- Use as Diplomatic Leverage: Trump may use access to U.S. AI technologies as a bargaining chip in trade agreements or security partnerships.

Private Sector Benefits

The rollback of regulations has been welcomed by industry leaders, who argue that excessive oversight stifles innovation and hinders competitiveness. For example, OpenAI CEO Sam Altman has praised the administration's focus on enabling rapid development of computing infrastructure through initiatives like Stargate. Trump's administration has framed these changes as essential for maintaining U.S. leadership in AI while countering international competitors like China. The emphasis on deregulation reflects a shift toward prioritizing economic competitiveness over ethical or safety considerations.

Conclusion

The January 23 executive order demonstrates President Trump's aggressive push for U.S. dominance in artificial intelligence by combining massive infrastructure investments with significant deregulation of the industry. While these measures promise economic growth and technological advancement, they come with potential risks related to environmental impact and reduced oversight.
- Beyond the AI Garage: India's New Foundational Path for AI Innovation + Governance in 2025
This is quite a long read. India's artificial intelligence landscape stands at a pivotal moment, where critical decisions about model training capabilities and research directions will shape its technological future. The discourse was recently energised by Aravind Srinivas, CEO of Perplexity AI, who highlighted two crucial perspectives that challenge India's current AI trajectory.

The Wake-up Call

Figure 1: The two posts on X.com on strategic perspectives, by Aravind Srinivas

Srinivas emphasises that India's AI community faces a critical choice: either develop model training capabilities or risk becoming perpetually dependent on others' models. His observation that "Indians cannot afford to ignore model training" stems from a deeper understanding of the AI value chain. The ability to train models represents not just technical capability, but technological sovereignty. A significant revelation comes from DeepSeek's recent achievement: its success in training competitive models with just 2,048 GPUs challenges the widespread belief that model development requires astronomical resources, and demonstrates that, with strategic resource allocation and expertise, Indian organisations can realistically pursue model training initiatives. India's AI ecosystem currently focuses heavily on application development and use cases. While this approach has yielded short-term benefits, it potentially undermines long-term technological independence. The emphasis on building applications atop existing models, while important, should not overshadow the need for fundamental research and development capabilities. In short, through his posts Srinivas highlights three key issues in the larger debate over India's technology development and application layers:

- Limited hardware infrastructure for AI model training
- Concentration of model training expertise in a few global companies
- Over-reliance on foreign AI models and frameworks

This insight focuses on the legal and policy perspectives around building the capabilities needed to innovate in core AI models, while also building use-case capitals in India, including in Bengaluru and other places. In addition, this long insight covers, in its concluding sections, recommendations to the Ministry of Electronics and Information Technology, Government of India, on the Report on AI Governance Guidelines Development.

The Policy Imperative: Balancing Use Cases and Foundational AI Development

Aravind Srinivas's point about India's AI development avenues is also backed by policy and industry realities. The repeal of the former Biden administration's Executive Order on Artificial Intelligence by the Trump administration only hours ago demonstrates that the US government's focus has pivoted to hard resource considerations around AI development, such as data centres, semiconductors and talent. India has no choice but to pursue both ideas at the same time: building use-case capitals in India and focusing on foundational AI research alternatives.

Moving Beyond Use-Case Capitalism

India's current AI strategy has been heavily skewed toward application development: leveraging existing foundational models like those from OpenAI or Google to build domain-specific use cases. While this approach has yielded quick wins in sectors like healthcare, agriculture, and governance, it risks creating a long-term dependency on foreign technologies.
This"use-case capitalism" prioritises short-term gains over the strategic imperative of building indigenous capabilities. Technological Dependence : Relying on pre-trained foundational models developed abroad limits India's ability to innovate independently and negotiate favourable terms in the global AI ecosystem. Economic Vulnerability : By focusing on applications rather than foundational research, India risks being relegated to a secondary role in the AI value chain, capturing only a fraction of the economic value generated by AI technologies. Missed Opportunities for Sovereignty : Foundational models are critical for ensuring control over data, algorithms, and intellectual property. Without them, India remains vulnerable to external control over critical AI infrastructure. Building Indigenous Model Training Capabilities The ability to train foundational models domestically is essential for achieving technological independence. Foundational models—large-scale pre-trained systems—are the backbone of generative AI applications. Building such capabilities in India requires addressing key gaps in infrastructure, talent, and data availability. Key Challenges Infrastructure Deficit : Training large-scale models requires significant computational resources (e.g., GPUs or TPUs). India's current infrastructure lags behind global leaders like the US and China. Initiatives like CDAC’s AIRAWAT supercomputer are steps in the right direction but need scaling up. Talent Shortage : While India has a large tech workforce (420,000 professionals), expertise in training large language models (LLMs) remains concentrated in a few institutions like IITs and IISc. Collaboration with global experts and targeted upskilling programs are necessary to bridge this gap. Data Limitations : High-quality datasets for Indian languages are scarce, limiting the ability to train effective multilingual models. Efforts like Bhashini have made progress but need expansion to include diverse domains such as agriculture, healthcare, and governance. From Lack of Policy Clarity to AI Safety Diplomacy The Indian government has positioned itself as a participant in the global conversation on artificial intelligence (AI) regulation, emphasising its leadership on equity issues relevant to the Global South while proposing governance frameworks for AI. However, these initiatives often appear inconsistent and lack coherence. For instance, in April 2023, former Minister of State of Electronics & Information Technology, Rajeev Chandrasekhar had asserted that India would not regulate AI, aiming to foster a pro-innovation environment. Yet, by June of the same year, he shifted his stance, advocating for regulations to mitigate potential harms to users. India’s approach to AI regulation is thereby fragmented and overreactive, with overlapping initiatives and strategies across various government entities. Now, we have to understand 2 important issues here. The AI-related approaches, and documents adopted by statutory bodies, regulators, constitutional authorities under Article 324 of the Indian Constitution and even non-tech ministries have some specificity and focus, as compared to MeiTY. For example, the Election Commission of India came up with an advisory on content provenance of any synthetic content/ deepfakes being used during election campaigning. 
Second, MeitY had been rushing its AI governance initiatives, as seen in the advisory it published on March 1, 2024, which was replaced by a subsequent advisory in the second week of that month. By October 2024, however, the Government of India had pivoted its approach to AI regulation in two ways: (1) the Principal Scientific Adviser's office took over major facets of the AI regulation policy conundrum; and (2) MeitY narrowed its AI regulation goals to AI safety, by considering options to develop an Artificial Intelligence Safety Institute. Unlike many law firms, chambers and think tanks whose discourse on India's AI regulation and data protection landscape has been misleading, the reality is simply that the Government of India keeps publishing key AI industry and policy/governance guidelines without showing any clear intent to regulate AI per se. There is no drive towards over-regulation or regulatory capture in the field of artificial intelligence. In fact, the Government has taken a patient approach, stating that key liability issues around certain mainstream AI applications (like deepfakes) can be addressed by tweaking existing criminal law instruments. Here is what MeitY Secretary S. Krishnan said at the Global AI Conclave in late November 2024:

"Deepfakes is not a regulatory issue, it is a technology issue. However we may need few tweaks in regulations, as opposed to a complete overhaul."

Another statement by the MeitY Secretary deserves appreciation:

"We need to allow some maturity to come into the market so that the industry doesn't get spooked that the government is stepping in and doing all kinds of things. It has to be need based. We are currently working on a voluntary code of conduct through the AI Safety Institute."

Intellectual Property (IP) Challenges: Wrappers and Foundational Models

The reliance on "wrappers", that is, deliverables or applications built on top of existing foundational AI models, raises significant intellectual property (IP) concerns. These challenges are particularly pronounced in the Indian context, where businesses and end-users face risks relating to copyright, trade secrets, and patentability.

Copyright Issues with AI-Generated Content

AI-generated content, often used in wrappers, presents a fundamental challenge to traditional copyright frameworks.

- Lack of ownership clarity: Determining ownership of AI-generated content is contentious. Does the copyright belong to the developer of the foundational model, the user providing input prompts, or the organisation deploying the wrapper? This ambiguity undermines enforceability.
- Attribution gaps: Foundational models are often trained on vast datasets without proper attribution. This creates potential liabilities for businesses using wrappers built on these models if outputs closely resemble copyrighted material.

These uncertainties make it difficult for Indian businesses to claim exclusive rights over wrapper-based deliverables, exposing them to potential legal disputes and economic risks.

Trade Secrets and Proprietary Model Risks

Trade secrets are critical for protecting proprietary algorithms, datasets, and other confidential information embedded within foundational models. However, wrappers built on these models face unique vulnerabilities:

- Reverse engineering: Competitors can potentially reverse-engineer wrappers to uncover proprietary algorithms or techniques from the foundational models they rely on, compromising the confidentiality essential for trade secret protection.
- Data security threats: Foundational models often retain input data for further training or optimisation. Wrappers that interface with such models risk exposing sensitive business data to unauthorised access or misuse.
- Algorithmic biases: Biases embedded in foundational models can inadvertently compromise trade secret protections by revealing patterns or vulnerabilities during audits or legal disputes.
- Insider threats: Employees with access to wrappers and the underlying foundational models might misuse confidential information, especially in industries with high turnover rates.

These risks are exacerbated by India's lack of a dedicated trade secret law; protection instead relies on common law principles and contractual instruments such as non-disclosure agreements (NDAs).

Patentability Challenges

The patent system in India poses significant hurdles for innovations involving foundational models and their wrappers, owing to restrictive interpretations under Section 3(k) of the Patents Act, 1970. Key challenges include:

- Subject matter exclusions: Algorithms and computational methods integral to foundational models are excluded from patent eligibility unless they demonstrate a "technical effect." This limits protection for innovations derived from these models.
- Inventorship dilemmas: Indian patent law requires inventors to be natural persons. This creates a legal vacuum when foundational models autonomously generate novel solutions integrated into wrappers.
- Global disparities: While jurisdictions like the U.S. and EU have begun adapting their patent frameworks for AI-related inventions, India's outdated approach discourages investment in foundational model R&D.
- Economic risks: Without clear patent protections, Indian businesses may struggle to attract funding for wrapper-based innovations that rely on foundational model advancements.

These challenges highlight the systemic barriers preventing Indian innovators from fully leveraging foundational models while protecting their intellectual property.

Negotiation Leverage Over Foundational Models

Indian businesses relying on foreign-owned foundational models face additional risks tied to access rights and licensing terms:

- Restrictive licensing agreements: Multinational corporations (MNCs) controlling foundational models often impose restrictive terms that limit customisation or repurposing by Indian businesses.
- Data ownership conflicts: Foundational models trained on Indian datasets may not grant reciprocal rights over outputs generated using those datasets, creating an asymmetry in value capture.
- Supply chain dependencies: Dependence on global digital value chains exposes Indian businesses to geopolitical risks, price hikes, or service disruptions that could affect access to critical AI infrastructure.

These legal-policy issues are critical and cannot be ignored by the Government of India, major Indian companies, emerging AI companies, research labs, or other market and technical stakeholders.

The Case for a CERN-Like Body for AI Research: Moving Beyond the "AI Garage"

India's current positioning as an "AI Garage" for developing and emerging economies, as outlined in its 2018 AI strategy, emphasises leveraging AI to solve practical, localised problems. While this approach has merit in addressing immediate societal challenges, it risks limiting India's role to that of an application developer rather than a leader in foundational AI research.
To truly establish itself as a global AI powerhouse, India must advocate for and participate in the creation of a CERN-like body for artificial intelligence research.

The Limitations of the "AI Garage" Approach

The "AI Garage" concept, promoted by NITI Aayog, envisions India as a hub for scalable and inclusive AI solutions tailored to the needs of emerging economies. While this aligns with India's socio-economic priorities, it inherently focuses on downstream applications rather than upstream foundational research. This approach creates several limitations:

- Dependence on foreign models: By focusing on adapting existing foundational models (developed by global tech giants like OpenAI or Google), India remains dependent on external technologies and infrastructure.
- Missed opportunities for leadership: The lack of investment in foundational R&D prevents India from contributing to groundbreaking advancements in AI, relegating it to a secondary role in the global AI value chain.
- Limited global influence: Without leadership in foundational research, India's ability to shape global AI norms, standards, and governance frameworks is diminished.

The Vision for a CERN-Like Body

A CERN-like body for AI research offers an alternative vision, one that emphasises international collaboration and foundational R&D. Gary Marcus, a prominent AI researcher and critic of current industry practices, has advocated for such an institution since 2017. He argues that many of AI's most pressing challenges, such as safety, ethics, and generalisation, are too complex for individual labs or profit-driven corporations to address effectively. A collaborative body modelled after CERN (the European Organization for Nuclear Research) could tackle these challenges by pooling resources, expertise, and data across nations. Key features of such a body include:

- Interdisciplinary collaboration: Bringing together experts from diverse fields such as computer science, neuroscience, ethics, and sociology to address multifaceted AI challenges.
- Open research: Ensuring that research outputs, datasets, and foundational architectures are publicly accessible to promote transparency and equitable benefits.
- Focus on public good: Prioritising projects that address global challenges, such as climate change, healthcare disparities, and education gaps, rather than narrow commercial interests.

Why India Needs to Lead or Participate

India is uniquely positioned to champion the establishment of a CERN-like body for AI due to its growing digital economy, vast talent pool, and leadership in multilateral initiatives like the Global Partnership on Artificial Intelligence (GPAI). However, if the United States remains reluctant to pursue such an initiative on a multilateral basis, India must explore partnerships with other nations such as the UAE or Singapore.

Strategic benefits for India:

- Anchor for foundational research: A CERN-like institution would provide India with access to cutting-edge research infrastructure and expertise.
- Trust-based partnerships: Collaborative research fosters trust among participating nations, creating opportunities for equitable technology sharing.
- Global influence: By playing a central role in such an initiative, India can shape global AI governance frameworks and standards.

Why the UAE or Singapore could be viable partners:

- The UAE has already demonstrated its commitment to becoming an AI leader through initiatives like its National Artificial Intelligence Strategy 2031; collaborating with India would align with its policy goals while providing access to India's talent pool.
- Singapore's focus on innovation-driven growth makes it another strong candidate for partnership. Its robust digital infrastructure complements India's strengths in data and software development.

The Need for Large-Scale Collaboration

As Gary Marcus has pointed out, current approaches to AI research are fragmented and often driven by secrecy and competition among private corporations. This model is ill-suited to addressing fundamental questions about AI safety, ethics, and generalisation. A CERN-like body would enable large-scale collaboration that no single nation or corporation could achieve alone. For example:

- AI safety: Developing frameworks to ensure that advanced AI systems operate reliably and ethically across diverse contexts.
- Generalisation: Moving beyond narrow, task-specific models toward systems capable of reasoning across domains.
- Equitable access: Ensuring that advancements in AI benefit all nations rather than being concentrated in a few tech hubs.

India's current "AI Garage" approach is insufficient if the country aims to transition from being a consumer of foundational models to a creator of them. Establishing or participating in a CERN-like body for AI research represents a transformative opportunity, not just for India but for the broader Global South.

The Case Against an All-Comprehensive AI Regulation in India

Having developed India's first privately proposed AI regulation, aiact.in, the author's experience and the feedback received from stakeholders have been that a sweeping, all-encompassing AI Act is not the right course of action for India at this moment. While the rapid advancement of AI demands regulatory attention, rushing into a comprehensive framework could lead to unintended consequences that stifle innovation and create more confusion than clarity.

Why an AI Act is Premature

India's AI ecosystem is still in its formative stages, with significant gaps in foundational research, infrastructure, and policy coherence. Introducing a broad AI Act now risks overregulating an industry that requires flexibility and room for growth. Moreover:

- Second-order effects: Feedback from the author's work on AI regulation highlights how poorly designed laws can have ripple effects on innovation, investment, and adoption. For example, overly stringent rules could discourage startups and SMEs from experimenting with AI solutions.
- Sectoral complexity: The diverse applications of AI, ranging from healthcare to finance, demand sector-specific approaches rather than one-size-fits-all regulation.

Recommendations on the Report on AI Governance Guidelines Development by IndiaAI (January 2025)

Part 1: Feedback on the AI Governance Principles, as proposed and stated to align with the efforts of the OECD, NITI Aayog and NASSCOM

Transparency: AI systems should be accompanied by meaningful information on their development, processes, capabilities and limitations, and should be interpretable and explainable, as appropriate. Users should know when they are dealing with AI.
Feedback: The focus on interpretability and explainability, qualified by appropriateness, is appreciated.

Accountability: Developers and deployers should take responsibility for the functioning and outcomes of AI systems and for the respect of user rights, the rule of law, and the above principles. Mechanisms should be in place to clarify accountability.
Feedback: The principle remains clear, with a pivotal focus on the rule of law and the respect of user rights, and is therefore appreciated.

Safety, reliability & robustness: AI systems should be developed, deployed and used in a safe, reliable, and robust way so that they are resilient to risks, errors, or inconsistencies, the scope for misuse and inappropriate use is reduced, and unintended or unexpected adverse outcomes are identified and mitigated. AI systems should be regularly monitored to ensure that they operate in accordance with their specifications and perform their intended functions.
Feedback: The acknowledgement of the link between an AI system's intended functions (or intended purpose) and specifications, and its safety, reliability and robustness considerations, is appreciated.

Privacy & security: AI systems should be developed, deployed and used in compliance with applicable data protection laws and in ways that respect users' privacy. Mechanisms should be in place to ensure data quality, data integrity, and 'security-by-design'.
Feedback: The term 'security-by-design' is an indirect reference, in spirit, to the security safeguards under the National Data Management Office framework in the IndiaAI Expert Group Report of October 2023 and under the Digital Personal Data Protection Act, 2023. This is also appreciated.

Fairness & non-discrimination: AI systems should be developed, deployed and used in ways that are fair and inclusive to and for all, and that do not discriminate or perpetuate biases or prejudices against, or preferences in favour of, individuals, communities, or groups.
Feedback: The principle's wording seems fine, but it could also have emphasised technical biases, and not just biases causing socio-economic or socio-technical disenfranchisement or partiality. Otherwise, the wording is reasonable.

Human-centred values & 'do no harm': AI systems should be subject to human oversight, judgment, and intervention, as appropriate, to prevent undue reliance on AI systems and to address complex ethical dilemmas that such systems may encounter. Mechanisms should be in place to respect the rule of law and mitigate adverse outcomes on society.
Feedback: The phrase 'do no harm' is the cornerstone of this principle, read in the context of what appropriate human oversight, judgment and intervention may be feasible. Since this is a human-centric AI principle, the reference to 'adverse outcomes on society' was expected, and is appreciated.

Inclusive & sustainable innovation: The development and deployment of AI systems should look to distribute the benefits of innovation equitably. AI systems should be used to pursue beneficial outcomes for all and to deliver on sustainable development goals.
Feedback: The distributive aspect of innovation benefits in the development and deployment of AI systems may be agreed upon as a generic international law principle to promote AI safety research and evidence-based policymaking and diplomacy.

Digital by design governance: The governance of AI systems should leverage digital technologies to rethink and re-engineer systems and processes for governance, regulation, and compliance, and to adopt appropriate technological and techno-legal measures, as may be necessary, to effectively operationalise these principles and to enable compliance with applicable law.
Feedback: This principle suggests that AI governance should, by default, be driven by digital technologies by virtue of leverage.
It is therefore recommended that the reference to "technological and techno-legal measures" not be read as a solution-obsessed AI governance measure. Compliance technologies cannot replace technology law adjudication and human autonomy; despite the use of the word 'appropriate', the reference to these 'measures' reads as solution-obsessed, and an obsession with solution-centricity does not, by default, address issues of technology maintenance and efficiency. The intent to promote 'leverage' and 'rethink' should perhaps be treated as a consultative aspect of the principle, not a fundamental one.

Part B: Feedback on Considerations to Operationalise the Principles

Examining AI systems using a lifecycle approach

While governance efforts should keep the lifecycle approach as an initial consideration, the ecosystem view of AI actors should take precedence over all other considerations in operationalising the principles. The lifecycle approach is best treated as technical and management-centric, concerned with best practices rather than policy. As stated in the document itself, the lifecycle approach should merely be considered 'useful', and nothing more. The justification of 'Diffusion' as a stage is unclear, since "examining the implications of multiple AI systems being widely deployed and used across multiple sectors and domains" should by default be specific to the intended purpose of AI systems and technologies; some tools or applications might not have a multi-sectoral effect. 'Diffusion', by virtue of its meaning, should therefore be considered an add-on stage. In fact, the reference to Diffusion would only suit General Purpose AI systems, if we take the OECD and European Union definitions of General Purpose AI systems into consideration. It should either be an add-on phase or be tied directly to how the intended purpose of AI systems is relied upon; otherwise, the reference to this phase may end up justifying regulatory capture.

Taking an ecosystem-view of AI actors

The listing of actors, as a matter of exemplification in the context of foundation models, seems appropriate. However, in the phrase "distribution of responsibilities and liabilities", the term distribution should be replaced with apportionment, and the phrase "responsibilities and liabilities" should be replaced with accountability and responsibility only. The expanded legal reasoning is as follows:

- Precision in terminology: The term distribution implies an equal or arbitrary allocation of roles and duties, which may not accurately reflect the nuanced and context-specific nature of governance within an AI ecosystem. Apportionment, on the other hand, suggests a deliberate and proportional assignment of accountability and responsibility based on the roles, actions, and influence of each actor within the ecosystem. This distinction is critical to ensuring that governance frameworks are equitable and just.
- Legal clarity on liability: The inclusion of liabilities in the phrase "responsibilities and liabilities" introduces ambiguity, as liability is not something that can be arbitrarily allocated or negotiated among actors. Liability regimes are determined by statutory provisions and case law precedents, which rest on existing legal frameworks rather than ecosystem-specific governance agreements. Retaining accountability and responsibility, terms that are more flexible and operational within governance frameworks, avoids conflating governance mechanisms with legal adjudication.
- Role of statutory law: Liability pertains to legal consequences that arise from breaches or harms, which are adjudicated on the basis of statutory law or judicial interpretation. Governance frameworks cannot override or pre-empt these legal determinations; they can only clarify roles to ensure compliance with existing laws. By focusing on accountability and responsibility, the framework achieves operational clarity without encroaching upon the domain of statutory liability.
- Ecosystem-specific context: Foundation models involve multiple actors (developers, deployers, users, regulators), each with distinct roles. Using terms like apportionment emphasises a tailored approach in which accountability is assigned proportionally, based on each actor's influence over outcomes, rather than a blanket distribution that risks oversimplification.
- Avoiding overreach: Governance frameworks should aim to clarify operational responsibilities without overstepping into areas governed by statutory liability regimes. This ensures that such frameworks remain practical tools for managing accountability while respecting the boundaries set by law.

Leveraging technology for governance

The conceptual approach outlined in the report, while ambitious and forward-looking, suffers from several critical weaknesses that undermine its practicality and coherence:

- Overly broad and linguistically ambiguous: The conceptual approach is excessively verbose and broad, which dilutes its clarity and focus. The lack of precision in defining key terms, such as "techno-legal approach," creates interpretative ambiguity. This undermines its utility as a concrete governance framework and risks it being dismissed as overly theoretical or impractical.
- Misplaced assumption about techno-legal governance: The assertion that a "complex ecosystem of AI models, systems, and actors" necessitates a techno-legal approach is unsubstantiated. The government must clarify whether this approach is a solution-obsessed strategy focused on leveraging compliance technology as a panacea for governance challenges. If so, it is deeply flawed: it assumes technology alone can address systemic issues without adequately considering the socio-political and institutional capacities required for effective implementation, and it risks prioritising tools over outcomes, producing a governance framework that is reactive rather than adaptive.
- Counterproductive supplementation of legal regimes: Merely supplementing legal and regulatory regimes with "appropriate technology layers" is insufficient to oversee or promote growth in a rapidly expanding AI ecosystem. This mirrors the whole-of-government approach proposed under the Digital India Act (of March 2023) but lacks the necessary emphasis on capacity-building among legal and regulatory institutions. Without consistent capacity-building efforts, the techno-legal approach remains an abstract concept with no actionable pathway and risks becoming verbal gibberish that fails to translate into meaningful governance outcomes.
- Practical flaws in risk mitigation and compliance automation: While using technology to mitigate risk, manage scale, and automate compliance appears conceptually appealing, it is fraught with practical challenges. Risk estimation, scale estimation, and compliance objectives are not standardised across sectors or jurisdictions in India, and most government agencies, regulatory bodies, and judicial authorities lack the technical expertise to understand or implement automation effectively. There is no settled legal or technical framework to support AI safety measures in a techno-legal manner. This undermines evidence-based policymaking and contradicts the ecosystem view of AI actors by failing to account for their diverse roles and responsibilities. As a result, this aspect of the conceptual approach is far-fetched and impractical in India's current regulatory landscape.
- Unrealistic assumptions about liability allocation: The proposal to use technology for allocating regulatory obligations and apportioning liability across the AI value chain is premature and lacks empirical support. According to the Ada Lovelace Institute, assigning responsibilities and liabilities in AI supply chains is particularly challenging due to the novelty, complexity, and rapid evolution of AI systems. Further, no country has successfully implemented such a system, not even technologically advanced nations like the US, China, or the UAE; the availability of sufficient data and content to process such allocations remains uncertain; and terms like "lightweight but gradually scalable" are misleading in the context of India's complex socio-economic fabric. Without robust evidence-based policymaking or AI safety research, this proposal risks being overly optimistic and disconnected from ground realities.
- Overreach in proposing tech artefacts for liability chains: The suggestion to create tech artefacts, akin to consent artefacts, for establishing liability chains is problematic on multiple fronts. No jurisdiction has achieved such standardisation or automation of liability allocation. Automating or augmenting liability apportionment without clear standards violates principles of natural justice by potentially assigning liability arbitrarily. The idea of "spreading and distributing liability" among participants oversimplifies complex legal doctrines and undermines rule-of-law principles; liability cannot be treated as a quantifiable commodity subject to technological augmentation. This aspect reflects an overreach that fails to consider the legal complexities involved in enforcing liability chains.
- Assumptions about objective technical assistance: The proposal to extend "appropriate technical assistance" assumes that the techno-legal approach is inherently objective. However, the lack of standardisation in techniques undermines this assumption, and without clear benchmarks or criteria for what constitutes "appropriate" assistance, this aspect becomes self-referential and overly broad. While there may be merit in leveraging technical assistance for governance purposes, the need for standardisation must be acknowledged before implementation.

Part C: Feedback on Gap Analysis

The need to enable effective compliance and enforcement of existing laws

On deepfakes, the report is right to note that existing laws should be effectively enforced. While the justification for adopting technological measures is reasonable, an overreliance on watermarking techniques may not be feasible:

- Watermarking vulnerabilities: As highlighted in the 2024 Seoul AI Summit scientific report, watermarks can be removed or tampered with by malicious actors, rendering them unreliable as a sole mechanism for deepfake detection. This limits their effectiveness in ensuring accountability across the lifecycle of AI-generated content.
Scalability Issues : Implementing immutable identities for all participants in a global AI ecosystem would require unprecedented levels of standardisation and cooperation. Such a system is unlikely to function effectively without robust international agreements, which currently remain fragmented. Making deepfake detection methods open-source, and continually improving the human and technical forensic methods used to detect AI-generated content, whether multi-modal or not, should be the first step. This approach also strengthens the methods relied upon for content provenance. Cyber security The reference to cybersecurity law obligations in the report is accurate. However, the report places too little emphasis on cybersecurity measures, which is disappointing. Intellectual property rights Training models on copyrighted data and liability in case of infringement: The report fails to provide any definitive position on copyright aspects and leaves them as mere questions, which is again disappointing. AI-led bias and discrimination The report's wording on the 'notion of bias' is commendable for its clarity and precision in addressing the nuanced issue of bias in decision-making. By emphasizing that only biases that are legally or socially prohibited require protection, it avoids the unrealistic expectation of eliminating all forms of bias, which may not always be harmful or relevant. Part D: Feedback on Recommendations To implement a whole-of-government approach to AI Governance, MeitY and the Principal Scientific Adviser should establish an empowered mechanism to coordinate AI Governance The proposed empowered mechanism of a Committee or Group must take into consideration that, without sufficient capacity-building measures, a steering-committee-like approach to enabling a whole-of-government approach will be merely solution-obsessed and unhelpful. Unless the capacity-building mandate to anticipate, address and understand what limited form of AI governance is needed for important priority areas, such as incident response, liability issues, or accountability, is backed by sufficiently settled legal, administrative and policy positions, the Committee / Group would not be productive or helpful enough to promote a whole-of-government approach. To build evidence on actual risks and to inform harm mitigation, the Technical Secretariat should establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes. The approach rightly notes that AI incidents extend beyond cybersecurity issues, encompassing discriminatory outcomes, system failures, and emergent behaviors. However, this broad scope risks making the database unwieldy without a clear taxonomy or prioritisation of incident types. Learning from the AIID, which encountered challenges in indexing 750+ incidents due to structural ambiguities and epistemic uncertainty, it is crucial to define clear categories and reporting triggers to avoid overwhelming the system with disparate data. Encouraging voluntary reporting by private entities is a positive step but may face low participation due to concerns about confidentiality and reputational risks. The OECD emphasises that reporting systems must ensure secure handling of sensitive information while minimising transaction costs for contributors. 
The proposal could benefit from hybrid reporting models (mandatory for high-risk sectors and voluntary for others), as suggested by the OECD's multi-tiered approach. The proposal assumes that public sector organizations can effectively report incidents during the initial stages. However, as noted in multiple studies, most organisations lack the technical expertise or resources to identify and document AI-specific failures comprehensively. Capacity-building initiatives for both public and private stakeholders should be prioritised before expecting meaningful contributions to the database. The proposal correctly identifies the need for an evidence base to inform governance initiatives. Incorporating automated analysis tools (e.g., taxonomy-based clustering or root cause analysis) could enhance the database's utility for identifying patterns and informing policy decisions. Conclusion While the pursuit of becoming a "use case capital" has merit in addressing immediate societal challenges and fostering innovation, it should not overshadow the imperative of developing indigenous AI capabilities. The Dual Imperative India must maintain its momentum in developing AI applications while simultaneously building capacity for foundational model training and research: immediate societal needs are addressed through practical AI applications; long-term technological sovereignty is preserved through indigenous development; and research capabilities evolve beyond mere application development. Beyond the Hype The discourse around AI in India must shift from sensationalism to substantive discussions about research, development, and ethical considerations. As emphasized in the Durgapur AI Principles, this includes: promoting evidence-based discussions over speculative claims; fostering collaboration between academia, industry, and government; emphasizing responsible innovation that considers societal impact; and building trust-based partnerships both domestically and internationally. The future of AI in India depends not on chasing headlines or following global trends blindly, but on cultivating a robust ecosystem that balances practical applications with fundamental research.
- When AI Expertise Meets AI Embarrassment: A Stanford Professor's Costly Citation Affair
In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI. The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice. Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less." The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice. Hence, this legal-policy analysis delves into the incident and considers how, as one of many similar incidents, it can keep us cautious about the way we approach AI-related evidence law considerations. The Ironic Incident Figure 1: An excerpt from the Order in Kohls v. Ellison. The deepfake-related lawsuit in Minnesota took an unexpected turn with the filing of two expert declarations—one from Professor Jevin West and another from Professor Jeff Hancock—for Attorney General Keith Ellison in opposition to a motion for a preliminary injunction. As noted, "[t]he declarations generally offer background about artificial intelligence ("AI"), deepfakes, and the dangers of deepfakes to free speech and democracy". Plaintiffs promptly moved to exclude both declarations, asserting they were "conclusory and contradicted by the experts' prior writings". However, the controversy escalated when it was revealed that the Hancock Declaration contained fabricated references, courtesy of an inadvertent reliance on GPT-4o. According to the record, "[a]fter reviewing Plaintiffs' motion to exclude, Attorney General Ellison's office contacted Professor Hancock, who subsequently admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article". As explained, "These errors apparently originated from Professor Hancock's use of GPT-4o—a generative AI tool—in drafting his declaration. GPT-4o provided Professor Hancock with fake citations to academic articles, which Professor Hancock failed to verify before including them in his declaration". Although Professor Hancock offered a "detailed explanation of his drafting process" and stated "he stands by the substantive propositions in his declaration", the court could not overlook the severity of filing "a declaration made under penalty of perjury with fake citations". Plaintiffs further argued that "the fake citations in the Hancock Declaration taint the entirety of Professor Hancock's opinions and render any opinion by him inadmissible". The Court's Scathing Response In evaluating both declarations, the court first observed that it would assess the "competence, personal knowledge and credibility" of each submission rather than apply a full Daubert analysis at the preliminary-injunction stage. 
Regarding the West Declaration, the court found that while Plaintiffs deemed it “conclusory” , its overall “competence, personal knowledge and credibility” made it admissible for the limited purpose of preliminary-injunction proceedings. Moreover, the court noted that “an expert may not testify as to whether ‘a legal standard has been met,’ [but] … may offer his opinion as to facts that, if found, would support a conclusion that the legal standard at issue was satisfied” —a standard the West Declaration satisfied. Additionally, countering deepfakes through counter speech was treated as “a fact relevant to the ultimate legal inquiry,” not as “a legal standard” . In stark contrast stood the Hancock Declaration. Labelling it “particularly troubling” , the court underscored the “irony” that a “credentialed expert on the dangers of AI and misinformation” had “fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less” . The court stressed that, regardless of whether the mistakes were innocent, “the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations” . This lapse in judgment, especially from someone who “typically validates citations with a reference software when he writes academic articles” but failed to do so here, “shatters his credibility” before the court. Beyond the specific repercussions for Professor Hancock, the court’s admonition carried broader implications. It reminded counsel of the “personal, nondelegable responsibility” under Rule 11 of the US Federal Rules of Civil Procedure to “validate the truth and legal reasonableness of the papers filed” in any action and proclaimed the now-familiar warning that attorneys and experts must “verify AI-generated content in legal submissions!” . Ultimately, the court excluded Professor Hancock’s testimony in its entirety for the preliminary-injunction analysis, emphasizing that “signing a declaration under penalty of perjury” should never be treated as a “mere formality” . Instead, reliability and trustworthiness remain paramount for the judicial process, as “citing to fake sources imposes many harms, including ‘wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system’” . Legal and Professional Implications The court’s order in Kohls v. Ellison emphasises that the submission of expert declarations under penalty of perjury must remain a solemn undertaking, and that “citing to fake sources imposes many harms, including ‘wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few).’” As a result, professionals in both the legal and academic realms are under increasing pressure to ensure all cited materials—especially those derived from AI tools—are thoroughly verified to avoid undermining credibility. Impact on Expert Witness Credibility Expert witnesses play a pivotal role by helping courts grasp technically complex or scientific issues, yet “Professor Hancock’s citation to fake, AI-generated sources in his declaration…shatters his credibility with this Court.” Even if the remaining portions of an expert’s testimony carry legitimate insights, any inclusion of unreliable citations can effectively negate their value. Because trust is paramount, courts are inclined to exclude testimony once credibility is compromised. 
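One way to act on that warning in practice is to automate part of the cross-check: a cited DOI either resolves to a real bibliographic record or it does not. The sketch below is purely illustrative and not drawn from the court's order; it queries the public Crossref REST API to flag DOIs that cannot be confirmed, assuming Python with the requests library. The function names and example DOIs are hypothetical, and a failed lookup signals only that manual review is needed, since many genuine sources have no DOI at all.

```python
# Illustrative sketch: flag cited DOIs that do not resolve to a record in the
# public Crossref REST API (https://api.crossref.org). The example DOIs are
# placeholders; unresolved DOIs are flagged for manual review, not declared fake.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref returns a record for the DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

def flag_unverified(dois: list[str]) -> list[str]:
    """Return the DOIs that could not be confirmed and therefore need human checking."""
    return [doi for doi in dois if not doi_resolves(doi)]

if __name__ == "__main__":
    cited_dois = [
        "10.1038/s41586-020-2649-2",      # a real article DOI
        "10.9999/placeholder.fake.2024",  # a made-up DOI for demonstration
    ]
    for doi in flag_unverified(cited_dois):
        print(f"Could not verify DOI {doi} - review the source manually.")
```

Such a check cannot prove that a citation is genuine; it can only surface candidates for the human verification that Rule 11 and the court's order ultimately demand.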
Responsibilities while Filing in a Court Rule 11 of the US Federal Rules of Civil Procedure places a “personal, nondelegable responsibility” on attorneys to verify that filings are factually and legally sound. This means counsel must conduct a “reasonable inquiry under the circumstances” before submitting an expert’s declaration. When AI tools like GPT-4o are used in drafting, attorneys must confirm that witnesses have checked the authenticity of any references generated by AI, ensuring the final declarations are not contaminated by fictitious sources. Verification Requirements for AI-Generated Content In granting in part the motion to exclude the expert declarations, the court “adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!” This clarion call underscores the need for robust protocols—such as cross-referencing AI-furnished citations in academic databases—to guard against fabricated content that may slip into legal filings. Evidence Law in the AI Era Trust Mechanisms in Legal Proceedings Declarations under penalty of perjury benefit from “indicia of truthfulness” that courts rely on to assess credibility. Where AI is involved, these traditional trust mechanisms are tested: the chain of verifying the accuracy and reliability of reference materials is complicated when the “author” is a software algorithm, rather than a person who can be cross-examined. Challenges to Traditional Evidence Standards Traditional evidence rules—rooted in witness testimony, document authentication, and cross-examination—did not anticipate a party’s reliance on AI-generated documents containing fabricated citations or invented studies. As a result, courts face unprecedented procedural stress, such as how to handle novel forms of “fictitious” or “hallucinatory” references inserted by AI tools. Need for New Verification Protocols As AI integration into legal practice continues to expand, judges, lawyers, and experts must develop fresh or updated reference-checking procedures. The court admonished that “the Court should be able to trust the ‘indicia of truthfulness’ that declarations made under penalty of perjury carry, but that trust was broken here.” This highlights the urgent need for multi-step validation protocols, peer reviews, and external verification software to safeguard procedural integrity. Future Guidelines and Best Practices Proposed Standards for AI Use in Legal Documents Going forward, courts may set baseline requirements for the use of AI, such as compulsory disclosure of AI assistance and mandatory cross-checking of citations. The order in Kohls v. Ellison suggests that attorneys should explicitly ask their experts if they have relied on AI—and if so, what steps they took to verify any AI-generated content. Failure to comply could invite adverse rulings and sanctions. Expert Witness Responsibilities Expert witnesses, especially those testifying on topics like deepfakes and AI, owe heightened duties of diligence. As the court stated, “One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles.” Experts must now demonstrate to judges that they mitigate the risk of AI “hallucinations” by employing reference management systems and meticulously confirming each cited source. Attorney Due Diligence Requirements For attorneys, professional accountability includes overseeing the drafting process to confirm that any AI-derived references have been verified. 
This may involve: Asking explicit questions about the extent of AI usage. Insisting on documentary proof (e.g., PDF copies, DOIs, or credible database links) for each citation. Staying abreast of emerging precedential guidelines, as courts progressively adapt their evidentiary rules to the realities of AI. Broader Implications for Courts in India and the Indo-Pacific Courts across India and the wider Indo-Pacific region increasingly face legal submissions involving AI-generated content, compelling them to define verification standards for evidence. India’s recent judicial initiatives—including digitisation drives and pilot AI projects—mirror broader Asia-Pacific trends, where lawmakers and courts debate how best to balance innovation with accuracy and due process. In particular, concerns about fabricated citations resonate with legislative and judicial authorities seeking to maintain public trust in the judicial process while welcoming AI’s potential for legal research and document automation. Impact on Academic-Legal Collaborations Such incidents place academic-legal collaborations under scrutiny, pushing universities to mandate tighter guidelines when faculty serve as expert witnesses. Enhanced training on AI’s limitations—and transparent disclaimers about the reliability of AI-derived sources—can mitigate risks while preserving fruitful collaboration. Trust in Expert Testimony Courts traditionally extend professional deference to expert witnesses, presuming their diligence in source verification. Yet the discovery of bogus citations “shatters credibility” and erodes the trust integral to expert testimony. This dynamic is prompting courts to demand detailed methodologies from experts, including any use of AI. For instance, requiring supplemental affidavits attesting to source verification can help restore the damaged “indicia of truthfulness” once guaranteed by penalty-of-perjury statements. Indian Evidence Law Framework The Bharatiya Sakshya Adhiniyam (BSA) has introduced Section 61, which explicitly recognises electronic and digital records, treating them on par with documentary evidence. The BSA's Section 63 expands the scope to include electronic records in semiconductor memories, alongside traditional paper and optical/magnetic media storage. For AI-generated content to be admissible as evidence, it must satisfy stringent authentication requirements. Under the BSA, such content would likely be classified as 'digital' or 'electronic evidence,' requiring authentication through certificates signed by both the person in charge of the computer/communication device and an expert. Authentication Challenges The multi-contributor nature of AI systems presents unique verification challenges: multiple persons are involved in data collation, model training, and testing; complex self-learning algorithms make certification cumbersome; and the functioning of advanced AI systems, especially those involving deep learning, is difficult to explain. Singapore Supreme Court’s AI Guidelines The Supreme Court of Singapore issued comprehensive guidelines for generative AI use in court proceedings through Registrar's Circular No. 1 of 2024, effective October 1, 2024. 
Key provisions include the following. Core principles: the circular maintains a "neutral stance" on GenAI tools; treats AI as a tool, with users bearing full responsibility; and requires no pre-emptive declaration unless specifically questioned. Specific requirements on document preparation: AI can be used for drafting but not for generating evidence; all content must be fact-checked and independently verified; and references must be authenticated using trusted sources. Verification protocol: users must verify citations through official channels; cannot use one AI tool to verify another's output; and must be prepared to identify AI-generated portions. Professional responsibility: lawyers retain their professional obligations; self-represented persons bear responsibility for accuracy; and violations may result in costs orders or other sanctions. Future Implications for Legal Systems Evidence Authentication Systems Courts must develop comprehensive verification protocols for AI-generated content. This includes establishing chain-of-custody requirements specific to AI outputs and implementing multi-step validation processes. The Singapore Supreme Court's approach provides a model, requiring users to "independently verify AI-generated content" and authenticate "references through official channels". Such systems should incorporate both technical and procedural safeguards, moving beyond simple human verification to include specialised software tools and expert review processes. Professional Standards Evolution The legal profession must adapt its ethical guidelines and practice standards. As demonstrated in Kohls v. Ellison, even experienced professionals can fall prey to AI hallucinations, necessitating new professional responsibilities. Legal practitioners must now implement specific verification protocols for AI-generated content, including documentation requirements: mandatory disclosure of AI use in legal submissions; detailed records of the verification methods employed; and clear attribution of human oversight and responsibility. Verification Protocols Practitioners must establish robust systems that go beyond traditional citation checking. This includes using specialized software for AI content detection and maintaining clear documentation of AI use in legal submissions. Judicial Framework Development Courts must establish clear standards in two areas. Evidence Admissibility The current approach to AI-generated evidence remains fragmented. Courts need standardised criteria for evaluating the reliability and authenticity of AI-generated content, including technical standards for validating AI outputs, requirements for expert testimony regarding AI systems, and clear protocols for challenging AI-generated content. Expert Testimony Guidelines The Minnesota case demonstrates the need for updated standards governing expert testimony about AI systems. Courts must establish qualification requirements for AI experts, standards for validating AI-generated research, and protocols for verifying AI-assisted expert declarations. These changes require a fundamental shift in how legal systems approach technology-generated evidence, moving beyond traditional authentication methods to embrace new technical and procedural safeguards. As Judge Provinzino noted, "when attorneys and experts abdicate their independent judgment and critical thinking skills in favour of ready-made, AI-generated answers, the quality of our legal profession and the Court's decisional process suffer”. List of Quotes from Kohls v. 
Ellison “The declarations generally offer background about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy. ECF No. 23 ¶¶ 7–32; ECF No. 24 ¶¶ 7–23.” “Plaintiffs moved to exclude these declarations, arguing that they are conclusory and contradicted by the experts’ prior writings.” “After reviewing Plaintiffs’ motion to exclude, Attorney General Ellison’s office contacted Professor Hancock, who subsequently admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article. ECF No. 37 at 3–4. These errors apparently originated from Professor Hancock’s use of GPT-4o—a generative AI tool—in drafting his declaration. ECF No. 39 ¶¶ 11, 21. GPT-4o provided Professor Hancock with fake citations to academic articles, which Professor Hancock failed to verify before including them in his declaration. Id. ¶¶ 12–14.” “Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim… the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations.” “Plaintiffs continue to maintain that the fake citations in the Hancock Declaration taint the entirety of Professor Hancock’s opinions and render any opinion by him inadmissible. ECF No. 44 at 8–9.” “Rather, the Court will evaluate the ‘competence, personal knowledge and credibility’ of the West and Hancock Declarations.” “As for the West Declaration, Plaintiffs argue that it is conclusory because it lacks a reliable methodology under Daubert. ECF No. 30 at 17–21… the Court is satisfied that the ‘competence, personal knowledge and credibility’ of Professor West’s testimony weigh in favor of admitting his declaration at this early stage.” “Although an expert may not testify as to whether ‘a legal standard has been met,’ an expert ‘may offer his opinion as to facts that, if found, would support a conclusion that the legal standard at issue was satisfied.’” “But whether counterspeech is effective in combatting deepfakes is not a legal standard; rather, it is a fact relevant to the ultimate legal inquiry here: the First Amendment means-fit analysis.” “Professor Hancock… has fallen victim to the siren call… shatters his credibility with this Court… verify AI-generated content in legal submissions!” “Signing a declaration under penalty of perjury is not a mere formality; rather, it ‘alert[s] declarants to the gravity of their undertaking and thereby have a meaningful effect on truth-telling and reliability.’” “Moreover, citing to fake sources imposes many harms… Courts therefore do not, and should not, ‘make allowances for a [party] who cites to fake, nonexistent, misleading authorities.’”
- Technology Law is NOT Legal-Tech: Why They’re Not the Same (and Why It Matters)
Created using Luma AI. Technology has changed the way we communicate, transact, and live our daily lives. It’s no wonder that law and technology have increasingly converged into two distinct but often confused areas: Tech-Legal and Legal-Tech . This quick explainer clears the air on what each term means, why mixing them up creates chaos, and how to get them right. The Basic Difference Tech-Legal focuses on the legal frameworks, policies, and governance of technology itself. It deals with questions like: How should AI be regulated? What legal boundaries apply to blockchain or cross-border data flows? Think of Tech-Legal as creating the rules of the game for emerging technologies. Legal-Tech , on the other hand, is about using technology to improve or automate legal services and processes. It addresses questions like: How do we efficiently manage legal cases online? Can we use AI to review contracts faster? Legal-Tech is essentially playing the game better by adopting new tools to streamline legal work. Diving into Tech-Legal What It Involves Tech-Legal is the domain that deals with legal frameworks, policies, and governance surrounding technology. It includes how societies, governments, and regulatory bodies respond to the rise of AI, data protection challenges, digital sovereignty, and cross-border transactions. The primary question here is: How do we craft standards, laws, and regulations for new and disruptive technologies? Key Areas AI Regulation & Policy: Crafting guidelines for AI’s ethical use, liability frameworks for AI-driven decisions, and classification approaches like “AI as a Juristic Entity” or “AI as a Subject”. Digital Sovereignty & Data Protection: Ensuring national laws reflect local interests (e.g., Permeable Indigeneity in Policy) while also complying with global data protection standards. Governance & Enforcement: Establishing legal mechanisms for oversight of algorithmic activities and operations, including the concept of “Manifest Availability,” which recognises AI’s tangible influence in real-world applications. International Technology Law: Addressing cross-border technology disputes through treaties, agreements, and new fields like “International Algorithmic Law”. Why It Matters Tech-Legal shapes the rules of engagement for emerging technologies. Without well-defined policies, we risk unregulated AI deployments, data misuse, and infringement of human rights. A robust Tech-Legal framework helps countries and corporations innovate responsibly, minimise legal uncertainties, and protect public interests. Diving into Legal-Tech What It Involves Legal-Tech is about leveraging technology to improve or automate legal services . It focuses on tools and solutions aimed at making legal work more efficient, reducing costs, and streamlining traditional processes. Key Areas Contract Automation: Utilizing AI-driven platforms to draft, review, and manage legal documents quickly and accurately. E-Discovery Tools: Searching, sorting, and analyzing vast sets of digital documents to accelerate litigation. Case & Practice Management: Using software dashboards to track deadlines, client communications, and billing seamlessly. AI-Driven Research: Employing natural language processing to find relevant case law, statutes, and precedents in seconds. Why It Matters Legal-Tech modernizes how lawyers and law firms operate. 
By adopting these tools, legal practitioners can reduce manual workloads, focus on higher-value tasks like complex litigation or negotiations, and ultimately offer faster, more cost-effective services to clients. Why Mixing Them Up Is Dangerous Overlap vs. Confusion While Tech-Legal and Legal-Tech can intersect, confusing the two can create significant practical and policy problems. Simply put, a Legal-Tech expert skilled at automating law firm tasks might not have the specialized knowledge to shape AI regulations or digital sovereignty laws. Conversely, a Tech-Legal professional focused on big-picture governance may overlook the technical needs of day-to-day legal practice automation. Risks of Confusion Policy Gaps: Regulatory frameworks could lack necessary depth if based on tools rather than governance principles. Misallocated Resources: Law firms might invest in AI policy consultants instead of practical automation solutions, or vice versa. Non-Compliance Hazards: Missing the nuances of AI’s “manifest availability” or “SOTP classification” can result in failing to adhere to emerging legal standards. Stunted Innovation: If teams expect Legal-Tech solutions to solve Tech-Legal governance challenges, genuine regulatory and ethical questions remain unanswered. Real-World Consequences Hiring Mismatches Bringing in a contract automation specialist to advise on cross-border AI liability leads to superficial policy coverage. Relying on a Tech-Legal scholar for in-house document review tools could produce inefficient legal practice outcomes. Misaligned Tools Implementing advanced e-discovery software when the actual need is a broader governance framework for AI-based data analysis. Investing in “Zero Knowledge Systems” for privacy compliance when the real challenge is a lack of policy clarity on data-sharing standards. Compliance Nightmares Mistaking “Zero Knowledge Taxes” (a futuristic policy concept) for standard tax compliance tools can cause misunderstandings of how to prove tax payments without disclosing income details. Failing to consider “Privacy by Default” or “Technology by Design” in your AI system could run afoul of new digital regulations. The Cost of Confusion Business & Policy Fallout Regulatory Non-Compliance: Firms operating AI solutions or generative AI applications (e.g., text-based tools, image generators) risk running into penal actions if they do not meet Tech-Legal standards. Resource Wastage: Expensive software implementations might fail to resolve governance or liability issues, leading to sunk costs. Missed Opportunities: Startups or enterprises may lose competitive advantage by ignoring advanced Legal-Tech tools that streamline daily operations. Eroded Trust: Governments may appear unprepared for emergent technologies if they confuse Legal-Tech with Tech-Legal, undermining public confidence. Importance of Distinguishing Understanding the specific goals of each domain ensures the right expertise is deployed, the proper strategies are adopted, and both legal innovation and responsible governance thrive. How to Get It Right Define Your Need : Are you seeking legal frameworks for emerging tech, or do you want to optimize daily legal tasks? Hire the Right Expertise : Tech-Legal specialists handle governance and regulatory policy. Legal-Tech professionals tailor software and automation for legal work. Engage Different Stakeholders : Regulators, policymakers, and technology experts for Tech-Legal; IT professionals and law firm managers for Legal-Tech. 
Aim for Different Outcomes : Tech-Legal aims to shape the broader digital landscape, while Legal-Tech refines the way law is practiced and delivered. Moving Forward in a Digital World As technology evolves, Tech-Legal will continue to grow in importance, shaping national and international legal frameworks for everything from data privacy to AI governance. Legal-Tech will keep transforming how lawyers and firms operate, making legal practice more efficient, accurate, and accessible. By understanding these two domains as separate but equally vital aspects of our modern legal system, businesses, policymakers, and legal professionals can collaborate more effectively, ensuring that technology is both well-governed and well-leveraged. In Summary : Tech-Legal is all about policy, governance, and frameworks around technology. Legal-Tech is about using technology to enhance how legal services are delivered. Know the difference, hire the right experts, invest in the right tools, and navigate the digital age with confidence. You may like going through this slide to understand the tech law vs legal tech differences in an interactive way.
- India's Draft Digital Personal Data Protection Rules, 2025, Explained
Sanad Arora, Principal Researcher, is the co-author of this Insight. The Draft Digital Personal Data Protection (DPDP) Rules, released on January 3, 2025, represent an essential step towards making personal data protection in the digital age simpler. These rules aim to enhance the protection of personal data while addressing the challenges posed by emerging technologies, particularly artificial intelligence (AI). As AI continues to evolve and integrate into various sectors, ensuring that its deployment aligns with ethical standards and legal requirements is paramount. The DPDP rules seek to create a balanced environment that fosters innovation while safeguarding individual privacy rights. Figure 1: Draft DPDP Rules (January 3, 2025 version, explained and visualised). This chart was meticulously created by Abhivardhan and Sanad Arora as a part of the explainer. Download the chart below for free. Overview of Key Rules Notice Requirements (Rule 3) Data Fiduciaries must provide clear, comprehensible notices to Data Principals about their personal data processing. Notices must include: An itemized description of the personal data being processed. The specific purposes for processing. Information about the services or goods related to the processing. Consent Manager Registration (Rule 4) Consent Managers must apply to the Data Protection Board for registration and comply with specified obligations. They serve as intermediaries between Data Principals and Data Fiduciaries, facilitating consent management. Data Processing by Government (Rule 5) This rule governs how government authorities process personal data when issuing benefits or licenses. Examples include issuing driving licenses and subsidies. Security Safeguards (Rule 6) Data Fiduciaries are required to implement reasonable security measures to protect personal data, such as encryption, access controls, and monitoring for unauthorized access. Specific challenges for Micro, Small, and Medium Enterprises (MSMEs) regarding cybersecurity are highlighted. Breach Notification (Rule 7) In case of a data breach, Data Fiduciaries must inform affected Data Principals and the Data Protection Board within specified timeframes. Notifications must detail the nature, extent, timing, and potential impacts of the breach. Data Retention and Erasure (Rule 8) Personal data must be erased if not engaged with by the Data Principal within a specified timeframe. Notification of impending erasure must be provided at least 48 hours in advance. Additional Provisions Rights of Data Principals (Rule 13) Data Principals can request access to their personal data and its erasure. Consent Managers must publish details on how these rights can be exercised. Cross-Border Data Transfer (Rule 14) Transfers of personal data outside India are subject to compliance with Central Government orders and specific security provisions. Exemptions for Certain Processing Activities (Fourth Schedule) Certain classes of Data Fiduciaries, such as healthcare institutions, may be exempt from specific consent requirements when processing children's data if necessary for health services or educational benefits. Some Introspective Questions Below is a detailed legal analysis of the critical areas that require further examination and policy throughput. 
Please note that this is not official feedback published by Indic Pacific Legal Research for the Ministry of Electronics & Information Technology, Government of India. Notice Requirements (Rule 3) Clarity and Comprehensibility The rules mandate that notices provided by Data Fiduciaries must be clear, comprehensible, and understandable independently of other information. This raises several legal considerations: Definition of Comprehensibility : What specific standards will be used to determine whether a notice is comprehensible? Will there be guidelines or metrics established by the Data Protection Board? Consequences of Non-Compliance : What penalties or corrective measures will be enforced if a notice fails to meet these standards? Itemized Descriptions The requirement for itemized descriptions of personal data and processing purposes necessitates: Standardization of Notices : The need for uniformity in how notices are presented could lead to the development of templates or guidelines that Data Fiduciaries must adhere to. Impact on Consent Withdrawal : How will the ease of withdrawing consent be operationalized? Will there be specific processes that must be followed to ensure compliance? Registration and Obligations of Consent Managers (Rule 4) Conditions for Registration Consent Managers must meet specific conditions to register, including technical, operational, and financial capacity. Legal analysis should focus on: Assessment Criteria : What specific criteria will the Data Protection Board use to evaluate an applicant's capacity? Ongoing Compliance : How will ongoing compliance with these conditions be monitored and enforced? Procedural Safeguards The opportunity for Consent Managers to be heard by the Board is a procedural safeguard that requires scrutiny: Nature of Hearings : What will the process look like for these hearings? Will there be formal procedures in place? Data Processing by Government (Rule 5) Legal Basis for Processing This rule governs government data processing when issuing benefits or services. Key considerations include: Alignment with Privacy Principles : How will government data processing align with individual privacy rights under the DPDP Act? Transparency in Public Spending : What mechanisms will be in place to ensure transparency regarding how public funds are used in data processing activities? Security Safeguards (Rule 6) Practicality for MSMEs The security measures required from Data Fiduciaries pose significant challenges, particularly for Micro, Small, and Medium Enterprises (MSMEs): Cost-Benefit Analysis : A thorough examination of the costs associated with implementing these safeguards versus the potential costs of data breaches is essential. Support Mechanisms : What support or resources can be provided to MSMEs to help them comply with these security requirements? Breach Notification (Rule 7) Timeliness and Content The obligations surrounding breach notifications necessitate a detailed examination: Best Practices for Breach Management : What best practices should organizations adopt to ensure timely and accurate breach notifications? Liability Implications : What are the potential liabilities for organizations that fail to comply with breach notification requirements? Erasure of Personal Data (Rule 8) Engagement Metrics The criteria defining when personal data must be erased raise questions about user engagement metrics: Tracking Engagement : How will organizations track user engagement effectively? What tools or systems will be necessary? 
Notification Processes : The requirement to notify Data Principals before erasure poses questions about communication strategies and compliance timelines. Rights of Data Principals (Rule 13) Implementation Mechanisms A thorough examination of how Data Principals can exercise their rights is needed: Technical and Organizational Measures : What specific measures must Data Fiduciaries implement to ensure timely responses to access and erasure requests? Response Times : What constitutes a reasonable response time, and how does this align with international best practices? Cross-Border Data Transfer (Rule 14) Compliance with Government Orders The provisions governing cross-border data transfers require careful consideration: Legal Basis for Transfers : Understanding the legal bases required for transferring personal data outside India, including consent mechanisms, will provide clarity on operational challenges. Impact on International Business Operations : How will these rules affect businesses operating internationally, particularly regarding compliance burdens? Data Localization in the Draft DPDP Rules The Draft Digital Personal Data Protection (DPDP) Rules underscore the importance of data localization, which mandates that certain categories of personal data pertaining to Indian citizens must be stored and processed within India. While this requirement is pivotal for enhancing data security and privacy, it also presents challenges and implications for businesses operating in the digital space. Current Framework and Implications Definition and Scope of Data Localization : Data localization aims to ensure that personal data related to Indian citizens is stored within the country, thereby enhancing governmental control over data privacy and security. The rules specify that Significant Data Fiduciaries (SDFs) must adhere to conditions regarding the transfer of personal data outside India, which may include obtaining explicit consent from Data Principals or complying with directives from the Central Government. Challenges in Implementation : Ambiguity in Guidelines : The current draft lacks comprehensive guidelines detailing how organizations can effectively achieve compliance with localization requirements. This ambiguity could lead to varied interpretations and inconsistent practices across different sectors. Operational Burden : For multinational companies, the requirement to localize data may result in increased operational complexity and costs. Organizations may need to invest significantly in local infrastructure or face penalties for non-compliance, potentially impacting their business models. Impact on Innovation : Critics argue that stringent localization mandates could hinder innovation by restricting access to global data resources and collaboration opportunities. Companies may struggle to leverage cloud computing and other technologies that depend on cross-border data flows. The Case for Data Localization Despite the challenges associated with data localization, the concept remains a critical consideration for several reasons: Enhanced Data Sovereignty : By mandating that personal data be stored within national borders, countries can exert greater control over their citizens' information. This can lead to improved accountability and facilitate legal recourse in cases of data breaches or misuse. 
Improved Security Measures : Localizing data can mitigate risks associated with international data transfers, such as exposure to foreign surveillance or differing legal standards for data protection. It allows governments to enforce local laws more effectively. Public Trust : Implementing robust localization policies can foster public trust in digital services by assuring citizens that their personal information is protected under local laws and regulations. Conclusion to the Overall Analysis The Draft Digital Personal Data Protection (DPDP) Rules represent a significant advancement in the establishment of a comprehensive data protection framework in India. The focus on data localization, consent management, and the rights of Data Principals reflects an increasing awareness of the necessity for robust privacy protections in a digital age. While these rules pose challenges, particularly regarding compliance and operational implications for businesses, they also create opportunities to enhance data security and foster public trust. As stakeholders engage in the consultation process initiated by the Ministry of Electronics and Information Technology (MeitY), it is vital to consider the implications of confidentiality in feedback submissions. The commitment to holding submissions in fiduciary capacity ensures that individuals and organizations can provide their insights without fear of disclosure or repercussion. This confidentiality is crucial for promoting open dialogue and collecting diverse perspectives that can inform the finalization of the rules. However, it is essential to acknowledge that undisclosed versions of the draft DPDP rules have been leaked in bad faith, potentially manipulating the tech policy discourse in India. Such actions undermine the integrity of the consultation process and could skew stakeholder perceptions and discussions surrounding these critical regulations. Submissions Held in Fiduciary Capacity The assurance that submissions will be held in fiduciary capacity by MeitY is a reasonable aspect of this consultation process. By ensuring that feedback remains confidential, stakeholders can express their views freely without hesitation. This approach encourages a more honest and constructive discourse around the challenges and implications of the DPDP Rules. Anonymity Encourages Participation : The ability to submit comments without attribution allows for a broader range of voices to be heard, including those from smaller organizations or individuals who might otherwise feel intimidated by potential backlash. Consolidated Feedback Summary : The promise to publish a consolidated summary of feedback received—without attributing it to specific stakeholders—further enhances transparency while protecting individual contributions. This summary can serve as a valuable resource for understanding common concerns and suggestions, ultimately aiding in refining the rules. Feedback can be submitted through an online portal set up by MeitY specifically for this purpose. The link for submitting feedback is available at MyGov DPDP Rules 2025 Portal . After submission, keep an eye on updates from MeitY regarding any further consultations or changes made based on stakeholder feedback.
- Character.AI, Disruption, Anger and Intellectual Property Dilemmas Ahead
The author is currently a Research Intern at the Indian Society of Artificial Intelligence and Law. Made with Luma AI. What is Character.AI and What is the Mass Deletion Event? Imagine having your personal Batman, Superman, Iron Man, or even Atticus Finch - someone you can interact with at any moment. Character.AI has turned this dream into reality for many, especially within fandom communities. Character.AI is an artificial intelligence (AI) platform through which users interact with and create AI-powered chatbots based on either fictional or real people. Since its launch in 2021, the platform has gained significant traction among fandom communities and has become a go-to platform for exploring interactions with favorite, often fictional, characters. However, the platform's user base isn't limited to fandom communities; it also extends to people interested in history, philosophy, literature, and other niche subjects. Character.AI also enjoys an advantage available to very few platforms: a diverse user base, ranging from users with serious interests to casual explorers. Users from fandom communities saw the platform as a new way to engage with their favorite characters. Character.AI also enjoys a demographic advantage, with the majority of its users located in the United States, Brazil, India, Indonesia and the United Kingdom. However, Character.AI has also had its fair share of controversies, including the latest, in which it carried out a mass deletion drive involving copyrighted characters, raising concerns over copyright infringement, platform liability, and platform ethics in the context of AI-generated content. Overview of Character.AI's platform and user base Character.AI's core value proposition lies in enabling users to interact with AI-powered chatbots designed to simulate lifelike conversation. These chatbots reflect diverse personalities, conversational styles and traits unique to the character on which the chatbot was trained, making the platform particularly popular for role-playing with favorite characters and storytelling. At the heart of it all, Character.AI is a conversational AI platform that hosts a wide range of chatbots and gives users the ability either to interact with existing characters or to create their own, customizing the characters' personalities and responses. Character.AI boasts a diverse user base, with a large chunk falling within the 18-24 age group. The composition of its user demographics is visually represented in the following figure: Figure 1: Age distribution of Character.AI Visitors The platform hosts a wide range of characters, including historical figures, celebrities, fictional characters, and even dungeon masters. This makes it accessible to people belonging to different age groups. It is also quite evident that the majority of its user base stems from the 18-24 age group, and users under the age of 44 together make up 89.84 percent of its user base. Summary of the mass deletion of copyrighted characters In November 2024, Character.AI carried out a mass deletion drive of AI chatbots that were based on copyrighted characters from various franchises, including "Harry Potter," "Game of Thrones," and "Looney Tunes." The company announced that the deletions were the result of the Digital Millennium Copyright Act (DMCA) as well as copyright law. 
However, the company did not explain why it did this or whether it had proactively engaged in a dialogue with the copyright holders, vis-à-vis Warner Bros. Discovery. Interestingly, users were not officially notified about these deletions but only came to know about the situation through a screenshot that was circulating online. The removals were met with a strong backlash from the user community, in particular those within fandom cultures who have invested time, enthusiasm, and emotional energy in their interactions with these AI characters. The removal of popular, familiar figures such as Severus Snape, whose chatbot had clocked 47.3 million user chats, has thrown the fandom community into turmoil and, at the same time, made people doubt the future of Character.AI and its relationship with copyrighted content. Initial user reactions and impact on the fandom community The initial reactions from users reflected frustration, disappointment, discontent, anger and upset. Some users considered migrating to different AI platforms, as the deletions have sparked discussions about the balance between copyright protection and creative expression within AI platforms. Many users expressed their disappointment over the lack of prior notice regarding the deletion drive. One user remarked: “at least a prior notice would be nice. This allows us to archive or download the chats at the very least. Also, I earnestly hope you finally listen to your community. Thank you!”. Others criticized the unprofessionalism of communicating the news two days after the deletion drive had already occurred. While some users acknowledged, and in some ways already knew, the potential reasons behind the deletion drive - recognizing the need for Warner Bros. Discovery to protect its IP from potential controversies - they were mostly concerned about the lack of transparent communication and the absence of any heads-up. Copyright Law and AI-Generated Content The mass deletion on Character.AI highlights the complex legal issues involved in dealing with copyright law and AI-generated content. The use of copyrighted characters in AI chatbots raises concerns around copyright infringement, fair use, and the responsibilities of the AI platform regarding intellectual property rights. Analysis of copyright infringement claims in AI-generated chatbots Intellectual property law, and particularly copyright law, essentially grants exclusive rights to copyright holders, including the rights to reproduce, distribute, license, and create derivative works based on their original creative works. The emergence of AI chatbots, and conversational AI in general, presents a complex conundrum: such systems potentially infringe these exclusive rights when they reproduce the protected elements of characters - personalities, appearances, storylines, conversational styles, ideologies - and, simply put, those characters in their entirety. However, dealing with copyright infringement in the realm of AI-generated content is not an easy legal problem to overcome, since matters in this realm are still pending before courts and there are limited precedents to establish a responsible discourse. All of this is complicated further by the fact that the large language models (LLMs) which power these AI systems do not simply copy and present content. 
Instead, they analyze vast numbers of data points to learn patterns and generate works inspired not by a single copyright holder, but by many. Courts will need to consider factors such as the extent to which the AI chatbot copies protected elements of the copyrighted characters, the purpose of the use, and the potential impact on the market for the original work. The mind map below gives a comprehensive examination of the fair use arguments with respect to AI training. Figure 2: Analysis of Fair Use in AI Training, using a mind map. Discussion of the Digital Millennium Copyright Act (DMCA) Implications The Digital Millennium Copyright Act (DMCA) provides a safe harbor framework that protects online platforms from liability for copyright infringement by their users, provided that certain conditions are met. These conditions are illustrated for your reference in Figure 3. The DMCA also carries significant implications for platforms like Character.AI, requiring them to establish mechanisms for addressing infringement claims. This includes responding to takedown notices from copyright holders and proactively implementing measures to prevent potential infringements. However, the application of the DMCA to AI-generated content remains underdeveloped, leaving unanswered questions about how notice-and-takedown systems can effectively address the unique challenges posed by the future of AI-generated content. Figure 3: DMCA Safe Harbour Compliances, depicted. Platform Liability and Content Moderation The mass deletion on Character.AI raises pertinent questions about the legal duties of AI platforms to moderate content and prevent harm. As these AI chatbots become ever more capable of producing increasingly lifelike, immersive experiences, such platforms face a tremendous challenge in ensuring the safety of users, protecting intellectual property rights, and living up to various legal and ethical standards. Exploration of Character.AI's legal responsibilities as a platform Character.AI, like other online platforms, bears legal responsibilities towards its users and society at large. These include protecting user privacy, preventing harm, and complying with the law of the land. Policies and guidelines in Character.AI's terms of service deal with the dos and don'ts regarding user behaviour, content, and intellectual property rights. However, the specific legal obligations, and the extent to which platforms should be held liable for content generated by their users or the actions of their chatbots, are still evolving. The recent lawsuit against Character.AI, a wrongful death case concerning a teenager's suicide after the teen formed a deep emotional attachment to a 'Daenerys Targaryen'-inspired chatbot, underscores the potential risks of conversational AI and, specifically, character-based conversational AI. The lawsuit alleges negligence, wrongful death, product liability, and deceptive trade practices, claiming that Character.AI had a responsibility to inform users of dangers related to the service, particularly the threat it posed to children. Aside from these legal responsibilities, Character.AI also grapples with ethical issues involving bias within the training data, preventing the black-boxisation of its conversational AI models, and establishing accountability for the actions and impacts of AI systems. These ethical concerns are critical in their own right and must be addressed proactively as we seek to innovate. Here's an evaluation of proactive vs. 
Comparison with other AI platforms' approaches to copyrighted content
Different AI platforms have adopted differing approaches to managing copyrighted content. Some platforms strictly enforce policies against the use of copyrighted characters, whereas others have taken a more permissive approach, allowing users to create and interact with AI chatbots based on copyrighted characters under certain conditions. For example, Replika and Chai have focused on the creation of novel AI companions rather than the replication of pre-existing characters, which minimises copyright issues. NovelAI, on the other hand, has implemented features that allow users to generate content based on copyrighted works, but within limitations and safeguards designed to avoid copyright violations.

User Rights and Expectations in AI Fandom Spaces
In this complex scheme of things, copyrighted content is used to train large language models (LLMs) whose outputs are, at best, derivative of the original works, and users further refine these models through prompting to get a more personalized experience and to interact with figures they could never interact with in real life. A new dynamic thus emerges, one in which expectations are set unreasonably. This dynamic becomes even more critical when companies do not do their part in making users aware of the limitations of the conversational AI models those users want to experience. Users then invest significant time, creativity and emotional energy in fine-tuning and interacting with these models. The interactions people have had with the models have helped improve them; users have contributed to the success of these chatbots and have helped create personalized experiences for others. The initial reaction to the abrupt deletion of chatbots by the platform has highlighted the basic expectations of core users: the need to have some form of control or say over the deletion of those chatbots and the data generated during interactions, and the need for prior notice, so that they can archive conversations before they are removed. It is crucial to understand that it is not just about the energy users spent crafting personalized conversations with the chatbots, but also the comfort they sought, the ideas they had, and the brainstorming they did with those chatbots.

Examination of user-generated content ownership in AI environments
One question, and a major concern for users of conversational AI and for future technology law jurisprudence, is whether users of LLM-based chats are also, in part, copyright holders of the chats between them and the characters they interact with. Platforms like Character.AI allow users to have private, personalized conversations that are often unique to the input prompts, and users can now share their chats with others, giving those chats the status of published works and complicating the issue of ownership even further. Character.AI's Terms of Service (TOS) provide that users retain ownership of their characters and, by extension, the generated content. However, the platform reserves a broad and sweeping license to use this content for any purpose, including commercial use.
This convenient arrangement creates the potential for Character.AI to benefit commercially from user-generated content, without compensation for or recognition of not only the user-generated derivative content but, as a matter of fact, the original copyrighted works themselves.

Discussion of user expectations for persistence of AI characters
When it comes to the deletion of characters, Character.AI's TOS is broad and sweeping. It states that Character.AI reserves the right to terminate accounts and delete content for a range of reasons, including inactivity or violation of the TOS, often without prior notice. The lack of transparency around content moderation has an outsized impact, particularly when there can be severe emotional consequences for those who rely on these characters for emotional and mental support. The ethical implications of this opaque policy are amplified in the context of fandom, where fans tend to depend on the parasocial relationships they enjoy with their fictional characters. In addition, the TOS contains the following provision: "You agree that Character.AI has no responsibility or liability for the deletion or failure to store any data or other content maintained or uploaded to the Services". Such terms only exacerbate the asymmetry between the control, influence and certainty that users expect and the powers the company wants to exercise unquestioned. These terms not only neglect user rights but also fail to address ethical concerns such as transparency and fair moderation.

Analysis of potential terms of service and user agreement issues
Character.AI's terms of service contain several contentious provisions, depicted in the figure below. Figure 5: Character.AI's contentious policies, depicted. These provisions raise several legal and policy concerns, and their broad and sweeping disregard of user expectations only highlights the need for a more balanced approach that protects user rights while still allowing for innovation and the responsible use of conversational AI. This is all the more pertinent in the context of conversational AI systems, where users rely on platforms for emotional validation, support, and interaction, and where the consequences of a failure are of a far higher magnitude for the user than for the platform.

Ethical Considerations in AI-Powered Fandom Interactions
Exploration of parasocial relationships with AI characters
One significant concern that has emerged since the advent of conversational AI, and especially of personalized, personality-based conversational AI, is the development of parasocial relationships. Parasocial relationships are one-sided attachments in which individuals develop emotional connections to fictional and media personalities. The development of such emotional bonds is an even more common occurrence in fandom spaces. Within fandom communities, where people are already emotionally invested in their favourite characters and universes, such relationships come on par with the reality they live in, sometimes exceeding real-life relationships. The introduction of conversational AI further intensifies these relationships and dynamics, since the interactions become personalized, interactive, and all the more real-world-like.
Character.AI offers the option to chat with your personal 'Batman', 'Harvey Specter', 'Harley Quinn' or a generic 'mentorship coach'. Imagine interacting with them and feeling intimately close to the figures you admire through this feature. The increasing sophistication of AI characters and their ability to mimic human-like conversation only blurs the lines between the real and simulated worlds. It can all become real for people, with real-world consequences. AI companies and their developers have an ethical responsibility to be transparent about the limitations of AI characters and to ensure that they do not mislead users about the systems' capabilities or simulate emotions that those systems cannot experience. Minors and the elderly then become the populations most vulnerable to manipulative conversational AI systems which, if unchecked, create a risk of people living in distorted realities and alienated worlds that they have created for themselves, or, simply put, that the AI systems have manipulated them into inhabiting.

Discussion of potential psychological impacts on users, especially minors
The psychological implications of excessive and early exposure to conversational AI are significant, particularly for children. Much like social media, these systems could hinder the development of social skills and the ability to build meaningful, real-world relationships, hurting children's prospects of becoming mature, reasonable adults able to navigate complex human dynamics. Research suggests that users, and particularly children, may be vulnerable to the "empathy gap" of AI chatbots. Children are likely to treat AI characters as friends and to misinterpret their responses due to their limited understanding of the technologies they are interacting with. Studies have also suggested that interactions with AI systems can increase loneliness, sleep disturbances, alcohol consumption, and depression. Early introduction to AI systems with limited awareness, and in the absence of effective regulatory and support mechanisms, would promote unhealthy behaviours that are detrimental not only to human interaction but also to mental and physical health and emotional intelligence. This could have second-order effects on careers and real-world interactions, where users might carry unreasonable expectations that other humans will do exactly as they say and expect (something LLMs are known to do).

Ethical implications of AI characters mimicking real or fictional personas
AI characters that mimic real-life or fictional personalities raise a whole range of ethical dilemmas whose consequences humans are not truly ready to understand. Issues of identity, authenticity, consent, lifelike conversational mimicry and manipulation need a nuanced understanding, against the backdrop of disagreement even over the definition of what AI actually is. For example, the use of AI to create personas of real people without their explicit consent can be seen as a gross violation of their privacy. Additionally, actors or creators associated with the original characters might face unintended consequences, such as having a displaced sense of attachment, love, anger, pain, and distress projected onto them, creating real-world consequences and unintended second-order effects that are hard to mitigate. The potential for misrepresentation and manipulation by AI characters is equally troubling.
Technologies like deepfakes have already illustrated the potential for misinformation, reputational damage and legal consequences for those whose AI personas committed or abetted such manipulation. It is also true that fictional personas may reinforce unsuitable or inappropriate narratives or behaviours embedded in the material on which the chatbots were trained. For example, an AI character based on a fictional antagonist could reinforce negative stereotypes or behaviours when the users interacting with it are unaware of how the technology functions and in the absence of safeguards to protect them. To address these risks, companies developing AI characters must themselves adopt widely accepted ethical standards. It is crucial to educate users about the limitations of AI systems and to implement transparent practices that help prevent harm.

Intellectual Property Strategies for Media Companies in the AI Era
The rise of AI has presented both challenges and opportunities to media companies that seek to protect their intellectual property portfolios while embracing innovation. Traditional IP frameworks need to be reimagined and redesigned to address the unique set of challenges that AI-generated content and AI-powered fandom bring to the table. It is crucial to highlight that AI systems have an asymmetrical advantage over the IP rights holders whose creative works are often used to train their LLMs. While these LLMs, and the companies that train them, rapidly ideate, scale, and distribute their products, the decision and analysis of the core issues central to shaping future discourse remain tied up in court for a considerable time. Adding to the stagnant nature of policymaking is governments' hesitance to rapidly adopt effective policies and legislation, for fear of completely stifling innovation. The owners of those exclusive works face a slower process of defending their rights through the courts and are often ill-equipped with strategies to enforce their rights over their creative works. The incentive structures for AI companies encourage them to develop and scale their products quickly and to enjoy revenue from the commercialisation of these LLMs, often leaving IP holders scrambling even to claim rights over their own creative works. Meanwhile, governments, hesitant to stifle innovation or potentially helpful use cases of these systems, rarely move beyond a whack-a-mole approach to shaping policy discourse around AI and law.

Analysis of Warner Bros. Discovery's approach to protecting IP
Warner Bros. Discovery is a media and entertainment company that faces the challenge of protecting its vast and mature IP portfolio in the age of AI. The company's approach involves a combination of legal strategies, measures, and proactive interaction with AI platforms. The rapid ideation, scaling and implementation advantage of AI companies makes it necessary for media and creative-works copyright holders to incorporate a variety of measures of both an ex ante and an ex post nature. A key component of the approach involves monitoring AI platforms and communities for unauthorized use of intellectual property in training chatbots, taking legal measures against infringements, negotiating licensing opportunities, and exploring the future of media entertainment.
In the present context, Warner Bros. Discovery appears to have devised a proactive strategy to deal with infringements in the digital environment, thereby reducing the need to enforce its claims over its IP rights through litigation. Warner Bros. Discovery and other media and entertainment companies have a once-in-a-decade opportunity to collaborate with AI platforms to develop tools and technologies that protect their intellectual property portfolios while furthering innovation, curbing misinformation and unauthorised access, dealing with ethical concerns, and enabling AI platforms to put in place appropriate compliance measures that further reduce their liabilities. These collaborations could pave the way for industry standards and best practices for IP protection at a stage where these technologies are still developing. Such unprecedented collaborations could also assist in educating the public about misinformation, consent and unauthorized access, and in setting user expectations. Media and entertainment companies could assist AI platforms in explaining their terms of service, privacy policies and user agreements in a story format with the help of AI characters; this would foster a healthier and more effective approach to the ethical concerns that have been raised time and again by the various stakeholders shaping the discourse around AI systems and content creation.

Exploration of Licensing Models for AI Character Creation
Recent cases, such as Dow Jones and NYP Holdings v. Perplexity AI and Bartz v. Anthropic, mark a significant turning point in the potential relationship between AI companies and the owners of the creative works upon which LLMs are trained. In both cases, the owners of exclusive intellectual property have expressed a willingness to collaborate and to explore licensing strategies that provide fair compensation for the use of their works in training LLMs. This marks a change in approach: IP holders see an additional source of revenue, and it highlights that they are not opposed to the use of their copyrighted content as such, but are concerned about piracy of content of which they are the sole rights holders. There are various licensing strategies that AI companies and media and entertainment companies could explore as a default, including exclusive licenses, non-exclusive licenses, revenue-sharing models, and usage-based licenses. These licensing models could be explored and incorporated depending on the context in which AI companies use copyrighted content. The pros and cons of these models are explained hereinafter in the form of a mind map, and a minimal illustrative sketch of how such terms might be represented follows below. Figure 6: Licensing Models and their types, depicted.
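As a companion to the mind map, here is a minimal sketch of how the four licensing models named above might be represented and compared in code. All names, rates and figures are hypothetical placeholders rather than terms from any actual agreement.

```python
# A minimal sketch of the licensing models named above. All names, rates and
# figures are hypothetical placeholders, not terms from any actual agreement.
from dataclasses import dataclass
from enum import Enum

class LicenseType(Enum):
    EXCLUSIVE = "exclusive"
    NON_EXCLUSIVE = "non_exclusive"
    REVENUE_SHARE = "revenue_share"
    USAGE_BASED = "usage_based"

@dataclass
class License:
    rights_holder: str
    license_type: LicenseType
    flat_fee: float = 0.0            # upfront fee for exclusive / non-exclusive deals
    revenue_share_rate: float = 0.0  # fraction of AI-product revenue
    per_use_fee: float = 0.0         # fee per training use or generated output

    def payout(self, product_revenue: float = 0.0, usage_count: int = 0) -> float:
        """Toy payout calculation for one accounting period."""
        if self.license_type in (LicenseType.EXCLUSIVE, LicenseType.NON_EXCLUSIVE):
            return self.flat_fee
        if self.license_type is LicenseType.REVENUE_SHARE:
            return product_revenue * self.revenue_share_rate
        return usage_count * self.per_use_fee  # usage-based

# Example: a hypothetical revenue-sharing licence over one quarter.
deal = License("ExampleStudio", LicenseType.REVENUE_SHARE, revenue_share_rate=0.05)
print(deal.payout(product_revenue=1_000_000))  # 50000.0
```

The point of the sketch is simply that each model changes where the measurement burden falls: flat fees require no usage accounting, while revenue-share and usage-based licences presuppose the kind of training-data transparency discussed earlier in this piece.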
Conclusion and Recommendations
To conclude, potential collaborations between IP holders and AI platforms are going to shape how users and owners of creative works view the incentive structures at play, and which other forms of entertainment are yet to be explored. The 'tabooisation' of AI systems in the creative fields will only be detrimental to media companies. If, instead, they choose to embrace a future that is already here and is here to stay, media companies would be able to develop interactive narratives, personalized experiences, postscript bites, and other new entertainment forms that work in collaboration with, and not in isolation from, AI systems. Here are some mind maps reflecting suggestions for balancing copyright protection and innovation in the use of AI. Figure 7: Suggestions for Balancing Copyright Protection and Innovation in AI, depicted. Figure 8: The Author's Proposed Guidelines for Ethical AI Character Creation and Interaction
- Beyond AGI Promises: Decoding Microsoft-OpenAI's Competition Policy Paradox
Explore Escher-inspired environments where AI elements navigate complex geometric spaces with policy cards, blending surreal architecture with futuristic aesthetics. Made with Luma AI. The strategic recalibration between Microsoft and OpenAI presents a compelling case study in digital competition policy, marked by two significant developments: OpenAI's potential removal of its AGI (Artificial General Intelligence) mandate and Microsoft's formal designation of OpenAI as a competitor in its fiscal reports. This analysis examines the implications of these interrelated events through three critical lenses: competition policy frameworks, market dynamics, and regulatory governance. The first dimension of this analysis explores the competitive framework assessment, delving into the complexities of vertical integration in AI markets and the unique dynamics of partnership-competition duality in technological ecosystems. This section examines how traditional antitrust frameworks struggle to address scenarios where major technology companies simultaneously act as investors, partners, and competitors. The second component focuses on regulatory implications, evaluating the adequacy of current competition policies in addressing AI-driven market transformations. It assesses existing regulatory oversight mechanisms and explores potential policy reforms needed to address the unique challenges posed by AI development partnerships and their impact on market competition. The final segment examines market structure dynamics, analysing how the evolution of AI development funding models affects corporate governance and innovation. This section particularly focuses on how the tension between public benefit missions and commercial imperatives shapes the future of AI enterprise structures and market competition.

Examining the Key Events in the MSFT-OpenAI Relationship
Two pivotal events have reshaped the Microsoft-OpenAI relationship, highlighting evolving dynamics in the AI industry.

The AGI Clause Reconsideration
OpenAI is discussing the removal of a significant contractual provision that currently restricts Microsoft's access to advanced AI models once Artificial General Intelligence (AGI) is achieved. This clause, originally designed to prevent commercial misuse of AGI technology, defines AGI as a "highly autonomous system that outperforms humans at most economically valuable work".

Microsoft's Competitive Designation
In a notable shift, Microsoft has officially listed OpenAI as a competitor in its annual report, specifically in AI, search, and news advertising sectors. This designation places OpenAI alongside traditional competitors like Amazon, Apple, Google, and Meta, despite Microsoft's substantial $13 billion investment in the company.

Financial Context
The timing of these developments is significant:
- OpenAI recently closed a $6.6 billion funding round, achieving a $157 billion valuation
- The company is exploring restructuring its core business into a for-profit benefit corporation
- Sam Altman acknowledged that the company's initial structure didn't anticipate becoming a product company requiring massive capital
These events reflect a complex relationship where Microsoft serves as both OpenAI's exclusive cloud provider and now, officially, its competitor.

Competitive Framework and Market Structure Analysis
The Microsoft-OpenAI relationship exemplifies a new paradigm in digital market competition, characterised by complex interdependencies and strategic ambiguity.
Vertical Integration Dynamics
The relationship demonstrates unprecedented vertical integration patterns, where Microsoft simultaneously acts as OpenAI's largest investor ($13 billion), exclusive cloud provider, and declared competitor. Figure 1: Competition-Partnership Matrix Map
This creates a unique market structure where:
- Microsoft integrates OpenAI's technology across its product stack
- Both entities compete for direct enterprise customers
- Cloud services and AI capabilities overlap increasingly
- Search market competition intensifies with SearchGPT's introduction

Market Power Distribution
Figure 2: Market Power Dynamics
The evolving dynamics reveal a shifting power balance in the AI ecosystem:
- Traditional competition frameworks struggle to categorise this relationship
- Both companies maintain strategic independence while leveraging shared resources
- Market opportunities drive expansion into overlapping territories
- Product differentiation becomes crucial for maintaining distinct identities

Structural Evolution
Figure 3: AI Industry Structure Evolution
The relationship's transformation reflects broader market structure changes:
- The partnership model has evolved from pure collaboration to "coopetition"
- Both companies are developing independent capabilities while maintaining interdependence
- Microsoft's development of in-house AI models (MAI-1) indicates strategic hedging
- OpenAI's direct-to-consumer products suggest market independence aspirations

Resource Allocation Dynamics
The competition-collaboration balance creates unique resource allocation patterns:
- Computational resources flow through Microsoft's Azure platform
- Financial investments create mutual dependencies
- Talent and innovation capabilities remain distinct
- Market access and customer relationships overlap increasingly
This complex framework challenges traditional antitrust approaches and necessitates new competition policy tools that can address the nuanced reality of modern tech partnerships.

Conclusion & Recommendations
The Microsoft-OpenAI case demonstrates that current competition frameworks require substantial recalibration to address emerging AI market dynamics.
Several specific considerations emerge:

Regulatory Architecture Requirements
- Competition authorities need specialized tools for evaluating AI partnerships where competitive boundaries are fluid
- Traditional market share metrics prove inadequate when assessing AI market power
- Vertical integration assessments must consider both immediate and potential future competitive impacts
- Data access and computational resource control require distinct evaluation metrics

Market-Specific Considerations
- The definition of "essential facilities" in AI markets must extend beyond traditional infrastructure to include training data access mechanisms, computational resource availability, model architecture knowledge, and API access conditions
- Market power assessment should incorporate both current capabilities and future development potential
- Competition policy must balance innovation incentives with market access concerns

Policy Implementation Framework
Immediate regulatory priorities:
- Establishing clear guidelines for AI partnership disclosures
- Developing metrics for assessing AI market concentration (a minimal sketch of one conventional concentration metric follows at the end of this piece)
- Creating mechanisms for monitoring technological dependencies
- Setting standards for competitive access to essential AI resources
Long-term considerations:
- Evolution of partnership structures in AI development
- Impact of AGI development on market competition
- Balance between open-source and proprietary AI development
- Global coordination of AI competition policies

Recommendations
- Competition authorities should develop dynamic assessment tools for evaluating AI partnerships, frameworks for monitoring technological lock-in effects, mechanisms for ensuring competitive API access, and standards for evaluating AI market concentration
- Policy frameworks must remain adaptable to technological evolution while maintaining competitive safeguards

The Microsoft-OpenAI relationship thus serves as a crucial precedent for developing nuanced competition policies that can effectively govern the unique dynamics of AI market development while ensuring sustainable innovation and fair competition.
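To illustrate why traditional share-based metrics fall short, here is a minimal sketch of the Herfindahl-Hirschman Index (HHI), the concentration measure competition authorities conventionally rely on. The market shares below are hypothetical placeholders, not actual data for any AI market.

```python
# A toy Herfindahl-Hirschman Index (HHI) calculation. The market shares are
# hypothetical placeholders, not real data; the sketch only illustrates why a
# single static share-based metric says little about partnership-driven AI markets.
def hhi(shares_percent: list[float]) -> float:
    """HHI = sum of squared market shares (shares expressed in percent)."""
    return sum(share ** 2 for share in shares_percent)

hypothetical_genai_shares = [35.0, 25.0, 20.0, 10.0, 10.0]  # illustrative only
print(f"HHI = {hhi(hypothetical_genai_shares):.0f}")  # 2450 on these invented figures
```

On commonly cited thresholds, a score in that range would read as a concentrated market, yet the number says nothing about vertical ties such as exclusive cloud provision, equity stakes, or data dependencies, which is precisely the inadequacy the analysis above identifies.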
- CCI's Landmark Ruling on Meta's Privacy Practices
The Competition Commission of India's (CCI) recent press release announcing a substantial penalty of Rs. 213.14 crore on Meta marks a significant milestone in the regulation of digital platforms in India. This decision, centered on WhatsApp's 2021 Privacy Policy update, underscores the growing scrutiny of data practices and market dominance in the digital economy. The CCI's action reflects a proactive approach to addressing anti-competitive behaviours in the tech sector, particularly concerning data sharing and user privacy. This policy insight examines the implications of the CCI's decision, which goes beyond mere financial penalties to impose behavioural remedies aimed at reshaping Meta's data practices in India. The order's focus on user consent, data sharing restrictions, and transparency requirements signals a shift towards more stringent regulation of digital platforms. It also highlights the intersection of competition law with data protection concerns, setting a precedent that could influence regulatory approaches both in India and globally. As a draft Digital Competition Bill was proposed in March 2024, this CCI action provides valuable insights into the regulator's perspective on digital market dynamics and its readiness to enforce competition laws in the digital sphere. The decision raises important questions about the balance between fostering innovation in the digital economy and protecting user rights and market competition.

Detailed Breakdown of the CCI Press Release

Penalty and Legal Basis
The Competition Commission of India (CCI) has imposed a substantial penalty of Rs. 213.14 crore on Meta for abusing its dominant market position. This penalty is based on violations of multiple sections of the Competition Act:
- Section 4(2)(a)(i): Imposition of unfair conditions
- Section 4(2)(c): Creation of entry barriers and denial of market access
- Section 4(2)(e): Leveraging dominant position in one market to protect position in another

Relevant Markets and Dominance
The CCI identified two key markets in its investigation:
- OTT messaging apps through smartphones in India
- Online display advertising in India
Meta, through WhatsApp, was found to be dominant in the OTT messaging app market and held a leading position in online display advertising.

Privacy Policy Update and Its Implications
The case centres on WhatsApp's 2021 Privacy Policy update, which:
- Mandated users to accept expanded data collection terms
- Required sharing of data with other Meta companies
- Removed the previous opt-out option for data sharing with Facebook
- Presented these changes on a "take-it-or-leave-it" basis

Anti-Competitive Practices Identified
The CCI concluded that Meta engaged in several anti-competitive practices:
- Imposing unfair conditions through the mandatory acceptance of expanded data collection and sharing
- Creating entry barriers for rivals in the display advertising market
- Leveraging its dominant position in OTT messaging to protect its position in online display advertising

Remedial Measures
The CCI has ordered several behavioural remedies to address these issues:
- Data Sharing Prohibition: WhatsApp is prohibited from sharing user data with other Meta companies for advertising purposes for 5 years.
- Transparency Requirements: WhatsApp must provide a detailed explanation of data sharing practices in its policy.
- User Consent: Data sharing cannot be a condition for accessing WhatsApp services in India.
- Opt-Out Options: Users must be given opt-out options for data sharing, including an in-app notification with an opt-out option and a prominent settings tab to review and modify data sharing choices (a minimal sketch of how such per-purpose opt-outs might be recorded appears at the end of this piece).
All future policy updates must comply with these requirements.

Significance of the Ruling
This decision by the CCI is significant for several reasons:
- It addresses the intersection of data privacy and competition law
- It challenges the business model of major tech companies that rely on data sharing across platforms
- It sets a precedent for regulating "take-it-or-leave-it" privacy policies by dominant platforms
- It demonstrates the CCI's willingness to take strong action against anti-competitive practices in the digital economy

Based on the CCI's order against Meta and WhatsApp, we can analyze its implementability, effectiveness, and implications for the Draft Digital Competition Bill's legislative perspective:

Implementability and Enforcement
- Specific and Actionable Directives: The CCI's order includes clear, implementable directives such as prohibiting data sharing for advertising purposes for 5 years, mandating detailed explanations of data sharing practices, and requiring opt-out options for users. These specific measures demonstrate that the CCI can craft enforceable remedies for digital markets.
- Temporal Scope: The 5-year prohibition on data sharing for advertising purposes shows the CCI's willingness to impose long-term structural changes in business practices.
- User Interface Changes: Requiring WhatsApp to provide opt-out options through in-app notifications and settings demonstrates the CCI's ability to mandate specific changes to digital platforms' user interfaces.

Where the Order Shows Teeth
- Substantial Financial Penalty: The ₹213.14 crore fine is significant and sends a strong message to digital platforms operating in India.
- Behavioural Remedies: Going beyond fines, the order mandates specific changes in WhatsApp's data practices and user interface, directly impacting Meta's business model.
- Broad Market Impact: By addressing both the OTT messaging and online display advertising markets, the CCI demonstrates its ability to tackle complex, multi-sided digital markets.
- Future Compliance: The order extends to future policy updates, ensuring ongoing compliance and preventing easy workarounds.

Reflections on the Draft Digital Competition Bill's Legislative Perspective
- Ex-Ante Approach: While this action is ex-post, it signals the CCI's readiness to adopt a more proactive, ex-ante approach as proposed in the Digital Competition Bill.
- Focus on Data Practices: The order's emphasis on data sharing and user consent aligns with the bill's focus on regulating data practices of large digital platforms.
- User Choice and Transparency: The remedies ordered reflect the bill's intent to promote user choice and transparency in digital markets.
- Complementary Enforcement: This action under existing laws demonstrates how the proposed ex-ante framework could complement current ex-post enforcement, potentially addressing concerns more swiftly and effectively.
- Technical Expertise: The detailed analysis in the order suggests the CCI is developing the technical expertise needed to regulate digital markets effectively, as emphasised in the proposed bill.

Conclusion
In conclusion, the CCI's order against Meta and WhatsApp demonstrates that the regulator has the capability and willingness to implement and enforce significant measures against large digital platforms, even under the current legal framework.
This action likely strengthens the case for the proposed ex-ante regulatory approach in the Digital Competition Bill, showing that the CCI is prepared to take on a more proactive role in shaping fair competition in India's digital markets.
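As promised above, here is a minimal sketch of how a messaging platform might record per-purpose data-sharing preferences with an opt-out default consistent with the order's requirements. The field names and purposes are hypothetical placeholders, not WhatsApp's actual consent system.

```python
# A minimal, hypothetical sketch of per-purpose data-sharing preferences with an
# explicit opt-out path; not WhatsApp's actual consent system.
from dataclasses import dataclass, field
from datetime import datetime

SHARING_PURPOSES = ("advertising", "service_improvement")  # illustrative purposes only

@dataclass
class DataSharingPreferences:
    user_id: str
    # Service access must not depend on consent, so nothing here gates core functionality.
    opted_in: dict = field(default_factory=dict)
    updated_at: datetime = field(default_factory=datetime.utcnow)

    def opt_out(self, purpose: str) -> None:
        self.opted_in[purpose] = False
        self.updated_at = datetime.utcnow()

    def may_share(self, purpose: str) -> bool:
        """Data flows for a purpose only with an affirmative, recorded opt-in."""
        return self.opted_in.get(purpose, False)

prefs = DataSharingPreferences(user_id="user-123")
print(prefs.may_share("advertising"))   # False: no sharing without a recorded opt-in
prefs.opted_in["advertising"] = True    # user opts in via the settings tab
prefs.opt_out("advertising")            # user later opts out via the in-app notification
for purpose in SHARING_PURPOSES:
    print(purpose, prefs.may_share(purpose))  # both remain False for this user
```

The design choice worth noting is the default: because sharing requires an affirmative opt-in and can be withdrawn per purpose, access to the service never hinges on consent, which mirrors the behavioural remedy described above.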
- Book Review: Taming Silicon Valley by Gary Marcus
This is a review of "Taming Silicon Valley: How Can We Ensure AI Works for Us", authored by Dr Gary Marcus. To introduce, Dr Marcus is Emeritus Professor of Psychology and Neural Science at New York University, US. He is a leading voice in the global artificial intelligence industry, especially in the United States. One may agree or disagree with his assessments of Generative AI use cases and trends. However, his erudite points must be considered to understand how AI trends around Silicon Valley are documented and understood, beyond the book's intrinsic focus on industry and policy issues around artificial intelligence. The book, at its best, gives an opportunity to dive into the introductory problems in the global AI ecosystem, in Silicon Valley, and in some instances, even beyond.

Mapping the Current State of 'GenAI' / RoughDraft AI
This part of Dr Marcus's book provides essential examples of how Generative AI (GenAI) solutions appear appealing but have significant reliability and trust issues. The author begins by demonstrating how most Business-to-Consumer (B2C) GenAI 'solutions' look appealing, allowing readers to explore basic examples of prompts and AI-generated content to understand the 'appealing' element of any B2C GenAI tool, be it in text or visuals. The author compares the 'Henrietta Incident', where a misleading point about Dr Marcus led a GenAI tool to produce a plausible but error-riddled output, with an LLM alleging Elon Musk's 'death' by mixing his ownership of Tesla Motors with Tesla driver fatalities. These examples highlight the shaky ground of GenAI tools in terms of reliability and trust, which many technology experts, lawyers, and policy specialists have not focused on, despite the obvious references to these errors. The 'Chevy Tahoe' and 'BOMB' examples fascinate, showing how GenAI tools consume inputs but don't understand their outputs. Despite patching interpretive issues, ancillary problems persist. The 'BOMB' example demonstrates how masked writing can bypass guardrails, as these tools fail to understand how guardrails can be circumvented. The author responsibly avoids treating guardrails around LLMs and GenAI as perfect; many technology lawyers and specialists worldwide have misled people about these guardrails' potential. The UK Government's International Scientific Report at the Seoul AI Summit in May 2024 echoed the author's views, noting the ineffectiveness of existing GenAI guardrails. The book makes it easy for readers to understand the hyped-up expectations associated with GenAI and their consequences. The author's approach of not over-explaining or oversimplifying the examples makes the content more accessible and engaging for readers.

The Threats Associated with Generative AI
The author provides interesting quotations from the Russian Federation Government's Defence Ministry and Kate Crawford from the AI Now Institute as he delves into offering a breakdown of the 12 biggest immediate threats of Generative AI. One important and underrated area of concern addressed in these sections is medical advice. Apart from deepfakes, the author's reference to how LLM responses to medical questions were highly variable and inaccurate was necessary to discuss.
This reminds us of a trend among influencers to convert their B2C-level content to handle increased consumer/client consulting queries, which could create a misinformed or disinformed engagement loop between the specialist/generalist and the potential client. The author impressively refers to the problem of accidental misinformation, pointing out the 'Garbage-in-Garbage-Out' problem, which could drive internet traffic, especially in technical domains like STEM. The mention of citation loops of unreal case laws alludes to how Generative AI promotes a vicious and mediocre citation loop for any topic if not dealt with correctly. In addition, the author raises an important concern around defamation risks with Generative AI. The fabrication of content used to prove defamation creates a legal dilemma, as courts may struggle to determine who should be subject to legal recourse. The book is a must-read for all major stakeholders in the Bar and Bench to understand the 'substandardism' associated with GenAI and its legal risks. The author's reference to Donald Rumsfeld's "known knowns, known unknowns, and unknown unknowns" quote frames the potential risks associated with AI, particularly those we may not yet be aware of. Interestingly, Dr Marcus debunks myths around 'literal extinction' and 'existential risk', explaining that mere malignant training imparted to ChatGPT-like tools does not give them the ability to develop 'genuine intentions'. He responsibly points out the risks of half-baked ideas like text-to-action engineering second- and third-order effects out of algorithmic activities enabled by Generative AI, making this book a fantastic explainer of the 12 threats of Generative AI.

The Silicon Valley Groupthink and What it Means for India
[While the sections covering Silicon Valley in this book do not explicitly mention the Indian AI ecosystem in depth, I have pointed out some broad parallels, which could be relatable to a limited extent.]
The author refers to the usual hypocrisies associated with the United States-based Silicon Valley. Throughout the book, Dr Marcus refers to the works of Shoshana Zuboff and the problem of surveillance capitalism, largely associated with the FAANG companies of North America, notably Google, Meta, and others. He provides a polite yet critical review of the promises held by companies like OpenAI and others in the larger AI research and B2C GenAI segments. The Apple-Facebook differences emphasised by Dr Marcus are intriguing. The author highlights a key point made by Frances Haugen, a former Facebook employee turned whistleblower, about the stark contrast between Apple and Facebook in terms of their business practices and transparency. Haugen argues that Apple, selling tangible products like iPhones, cannot easily deceive the public about their offerings' essential characteristics. In contrast, Facebook's highly personalised social network makes it challenging for users to assess the true nature and extent of the platform's issues. Regarding OpenAI, the author points out how the 'profits, schmofits' problem around high valuations made companies like OpenAI and Anthropic give up their safety goals around AI building. Even in the name of AI Safety, the regurgitated 'guardrails' and measures have not necessarily advanced the goals of true AI Safety that well. This is why building AI Safety Institutes across the world (as well as something along the lines of CERN, as recommended by the author) becomes necessary.
The author makes a reasonable assessment of the over-hyped and messianic narrative built by Silicon Valley players, highlighting how the loop of overpromise has largely guided the narrative so far. He mentions the "Oh no, China will get to GPT-5" myth spread across quarters in Washington DC, which relates to hyped-up conversations on AI and geopolitics in the Indo-Pacific, India, and the United States. While the author mentions several relatable points around 'slick video' marketing and the abstract notion of 'money gives them immense power', it reminds me of the discourse around the Indian Digital Competition Bill. In India, the situation gets dire because most of the FAAMG companies on the B2C side have invested their resources in such a way that, even if they are not profiting enough in some sectors, they are earning well by selling Indian data and providing the relevant technology infrastructure. Dr Marcus points out the intellectual failures of science-popularising movements like effective accelerationism (e/acc). While e/acc can still be a subject of interest and awe, it does not make sense in the long run, with its zero-sum mindset. The author calls out the problems in the larger Valley-based accelerationist movements. To conclude this section, I would recommend going through a sensible response given by the CEO of Honeywell, Vimal Kapur, on how AI tools might affect less-noticed domains such as aerospace and energy. I believe readers might feel even more excited to read this incredible book.

Remembering the 19th Century and the Insistence to Regulate AI
The author's reference to quotes by Tom Wheeler and Madeleine Albright reminds me of a quote from former UK Prime Minister Tony Blair, on a lighter note: "My thesis about modern politics is that the key political challenge today is the technological revolution, the 21st century equivalent of the 19th century Industrial Revolution. And politics has been slow to catch up." While Blair's reference is largely political, the quotes by Wheeler and Albright point to interesting commonalities between the 19th and 21st centuries. The author provides a solid basis for why copyright laws are important when data-scraping techniques in the GenAI ecosystem do not respect the autonomy and copyright of the authors whose content is consumed and grasped. The reference to quotes from Ed Newton-Rex and Pete Dietert on the GenAI-copyright issue highlights the ethical and legal complexities surrounding the use of creative works in training generative AI models. Dr Marcus emphasizes the urgent need for a more nuanced and ethical approach to AI development, particularly in the realm of creative industries. The author uses these examples to underscore a critical point: the current practices of many AI companies in harvesting and using creative works without proper permission or compensation are ethically questionable and potentially exploitative. Pete Dietert's stark warning about "digital replicants" amplifies the urgency of addressing these issues, extending the conversation beyond economic considerations to fundamental human rights, as recognised in the UNESCO Recommendation on the Ethics of AI of 2021. Dr Marcus points out how the 'Data & Trust Alliance' webpage features appealing privacy and data protection-related legal buzzwords, but the details help in shielding companies more than protecting consumers.
Such attempts at subversion are being made in Western Europe, North America, and even parts of the Indo-Pacific region, including India. The author focuses on algorithmic transparency and source transparency among the list of demands people should make. He refers to the larger black box problem as the core basis for legally justifying why interpretability measures matter. With respect to consumer law and human rights, AI interpretability (explainable AI) becomes necessary so that there is a gestation phase, at the pre-launch stage, in which the activities regularly visible in AI systems can be examined for interpretability. On source transparency, the author points out the role of content provenance (labelling) in enabling distinguishability between human-created content and synthetic content, so that the tendency to create "counterfeit people" is prevented and discouraged (a minimal sketch of what such a provenance label could look like follows at the end of this review). The author refers to the problem of anthropomorphism, where many AI systems create a counterfeit perception among human beings and, via impersonation, could potentially downgrade their cognitive abilities. Among the eight suggestions made by Dr Marcus on how people can make a difference in bettering AI governance avenues, the author makes a reasonable point that voluntary guidelines must be negotiated with major technology companies. In the case of India, there have been some self-regulatory attempts, like a non-binding AI Advisory in March 2024, but more consistent efforts may be implemented, starting with voluntary guidelines, with sector-specific and sector-neutral priorities.

Conclusion
Overall, Dr Gary Marcus has written an excellent prologue to truly 'taming' Silicon Valley, in the simplest way possible for anyone who is not aware of the technical and legal issues around Generative AI. As recommended, this book also offers a glimpse into digital competition policy measures and the effective use of consumer law frameworks where competition policy remains ineffective. The book is not necessarily a detailed documentation of the state of AI hype. However, the examples and references mentioned in the book are enough for researchers in law, economics and policy to trace the problems associated with the American and global AI ecosystems.
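As referenced above, here is a minimal sketch of the kind of machine-readable provenance label that could travel with a piece of synthetic content. The field names are hypothetical and deliberately simplified; real-world schemes such as C2PA content credentials are considerably richer and rely on cryptographic signing rather than a bare hash.

```python
# A minimal, hypothetical provenance label for a piece of synthetic content.
# Field names are illustrative; production schemes (e.g. C2PA) are far richer.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(content: bytes, generator: str, synthetic: bool) -> dict:
    """Attach a simple label declaring origin, generator and a content digest."""
    return {
        "synthetic": synthetic,                         # human-created vs AI-generated
        "generator": generator,                         # e.g. name/version of the model used
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the label to this exact content
    }

label = make_provenance_label(b"An AI-written paragraph...", generator="example-llm-1", synthetic=True)
print(json.dumps(label, indent=2))
```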
- The 'Algorithmic' Sophistry of High Frequency Trading in India's Derivatives Market
The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of October 2024. A recent study conducted by the country's market regulator, the Securities and Exchange Board of India (SEBI), shed light on tectonic disparities in the equity derivatives market. Per the study, the use of algorithmic trading for proprietary trading and by foreign funds resulted in gross profits totalling ₹588.4 billion ($7 billion) from trading in the equity derivatives of Indian markets in the financial year that ended in March 2024. [1] In stark contrast, several individual traders faced monumental losses. The study further detailed that almost 93% of individual traders suffered losses in the Equity Futures and Options (F&O) segment over the preceding three years, that is, from the financial years 2022 to 2024, with aggregate losses exceeding ₹1.8 lakh crore. Notably, in the immediately preceding financial year [2023 – March 2024] alone, the net losses incurred by individual traders approximated ₹75,000 crore. SEBI's findings underscore the challenges faced by individual traders when they have to compete against more technologically advanced, well-funded entities in the derivatives market. The insight clearly contends that institutional entities that have adopted algo-trading strategies have a clear competitive edge over those who lack them, i.e., individual traders.

Understanding the Intricacies of Algorithmic Trading
High-Frequency Trading (HFT) refers to the latency-sensitive end of algorithmic trading, conducted through automated platforms focused on trade execution. It is facilitated by advanced computational systems capable of executing large orders at speeds, and towards optimal prices, that humans cannot match. The dominance of algorithms in the global landscape of financial markets has grown exponentially over the past decade. HFT algorithms aim at the execution of trades within fractions of a second. This high-speed computational capability places institutional investors on a more profitable pedestal than individual traders, who typically rely on manual trading strategies and consequently lack access to sophisticated analytics and real-time trading systems. Furthermore, HFT allows traders to trade a larger volume of shares more frequently, exploiting marginal price differences in a split second, thereby ensuring accuracy in trade execution and enhancing market liquidity. The same premise plays out in the Indian equity derivatives market, with HFT firms reaping substantial profits. The study conducted by India's market regulator evidently sheds light on the comparative gains and losses of institutional traders and individual traders respectively. This insight expounds upon the sophistries in the competitive dynamics of the country's derivatives market and its superficial regulation of manual versus computational trading. A toy sketch of the latency advantage at work follows below.
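The sketch below is a deliberately simplified, hypothetical illustration of why reaction latency matters: two traders observe the same fleeting mispricing, but only the faster one reaches it before the quote reverts. The figures, latencies and prices are invented for illustration and do not model any real market or any SEBI data.

```python
# A toy, hypothetical illustration of the latency advantage in HFT.
# All numbers are invented; this is not a model of any real market.
from dataclasses import dataclass

@dataclass
class Quote:
    price: float
    available_for_ms: float  # how long the mispricing persists before the quote reverts

def captures_opportunity(reaction_latency_ms: float, quote: Quote) -> bool:
    """A trader profits only if their order arrives while the stale quote still exists."""
    return reaction_latency_ms < quote.available_for_ms

fleeting_mispricing = Quote(price=100.05, available_for_ms=5.0)  # 5 ms window, illustrative

algo_trader_latency_ms = 0.5      # co-located algorithmic trader (hypothetical)
retail_trader_latency_ms = 250.0  # human reacting through an app (hypothetical)

print("Algo trader captures it:", captures_opportunity(algo_trader_latency_ms, fleeting_mispricing))     # True
print("Retail trader captures it:", captures_opportunity(retail_trader_latency_ms, fleeting_mispricing))  # False
```

The asymmetry the article describes follows directly: whoever can shrink the reaction window, typically through co-location and automated decision logic, captures the fleeting opportunities that manual traders never see.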
The Competitive Landscape of the Derivatives Market: The Odds Stacked Against Individual Traders
The study revealed the disadvantageous plight of retail traders, with nine out of ten retail traders having incurred losses over the preceding three financial years. This raises a contentious debate surrounding the viability of individual traders and market dynamics in the derivatives market. The lack of requisite support and resources for individual traders would make their sustainability difficult, especially against the backdrop of a growing trend towards algorithmic trading. HFT has been subjected to critique by several professionals, who hold it in contempt for unbalancing the playing field of the derivatives market. Other disadvantages brought forth by such trading mechanisms include:
- Market noise
- Price volatility
- The need to strengthen surveillance mechanisms
- Heavier imposition of costs
- Market manipulation and consequent disruption in the structure of capital markets

The Need to Regulate the Technological 'Arms Race' in Trading
Given the evident differences in trading mechanisms, there is a pressing need for improving trading tools and ensuring easier access to related educational resources for individual investors. SEBI, India's capital market regulator, has the prerogative and the obligation to address such disparities. In 2016, SEBI released a discussion paper that attempted to address the various issues relating to HFT mechanisms, with the premise of instituting an equitable and fair marketplace for every stakeholder involved. SEBI proposed the institution of a "co-location facility" on a shared basis that does not allow the installation of individual servers. This proposed move aims to reduce the latency of access to the trading system and to provide a tick-by-tick data feed free of cost to all trading stakeholders. SEBI further proposed a review mechanism over trading requirements with respect to the usage of algo-trading software, furthered by mandating stock exchanges to strengthen the regulatory framework for algo-trading and, consequently, to institute a simulated market environment for initial testing of the software prior to its real-time application. [2] In addition, SEBI has undertaken a slew of measures to regulate algo-trading and HFT. These include [3]:
- A minimum resting time for stock orders
- A mechanism capping the maximum order-message-to-trade ratio
- Randomisation of stock orders and a review system for the tick-by-tick data feed
- Congestion charges to reduce the load on the market
A toy sketch of how an exchange might check some of these requirements appears after the references below. Thus, despite the rather unregulated stride of HFT in India, SEBI has overarching authority over it through the provisions of the SEBI Act, 1992. However, this authority prevails in a rudimentary state and thereby continues to usher in an age of unhealthy competitiveness among traders in the capital market.

References
[1] Newsdesk, High Speed Traders reap $7bn profit from India's options market, https://www.thenews.com.pk/print/1233452-high-speed-traders-reap-7bn-profit-from-india-s-options-market (last visited on 6 Oct, 2024). [2] Amit K Kashyap, et.
al., Legality and issues relating to HFT in India, Taxmann, https://www.taxmann.com/research/company-and-sebi/top-story/105010000000017103/legality-and-issues-related-to-high-frequency-trading-in-india-experts-opinion (last visited on 6 Oct, 2024). [3] Id.
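As flagged above, here is a minimal sketch, under stated assumptions, of how an exchange-side check might flag a trading member breaching a minimum order resting time or an order-to-trade ratio cap. The thresholds and data structures are hypothetical placeholders, not SEBI's actual parameters.

```python
# A toy, hypothetical compliance check for two of the measures discussed above:
# a minimum resting time for orders and a cap on the order-to-trade ratio.
# Thresholds are illustrative placeholders, not SEBI's actual parameters.
from dataclasses import dataclass

MIN_RESTING_TIME_MS = 500       # hypothetical minimum time an order must stay live
MAX_ORDER_TO_TRADE_RATIO = 50   # hypothetical cap on orders placed per trade executed

@dataclass
class SessionStats:
    member_id: str
    orders_placed: int
    trades_executed: int
    fastest_cancel_ms: float  # quickest cancellation observed in the session

def compliance_flags(stats: SessionStats) -> list[str]:
    flags = []
    if stats.fastest_cancel_ms < MIN_RESTING_TIME_MS:
        flags.append("order cancelled before minimum resting time")
    if stats.trades_executed and stats.orders_placed / stats.trades_executed > MAX_ORDER_TO_TRADE_RATIO:
        flags.append("order-to-trade ratio cap exceeded")
    return flags

session = SessionStats(member_id="member-042", orders_placed=12_000, trades_executed=150, fastest_cancel_ms=40.0)
print(compliance_flags(session))  # both hypothetical limits breached in this example
```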
- New Report: Legal-Economic Issues in Indian AI Compute and Infrastructure [IPLR-IG-011]
We are thrilled to announce the release of our latest report, "Legal-Economic Issues in Indian AI Compute and Infrastructure" [IPLR-IG-011], authored by the talented duo, Abhivardhan (our Founder) and Rasleen Kaur Dua (former Research Intern, the Indian Society of Artificial Intelligence and Law). This comprehensive study delves into the intricate challenges and opportunities that shape India's AI ecosystem. Our aim is to provide valuable insights for policymakers, entrepreneurs, and researchers navigating this complex landscape. 🧭💡 Read the complete report at https://indopacific.app/product/iplr-ig-011/

Key Highlights
🖥️ Impact of Compute Costs on AI Development in India: Examining the AI compute landscape and the role of compute costs in India's AI development.
🏗️ AI-associated Challenges to Fair Competition in the Startup Ecosystem: Analysing how compute costs and access to public computing infrastructure influence AI development, particularly for startups and small enterprises.
🌐 Addressing Tech MNCs under Indian Competition Policy: Exploring how Indian competition and industrial policy on digital technology MNCs affects regulation and innovation.
🤝 India's Role in Global AI Trade and WTO Agreements: Investigating India's stance on international trade policies related to AI, the impact of WTO agreements, and the potential for sector-specific AI trade agreements.
📈 Key Recommendations and Strategies for India's AI Development: Offering actionable recommendations for enhancing India's AI development both domestically and in the context of global trade policies.

As India strives to become a global AI powerhouse, it is crucial to address the legal and economic implications of this transformative technology. Our report aims to contribute to this important discourse and provide a roadmap for inclusive and sustainable AI development. 🌍💫 We invite you to download the full report from our website and join the conversation. Share your thoughts, experiences, and insights on the challenges and opportunities facing India's AI ecosystem. Together, we can shape a future where AI benefits all. 🙌💬 Stay tuned for more cutting-edge research and analysis from Indic Pacific Legal Research. 📣🔍 Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com .