Search Results

  • Zero Knowledge Systems in Law & Policy

    Despite the market volatility attributable to cryptocurrencies, the scope of Web3 technologies and their business models remains largely unexplored, especially in the Indian context. A few companies, such as Polygon, Coinbase India and Binance, are addressing that. This article explores the Zero Knowledge System as a method of conducting cryptographic proofs, and addresses some policy questions on whether certain ideas and assertions of ZKS can be integrated into the domains of law & policy, considering India's role as a leader of the Global South. The Essence of Zero Knowledge in Web3 To begin in simple terms, a Zero Knowledge System is based on probabilistic models of proof verification, not deterministic ones. It is one of the methods in cryptography used for entity authentication. Let us understand it with the help of a diagram.
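    The probabilistic character of such a system can be illustrated with a toy simulation of the classic "Ali Baba cave" protocol, a standard teaching example rather than anything drawn from the article itself. Each round, the verifier issues a random challenge that only a prover who knows the secret can always answer, so a cheating prover survives n rounds only with probability 2^-n. The function name and structure below are illustrative assumptions, a minimal sketch of the idea.

```python
import secrets

def run_rounds(prover_knows_secret: bool, rounds: int) -> bool:
    """Simulate the 'Ali Baba cave' zero-knowledge protocol.

    Each round, the verifier asks the prover to emerge from a
    randomly chosen side of the cave. An honest prover (who knows
    the secret word for the door) can always comply; a cheating
    prover succeeds only by guessing the verifier's challenge,
    i.e. with probability 1/2 per round.
    """
    for _ in range(rounds):
        challenge = secrets.choice(["left", "right"])
        if prover_knows_secret:
            response = challenge                          # can always comply
        else:
            response = secrets.choice(["left", "right"])  # must guess
        if response != challenge:
            return False                                  # caught cheating
    return True                                           # verifier is convinced

# An honest prover passes every round; a cheater passes 20 rounds
# only with probability 2**-20 (roughly one in a million).
assert run_rounds(True, 20)
```

    Note that the verifier ends up convinced without ever learning the secret itself; confidence comes only from repeated rounds driving the cheating probability towards zero, which is what makes the verification probabilistic rather than deterministic.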

  • The Digital Personal Data Protection Act & Shaping AI Regulation in India

    As of August 11, 2023, the President of India has given assent to the Digital Personal Data Protection Act (DPDPA), and once notified in the Official Gazette, the instrument stands as law. Multiple briefs, insights and infographics have since been published by several law firms across India. This article therefore focuses on the key provisions of the Act, and explores how it would shape the trajectory of AI regulation in India, especially considering the recent amendments to the Competition Act, 2002 and the trajectory of the upcoming Digital India Act, which is still in progress. You can read the analysis on the Digital India Act as proposed in March 2023 here. You can also find a complete primer of the important provisions of the Digital Personal Data Protection Act provided with this insight, which are discussed in this article. We urge you to download the file, as we have discussed provisions described in that document.

  • USPTO Inventorship Guidance on AI Patentability for Indian Stakeholders

    The United States Patent and Trademark Office (USPTO) has recently issued guidance that seeks to clarify the murky waters of AI contributions in the realm of patents, a move that holds significant implications not just for American innovators but also for Indian stakeholders who are deeply entrenched in the global innovation ecosystem. As AI continues to challenge the traditional notions of creativity and inventorship, the USPTO's directions may serve as a beacon for navigating these uncharted territories. For Indian researchers, startups, and multinational corporations, understanding and adapting to these guidelines is not just a matter of legal compliance but a strategic imperative that could define their competitive edge in the international market. In this insight, we will delve into the nuances of the USPTO's guidance on AI patentability, exploring its potential impact on the Indian landscape of innovation. We will examine how these directions might shape the future of AI development in India and what it means for Indian entities to align with global standards while fostering an environment that encourages human ingenuity and protects intellectual property rights. Through this lens, we aim to offer a comprehensive analysis that resonates with the ethos of Indian constitutionalism and the broader aspirations of India's technological advancement. The Inventorship Guidance for AI-Assisted Inventions This guidance, which went into effect on February 13, 2024, aims to strike a balance between promoting human ingenuity and investment in AI-assisted inventions while not stifling future innovation. We must remember that the Guidance did refer to the DABUS cases, in which Stephen Thaler's petitions to declare an AI an inventor were denied.

  • Why the Indian Bid to Make GPAI an AI Regulator is Unprepared

    India's recent proposal to elevate the Global Partnership on Artificial Intelligence (GPAI) to an intergovernmental body on AI has garnered significant attention in the international community. This move, while ambitious, raises important questions about the future of AI governance and regulation on a global scale. This brief examines and comments upon India's bid to enable the Global Partnership on Artificial Intelligence as an AI regulator, with special emphasis on the Global South, outlining the key challenges associated with GPAI, MeitY and the AI landscape we have today. India's Leadership in GPAI India, as the current chair of GPAI, has been instrumental in expanding the initiative to include more countries, aiming to transform it into a central body for global AI policy-making. The GPAI, which started with 15 nations, has now expanded to 29 and aims to include 65 countries by next year.

  • The G20 Delhi Declaration: Law & Policy Innovations

    The G20 New Delhi Leaders' Declaration is, for sure, a stupendous achievement across multiple shades of public policy and international affairs. The most notable aspect of this Declaration is that it was accepted without any separate statements or exceptions. The legal and policy issues on which this declaration reflects consensus make it a truly interesting achievement. The most interesting and relevant issues addressed in the G20 New Delhi Leaders' Declaration, in the context of committing to innovative law & policy practices, were the following: Unlocking Trade for Growth (page 4) Strengthening Global Health and Implementing One Health Approach (page 8)

  • The Ethics of Advanced AI Assistants: Explained & Reviewed

    Recently, Google DeepMind published a 200-plus-page paper on the "Ethics of Advanced AI Assistants". The paper is extensively authored and well-cited, and warrants a condensed review and feedback. Hence, we have decided that VLiGTA, Indic Pacific's research division, may develop an infographic report encompassing various aspects of this well-researched paper (if necessary). This insight by Visual Legal Analytica features my review of this paper by Google DeepMind.

  • AI-Generated Texts and the Legal Landscape: A Technical Perspective

    Artificial Intelligence (AI) has significantly disrupted the competitive marketplace, particularly in the realm of text generation. AI systems like ChatGPT and Bard have been used to generate a wide array of literary and artistic content, including translations, news articles, poetry, and scripts[8]. However, this has led to complex issues surrounding intellectual property rights and copyright laws[8]. Copyright Laws and AI-Generated Content AI-generated content is produced by an inert entity using an algorithm, and therefore it does not traditionally fall under copyright protection[8]. However, the U.S. Copyright Office has recently shown openness to granting ownership to AI-generated work on a "case-by-case" basis[5]. The key factor in determining copyright is the extent to which a human had creative control over the work's expression[5].

  • AI Seoul Summit 2024: Decoding the International Scientific Report on AI Safety

    The AI Seoul Summit on AI Safety, held in South Korea in 2024, has released a comprehensive international scientific report on AI safety. This report stands out from the myriad of AI policy and technology reports due to its depth and actionable insights. Here, we break down the key points from the report to understand the risks and challenges associated with general-purpose AI systems. 1. The Risk Surface of General-Purpose AI "The risk surface of a technology consists of all the ways it can cause harm through accidents or malicious use. The more general-purpose a technology is, the more extensive its risk exposure is expected to be. General-purpose AI models can be fine-tuned and applied in numerous application domains and used by a wide variety of users [...], leading to extremely broad risk surfaces and exposure, challenging effective risk management." General-purpose AI models, due to their versatility, have a broad risk surface. This means they can be applied in various domains, increasing the potential for both accidental and malicious harm. Managing these risks effectively is a significant challenge due to the extensive exposure these models have.

  • TESCREAL and AI-Related Risks

    TESCREAL serves as a lens through which we can examine the motivations and potential implications of cutting-edge technological developments, particularly in the field of artificial intelligence (AI). As these ideologies gain traction among tech leaders and innovators, they are increasingly shaping the trajectory of AI research and development. This insight brief explores the potential risks and challenges associated with the TESCREAL framework, focusing on anticompetitive concerns, the impact on skill estimation and workforce dynamics, and the need for sensitisation measures. By understanding these issues, we can better prepare for the societal and economic changes & risks that advanced & substandard AI technologies may bring. It is crucial to consider not only the promises but also the pitfalls of the hype around rapid advancement. This brief aims to provide a balanced perspective on the TESCREAL ideologies and their intersection with AI development, offering insights into proactive measures that can be taken before formal regulations are implemented. Introduction to TESCREAL The emergence of TESCREAL as a conceptual framework marks a significant milestone in our understanding of the ideological underpinnings driving technological innovation, particularly in the realm of artificial intelligence. This acronym, coined by computer scientist Timnit Gebru and philosopher Émile P. Torres, encapsulates a constellation of interconnected philosophies that have profoundly shaped the trajectory of AI development and the broader tech landscape.

  • The Generative AI Patentability Landscape: Examining the WIPO Report

    This insight examines a recently published Report on Generative Artificial Intelligence Patents by the World Intellectual Property Organisation, as of mid-2024. Now, let's address a caveat before delving into the analysis and the WIPO report itself. It's important to note that this report may not serve as the definitive authority on AI patentability within the WIPO's international intellectual property law framework. While the report provides valuable insights, certain sections discussing the substantive features of Generative AI and related aspects might not directly reflect WIPO's official stance on AI patentability. This caveat is crucial for two reasons: Evolving Landscape: The AI patentability landscape is still developing, and individual countries are establishing their own legal frameworks, positions, and case law on the subject. International Framework: The creation of an international intellectual property law framework under WIPO for AI patentability remains uncertain, as aspects related to economic-legal contractual rights and knowledge management may evolve. The Three Perspectives of Analysis in this Report This report is based on three key perspectives of analysis when it examines the Generative AI landscape: The first perspective covers the GenAI models. Patent filings related to GenAI are analyzed and assigned to different types of GenAI models (autoregressive models, diffusion models, generative adversarial networks (GAN), large language models (LLMs), variational autoencoders (VAE) and other GenAI models).

  • The French-German Report on AI Coding Assistants, Explained

    The rapid advancements in generative artificial intelligence (AI) have led to the development of AI coding assistants, which are increasingly being adopted in software development processes. In September 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly published a report titled "AI Coding Assistants" to provide recommendations for the secure use of these tools. This legal insight aims to analyse the key findings from the ANSSI and BSI report. By examining the opportunities, risks, and recommendations outlined in the document, we can understand how India should approach the regulation of AI coding assistants to ensure their safe and responsible use in the software industry. The article highlights the main points from the ANSSI and BSI report, including the potential benefits of AI coding assistants, such as increased productivity and employee satisfaction, as well as the associated risks, like lack of confidentiality, automation bias, and the generation of insecure code. The recommendations provided by the French and German agencies for management and developers are also discussed.

  • Deciphering Australia’s Safe and Responsible AI Proposal

    This insight by Visual Legal Analytica is a response/public submission to the Australian Government's recent Proposals Paper for introducing mandatory guardrails for AI in high-risk settings, published in September 2024. This insight, authored by Mr Abhivardhan, our Founder, is submitted on behalf of Indic Pacific Legal Research LLP. Key Definitions Used in the Paper The key definitions provided in the Australian Government's September 2024 high-risk AI regulation proposal reflect a comprehensive and nuanced approach to AI governance: Broad Scope and Lifecycle Perspective
