

90 items found

  • A Legal Perspective on AI-enabled Drug Discoveries

The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of October 2023.

Drug discovery, the meticulous identification, development, and rigorous testing of novel medications or therapeutic approaches, is aimed at combatting a diverse array of diseases and medical conditions. In recent years, this landscape has been invigorated by the growing use of artificial intelligence (AI) technologies. AI, with its formidable computational prowess, holds the promise of revolutionising the drug discovery process, rendering it more efficient, cost-effective, and precise. Yet, amid this potential, a complex tapestry of legal and policy implications unfurls.

AI's potential in the pharmaceutical sphere extends to simplifying and accelerating the drug development continuum. It might herald a transformative shift, converting drug discovery from a labour-intensive endeavour into a capital- and data-intensive process. This transformation unfolds through the use of robotics and the creation of intricate models encompassing genetic targets, drug properties, organ functions, disease mechanisms, pharmacokinetics, safety profiles, and efficacy assessments. In doing so, AI promises to usher in a new era of pharmaceutical development, one characterised by heightened efficiency and innovation.

Over the past decade, AI has left an imprint on the realm of drug discovery, with its potential most visible in small-molecule drug development. This influence has brought a plethora of advantages, including unprecedented access to profound biological insights, refinements in chemistry, improved success rates, and the potential to enable swift, cost-efficient discovery procedures. In essence, AI stands as a formidable instrument with the capacity to surmount many of the challenges and limitations that have traditionally beset the research and development landscape.
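The kind of model-driven candidate screening described here can be illustrated with a deliberately simplified sketch. Everything below is invented for illustration: the binary "fingerprints", the bit positions, and the activity labels are hypothetical, and real pipelines use far richer representations and models (graph neural networks, docking simulations, and so on). The point is only to show how similarity to known data drives an AI model's predictions:

```python
# Toy sketch of model-based candidate screening: score an unseen compound by
# its similarity to known active and inactive compounds. All data is invented.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (sets of 'on' bits)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Hypothetical training data: fingerprint bit-sets of previously tested compounds.
actives = [{1, 4, 7, 9}, {1, 4, 8, 9}, {2, 4, 7, 9}]
inactives = [{0, 3, 5, 6}, {0, 2, 5, 6}]

def screen(candidate):
    """Predict 'active' if the candidate is, on average, more similar to
    known actives than to known inactives."""
    sim_act = sum(tanimoto(candidate, a) for a in actives) / len(actives)
    sim_inact = sum(tanimoto(candidate, i) for i in inactives) / len(inactives)
    return ("active" if sim_act > sim_inact else "inactive", round(sim_act, 2))

print(screen({1, 4, 7}))   # resembles the known actives
print(screen({0, 3, 6}))   # resembles the known inactives
```

The prediction is driven entirely by patterns in the historical data, which is precisely why the data-availability and bias concerns discussed in this piece matter so much: the model can only be as good, and as fair, as the compounds it has seen.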
The traditional paradigm of drug discovery often resembled a complex and uncertain game of trial and error, characterised by extensive testing of prospective compounds. The future promises a more streamlined and data-driven approach, courtesy of AI algorithms capable of both supervised and unsupervised learning. These algorithms can scrutinise vast datasets, pinpoint potential drug candidates, forecast their effectiveness, and optimise their molecular structures. This paradigm shift towards AI-driven drug discovery could bring substantial cost reductions, hasten the overall process, and raise the probability of success. It remains to be seen whether integrating AI-based techniques into drug discovery will let researchers expedite the process and deliver potentially life-saving drugs to patients more quickly. Fully capitalising on AI's capabilities, however, necessitates a fundamental transformation of the entire drug discovery process. Companies must be prepared to invest in vital elements such as data infrastructure, cutting-edge technology, and the cultivation of new proficiencies and behaviours across the research and development landscape.

Generic Legal and Policy Dilemmas

The legal and policy implications of AI in drug discovery span a multifaceted spectrum of considerations: data privacy and security, intellectual property rights in AI-generated discoveries, the pressing need for regulatory compliance in AI applications, and the intricate ethical dimensions of AI's use. Moreover, the effective deployment of AI in this context is contingent upon several pivotal factors.
These include the accessibility and availability of high-quality data, the careful handling of ethical concerns, and recognition of the inherent limitations of AI-driven approaches. Within this intricate web of legal and policy implications, several considerations warrant attention and scrutiny.

Algorithm Reliability and Interpretability: Ensuring that the algorithms underpinning AI-driven drug discovery yield dependable and comprehensible outcomes is of paramount importance.

Bias Mitigation: Mitigating biases that may inadvertently creep into AI-driven drug discovery is critical; bias-free outcomes are essential to uphold the integrity of the discovery process.

Data Privacy and Patient Confidentiality: Striking a balance between data accessibility and the safeguarding of sensitive patient information remains a pivotal challenge.

Intellectual Property Rights: Questions surrounding the ownership and protection of discoveries stemming from AI-driven processes, including their patentability, pose intricate challenges.

Technological Misuse: Precautions against misuse or unintended consequences of AI in drug discovery are a proactive necessity; safeguarding against adverse outcomes is a paramount concern.
Ensuring Drug Safety and Efficacy: Maintaining an unwavering commitment to drug safety and efficacy standards is a cornerstone of AI-infused discovery; AI-contributed discoveries must meet rigorous safety and effectiveness criteria.

Concurrently, it is imperative to acknowledge the prevailing limitations of AI-driven drug discovery.

Limited Trust in AI's Value: Scepticism about the value of AI within drug discovery is a noteworthy hurdle; building trust in AI as a valuable tool is an ongoing endeavour.

Data Accessibility and Standardisation Challenges: Limited data access, low data maturity, and the absence of standardisation in data, tools, and capabilities are formidable obstacles.

Data Scarcity: The scarcity of essential data, which forms the bedrock of effective AI-driven methodologies, is a foundational challenge.

Interoperability Constraints: The lack of seamless data exchange and integration hinders the operation of AI in this context.

The Curse of Dimensionality: Computational complexities arising from the curse of dimensionality are a technical challenge that necessitates careful consideration.

Resource-Intensive and Inaccurate Outcomes: The resource-intensive nature and occasionally suboptimal accuracy of AI-generated results are practical hurdles that demand attention.

Limited Compound Universe: AI's independent efficacy in discovering novel drugs is constrained by the minuscule fraction of the available chemical universe accessible through existing data.

Real-time Policy Implications

These multifaceted challenges underscore the critical importance of a balanced approach that melds traditional experimental methods with AI-powered techniques.
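As a brief technical aside, the "curse of dimensionality" can be shown numerically: in high-dimensional descriptor spaces, data become sparse and distances between points become less informative. The sketch below is self-contained, pure-Python, and purely illustrative; the dimensions and sample sizes are arbitrary choices:

```python
import math
import random

def distance_contrast(dim, n_points=200, seed=0):
    """Return (max_dist - min_dist) / min_dist for random points in [0, 1]^dim.

    A low value means distances have 'concentrated': the nearest and farthest
    neighbours look almost equally far away, so similarity search degrades.
    """
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    origin = pts[0]
    dists = [math.dist(origin, p) for p in pts[1:]]
    return (max(dists) - min(dists)) / min(dists)

for dim in (2, 10, 100, 1000):
    print(dim, round(distance_contrast(dim), 3))
```

As the dimension grows, the contrast between the nearest and farthest neighbour shrinks, so similarity-based reasoning over sparse high-dimensional data becomes less reliable. This is one practical reason data-driven screening is typically paired with traditional experimental validation.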
Such an approach proves indispensable in harnessing the full spectrum of benefits that AI can offer within drug discovery. Notably, these challenges are dynamic areas of ongoing research and development, with novel advancements and innovative solutions continually emerging to address them.

To shed light on some of these concerns, consider a practical illustration. Company X, a pharmaceutical firm, embarks on developing a treatment for a rare disease, heavily leveraging AI-driven drug discovery techniques. The initial challenge confronting Company X lies in acquiring sufficient healthcare data, an indispensable resource for training an AI model capable of generating meaningful insights. Unfortunately, pertinent and reliable data is scarce in the field. Even if Company X manages to unearth relevant datasets, there remains a pervasive risk of bias, deeply ingrained in historical healthcare data, which could significantly taint the drug discovery process and raise substantial ethical concerns.

Company X also faces the complex task of striking a balance between the contributions of AI and human researchers. While AI brings formidable analytical capabilities, human researchers possess a depth of creativity and a nuanced understanding of the drug discovery process that could be underutilised if excessive reliance on AI were to prevail. And if Company X successfully navigates these challenges and achieves a breakthrough drug, a formidable obstacle looms in the form of patentability: the existing patent framework may not adequately accommodate and protect AI-driven innovations.
Furthermore, unexpected side effects from AI-generated drugs introduce a perplexing issue of liability. In the event of adverse effects, who bears responsibility: the company, the AI manufacturer, or the AI system itself? Assigning liability in such circumstances remains largely uncharted territory. Each of these concerns, and its far-reaching implications for AI-driven drug discovery, is explored in detail below.

Data Availability

One of the foundational pillars of AI-driven drug discovery is data. AI's effectiveness in identifying potential drug candidates and predicting their efficacy relies heavily on large and diverse datasets, which are instrumental in training AI models to recognise patterns, relationships, and potential candidates. A lack of access to high-quality, comprehensive data can therefore blunt AI's impact on the drug discovery process. These concerns are rooted in the scarcity of comprehensive and accessible healthcare data. Various stakeholders, including pharmaceutical companies, research institutions, and healthcare providers, often possess vast amounts of data; yet, owing to privacy concerns, data silos, and a lack of standardised formats, these datasets are not readily available for AI-driven drug discovery. Policymakers and regulatory bodies need to address data sharing and privacy regulations in this context. While it is crucial to protect individuals' sensitive healthcare information, mechanisms for secure and anonymised data sharing should be encouraged. A legal framework that facilitates responsible data sharing among stakeholders can unlock the full potential of AI in this field.

Ethical Concerns

As AI becomes increasingly integrated into the drug discovery process, ethical concerns have emerged.
The algorithms that drive AI decision-making are not immune to the biases present in the data they are trained on. These biases can extend into the drug discovery domain, raising ethical concerns about algorithmic fairness and the propagation of bias in drug discovery. The central worry is that AI-driven drug discovery could perpetuate, or even exacerbate, existing biases in healthcare. For example, if the historical healthcare data used to train AI models reflects healthcare disparities or the under-representation of certain populations, the AI may inadvertently perpetuate those disparities in drug discovery. Regulatory frameworks should require transparency in AI decision-making, and AI developers should be encouraged to assess and address bias within their algorithms. Continuous monitoring and auditing of AI-driven drug discovery processes can help detect and mitigate biases, and training datasets should be diverse and representative to minimise bias-related ethical concerns.

AI vs. Human Researchers

The advent of AI in drug discovery has sparked a debate about AI's limitations relative to human researchers and traditional research methods. While AI is undoubtedly a powerful tool, it is not a panacea, and there are concerns about over-reliance on AI-driven solutions. Many argue that AI should be viewed as a complement to human researchers rather than a replacement: human researchers bring a nuanced understanding of the complex biological and chemical processes involved in drug discovery, and an overemphasis on AI may neglect their expertise and creativity. Policymakers and stakeholders must strike a balance, embracing AI as a valuable tool that enhances human capabilities rather than replacing them.
Encouraging collaboration between AI-driven algorithms and human researchers can lead to innovative solutions and more effective drug discovery processes.

Patent Eligibility

The patent system plays a critical role in incentivising innovation in the pharmaceutical industry, but the integration of AI into drug discovery has raised questions about the patent eligibility of AI-generated discoveries. Determining eligibility can be complex: questions arise about the inventive step and the role of human inventors, and the traditional understanding of patents may not easily accommodate inventions in which AI plays a significant role. Legal frameworks need to evolve accordingly, clarifying patent eligibility criteria where AI is a pivotal contributor to an invention. Policymakers should consider the role of AI as a tool in the creative process and establish guidelines for recognising and protecting AI-generated innovations.

Liability Issues

Where AI-generated drugs have adverse effects or unintended consequences, questions of liability arise, and determining responsibility becomes complex. As AI grows increasingly autonomous in the drug discovery process, it may be challenging to pinpoint liability for adverse effects: should developers, users, or even the AI systems themselves be held accountable in specific circumstances? Legal systems should adapt to address these concerns, establishing clear guidelines for assigning responsibility that take into account the level of autonomy and decision-making authority that AI systems possess.
Conclusion & Recommendations

The fusion of AI and drug discovery holds immense promise for revolutionising healthcare and pharmaceuticals, but its legal and policy implications are complex and multifaceted. A thoughtful, forward-thinking approach to regulation and ethical AI development can help harness AI's transformative potential while ensuring that the technology serves the betterment of society and human health. As AI advances, the legal and policy landscape must evolve in tandem to create a harmonious and innovative environment for AI-driven drug discovery. Navigating this terrain effectively requires a multifaceted strategy that harmonises technological innovation with accountability and ethical considerations, optimising AI's benefits while safeguarding against potential pitfalls:

Regulatory Oversight and Ethical Safeguards

Establishing robust regulatory frameworks is paramount. These frameworks should strike a delicate balance, fostering AI innovation while upholding essential principles of data privacy, transparency, and ethical utilisation. Addressing data sharing, bias mitigation, and accountability within AI-driven drug discovery should be a primary focus; clear guidelines and standards can provide the structure needed for responsible AI integration.

Data Collaboration and Accessibility

Collaborative data-sharing initiatives among diverse stakeholders are essential, as access to comprehensive and varied datasets is a cornerstone of successful AI-driven drug discovery.
To unlock AI's full potential, mechanisms for secure and anonymised data sharing should be established. This not only promotes innovation but also ensures that AI researchers have access to the robust data necessary to drive breakthroughs.

Promoting Ethical AI Development

Responsible AI development practices should be actively promoted, including bias mitigation, algorithm transparency, and accountability mechanisms. Continuous monitoring and auditing of AI-driven drug discovery processes should be encouraged to ensure ethical adherence throughout the lifecycle of AI applications.

Legal Adaptation to AI Advancements

Legal professionals, especially those specialising in pharmaceutical matters, must remain agile to keep pace with AI's evolving role in drug discovery. Patent laws and liability frameworks in particular require continuous updates to reflect AI's expanding contributions. Legal frameworks should clarify patent eligibility where AI plays a substantial role in inventiveness, and guidelines for assigning responsibility for adverse effects stemming from AI-generated discoveries should be established, ensuring accountability while fostering innovation.

Human-AI Collaboration Emphasis

Acknowledging the complementary nature of AI and human researchers is crucial. Collaboration between AI-driven algorithms and human researchers allows AI to augment human capabilities, leading to more efficient drug discovery processes and improved outcomes.

Finally, the integration of AI in drug discovery is an evolving field; continuous evaluation, refinement, and adaptation of legal and policy frameworks are essential.
By embracing these multi-pronged strategies, we can not only leverage AI's transformative potential but also ensure that ethical considerations and accountability remain at the forefront of this groundbreaking journey.

Further Readings

https://www.afslaw.com/perspectives/alerts/legal-implications-ai-the-life-sciences-industry
https://www.frontiersin.org/articles/10.3389/fsurg.2022.862322/full
https://www.bcg.com/publications/2022/adopting-ai-in-pharmaceutical-discovery
https://niper.gov.in/crips/dr_vishnu.pdf
https://www.drugdiscoverytrends.com/ai-in-drug-discovery-analysis/
https://arxiv.org/abs/2212.08104
https://engineering.stanford.edu/magazine/promise-and-challenges-relying-ai-drug-development
https://link.springer.com/article/10.1007/s11030-021-10266-8
https://link.springer.com/article/10.1007/s12257-020-0049-y

  • Unlocking the Language of AI & Law: A Comprehensive Glossary of Legal, Policy, and Technical Terms

In the ever-evolving world of VLiGTA and ISAIL technical reports and publications, navigating through terms like General Intelligence Applications, Object-Oriented Design, Privacy by Design, Privacy by Default, Artificial Intelligence, and Anthropomorphization can often feel like deciphering a foreign language. Today, we are thrilled to announce an invaluable resource for those seeking to stay well-informed in the realm of AI and Law: the release of the comprehensive Glossary of Legal, Policy, and Technical Terms by Indic Pacific Legal Research.

The Glossary

This glossary is the culmination of our commitment to demystifying the intricate web of terminology that surrounds the intersection of artificial intelligence and the legal landscape. It is designed to equip professionals, enthusiasts, and anyone interested in AI and Law with a clear understanding of the critical terms that frequently appear as jargon in our work. The glossary goes beyond mere definitions; it sheds light on the legal, policy, and technical aspects of these terms, making it an indispensable companion for navigating the complex terrain of AI and Law.

Accessing the Glossary

To access this resource, visit our website at indicpacific.com/glossary. Dive into a world of knowledge and empower yourself with a deeper understanding of the language of AI and Law.

Sample Definitions

Here's a glimpse of some intriguing definitions you'll find within the glossary:

Algorithmic Activities and Operations: Understand a core concept that drives AI and Law.
Class-of-Applications-by-Class-of-Application (CbC) approach: Explore a unique approach crucial in the AI domain.
Indo-Pacific: Discover the significance of this term in the context of public policy.
International Algorithmic Law: Grasp the importance of this term in shaping the legal landscape of AI.

These are just a taste of the insights waiting for you within the glossary.
Join the Conversation

We invite you to join the conversation and expand your horizons in the world of AI and Law. Visit indicpacific.com/glossary today to explore the full glossary and unlock the language of AI & Law. Feel free to also check our Services Brochure.

  • Reinventing the Legal Profession for India: Proposals for Lawyers & Non-Lawyers

This insight is a miniature proposal from the VLiGTA Team on how to change the way the legal profession in India exists across multiple streams, such as academia, litigation, corporate jobs, dispute resolution, freelancing, consulting and other categories of legal professions. This insight will keep updating the list of proposals we have offered to shape and reinvent the legal profession in India; hence, it is not a final insight per se. Our proposals address how people and stakeholders in India's policy, industry and social sectors, who are not part of the legal industry, could be helped by reasonable changes in various categories of legal professions. These suggestions are proposed to promote a healthy discourse; should we proceed with a deeper analysis, we would convert them into a technical report. Hence, these suggestions must be read reflectively. Our approach is to propose solutions and offer context to achieve policy clarity.

The Legal Academia must be a Separate Class of Professionals

It is commendable that the Supreme Court of India and the members of the Bar have endorsed the creation of an Arbitration Bar in India and have promoted court-ordered arbitration and mediation processes. Nevertheless, just as advocates have the Bar Council of India as a representative body, and the National Law Universities have a Consortium to conduct CLAT and make key decisions, it is now more important than ever to have a proper Indian Council of Legal Academicians, representing the interests of law teachers in India who teach, research and work in universities. This must include all assistant professors, associate professors, full professors, lecturers, research associates and even fellows as designated by the University Grants Commission.
An alternative could be to transfer the authority to regulate legal education in India to the University Grants Commission. Let us understand why we have proposed this. Although it is a contentious issue, it would be untenable to have legal education regulated without teachers. Although the UGC regulations apply to legal education, law teachers are more capable, and better placed, to handle matters of legal education. Our failure to cultivate better law academics, from research associates to professors, across law schools and departments in India has led to a point where, except for a few top law schools (perhaps 10-15, both government and private), the quality and standard of legal teaching has declined. Indeed, the situation has ossified to the extent that vacancies for key and elective legal subjects have been on the rise. Further, the lack of competence and experience stems from the fact that India's legal education system does not incentivise better pay or a better working approach for would-be legal academics. Yet for a new India intending to become a key player in the Global South, legal education must improve at a mass level. That is why standardisation of representation is needed. A hybrid model could first create a separate Indian Council of Legal Academicians (ICLA), and then give dual authority to both the Bar Council of India and the ICLA to regulate legal education, be it professional, academic or executive in nature. In certain matters, however, the say of the ICLA must be preferred by law. In that case, neither the Bar Council of India nor the ICLA is deprived of its authority. An act of Parliament could establish the ICLA as a statutory authority with two key bodies: an Executive Council and a Representative Council.
The Executive Council may have an academic as Chairperson, with the other members being one bureaucrat from the Ministry of Education, one from the University Grants Commission and a few academicians. The Representative Council could comprise academicians and researchers in the field of law from various parts of India, at various levels as designated by the UGC, together with those who work in think tanks and research institutes. Again, these suggestions are proposed to promote a healthy discourse.

Legal Education must be Democratised, Schematised and Digitised

There are serious issues in how law teachers and educational institutions teach law. The first problem is that legal education is stagnating by virtue of its pedagogical techniques, not the legal subjects themselves. Law is taught as if it were rote learning, a mere exposition of poetry or literature. While value systems must be taught, they cannot be taught without a realist perspective. The moral and ontological background of any legal subject and its subtopics should be taught in a way that develops a sense of taxonomy in that field, along with a mathematical, workflow-based understanding of law, both substantive and procedural. This could work in any field of law. Indeed, any legal subject could be taught as a semi-experience of sorts, which is then challenged, re-assessed and properly learnt by students and professionals. The second problem is that the mandate of education differs from the level or extent of pedagogy. There are certain things a law firm could teach, while some things could be taught better by advocates practising in courts of law. The same applies to universities.
Even if teachers were able to develop better pedagogical techniques, that alone would not solve the larger problem of democratising the learning of a subject, or the extent to which a student or professional can apply what they have learned. It is not easy to put into practice what you learn in law, and hard and soft skills, such as writing and speaking, are obvious components that must be dealt with differently. In addition, given the syllabi offered for 5-year, 7-year and 3-year law degrees and post-graduate law degrees, many universities are naturally bound to limit the scope of teaching to academic issues. This means that many universities and institutions, already facing a deficit of teachers, uneven teaching competence among those who do teach, and in many cases past financial crunches, cannot offer an infrastructure for learning and applying the law. The third problem is the assumption of gatekeeping. Since the burden of democratising legal education lies primarily on the members of the Bar and the judiciary, given the regulatory landscape of legal education in India, the academic fraternity has no onus to democratise legal education. There have been honest attempts by the Supreme Court, the Bar Council of India and the Government of India, through the Law Ministry, to promote legal aid and ensure that law universities and departments extend legal education to all through their pedagogy. How many legal aid camps and measures actually work is another issue; in any case, legal aid by its nature cannot substitute for broad legal education. This leads to gatekeeping, because most academic stakeholders (except those in top law institutions) have no awareness of how to teach and disseminate legal learning beyond aspiring law professionals and scholars.
Law is not merely a profession but also a way of life for many. A democratic republic needs its businesses and citizens to know the law in a graduated, experience-based and clearer fashion. While governments, statutory bodies and regulators have larger legal and political issues to handle, a sense of policy and legal clarity can be taught to people. This is how the half-baked assumption that gatekeeping happens in the legal profession can be addressed. Thus, we also propose that digitising legal education is a great way forward, if done with a concerted approach. This insight is subject to revision.

  • Unlocking the Creative Conundrum: Copyright Challenges in Text-Based Generative AI Tools

The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of September 2023.

The rise of text-based generative AI tools has revolutionised numerous industries by endowing them with the remarkable capability to generate text resembling human composition. However, this technological advancement has ushered in a host of challenges, chiefly within the domain of digital copyright. This article endeavours to unravel the intricate issue of text-based copyright within the realm of generative AI. It also delves into the legal actions taken against OpenAI in court, scrutinises the limitations inherent in generative AI, and examines its implications across various sectors. Through a comprehensive examination, spanning not only literary works created primarily for entertainment but also academic research, this article aims to illuminate the multifaceted issues entwining text-based generative AI and copyright law.

In recent years, text-based generative AI tools have undergone rapid evolution, embedding themselves as indispensable components in content creation, chatbot development, and an array of other applications. Among these tools, ChatGPT, a creation of OpenAI, has garnered particular prominence; it harnesses formidable language models to produce text that remarkably emulates human composition. Although these AI tools have made substantial contributions across industries such as journalism, marketing, and entertainment, they have concurrently ignited contentious debates and triggered legal challenges revolving around digital copyright and the boundaries of creative expression. This article examines the predicaments posed by text-based generative AI tools from legal, creative, and practical perspectives.
The Text-Based Generative AI Revolution and Its Creative Constraints

The emergence of text-based generative AI marks a significant milestone in the ever-evolving landscape of artificial intelligence. These remarkable AI tools possess the capacity to analyse extensive datasets, learn from them, and subsequently generate text that bears a striking resemblance to human language. This technological breakthrough has found diverse applications across industries, spanning from automating content generation for websites and marketing materials to providing prompt and contextually relevant responses to user inquiries. Furthermore, text-based generative AI has ventured into the realm of creativity, even attempting to craft fictional dialogues between well-known characters. The potential to produce coherent and contextually appropriate text has ushered in a new era of automation, increasing efficiency and productivity across various sectors. Nonetheless, it is imperative to recognize the inherent limitations of text-based generative AI. While these AI models excel at replicating human language patterns, they fundamentally lack genuine creativity. When assigned the task of generating dialogues, stories, or any form of creative content, they heavily rely on patterns and structures ingrained within the training data. Consequently, the outputs often bear an uncanny resemblance to existing works, raising valid concerns of unoriginality and the potential for copyright infringement. The essence of true innovation or the generation of entirely novel ideas remains beyond the capabilities of these AI tools. Their outputs are predominantly derived from statistical probabilities and learned patterns rather than the spark of creative inspiration. As a result, the integration of text-based generative AI in creative contexts has ignited a discourse that revolves around the authenticity of AI-generated content and its legal implications.
Digging deeper into the shortcomings of text-based generative AI, it becomes evident that their limitations extend beyond issues of creativity. One of the fundamental challenges lies in context comprehension. While these AI models can generate coherent sentences, they often struggle to grasp nuanced or subtle contextual cues. This limitation can lead to inappropriate or nonsensical responses, particularly in scenarios that demand a profound understanding of context. Furthermore, the biases present in the training data can inadvertently surface in the generated content, perpetuating existing stereotypes and prejudices. These challenges not only pose practical problems but also underscore the ethical considerations surrounding the use of AI in content generation and human interaction. As the adoption of text-based generative AI continues to grow, it is imperative to address these issues comprehensively to harness their potential while mitigating their limitations.

Digital Copyright and Civil Actions: Navigating the Legal Terrain with Text-Based AI

The intersection of digital copyright and text-based generative AI represents a multifaceted and contentious legal terrain. Within the realm of law, digital copyright predominantly concerns safeguarding text-based works against unauthorized utilization or reproduction.
The emergence of AI systems that generate text closely resembling copyrighted material has ignited a host of pertinent inquiries regarding copyright infringement. This, in turn, has led to a series of legal actions directed at OpenAI, the organization responsible for developing ChatGPT, within the United States judicial system. OpenAI, as the driving force behind ChatGPT and other text-based generative AI technologies, finds itself entangled in legal challenges linked to copyright infringement. Content creators and copyright holders contend that AI-generated text has the potential to undermine the market value of their original works. These legal disputes revolve around the crucial question of whether AI-generated text can be construed as fair use or a clear violation of established copyright law. The intricacies of these cases underscore the pressing requirement for well-defined legal frameworks capable of effectively addressing the unique and evolving challenges posed by generative AI in the realm of digital copyright. This intersection of technology and law necessitates a nuanced approach, given its implications not only for content creators but also for the broader domain of intellectual property rights in the digital era. These legal disputes epitomize the overarching endeavour to reconcile traditional copyright law principles with the transformative capabilities inherent in text-based generative AI. At the core of these legal deliberations lies the fundamental query concerning creativity and authorship in a world where machines are assuming an increasingly prominent role in content generation. One perspective asserts that AI should be perceived as a tool akin to a word processor, placing the onus of copyright infringement squarely on the shoulders of the human operator. 
Conversely, others contend that AI systems possess the capacity to autonomously produce text closely mirroring existing copyrighted works, necessitating a profound re-evaluation of copyright law itself. In essence, these legal contentions not only underscore the imperative for the legal community to adapt to the era of AI but also serve as a clarion call to policymakers and legislators to construct robust, forward-looking legal frameworks capable of adeptly addressing the intricate challenges stemming from the convergence of digital copyright and text-based generative AI. Striking a balance between safeguarding intellectual property rights and fostering innovation within this continuously evolving landscape presents a formidable undertaking. As AI continues to shape both our creative and legal domains, the demand for lucidity and coherence in copyright law becomes increasingly pressing.

Navigating the Legal Maze: Copyright Challenges and Ethical Frontiers in AI Development

The Authors Guild's Copyright Battle

A consortium of American writers, under the banner of a trade group, has launched a collective legal action in federal court against OpenAI, the creator of ChatGPT. Orchestrated by the Authors Guild, this lawsuit champions the cause of over a dozen prominent authors, among them Jonathan Franzen, John Grisham, Jodi Picoult, George Saunders, and the esteemed writer behind "Game of Thrones," George R. R. Martin. The crux of their grievance centres on OpenAI's alleged illicit utilization of copyrighted materials to facilitate the training of its generative artificial intelligence software.
The lawsuit filed by the Authors Guild against OpenAI, alleging copyright infringement in the development of ChatGPT, unveils a complex web of legal, social, economic, and ethical implications in the realm of text-based generative AI tools. On the legal front, this legal battle challenges the interpretation of fair use within U.S. copyright law. OpenAI argues that its use of data scraped from the internet falls under fair use, but the Authors Guild contends that the company illicitly accessed copyrighted works to train its AI system. This legal dispute raises fundamental questions about the boundaries of fair use in the age of AI and whether AI models can indeed replicate the creative expressions of human authors without violating intellectual property rights. From a social perspective, the Authors Guild's lawsuit shines a spotlight on the economic threats faced by writers in an era where generative AI could potentially displace human-authored content. The suit highlights instances where ChatGPT was used to generate low-quality e-books impersonating authors, eroding the livelihoods of human writers. The Authors Guild argues that unchecked generative AI development could lead to a substantial loss of creative industries jobs, echoing concerns raised by Goldman Sachs. Moreover, this legal action underscores the broader societal debate about preserving human creativity and innovation in creative outputs, as the proliferation of AI-generated content challenges the authenticity and originality of human-authored works. 
The Authors Guild's stance emphasizes the importance of writers' ability to control how their creations are utilized by generative AI, raising ethical questions about the intersection of technology and artistic integrity, and setting the stage for a broader discussion about the future of creative industries in the face of AI disruption.

Creative Backlash: Artists Challenge OpenAI in Copyright Battle

Amid the creative community, mounting discontent with OpenAI's practices is becoming increasingly palpable. US comedian Sarah Silverman and two fellow authors have become vocal participants in this collective frustration by initiating legal proceedings against OpenAI, contributing to a growing chorus of creative voices challenging the company's actions. At the heart of their lawsuit lies the accusation of copyright infringement, with the plaintiffs vehemently asserting that OpenAI employed their literary creations without securing the necessary permissions to train its AI models. These legal actions epitomize a broader trend wherein creative individuals are resolutely asserting their rights in the face of unauthorized utilization of their intellectual property. This burgeoning movement carries the potential to not only redefine the landscape of AI development but also underscores the compelling urgency for ethical and legal considerations in the ever-evolving field of artificial intelligence. The grievances voiced by Sarah Silverman and her fellow authors exemplify a pivotal moment in the relationship between technology and the creative arts. This legal recourse serves as a potent reminder that even in the age of advanced AI, the rights and intellectual property of creators remain paramount. It further highlights the growing need for comprehensive ethical frameworks and robust legal safeguards to navigate the complex intersection of technology and artistic expression, ultimately shaping the future of AI development and its relationship with the creative community.
Privacy and Intellectual Property Rights: A Dual Legal Challenge

The lawsuits against OpenAI, particularly the allegations of copyright infringement and privacy violations, highlight the complex legal challenges that arise in the realm of AI development and deployment. On the one hand, the accusation that ChatGPT and DALL-E used copyrighted materials without proper consent underscores the need for AI developers to navigate copyright laws diligently. If the court rules in favour of the plaintiffs, it could set a precedent for stricter copyright regulations in AI training data, potentially reshaping how companies source and use data for machine learning models. This could have far-reaching implications for the AI industry, forcing organizations to be more transparent and careful about data sources and potentially driving innovation in AI model training techniques that rely less on copyrighted materials. On the other hand, the privacy violation allegations shed light on the growing concerns surrounding data privacy in the age of AI. The lawsuit claims that OpenAI collected personal information from users without their proper consent, which raises important questions about the ethical and legal boundaries of data collection and usage by AI systems. If the court finds merit in these claims, it could lead to more stringent regulations governing the collection and handling of user data by AI companies. This, in turn, may influence how AI models are developed and integrated into various applications, with a stronger emphasis on respecting user privacy and obtaining explicit consent for data usage. In sum, these lawsuits have the potential to reshape the legal and ethical landscape of the AI industry, emphasizing the need for a balanced approach that considers both intellectual property rights and data privacy concerns.
Conclusion

The rapid emergence of text-based generative AI, prominently represented by OpenAI's ChatGPT, has undeniably ushered in an era of unprecedented creativity, efficiency, and automation across various industries. However, this technological marvel is accompanied by a host of intricate challenges, notably within the realm of digital copyright. This article has delved into the complex interplay between text-based generative AI and copyright law, highlighting the legal battles, ethical dilemmas, and practical considerations that have arisen in this dynamic landscape. The legal actions initiated against OpenAI, both by the Authors Guild and individual creators like Sarah Silverman, serve as stark reminders of the evolving nature of creative expression in the digital age. These legal disputes stretch the boundaries of copyright law, prompting critical reflections on questions of authorship, creativity, and the transformative impact of AI on traditional creative industries. As technology continues its relentless evolution, it is imperative that our legal and ethical frameworks governing AI development evolve in tandem to ensure a harmonious and equitable environment for both creators and AI developers. Beyond copyright, the challenges extend to encompass privacy and data usage concerns, underlining the urgency of protecting user privacy in an increasingly data-centric world and necessitating a re-evaluation of data governance practices within the AI sector. In response to these multifaceted challenges, it falls upon not only the legal community but also policymakers, AI developers, and creators to collaborate in the search for solutions that strike a delicate balance between safeguarding intellectual property rights and fostering innovation. The future of AI and its role in creative content generation hinges on our ability to navigate this intricate terrain with wisdom, transparency, and a deep comprehension of the ethical implications at stake.
As we venture into the uncharted territory of AI-driven creativity, one thing remains abundantly clear: the imperative for adaptability and forward-thinking approaches in shaping the legal, ethical, and practical dimensions of text-based generative AI. This journey is marked by complexity and uncertainty, but it is also replete with the potential to reshape how we create, interact with, and protect content in the digital age. The key lies in embracing the challenges and opportunities presented by this technological revolution while steadfastly upholding the principles of creativity, integrity, and respect for intellectual property rights, which have been the bedrock of our creative endeavours throughout history. Only through such a balanced approach can we fully unlock the vast potential of text-based generative AI while preserving the quintessence of human creativity that enriches our diverse and ever-evolving world.

  • New Report: Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005

I am glad to announce another technical report for VLiGTA, i.e., Auditing AI Companies for Corporate Internal Investigations in India, VLiGTA-TR-005. Internal investigations have become increasingly important in recent years, as companies face a growing range of risks, including financial fraud, corruption, and workplace misconduct. AI companies are no exception, and may in fact face unique challenges in conducting internal investigations due to the complexity and opacity of AI systems. This report provides guidance on how to audit AI companies for corporate internal investigations in India. AI companies face a number of unique risks that necessitate internal investigations. First, AI systems are often complex and opaque, making it difficult to understand how they work and to identify and investigate potential misconduct. Second, AI systems may be used to collect and process sensitive data, which raises concerns about privacy and confidentiality. Third, AI systems are still relatively new and untested, and there is a risk that they could be used for malicious purposes. The report covers a wide range of topics, including: the importance of internal investigations in AI companies; the challenges of auditing AI systems; and useful practices for auditing AI companies from a corporate-governance perspective. The report is accessible at https://vligta.app/product/auditing-ai-companies-for-corporate-internal-investigations-in-india-vligta-tr-005/ (priced at 400 INR).

  • Exploring the Ethical Landscape of AI-Driven Healthcare in India

    The author is a former research intern of the Indian Society of Artificial Intelligence and Law. In the ever-evolving landscape of global healthcare, Artificial Intelligence (AI) has emerged as a dynamic catalyst, fundamentally altering the methods by which medical data is collected, processed, and applied to elevate patient care. AI, as a distinct category of technology, bears the promise of augmenting diagnostic precision, treatment efficacy, and ultimately, patient well-being. However, the adoption of AI within the healthcare sector, particularly in India, has brought to the forefront a constellation of ethical considerations revolving around data collection and processing. This article embarks on an expansive exploration of AI-driven healthcare in India, with a particular focus on the intricate dynamics of data acquisition, analysis, and the ethical contours that envelop these advancements. In doing so, we aim to illuminate AI's pivotal role in the Indian healthcare landscape while adeptly navigating the intricate ethical considerations arising from its utilization in data-intensive domains. The significance of AI in the Indian healthcare milieu is profound, offering an unprecedented opportunity to surmount challenges posed by India's vast and diverse population, in tandem with its abundant data resources. These technological innovations have the potential to bridge disparities in healthcare accessibility, bolster disease management, and expedite medical research. Nonetheless, the deployment of AI in this multifaceted healthcare ecosystem is accompanied by a spectrum of challenges, most notably within the realm of patient data collection and processing. 
This article, therefore, seeks to provide profound insights into the transformative potential of AI within the Indian healthcare framework while vigilantly scrutinizing the ethical intricacies that emerge when harnessing these technologies to advance healthcare delivery and optimize patient outcomes.

AI's Transformative Role in Indian Healthcare

The landscape of healthcare in India is undergoing a profound transformation, with the rapid integration of Artificial Intelligence (AI) solutions addressing a multitude of challenges faced by the nation's healthcare system. As a country marked by significant disparities in healthcare accessibility, the adoption of AI technologies is serving as a potent equalizer. Start-ups and large Information and Communication Technology (ICT) companies alike are pioneering innovative AI-driven solutions that hold the potential to revolutionize healthcare delivery and patient outcomes. One of the most pressing challenges in Indian healthcare is the uneven ratio of skilled doctors to patients. AI is stepping in to bridge this gap by automating medical diagnosis, conducting automated analysis of medical tests, and even aiding in the detection and screening of diseases. These applications not only enhance the efficiency of healthcare providers but also enable timely diagnosis and treatment. Furthermore, AI is playing a pivotal role in extending personalized healthcare and high-quality medical services to rural and underserved areas where access to skilled healthcare professionals has historically been limited. Through wearable sensor-based medical devices and monitoring equipment, patients in remote regions can now receive continuous health monitoring and intervention. Additionally, AI is contributing to the training and upskilling of doctors and nurses in complex medical procedures, ensuring that the healthcare workforce remains equipped to meet evolving healthcare demands.
Several noteworthy examples highlight the tangible impact of AI in the Indian healthcare sector. Institutions like the All India Institute of Medical Sciences (AIIMS) in Delhi have developed AI-powered tools capable of detecting COVID-19 in chest X-rays with remarkable accuracy. Start-ups like Niramai are leveraging AI to detect breast cancer at an early stage using thermal imaging, while Qure.ai is pioneering the detection of brain bleeds in CT scans. Sigtuple's AI-powered solution is being utilized to analyse blood samples, enabling the rapid detection of diseases like malaria and dengue. These innovations underscore the breadth and depth of AI's transformative potential in addressing healthcare challenges unique to India.

Patient Data: The Foundation of AI-Driven Healthcare

In the realm of AI-driven healthcare, patient data serves as the bedrock upon which transformative innovations are built. The importance of patient data cannot be overstated, as it fuels AI algorithms, enabling them to make precise diagnoses, recommend treatment plans, and predict disease trends. Patient data encompasses a wealth of information, including electronic health records (EHRs), medical imaging, genetic data, wearable sensor data, and even patient-generated data from health apps and devices. This vast and diverse trove of information is the lifeblood of AI applications in healthcare, offering valuable insights into patient health, disease progression, and treatment responses. The seamless collection and integration of this data are pivotal to realizing the full potential of AI in healthcare. However, the rapid proliferation of AI technologies in healthcare has raised concerns related to data privacy and ethical compliance. The fact that many of these AI solutions are owned and controlled by private entities has sparked privacy issues regarding data security and implementation.
As healthcare data becomes increasingly digitized, there is a pressing need for robust safeguards to protect patient information. Additionally, the ability to deidentify or anonymize patient health data, a critical component of data privacy, may face challenges in the face of new algorithms that can potentially reidentify such data. Striking the right balance between harnessing AI's potential for healthcare innovation and safeguarding patient privacy remains a paramount ethical concern. Addressing these challenges is imperative to ensure that AI continues to play a constructive role in the Indian healthcare landscape, enhancing healthcare accessibility, and delivering improved outcomes while maintaining the highest standards of privacy and ethical compliance.

Data Breaches in Indian Healthcare: A Growing Concern

In recent years, India has witnessed a series of healthcare data breaches that have not only raised alarm but have also shed light on the critical intersection of AI, data security, and ethical concerns within the healthcare sector. One notable instance occurred in August 2019 when the healthcare records of a staggering 6.8 million individuals were compromised in India, signifying the scale of the challenge. In a broader global context, India ranked third in data breaches in 2021, trailing behind only the United States and Iran, with a reported 86 million breaches compared to the US's 212 million. The hacker behind this massive breach, identified as 'fallensky519' by a US cybersecurity firm, FireEye, was suspected to be affiliated with a Chinese hacker group. This breach underscored the vulnerabilities in India's healthcare data infrastructure and the urgency of addressing data security concerns, particularly in an era where AI systems are increasingly employed for data processing and analysis.
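The de-identification concern raised above can be made concrete with a minimal sketch of pseudonymisation: direct identifiers are stripped and the patient ID is replaced with a salted hash. This is an illustrative assumption, not a compliant de-identification pipeline; the field names, record values, and salt are invented for the example.

```python
import hashlib

# Minimal pseudonymisation sketch (illustrative only): strip direct
# identifiers and replace the patient ID with a salted SHA-256 digest.
# Real de-identification must also handle quasi-identifiers (dates,
# locations, rare diagnoses) that enable re-identification.

DIRECT_IDENTIFIERS = {"name", "national_id", "phone"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Return a copy of `record` with direct identifiers removed and
    the patient_id replaced by a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()
    cleaned["patient_id"] = digest
    return cleaned

record = {
    "patient_id": "IN-2019-000123",   # hypothetical identifier
    "name": "A. Patient",
    "national_id": "XXXX-XXXX",
    "phone": "+91-0000000000",
    "diagnosis": "dengue",
}
safe = pseudonymise(record, salt="per-deployment-secret")
```

Because the hash is deterministic for a given salt, pseudonymised records can still be linked across datasets — which is precisely why newer linkage algorithms can sometimes re-identify supposedly anonymised data.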
Another alarming breach occurred in a large multi-speciality private hospital in Kerala, where complete patient records spanning five years, including test results, scans, and prescriptions, were exposed on the internet, accessible through unique patient IDs. The breach was initially uncovered by Dr. S Ganapathy, a physician in Kollam, who detected anomalies and forgeries within the medical records. Investigations revealed that the breach resulted from suboptimal security practices and a configuration issue at the hospital. This incident serves as a stark reminder of the need for stringent security protocols, particularly when AI systems are involved in managing sensitive patient data. Perhaps one of the most concerning breaches involved the leak of over a million medical records and 121 million medical images of Indian patients, including X-rays and scans, accessible to anyone online. The breach was identified by a German cybersecurity company, Greenbone Networks, which found 97 vulnerable systems in India. Patient records and images contained extensive personal information, including names, dates of birth, national IDs, and medical histories. This breach was attributed to the absence of password protection and encryption on the servers storing these records, raising questions about data security practices in healthcare facilities across the country.

The AIIMS Cyberattack: Spotlight on Healthcare Data Vulnerabilities

In December 2022, the Indian healthcare sector faced another significant challenge when it suffered 1.9 million cyberattacks, including a substantial one targeting the prestigious All India Institute of Medical Sciences (AIIMS) in New Delhi. The cyberattack, believed to be a ransomware attack, disrupted AIIMS' online services for a week and compromised the data of approximately 3-4 crore patients. While initial reports suggested that the hackers demanded a massive ransom in cryptocurrency, Delhi Police refuted these claims.
A team of experts from the Indian Computer Emergency Response Team (CERT-in) and the National Informatics Centre (NIC) worked diligently to restore digital services at AIIMS. This incident underscored the vulnerabilities in India's healthcare infrastructure, particularly in the context of increasing AI integration and the imperative of fortifying data security measures to prevent such breaches. Collectively, these cases highlight the urgent need for India's healthcare sector to fortify data security measures, particularly given the increasing integration of AI in patient data processing and analysis. The convergence of AI, data breaches, and ethical dilemmas calls for heightened vigilance and comprehensive strategies to protect patient data and uphold ethical standards while harnessing the potential of AI in healthcare. In navigating this complex landscape, India's healthcare sector must remain vigilant in safeguarding patient privacy and data security to ensure that the promises of AI are realized without compromising the fundamental principles of medical ethics.

Building Ethical Foundations for Indian Healthcare Data

In light of the increasing data breaches and ethical concerns in Indian healthcare, it is imperative that the healthcare sector takes proactive steps to ensure ethical data practices and harness the full potential of AI. To begin with, ethical data practices should be at the forefront of healthcare operations, safeguarding patient privacy, confidentiality, and consent. This includes not only protecting patient data but also ensuring that patients have ownership and control over their health information.
Moreover, accountability and distributive justice must be prioritized, with compensation for research-related harm and fair benefit sharing being integral components of ethical data practices. To facilitate these ethical data practices, the Indian government should play a pivotal role. Establishing a national digital health infrastructure that enables secure and interoperable data exchange among various stakeholders is crucial. Simultaneously, a robust legal and regulatory framework for data protection and governance should be put in place to provide clear guidelines for the handling of patient data. In addition to government initiatives, healthcare service providers and enterprises should customize AI solutions to meet the specific needs of the Indian healthcare landscape. This involves addressing local language, culture, and context to ensure that AI technologies are accessible and effective. Collaboration between healthcare providers and stakeholders, such as researchers, clinicians, and policymakers, is essential for validating and implementing AI models. Moreover, defining a comprehensive risk and governance framework for AI adoption, including ethical principles and protocols for data collection and processing, will help ensure responsible AI utilization. Investing in workforce training and capacity building is equally vital to leverage AI effectively and responsibly. Looking ahead, the prospects of AI in Indian healthcare are promising. AI can revolutionize various aspects of healthcare, including timely epidemic outbreak prediction, remote diagnostics and treatment, resource allocation optimization, precision medicine, and drug discovery. It can enhance patient care and streamline clinical workflows, reducing the cognitive burden on healthcare professionals. Furthermore, AI has the potential to foster innovation and collaboration in healthcare research by facilitating data sharing and analysis across institutions. 
As India moves forward in its journey with AI in healthcare, it must strike a delicate balance between technological advancement and ethical principles to ensure a brighter, more secure, and patient-centric future.

Conclusion

AI's transformative impact on Indian healthcare is undeniable, offering solutions to critical challenges and expanding access to quality care. However, ethical concerns regarding data privacy and security loom large. To navigate this transformation successfully, India must establish robust data protection measures, a supportive regulatory framework, and ethical practices that prioritize patient privacy. Collaboration among stakeholders is key to realizing a responsible and patient-centric AI-driven healthcare future. India has the opportunity to set a global example by embracing innovation while upholding the highest ethical standards, creating a healthcare landscape that is accessible, precise, and compassionate through the responsible adoption of AI.

  • Traversing the Data Portability Terrain: AI's Influence on Privacy and Autonomy

    The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of September 2023. In today's data-centric era, the convergence of data-driven technologies and the pervasive influence of Artificial Intelligence (AI) has thrust the issue of data portability into the spotlight. The value of personal data has soared, triggering critical discussions on who should have control over this valuable resource and who should benefit from it. Within this landscape, data portability has emerged as a crucial concept, enshrined in regulatory frameworks such as the 2018 EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). At its core, data portability empowers individuals with the right to obtain, copy, and seamlessly transfer their personal data across various platforms and services. This revolutionary concept redefines the traditional dynamics of data ownership and access, shifting control from data custodians to individuals themselves. It compels data holders to release personal data, even when it may not align with their immediate commercial interests. Advocates of data portability argue that this shift grants individuals greater agency over their digital footprints, enhancing their autonomy in the digital realm. The intersection of AI and data portability amplifies the significance of this concept. AI's insatiable hunger for data necessitates diverse and extensive datasets for optimal performance. Data portability, when seamlessly integrated with AI, has the potential to liberate and mobilize data for innovative applications far beyond its original context. This convergence unlocks unprecedented opportunities for data reuse and exploration while concurrently empowering individuals with greater control over the fate of their personal information. 
By scrutinizing its role in reshaping the data landscape and our perception of personal data, the article dissects the nuanced facets of data portability, highlighting its crucial function in preserving individual control in the AI-driven digital age. The article aims to demystify this complex landscape and shed light on its profound impact on the evolving digital ecosystem and individuals' control over their data in the AI-driven era.

Understanding Data Portability

Data portability is a versatile concept applicable in various contexts, especially within the legal framework of the European Union's General Data Protection Regulation (GDPR). Under GDPR, it is defined as a fundamental right empowering individuals to access their personal data from data controllers in a structured, machine-readable format. Equally important, it also grants individuals the capability to transfer their data between different controllers, provided such transfer is technically feasible. In essence, data portability empowers individuals to move their personal data, including complete datasets and subsets, in the digital realm and within automated processes. It's crucial to differentiate data portability from data interoperability. While interoperability relates to the seamless use of data across online platforms, data portability focuses on the individual's right to access and control their personal data. This concept holds immense significance in the digital marketplace by offering consumers the freedom to choose between platforms, preventing data from being trapped within closed ecosystems known as "walled gardens." This not only promotes competition and consumer welfare but also aligns with the principles of data protection and individual privacy. Data portability's roots trace back to initiatives like the Data Portability Project, established in 2007 through dataportability.org. This project aimed to foster unrestricted data mobility in commercial settings.
Recognizing the challenges of safeguarding personal data, the European Council initiated a debate in 2010 that eventually led to the GDPR proposal. Acknowledging data portability's potential to enhance consumer choice, competition, and data protection, GDPR now enshrines it as a fundamental right. In the ever-evolving digital landscape, data portability empowers individuals, offering them greater control over their data and contributing to a more open and competitive digital marketplace.

Empowering Individuals: Autonomy over Personal Data

The paramount significance of granting individuals control over their personal data cannot be overstated in today's data-centric digital landscape. It fundamentally upholds the principles of individual autonomy, privacy, and self-determination. Allowing individuals the authority to access, retrieve, and transfer their personal data empowers them to make informed choices about its usage, thereby reshaping the power dynamic in the data-driven era. This control extends beyond mere ownership; it signifies the ability to dictate how personal information is harnessed, by whom, and for what purposes. By placing individuals at the centre of their data universe, data portability not only bolsters their digital autonomy but also engenders trust in data ecosystems, fostering a more transparent and accountable digital environment. A compelling case example of the transformative potential of data portability can be found within the realm of financial technology, or fintech. In this context, data portability translates into on-demand data sharing, exemplified by the ability to grant access to specific financial data, such as banking transactions or credit histories, to authorized third parties for tailored financial services. Imagine a scenario where an individual seeks to secure a loan or assess their financial health.
Through data portability, they can grant temporary access to their financial data to a fintech service, allowing for instant analysis and personalized recommendations. This not only streamlines financial decision-making but also illustrates the real-world benefits of data autonomy. However, it is vital to tread carefully in this landscape to strike the right balance between data accessibility and security. Balancing privacy and convenience is the crux of the data portability challenge. While data portability empowers individuals by providing them control over their data, it also necessitates a delicate equilibrium between safeguarding privacy and delivering user-friendly experiences. Striking this balance is particularly relevant in scenarios where personal data flows seamlessly between platforms and services. While the convenience of data sharing and portability is undeniable, it must not come at the expense of privacy and security. Ensuring robust data protection mechanisms, user consent, and transparent data usage policies becomes imperative to maintain public trust in data ecosystems. In essence, data portability not only reshapes digital autonomy but also calls for a re-evaluation of data governance principles to safeguard individual privacy in an increasingly interconnected digital world.

AI and Personal Data: A Complex Relationship

The intricate relationship between Artificial Intelligence (AI) and personal data is rooted in AI's insatiable hunger for diverse and extensive datasets. AI systems, particularly machine learning models, depend heavily on the availability of vast and high-quality datasets to train, refine, and optimize their algorithms. Personal data, rich in its variety and depth, often forms the lifeblood of AI, enabling these systems to make predictions, recognize patterns, and deliver personalized services. Consequently, the intimate connection between AI and personal data raises critical questions about data access, usage, and consent.
Understanding the depth of AI's reliance on personal data is central to unravelling the complex dynamics surrounding data portability in the age of intelligent algorithms. The intersection of AI and personal data is fraught with ethical concerns, chief among them being data exploitation. As AI systems become increasingly proficient at mining insights from personal data, there is a growing risk of individuals being subjected to intrusive profiling, microtargeting, and manipulation. Striking the right balance between innovation and data protection becomes paramount, with considerations such as consent, transparency, and fairness taking centre stage. Addressing these ethical concerns requires a nuanced approach that respects individual privacy while harnessing the power of AI to drive progress. Paradoxically, AI, which relies on personal data, also holds the potential to enhance data portability. AI-driven tools can facilitate the seamless extraction, transformation, and transfer of personal data, making it more accessible and useful for individuals. AI algorithms can assist in data anonymization, ensuring that privacy is maintained even as data is shared across platforms. Moreover, AI can empower individuals by providing them with insights into how their data is being used and facilitating data portability requests. This dual role of AI as both a consumer and an enabler of data portability underscores the need for a balanced approach that leverages AI's capabilities while safeguarding privacy and autonomy.

Data Portability Across Sectors

Data portability challenges vary across sectors, each presenting unique complexities. In healthcare, for instance, the interoperability of medical records is a pressing concern. In e-commerce, the portability of shopping history and preferences raises questions about data ownership and control. Social media platforms grapple with the transfer of user-generated content, involving both personal and non-personal data.
These sector-specific challenges demand tailored solutions that address the intricacies of each domain.

Healthcare

In healthcare, data portability confronts the fragmentation of patient records among various providers, often stored in incompatible formats and systems. This fragmentation creates hurdles for the seamless sharing of crucial medical information, hindering efficient care coordination and patient-centric healthcare delivery. Patients frequently encounter difficulties in transferring their health data between healthcare institutions, limiting their ability to access comprehensive care. In such instances, the lack of access to comprehensive patient records can result in outdated or incomplete information being used for critical decision-making, potentially leading to adverse outcomes for patients and their treatment. The benefits of portable medical data in addressing these issues are multifaceted. Firstly, it prevents medical errors arising from the lack of insight into patients' complete medical history, including previous prescriptions and allergies, when they seek assistance from new healthcare providers. Moreover, it reduces the likelihood of misdiagnoses by providing a comprehensive view of medical records and symptom history, enabling more accurate assessments and treatment planning. Additionally, portable medical data optimizes treatments and prescriptions by offering insights into patients' medical history and symptoms when seeking care from new providers. This not only enhances patient outcomes but also streamlines the allocation of healthcare resources by centralizing patient data, reducing the need for redundant testing and administrative work. Furthermore, it shortens transfer periods for patients seeking care from new providers, ensuring timely access to essential medical information. A particularly notable benefit is the prevention of prescription medication misuse.
With a collective Electronic Health Record (EHR) that provides insights into previous prescriptions from all healthcare providers, it becomes possible to identify and mitigate the risk of patients becoming addicted to prescribed medications, curbing unnecessary and potentially harmful repetitive prescriptions. In the healthcare sector, data portability emerges as an indispensable tool for improving patient care, minimizing errors, enhancing diagnoses, optimizing treatments, and ensuring more efficient resource utilization. It ultimately empowers patients and healthcare providers alike by enabling seamless access to critical medical information across the healthcare continuum.

E-Commerce

The e-commerce sector, on the other hand, grapples with a unique conundrum regarding data portability. It revolves around the transfer of user purchase history, preferences, and browsing behaviour while safeguarding sensitive financial information. Striking the right balance between facilitating data movement and preserving data security is paramount. Industry players must devise strategies that empower users to transfer relevant data without compromising their financial privacy. Enabling users to seamlessly switch platforms without losing access to their shopping history and preferences requires innovative solutions and strong data protection measures. Two prominent examples, Kroger and Amazon, illustrate how harnessing big data can transform the landscape of online retail. Kroger, in collaboration with Dunnhumby, has leveraged big data to analyse and manage information from a staggering 770 million consumers. A significant portion of Kroger's sales, approximately 95%, is attributed to their loyalty card program. Through this program, Kroger achieves remarkable results with close to 60% redemption rates and over $12 billion in incremental revenue since 2005.
By utilizing big data and analytics, Kroger tailors its offerings to individual customer preferences, enhancing loyalty and driving sales through personalized experiences. Similarly, Amazon, a trailblazer in the e-commerce industry, prioritizes customer satisfaction above all else. Amazon's success story is intricately woven with its adept use of big data. They employ data to personalize customer interactions, predict trends, and continually improve the overall shopping experience. One notable example of this is their feature that suggests products based on the shopping behaviours of other users, resulting in a substantial 30% increase in sales. These case studies underscore how data portability, alongside big data analytics, empowers e-commerce platforms to not only scale their sales but also elevate customer satisfaction by tailoring offerings to individual preferences and predicting trends. In this context, data portability facilitates the seamless transfer of valuable customer insights, enabling e-commerce businesses to adapt and thrive in a dynamic online marketplace.

Social Media

Social media platforms grapple with complex data portability challenges, particularly concerning user-generated content, which includes a vast array of data such as photos, posts, and connections. Ensuring that users can seamlessly transfer their digital social lives across platforms while upholding the principles of data privacy and consent necessitates innovative solutions. For instance, consider a scenario where a user decides to transition from one social media platform to another. In doing so, they may want to take their years of posted content, photos, and connections with them. However, this process is far from straightforward. Each platform often employs its data storage and formatting standards, making the extraction and transfer of such content a daunting task. Moreover, privacy concerns loom large.
Users must not only be able to export their data but also ensure that their private messages, photos, and personal information are not inadvertently exposed during the process. To address these challenges, social media platforms must develop sophisticated mechanisms that empower users with granular control over their content. This control extends to the ability to export, share, or permanently delete specific pieces of data. These mechanisms should also respect the intricacies of data privacy laws, such as the European Union's GDPR, which mandates strict rules for the handling of personal data. In essence, data portability in the realm of social media requires a delicate balance between offering users autonomy over their data and adhering to sector-specific regulations to safeguard privacy and security. It's a multifaceted endeavour that underscores the importance of robust data portability frameworks in the digital age. The implications of data portability extend beyond these individual sectors, influencing the broader digital ecosystem. Tailored solutions are imperative to reconcile sector-specific needs with overarching data protection principles. Collaboration among policymakers, industry stakeholders, and technology innovators is indispensable in crafting frameworks that facilitate data portability while upholding sector-specific regulations and standards. Addressing these sector-specific implications stands as a pivotal step in shaping the future of data portability, fostering greater user autonomy, and enhancing data accessibility across diverse sectors. As data portability becomes more prevalent, these challenges and solutions will continue to evolve, shaping the digital landscape in the years to come.

The Economics of Data Privacy

The economics of data privacy introduce a complex interplay of costs and benefits for businesses.
Data portability has the potential to foster competition by reducing barriers to entry and granting smaller firms access to valuable data. However, alongside these benefits, businesses may also incur expenses related to data management, security, and compliance. These economic considerations significantly influence strategic decisions surrounding data sharing and portability. Moreover, data privacy regulations, including those governing data portability, introduce their own set of costs and benefits for both businesses and individuals. Compliance costs, such as the implementation of data portability mechanisms, can be substantial. However, it's crucial to recognize that data portability can stimulate innovation and competition, ultimately benefiting consumers. To comprehensively assess the impact of data privacy regulations on various stakeholders, a thorough economic analysis is essential. In the long term, economic sustainability hinges on finding a delicate equilibrium between data privacy and innovation. When appropriately implemented, data portability can indeed foster innovation by enabling the development of new services and applications while promoting trust, a vital factor for sustaining digital markets. Striking the right balance between privacy and innovation is of paramount importance for fostering a robust and economically sustainable digital ecosystem. To achieve this equilibrium, policymakers and businesses must collaborate effectively, navigating the intricate landscape to ensure that data portability serves as a catalyst for innovation while safeguarding individual privacy and digital autonomy.

Data Portability: Striving for Achievability

Despite the transformative potential of data portability, several challenges hinder its full realization in the digital landscape. One of the foremost obstacles lies in the technical complexities associated with seamlessly transferring data between platforms and services.
Varying data formats, storage systems, and security protocols across different entities make interoperability a formidable challenge. This hurdle often results in friction during the data transfer process, limiting the effectiveness of data portability in practice. Additionally, privacy concerns present a significant roadblock. While data portability empowers individuals to control their data, it must operate within the boundaries of data protection laws. Striking the right balance between enabling data mobility and safeguarding personal privacy is a complex task. Ensuring that transferred data remains secure and private, particularly in cross-border transfers, poses significant legal and technological challenges. Furthermore, the lack of awareness and technical proficiency among individuals may impede the adoption of data portability. Many users are unaware of their data rights or lack the technical knowledge to effectively initiate and manage data transfers. Bridging this knowledge gap and simplifying the user experience is essential for data portability to achieve its intended goals. From a legal standpoint, the success of data portability hinges on robust regulations that provide clear guidance on its implementation. Existing frameworks like the GDPR have laid the foundation for data portability rights. However, further refinement and harmonization of these regulations are necessary to address the evolving challenges in the digital landscape. Legal frameworks should encourage collaboration between data controllers, ensuring that they develop standardized processes and technologies for data transfer. This could involve setting industry standards or best practices to streamline data portability and promote interoperability. Additionally, regulations should mandate transparency in data usage and transfer, requiring data controllers to provide clear information to users about how their data will be utilized and transferred. 
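One technical safeguard raised earlier — anonymizing or de-identifying data before it moves between controllers — can be sketched very simply. The example below shows salted-hash pseudonymization; the field names, records, and salt are invented for illustration, and real deployments would need far stronger guarantees (key management, re-identification risk analysis) than this toy shows.

```python
import hashlib

def pseudonymize(value: str, salt: str = "per-deployment-secret") -> str:
    """Replace a direct identifier with a salted hash before transfer.
    Note: pseudonymization is weaker than true anonymization, since
    anyone holding the salt can recompute the mapping."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

# Illustrative records; the field names are invented for this sketch.
records = [
    {"email": "asha@example.com", "diagnosis_code": "J45"},
    {"email": "ravi@example.com", "diagnosis_code": "E11"},
]

# Strip the direct identifier before the data leaves the controller,
# keeping only an opaque subject token plus the useful payload.
shared = [
    {"subject": pseudonymize(r["email"]), "diagnosis_code": r["diagnosis_code"]}
    for r in records
]
print(shared)
```

Because the same identifier always maps to the same token, records about one person can still be linked across a transfer without the raw identifier ever travelling — which is precisely the trade-off between utility and privacy the surrounding discussion describes.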
Data Portability: Effective Implementation Strategies

Implementing data portability effectively requires a multifaceted approach that combines technological innovation, regulatory guidance, and user education. First and foremost, technology solutions should prioritize the development of standardized data formats and application programming interfaces (APIs) that facilitate seamless data transfer between platforms. These standardized formats would ensure that data can be easily understood and processed across various services, reducing interoperability challenges. Additionally, the creation of user-friendly tools and interfaces is essential to empower individuals to exercise their data portability rights effortlessly. Platforms should offer intuitive options for users to initiate data transfers, manage their data permissions, and track the status of ongoing transfers. This user-centric design approach can enhance the overall data portability experience. Regulatory bodies and policymakers must also play a pivotal role in driving effective data portability implementation. Clear and comprehensive regulations, such as those seen in the GDPR, should provide unambiguous guidelines for data controllers on their responsibilities regarding data portability. Regulatory frameworks should promote collaboration among industry stakeholders, encouraging the development of industry standards and best practices that ensure the smooth operation of data portability mechanisms. Moreover, periodic audits and assessments of data portability compliance can help maintain accountability and adherence to these regulations. Furthermore, user education and awareness campaigns are essential to inform individuals about their data rights and how to exercise them. Providing accessible information and resources about data portability can empower users to take control of their data and make informed decisions about data transfers.
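The "standardized data formats" idea can be illustrated with a toy shared schema that two hypothetical platforms agree on, so an export from one can be imported by the other without bespoke conversion code on every pair of services. All platform names and field names below are invented for the sketch.

```python
from dataclasses import dataclass

# A toy "common schema" two platforms might agree on. With a shared
# intermediate representation, N platforms need N adapters rather than
# N*(N-1) pairwise converters.
@dataclass
class PortablePost:
    author: str
    created_at: str  # ISO 8601 timestamp
    body: str

def from_platform_a(raw: dict) -> PortablePost:
    """Map Platform A's (hypothetical) export format onto the shared schema."""
    return PortablePost(author=raw["user"], created_at=raw["ts"], body=raw["text"])

def to_platform_b(post: PortablePost) -> dict:
    """Emit the shared schema in the shape Platform B (hypothetically) ingests."""
    return {"creator": post.author, "published": post.created_at, "content": post.body}

raw_export = {"user": "asha", "ts": "2023-09-01T10:00:00Z", "text": "hello"}
migrated = to_platform_b(from_platform_a(raw_export))
print(migrated)
```

This hub-and-spoke design is one plausible way industry standards reduce the interoperability burden the paragraph above describes: each platform maintains only its own adapters to the agreed format.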
In summary, a successful implementation of data portability hinges on the convergence of technology, regulation, and user empowerment, with each element working in tandem to enable seamless data mobility while safeguarding privacy and security.

Conclusion

Data portability stands at the forefront of today's data-centric era, where the intersection of data-driven technologies and Artificial Intelligence (AI) has reshaped the digital landscape. This concept, enshrined in regulatory frameworks like the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), fundamentally redefines the dynamics of data ownership and access. Data portability empowers individuals with the right to control their personal data, enabling them to obtain, copy, and seamlessly transfer it across various platforms and services. This shift in control from data custodians to individuals themselves grants greater agency over digital footprints, reinforcing their autonomy in the digital realm. The convergence of AI and data portability unlocks transformative potential, liberating data for innovative applications beyond its original context. It offers unprecedented opportunities for data reuse and exploration while simultaneously giving individuals more control over their personal information. As this article has explored, data portability has a profound impact on our evolving digital ecosystem, emphasizing the importance of individual control in the AI-driven age. To harness the full potential of data portability, a multifaceted approach involving technological innovation, regulatory guidance, and user empowerment is essential. Through standardized data formats, user-friendly tools, clear regulations, and robust user education, the implementation of data portability can become a catalyst for a more transparent, accountable, and innovative digital landscape, preserving privacy while empowering individuals in an interconnected world.

  • The G20 Delhi Declaration: Law & Policy Innovations

    The G20 New Delhi Leaders' Declaration is a stupendous achievement across multiple shades of public policy and international affairs. The most notable aspect of this Declaration is that it was adopted without any separate statements or exceptions. The legal and policy issues on which the declaration reflects consensus make it a truly interesting achievement to examine. The most interesting and relevant issues addressed in the G20 New Delhi Leaders' Declaration, in the context of committing to innovative law & policy practices, were the following:

  • Unlocking Trade for Growth (page 4)
  • Strengthening Global Health and Implementing One Health Approach (page 8)
  • Finance-Health Collaboration (page 10)
  • Culture as a Transformative Driver of SDGs (page 11)
  • Macroeconomic risks stemming from climate change and transition pathways (page 12)
  • Designing a Circular Economy World (page 13)
  • Delivering on Climate and Sustainable Finance (page 15)
  • Reforming International Financial Institutions (page 19)
  • Managing Global Debt Vulnerabilities (page 21)
  • Building Digital Public Infrastructure (page 22)
  • Crypto-assets: Policy and Regulation, Central Bank Digital Currency & Fostering Digital Ecosystems (page 23)
  • Harnessing Artificial Intelligence (AI) Responsibly for Good and for All (page 24)
  • International Taxation (page 24)

This quick explainer takes a law & policy outlook on the G20 Delhi Declaration, with a perspective for law professionals and scholars as well.

The Significance of the G20 Delhi Declaration & its Consensus

The G20's Delhi Declaration ushers in an era considered momentous in the annals of history, emphasizing that the decisions made today by this global consortium will profoundly impact both humanity and the planet.
This assertion, far from mere diplomatic rhetoric, underscores the paramount significance of this juncture, encapsulating the essence of India's remarkable accomplishment. Before delving into the substantive facets spanning various domains, the comprehensive agreement reached unanimously by all 20 member nations serves as a preamble, outlining the multifaceted political, economic, and environmental challenges that have ensnared our world. Significantly, this preamble elucidates India's pivotal role in safeguarding the interests of the global South, comprising the marginalized within the international order, as well as the impoverished and vulnerable populations within both affluent and less affluent nations. In this context, the text sets out a set of well-defined principles and priorities. The most striking part of the Declaration reads as follows: Together we have an opportunity to build a better future. Just energy transitions can improve jobs and livelihoods, and strengthen economic resilience. We affirm that no country should have to choose between fighting poverty and fighting for our planet. We will pursue development models that implement sustainable, inclusive and just transitions globally, while leaving no one behind. Under the Indian presidency's stewardship, a resolute commitment has been made to avoid any trade-off between combatting poverty and addressing the pressing climate crisis. The document's overarching themes encompass fostering economic growth, rejuvenating the pursuit of sustainable development goals (SDGs), combatting the climate emergency, preparedness for health crises, reforming multilateral development banks (MDBs), grappling with the debt crisis, expanding digital public infrastructure (DPI), job creation, narrowing the gender gap, and amplifying the voice of the global South. These themes serve as the guiding principles and the very essence of the document.
Unlocking Trade for Growth

The declaration begins by highlighting the global economic situation. It mentions that global economic growth is currently below its long-term average and is characterized by unevenness. There is considerable uncertainty about the future outlook. The global financial conditions have tightened, which raises concerns about increased debt vulnerabilities. Persistent inflation and geoeconomic tensions are further contributing to uncertainties. The overall risk assessment indicates a tilt towards the downside. The text emphasizes the need for a coordinated approach to address these challenges. It suggests using various policy tools, including monetary, fiscal, financial, and structural policies, to promote economic growth, reduce inequalities, and maintain macroeconomic and financial stability. There is a recognition of the importance of cooperation among nations to achieve the 2030 Agenda for Sustainable Development. The text also highlights the importance of flexibility and agility in policymaking, citing examples of recent banking turbulence and how swift actions helped stabilize financial systems. It commends the Financial Stability Board (FSB), Standard Setting Bodies (SSBs), and certain jurisdictions for their efforts in examining lessons from banking turbulence and encourages them to continue this work. The use of macroprudential policies is endorsed to mitigate downside risks. Central banks' commitment to price stability and inflation control is stressed, along with the importance of clear communication of policy stances. The independence of central banks is deemed crucial for maintaining policy credibility. There's a commitment to prioritize temporary and targeted fiscal measures to protect vulnerable populations while ensuring medium-term fiscal sustainability. Supply-side policies that boost labor supply and enhance productivity are recognized as important for economic growth and inflation control.
Also, the exchange rate commitment made in April 2021 by Finance Ministers and Central Bank Governors is reaffirmed. On trade and multilateralism, the text emphasizes the importance of a rules-based, non-discriminatory, fair, open, inclusive, equitable, sustainable, and transparent multilateral trading system, with the World Trade Organization (WTO) at its core. It reaffirms the commitment to ensure fair competition and discourage protectionism and market-distorting practices to create a favorable trade and investment environment. There's a call for WTO reform through an inclusive member-driven process, including the improvement of the dispute settlement system by 2024. The text acknowledges challenges faced by Micro, Small, and Medium-sized Enterprises (MSMEs) in accessing information and supports efforts to enhance MSMEs' access to information to promote their integration into international trade. It welcomes the adoption of a Generic Framework for Mapping Global Value Chains (GVC) to identify risks and build resilience. The High-Level Principles on Digitalization of Trade Documents are endorsed, with efforts to encourage implementation. Trade and environment policies should be mutually supportive, consistent with WTO and multilateral environmental agreements. The importance of the WTO's 'Aid for Trade' initiative for developing countries, especially Least Developed Countries (LDCs), is recognized, and efforts to mobilize resources for it are welcomed. Interestingly, the European Union did not express any discontent with this part of the text, considering its trade-related issues with India over its Carbon Border Adjustment Mechanism. The EU's CBAM is designed to tackle carbon leakage, which occurs when industries relocate to regions with less stringent climate policies, thereby undermining global efforts to combat climate change.
CBAM aims to impose tariffs on carbon-intensive imports into the EU, ensuring that imported goods meet environmental standards similar to those imposed on domestic production. While the EU sees CBAM as a means to level the playing field and promote environmental sustainability, some trading partners, including India, have expressed concerns about its impact on global trade and the potential for discrimination. CBAM is expected to affect between 15% and 40% of India's annual steel exports to Europe; failure to reduce the carbon footprint may result in lower profits in EU markets and a possible loss of market share in Europe for Indian mills. Starting from October 1, 2023, and extending until December 31, 2025, the transitional phase of CBAM will necessitate only quarterly reporting on the greenhouse gas emissions associated with specific products imported into the European Union. These reports will encompass both direct and indirect emissions. Commencing in 2026, however, the acquisition of CBAM certificates will become obligatory to account for greenhouse gas emissions. The cost of these certificates will be tied to carbon prices established within the framework of the EU Emissions Trading System (ETS). As a consequence, CBAM will introduce an additional expense for exporters or producers targeting the EU market, with this cost-sharing arrangement having the potential to influence their marketing strategies. It is anticipated that other nations may also consider implementing policies akin to CBAM. The implications of CBAM for India will be contingent upon the carbon footprint of the goods being exported and the availability of lower-emission substitutes in the EU market. Products with elevated carbon emissions are likely to face increased fees, potentially diminishing their competitiveness.
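The arithmetic behind "certificates tied to ETS carbon prices" can be made concrete with a back-of-the-envelope sketch. Both figures below are assumptions chosen purely for illustration — they are not values from the Declaration or the CBAM regulation, and real certificate pricing involves further adjustments (free-allocation phase-out, carbon prices already paid in the country of origin).

```python
# Back-of-the-envelope CBAM cost sketch; both inputs are assumed figures.
embedded_emissions = 2.1  # tCO2 embedded per tonne of steel (assumed)
ets_price = 85.0          # EUR per tCO2 under the EU ETS (assumed)

def cbam_cost_per_tonne(emissions_t: float, price_eur: float,
                        free_allocation_share: float = 0.0) -> float:
    """Certificates must cover the embedded emissions not already covered
    by free allocation; this sketch simplifies to zero free allocation."""
    return emissions_t * (1 - free_allocation_share) * price_eur

cost = cbam_cost_per_tonne(embedded_emissions, ets_price)
print(f"Approximate CBAM cost: EUR {cost:.2f} per tonne of steel")
```

Under these assumed numbers the charge would be roughly EUR 178.50 per tonne, which makes the competitiveness concern tangible: lowering the embedded emissions per tonne reduces the charge in direct proportion.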
However, if there are no eco-friendly alternatives to Indian exports within the EU market, the impact of CBAM on India's exports may be somewhat restricted. A significant hurdle for India lies in the absence of an emissions trading system akin to the EU's ETS. The absence of such a system could pose challenges for Indian enterprises in demonstrating that their products are manufactured using low-carbon technologies, potentially resulting in higher CBAM costs. To sustain competitiveness in the global market and alleviate the effects of CBAM, India must institute a mechanism for pricing carbon and cultivate low-carbon technologies. This proactive approach will enable Indian businesses to adhere to CBAM regulations while concurrently reducing the carbon emissions associated with their products. Furthermore, India should reevaluate its export strategy and explore alternative markets where its products can maintain competitiveness, even in the face of CBAM's impact in the EU market. Considering the G20 Delhi Declaration, it will be interesting to see how this unfolds. Strengthening Global Health and Implementing One Health Approach Here are the important excerpts from the declaration on Global Health and the One Health Approach, which may interest law professionals: Look forward to a successful outcome of the ongoing negotiations at the Intergovernmental Negotiating Body (INB) for an ambitious, legally binding WHO convention, agreement or other international instruments on pandemic PPR (WHO CA+) by May 2024, as well as amendments to better implement the International Health Regulations (2005). Support the WHO-led inclusive consultative process for the development of an interim medical countermeasures coordination mechanism, with effective participation of LMICs and other developing countries, considering a network of networks approach, leveraging local and regional R&D and manufacturing capacities, and strengthening last mile delivery. 
This may be adapted in alignment with the WHO CA+. Not so long ago, India agreed to recent changes to the International Health Regulations proposed by the United States. There has been minimal public awareness or discussion regarding these significant alterations to the International Health Regulations (IHR), despite the WHO Secretariat's distribution of the US proposal to state parties in January 2022. The US proposal stands in contrast to a report issued by the WHO Director-General in November 2021, which outlined some of the amendments now presented by the US. This report also indicated that the IHR would not undergo renegotiation, sparking concerns about the amendment process. The attention given to the US-proposed amendments was further overshadowed by the buzz surrounding the commencement of negotiations for a new treaty on pandemic preparedness and response by 2024. This treaty's scope, content, and ultimate outcome had remained uncertain, as did its relationship with the existing legal framework of the IHR. Here is an excerpt from an article by Dr Silvia Behrendt and Dr Amrei Müller for EJIL: Talk!, which explains the US-proposed amendments and their overarching effect: Finally, the US amendments propose to reduce the time during which state parties to the IHR can reject, or enter reservations to, future IHR amendments that were adopted by a simple majority of the WHA from 18 months to six months (proposed US amendments to Article 59 IHR). Thus, in future, if states do not opt out within six months, amendments enter into force for them automatically in line with Article 22 WHO Constitution and the amended Article 59 IHR. This leaves states a rather limited amount of time to thoroughly evaluate the legal and practical implications of IHR amendments, including for their domestic health policies and budgeting. 
Interestingly, India's agreement to the changes proposed by the United States comes in line with the 9-point plan proposed by the Ministry of Health and Family Welfare. Here are some excerpts from the plan, which contributed significantly to making the WHO, as a multilateral body, and the IHR system more pragmatic in their purpose of governance: It is important to devise objective criteria with clear parameters for declaring a PHEIC. It should also be possible for the DG WHO to declare a PHEIC if in his/her assessment there is broad agreement, though not a consensus, within the IHR Emergency Committee, and not to wait for a consensus to emerge. The emphasis must be on transparency and promptness in the declaration process. There is no framework or mechanism to ensure that the details on funding & financing are disclosed at a micro level, which is a crucial element. There should be a quarterly review of ongoing WHO activities in the country by Member States with the WHO Country Office so as to align expenditure by WHO in consonance with country priorities. It is important that the Member States have a greater say in the functioning of the WHO, given that it is the States which are responsible for implementation on the ground of the technical advice and recommendations coming from the WHO. The final amendments to the IHR, interestingly, make it clear that countries will be able to submit reservations on PHEICs accordingly. Finance-Health Collaboration The part on Finance-Health Collaboration in the declaration highlights the significant collaboration between finance and health ministries within the Joint Finance and Health Task Force (JFHTF). This collaboration is of paramount importance for several reasons: Strengthening Global Health Architecture: The collaboration underscores the commitment of G20 nations to fortify the global health architecture, specifically for pandemic prevention, preparedness, and response. 
This is a critical step in light of the ongoing and potential future public health crises. Economic Vulnerabilities and Risks Assessment: The reference to the Framework on Economic Vulnerabilities and Risks (FEVR) and the initial report arising from collaboration between the World Health Organization (WHO), World Bank, IMF, and European Investment Bank (EIB) signifies the recognition of the intertwined nature of health and economic well-being. It acknowledges that pandemics pose significant economic risks, and it highlights the importance of assessing these risks comprehensively. Country-Specific Considerations: The commitment to considering country-specific circumstances in assessing economic vulnerabilities and risks due to evolving pandemic threats demonstrates a nuanced approach. This recognizes that the impact of pandemics can vary widely among nations and regions. Best Practices and Financing Mechanisms: The text emphasizes the importance of sharing best practices in finance-health institutional arrangements during COVID-19. This knowledge exchange can improve readiness for future pandemics. Additionally, it acknowledges the need for optimizing and better coordinating financing mechanisms to respond promptly and efficiently to pandemics, indicating a focus on practical solutions. Pandemic Fund and Donor Support: The mention of the Pandemic Fund and the forthcoming Call for Proposals highlights the commitment to creating financial mechanisms for pandemic response. Encouraging new donors and co-investment underscores the importance of broad-based financial support for addressing global health challenges. Accountability and Reporting: The call for the Task Force to report back to Finance and Health Ministers in 2024 underscores the accountability and commitment of G20 nations to making tangible progress in strengthening global health systems and addressing the economic implications of pandemics. 
Culture as a Transformative Driver of SDGs The part on Culture & SDGs, which emphasises the significance of culture as a transformative driver of Sustainable Development Goals (SDGs), carries several key points of significance: Recognition of Culture's Intrinsic Value: The declaration underscores the intrinsic value of culture as a force that can drive transformative change. This recognition goes beyond viewing culture solely as a form of artistic expression; it acknowledges its profound influence on societal development and progress. Inclusion of Culture as a Standalone Goal: The call for including culture as a standalone goal in future discussions on a post-2030 development agenda is noteworthy. It indicates a broader recognition of culture as a fundamental dimension of sustainable development, reflecting its potential to contribute to various aspects of society, from social inclusion to economic growth. Fight Against Illicit Trafficking: The commitment to combat illicit trafficking of cultural property at different levels—national, regional, and international—is vital. This reflects the acknowledgment of the importance of preserving cultural heritage and returning it to its countries and communities of origin. This commitment is aligned with international efforts to safeguard cultural artifacts and combat their illegal trade. Strengthening Cultural Diplomacy: The emphasis on strengthening cultural diplomacy and intercultural exchanges underscores the role of culture in fostering mutual understanding and cooperation among nations. It reflects the belief that cultural dialogue can contribute to peace and global harmony. Protection of Living Cultural Heritage: The declaration highlights the need to protect living cultural heritage, including intellectual property. This recognition is significant in the context of contemporary challenges, such as over-commercialisation and misappropriation of cultural elements. 
It acknowledges the potential negative impact of such practices on the sustainability and livelihoods of practitioners and Indigenous Peoples. In summary, this text reflects a multifaceted perspective on culture's role in sustainable development. It underscores culture's intrinsic value as a driver of positive change, its significance as a potential standalone goal in future development agendas, and the importance of preserving and protecting cultural heritage. It also recognizes the need to address contemporary challenges related to cultural practices and intellectual property. Macroeconomic risks stemming from climate change and transition pathways This part addresses macroeconomic risks associated with climate change and transition pathways, and carries significant importance in several aspects: Recognition of Macroeconomic Costs: The text acknowledges the substantial macroeconomic costs linked to the physical impacts of climate change. This recognition is crucial as it underscores the economic implications of environmental challenges, reinforcing the idea that climate change is not just an environmental issue but a core economic concern. Cost-Benefit Analysis: The declaration emphasizes that the cost of inaction in the face of climate change far exceeds the cost of implementing orderly and just transitions. This cost-benefit analysis highlights the economic rationale for taking proactive measures to address climate-related risks and adopt sustainable practices. International Dialogue and Cooperation: The emphasis on international dialogue and cooperation, particularly in finance and technology transfer, reflects the recognition that climate change is a global challenge requiring collaborative solutions. This aligns with the principle of shared responsibility and underscores the importance of nations working together to address climate issues. 
Assessing Macroeconomic Impacts: The declaration stresses the need to assess and account for the short, medium, and long-term macroeconomic impact of both climate change and transition policies. This holistic approach recognizes that climate actions can have ripple effects on various economic factors, including growth, inflation, and employment. Commitment to Further Work: The text expresses a commitment to consider further work on the macroeconomic implications of climate change and transition policies, particularly concerning fiscal and monetary policies. This forward-looking stance demonstrates a willingness to adapt economic policies to address climate-related challenges effectively. Designing a Circular Economy World This part, which focuses on designing a circular economy world, carries several significant implications: Decoupling Economic Growth from Environmental Degradation: The text acknowledges the need to decouple economic growth from environmental harm. This is a crucial recognition that the traditional model of economic development often comes at the cost of natural resources and ecological damage. By emphasizing the importance of decoupling, the declaration signals a commitment to pursuing economic growth in a more sustainable and environmentally responsible manner. Promoting Sustainable Consumption and Production: The acknowledgment of the critical role played by circular economy principles, extended producer responsibility, and resource efficiency underscores a commitment to reshaping the way products are designed, produced, and consumed. This shift towards sustainable consumption and production patterns is essential for reducing environmental impact and conserving resources. Launch of Resource Efficiency and Circular Economy Industry Coalition (RECEIC): The launch of RECEIC under the Indian presidency signifies a concrete step towards fostering international collaboration and cooperation in promoting circular economy practices. 
This coalition is expected to serve as a platform for knowledge sharing, policy development, and innovation in resource efficiency and circular economy initiatives. Commitment to Environmentally Sound Waste Management: The commitment to enhance environmentally sound waste management aligns with global efforts to reduce waste and minimize its environmental impact. It signals a recognition that waste management practices need to be improved to prevent pollution and resource depletion. Substantial Waste Reduction by 2030: The goal of substantially reducing waste generation by 2030 is both ambitious and significant. It demonstrates a commitment to setting specific targets and timelines for reducing waste, which is essential for achieving sustainability goals. Zero Waste Initiatives: The highlighting of zero waste initiatives underscores a commitment to minimizing waste generation and promoting recycling and reuse. Zero waste principles aim to reduce waste to landfills and incineration, emphasizing sustainable resource management. It is therefore interesting to note how the Government of India advanced the circular economy initiative where the Scandinavian countries could not. Of course, the Scandinavian economic model cannot be superimposed on the Global South and the rest of the world. Delivering on Climate and Sustainable Finance Some notable achievements on Climate and Sustainable Finance under the Indian Presidency are described as follows: Recognition of Climate Finance Importance: The declaration recognizes the crucial role of finance in addressing climate change. It acknowledges that substantial financial resources are required to achieve climate goals under the Paris Agreement, emphasizing the need to align financial flows with climate objectives. Scaling Up Sustainable Finance: The text emphasizes the importance of scaling up sustainable finance, including through blended financial instruments and risk-sharing facilities. 
This indicates a commitment to mobilizing both public and private capital for climate adaptation and mitigation efforts. Transition Finance Framework: The reference to the Transition Finance Framework signifies a structured approach to transitioning toward a more sustainable financial system. It highlights the need to align financial activities with environmental and climate goals. Support for Developing Countries: The declaration acknowledges the significant financial needs of developing countries to implement their Nationally Determined Contributions (NDCs) and transition to clean energy technologies. It reiterates the commitment to mobilize climate finance of USD 100 billion per year by 2025 to support developing countries. Loss and Damage Funding: The commitment to establishing a fund to respond to loss and damage from climate change for particularly vulnerable developing countries is noteworthy. This reflects a recognition of the disproportionate impact of climate change on these nations and a commitment to assist them. Collective Quantified Goal (NCQG): The call for setting an ambitious, transparent, and trackable New Collective Quantified Goal (NCQG) of climate finance in 2024, starting from USD 100 billion per year, demonstrates a commitment to clear and measurable targets for climate finance. Adaptation Finance: The urging of developed countries to double their provision of adaptation finance by 2025 emphasizes the need to support vulnerable nations in adapting to climate change impacts. Role of Financial Institutions: The acknowledgment of the vital role of private climate finance, alongside public finance, underscores the importance of leveraging both public and private sectors to fund climate projects. 
Reforming International Financial Institutions Now, in the context of International Law, and multilateral financial institutions, the part on international financial institutions, especially the Multilateral Development Banks (MDBs), is one of the most significant law & policy achievements under the Indian Presidency. Recognition of the 21st Century Challenges: It acknowledges that the 21st century presents unique challenges, including the scale of need and depth of shocks facing developing countries. This recognition reflects an understanding that the global economic landscape is evolving, and traditional development finance systems need to adapt accordingly. Enhancing Multilateral Development Banks (MDBs): The declaration highlights the commitment to improving MDBs by enhancing their operating models, responsiveness, accessibility, and financing capacity. This signifies a recognition of the central role that MDBs play in mobilizing resources for development, especially in developing countries. Scaling Up Development Financing: The reference to moving from "billions to trillions" of dollars for development underscores the ambition to scale up financing significantly. This reflects the urgency of addressing global challenges such as poverty reduction, infrastructure development, and climate change. Voice and Representation of Developing Countries: The emphasis on enhancing the representation and voice of developing countries in global international economic and financial institutions is crucial. It acknowledges the need for a more inclusive and equitable decision-making process, ensuring that the interests and perspectives of developing countries are considered. Maximizing Development Impact: The overall goal of these reforms is to maximize the development impact of international financial institutions. This includes addressing poverty, addressing global challenges, and ensuring that development financing is effectively utilized to achieve meaningful outcomes. 
Review of Capital Adequacy Frameworks (CAFs): The text mentions the G20 Roadmap for Implementing the Recommendations of the G20 Independent Review of MDBs' Capital Adequacy Frameworks (CAFs). This roadmap outlines a comprehensive plan to enhance the financial capacity and effectiveness of MDBs. The endorsement of this roadmap signifies a commitment to its ambitious implementation. Safeguarding Financial Sustainability: The text emphasizes the importance of implementing these reforms within the MDBs' governance frameworks while safeguarding their long-term financial sustainability. This reflects a balanced approach that ensures that the reforms do not compromise the financial stability and credibility of these institutions. Progress and Collaboration: The text acknowledges the progress made by MDBs in implementing CAF recommendations, including redefining risk appetite and fostering financial innovation. It highlights the collaborative efforts among MDBs in areas such as data sharing (Global Emerging Markets or GEMs data) and future collaboration prospects in hybrid capital, callable capital, and guarantees. Transparency and Accountability: There is an emphasis on transparency and accountability through enhanced dialogue between MDBs, Credit Rating Agencies, and shareholders. This demonstrates a commitment to open communication and clear exchange of information related to the reforms. Estimated Lending Headroom: The text mentions that the initial CAF measures, including those under implementation and consideration, could potentially lead to an additional lending capacity of approximately USD 200 billion over the next decade. This figure underscores the financial impact of the reforms and their potential to significantly increase development financing. Evolution of MDBs: The text calls for comprehensive efforts by MDBs to evolve their vision, incentive structures, operational approaches, and financial capacities. 
This indicates a recognition that MDBs must adapt to address a wide range of global challenges, including the Sustainable Development Goals (SDGs), while staying consistent with their mandates. Managing Global Debt Vulnerabilities On Global Debt Vulnerabilities, the following aspects addressed in the Declaration seem quite plausible: Focus on Debt Vulnerabilities: It emphasizes the importance of addressing debt vulnerabilities in low and middle-income countries. This recognition acknowledges the challenges these countries face in managing their debt burdens, which can hinder their economic growth and development. Commitment to Common Framework: The text reaffirms the commitment to the Common Framework for Debt Treatments beyond the Debt Service Suspension Initiative (DSSI). This framework provides a structured approach to addressing debt challenges, and the G20's commitment to its implementation is crucial for providing debt relief to eligible countries. Resolution for Specific Countries: The declaration highlights specific countries such as Zambia, Ghana, Ethiopia, and Sri Lanka, where debt situations are of concern. The G20's call for swift resolutions in these cases underscores its commitment to assisting countries in distress. Global Sovereign Debt Roundtable (GSDR): The reference to the GSDR demonstrates an effort to enhance communication and understanding among key stakeholders involved in debt treatments. This collaboration is essential for coordinating effective debt relief efforts. Debt Transparency: The text acknowledges the importance of debt transparency and encourages all stakeholders, including private creditors, to contribute data. This transparency is crucial for assessing the debt situation accurately and making informed decisions regarding debt treatments. Voluntary Contributions: The mention of private sector lenders contributing data voluntarily to the joint Institute of International Finance (IIF)/OECD Data Repository Portal is noteworthy. 
It reflects a cooperative effort to provide comprehensive information for debt assessments. Building Digital Public Infrastructure Since India is a leader in enabling Digital Public Infrastructure, the points agreed by the G20 on enabling a Global DPI Repository, are certainly noteworthy: Emphasis on Digital Public Infrastructure: The text recognizes the critical role that safe, secure, and inclusive digital public infrastructure plays in fostering resilience, enabling service delivery, and driving innovation. This recognition underscores the importance of robust digital infrastructure as a foundation for economic growth and development. Respect for Human Rights and Privacy: It emphasizes the need for digital public infrastructure to be respectful of human rights, personal data, privacy, and intellectual property rights. This commitment highlights the G20's dedication to ensuring that digital advancements are made in a responsible and ethical manner. G20 Framework for Systems of DPI: The reference to the G20 Framework for Systems of Digital Public Infrastructure signifies the development of a structured approach to guide the development, deployment, and governance of DPI. This framework provides a common set of principles for G20 members to follow, ensuring consistency and alignment in their digital infrastructure efforts. Global Digital Public Infrastructure Repository (GDPIR): The acknowledgment of India's plan to establish a Global Digital Public Infrastructure Repository is significant. This repository, comprising voluntarily shared DPI from G20 members and beyond, can serve as a valuable resource for countries looking to develop their own digital infrastructure. It promotes knowledge sharing and collaboration among nations. 
One Future Alliance (OFA): The mention of the One Future Alliance (OFA) initiative demonstrates a commitment to building capacity and providing technical assistance and funding support for implementing DPI in low- and middle-income countries (LMICs). This initiative reflects the G20's focus on inclusivity and helping LMICs harness the benefits of digital infrastructure. In summary, this text underscores the G20's recognition of the transformative potential of digital public infrastructure in various aspects of governance and economic development. It outlines a framework, repository, and initiative aimed at promoting responsible and inclusive digital infrastructure development while respecting human rights and privacy. Crypto-assets: Policy and Regulation, Central Bank Digital Currency & Fostering Digital Ecosystems We congratulate Tanvi Ratna, and her team at Policy 4.0, for their recommendations on Crypto-assets being integrated in the G20 Delhi Declaration. Recognition by Global Regulators and SSBs: Policy 4.0's foundational work on understanding interdependencies in crypto-assets gained recognition and traction from major global regulators and standard-setting bodies (SSBs). Inclusion in FSB Recommendations: The fact that Policy 4.0's work is now part of Recommendation 8 of the 9 final Financial Stability Board (FSB) rules is a significant achievement. Transparency and Safety in the Crypto Ecosystem: The implication of this rule is that it may lead to disclosures on systemically risky off-chain centralized finance (CeFi) activity. This disclosure requirement can enhance transparency and safety within the crypto ecosystem, benefiting not only CeFi but also decentralized finance (DeFi) and Web3 players. It aligns with the G20's emphasis on creating a comprehensive policy and regulatory framework for crypto-assets. 
Here is an effective summarisation of the things agreed upon: Acknowledgment of Crypto-Asset Risks: The G20 recognizes the fast-paced developments in the crypto-asset ecosystem and the associated risks. This acknowledgment reflects a proactive approach in understanding and addressing the challenges posed by the growing use of cryptocurrencies and digital assets. Endorsement of FSB Recommendations: The G20 endorses the Financial Stability Board's (FSB) high-level recommendations for the regulation, supervision, and oversight of crypto-assets and global stablecoin arrangements. This endorsement signifies the importance of creating a regulatory framework to manage these assets effectively and protect consumers and financial stability. Global Consistency in Regulation: The G20 calls for consistent global implementation of crypto-asset regulations to avoid regulatory arbitrage. This emphasis on global consistency aims to create a level playing field and prevent regulatory gaps that could be exploited by crypto-market participants. Comprehensive Policy and Regulatory Framework: The mention of the IMF-FSB Synthesis Paper and Roadmap indicates a commitment to developing a comprehensive policy and regulatory framework for crypto-assets. This framework takes into account various risks, including those specific to emerging market and developing economies (EMDEs). It demonstrates the G20's intention to address issues such as money laundering and terrorism financing risks associated with cryptocurrencies. Focus on CBDCs: The G20 acknowledges the potential macro-financial implications of CBDCs, particularly in the context of cross-border payments and the international monetary and financial system. This recognition suggests a growing interest in exploring the benefits and challenges of CBDC adoption on a global scale. 
Engagement with International Organizations: The references to reports from the BIS Innovation Hub (BISIH) and the IMF indicate that the G20 is actively engaging with international organizations to gather insights and expertise on crypto-assets and CBDCs. This collaborative approach fosters a deeper understanding of these technologies and their potential impact. Harnessing Artificial Intelligence (AI) Responsibly for Good and for All The points affirming responsible AI ethics, building on the G7 Hiroshima process and previous G20 meetings, are reasonable but quite generic; nothing new is offered. Still, the endorsement itself may be valuable: Global Embrace of AI: The G20 recognizes the rapid progress of AI and its potential to drive economic prosperity and digital expansion globally. This acknowledgment signifies the importance of AI in shaping the future of economies and societies worldwide. Public Good and Responsible Use: The G20 emphasizes the need to leverage AI for the public good. It underscores the importance of deploying AI solutions responsibly, inclusively, and in a human-centric manner. This approach aligns with the goal of ensuring that AI technologies benefit all individuals and communities. Protection of Human Rights: The text highlights the critical importance of protecting human rights when developing, deploying, and using AI. It emphasizes the need for transparency, fairness, accountability, and ethics in AI systems, addressing concerns related to biases, privacy, and data protection. This commitment reflects a dedication to safeguarding the well-being and rights of individuals in the AI era. International Cooperation: The G20 emphasizes the significance of international cooperation and discussions on AI governance. This reflects the global nature of AI challenges and the need for collaborative efforts to establish common principles and standards for responsible AI development and deployment. 
Reaffirmation of G20 AI Principles: The G20 reaffirms its commitment to the G20 AI Principles (2019), which provide a foundational framework for the responsible use of AI. This reaffirmation underscores the continuity of global efforts in promoting ethical and responsible AI practices. Pro-Innovation Regulatory Approach: The G20 expresses a commitment to pursue a pro-innovation regulatory and governance approach to AI. This approach seeks to maximize the benefits of AI while acknowledging and mitigating the associated risks. It signifies a balanced perspective that encourages innovation while ensuring responsible AI use. AI for Sustainable Development: The G20 recognizes the potential of AI in contributing to the achievement of Sustainable Development Goals (SDGs). This acknowledgment highlights AI's role in addressing global challenges, such as healthcare, education, and environmental sustainability. International Taxation The part on international taxation is noteworthy in its own right. However, none of these measures can be implemented in the short term; their fate will depend on how implementation proceeds, and they are worth watching closely. Global Commitment to Tax Reform: The G20 reaffirms its commitment to achieving a globally fair, sustainable, and modern international tax system. This commitment reflects the recognition that the existing international tax framework needs to evolve to meet the challenges of the 21st century. Two-Pillar International Tax Package: The text acknowledges significant progress on both pillars of the international tax package. Pillar One addresses the allocation of taxing rights, especially for digital businesses, while Pillar Two focuses on ensuring a minimum level of taxation for multinational corporations. This signals a concerted effort by G20 members to address tax challenges arising from the digital economy and international profit shifting. 
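Pillar Two's core logic can be illustrated with a back-of-the-envelope calculation: where a multinational's effective tax rate in a jurisdiction falls below the agreed 15% minimum, a top-up tax closes the gap. The sketch below is a deliberate simplification; the actual OECD GloBE rules add a substance-based income exclusion and jurisdictional blending, and the function and figures here are hypothetical.

```python
# Simplified sketch of the Pillar Two ("GloBE") top-up tax idea. The real
# OECD rules include a substance-based income exclusion and other
# refinements omitted here for clarity.

MIN_RATE = 0.15  # globally agreed minimum effective rate under Pillar Two

def top_up_tax(jurisdiction_profit: float, covered_taxes: float) -> float:
    """Top-up owed when a jurisdiction's effective tax rate is below 15%."""
    if jurisdiction_profit <= 0:
        return 0.0  # no top-up on losses
    etr = covered_taxes / jurisdiction_profit
    # Top-up rate is the shortfall below the minimum, applied to the profit.
    return max(MIN_RATE - etr, 0.0) * jurisdiction_profit

# Hypothetical: EUR 100m of profit taxed at an effective 9% -> 6% top-up.
print(round(top_up_tax(100_000_000, 9_000_000)))  # 6000000
```

This is why the mechanism is described as ensuring "a minimum level of taxation": shifting profit to a low-tax jurisdiction no longer reduces the group's overall rate below the floor.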
Multilateral Convention (MLC): The reference to the MLC's text and the goal of preparing it for signature in the second half of 2023 underlines the commitment to multilateral cooperation in implementing tax reforms. The MLC is a crucial instrument for aligning tax rules across multiple jurisdictions. Capacity Building for Developing Countries: The declaration recognizes the need for capacity building to help developing countries effectively implement the two-pillar international tax package. This commitment to support developing nations ensures that they can participate in and benefit from the evolving international tax framework. Crypto-Asset Reporting Framework (CARF): The mention of CARF and amendments to the Common Reporting Standard (CRS) demonstrates the G20's recognition of the importance of addressing tax evasion and money laundering associated with crypto-assets. These measures aim to enhance transparency in the use of digital assets for tax purposes. Global Forum on Transparency: The Global Forum's role in coordinating exchanges of tax-related information and the proposed timeline for CARF exchanges by 2027 signal a commitment to international tax transparency. This aligns with global efforts to combat tax evasion and enhance information sharing among tax authorities. A Critical Perspective Now that the key features of the G20 Delhi Declaration have been described, and the specific portions of interest have been discussed, here is a critical perspective on the achievements of this declaration, as far as the Indian presidency is concerned. Merely agreeing to a declaration does not signify implementation. However, it is well known that G20 declarations carry consultative and reflective value regarding the strained state of multilateral institutions and forums. 
Considering this, the G20 Delhi Declaration is a stupendous achievement: a virtual edition of the G20 Summit (India) is set to take place in November, before Brazil takes over the Presidency, and India has proposed a 60-day timeline to assess how far certain ideas in the declaration can be implemented. Insiders point out that if the Indian Presidency had failed to achieve a proper declaration, as a consensus document of sorts, it would have reflected poorly on the G20 and its member countries as well. Looking at the role of major powers, China and Russia avoided sending their Heads of State to the forum for their own reasons, owing to the Ukraine-Russia situation and diplomatic posturing, knowing that India would get the largest share of credit for the Declaration. Interestingly, the Declaration benefits both Russia and China, whether for political reasons or in terms of global stability. Many propositions in the declaration are overbroad - for example, those on artificial intelligence and international taxation. However, the commitments underlined on debt restructuring, cryptocurrencies, CBDCs, virtual digital assets, climate finance, multilateral development banks and international health governance are underrated, and must be appreciated. The Declaration strikes a reasonable balance between the extremes of rules-based multilateralism and values-based multilateralism. It acknowledges many abstract yet symbolic ideas (values-based) while remaining rooted in reality on multiple issues, with a procedural touch and systemic credibility (rules-based). The Peoples' Presidency argument posed by India is not overrated. However, apart from the Diplomatic and Ministerial meetings of the G20 in 2023, the rest of the events and meet-ups organised were largely hype, and added little of significance to the Presidency. 
We can say that all the G20 Diplomatic and Ministerial Meetings were successful, while the engagement group meetings and ancillary events were of little significance, with certain exceptions such as the T20 (led by the Observer Research Foundation), the B20, the SAI20 (led by India's Comptroller and Auditor General), the Scientific Advisors Roundtable, the Startup20 and others. This also shows that India used its goodwill, in good faith, to at least make countries agree on values-based, rules-based and practice-based principles and policy action points that make sense at multiple levels. The Principle of Subsidiarity in International Law was truly utilised by India to make its ambitious presidency successful via the G20 as a forum.

  • Navigating Generative AI Investments: Unleashing Potential and Tackling Challenges

The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of September 2023. Introduction In recent years, the landscape of artificial intelligence (AI) has been significantly reshaped by the emergence of generative AI start-ups. These ventures, driven by innovative algorithms and cutting-edge technologies, have unlocked the potential for machines to autonomously produce content, thereby revolutionising various industries. However, the intersection of generative AI and investments raises multifaceted issues that warrant close examination. Generative AI is a subset of artificial intelligence enabling machines to produce content resembling human creations, encompassing text, images, music, and more. Its applications span creative content generation, design enhancement, data synthesis, and problem-solving across various sectors. It significantly aids artists, writers, and designers in content creation while also driving breakthroughs in healthcare, scientific research, and data analysis, previously deemed unattainable. A study by McKinsey & Company estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually and has the ability to substantially increase labour productivity across the economy. This article delves into the intricate terrain of risks and rewards entailed in investments within the realm of generative AI start-ups. Through meticulous evaluation, the feasibility of diverse applications will be analysed, while delving into the underlying factors shaping the contrast between complimentary and premium usage models. Through an examination of economic intricacies and investment landscapes, the article intends to furnish a holistic comprehension of the manifold challenges and prospects unfurled by generative AI start-ups, while endeavouring to illuminate their potential ramifications on the broader expanse of the AI landscape. 
The sustainability of generative AI projects is examined, considering the shift from traditional funding models to innovative strategies that resonate with user preferences. The article concludes by underscoring the intricate interplay between funding, innovation, and societal impact, shaping generative AI's trajectory in a dynamic landscape. The Role of Consistent Funding in Developing Generative AI Developing generative AI faces substantial challenges, primarily stemming from demanding computational needs and intricate data privacy concerns. The intensive computing requirements, driven by complex algorithms and neural networks, create cost barriers and accessibility issues for start-ups and researchers with limited resources. This hampers meaningful generative AI development, especially in regions lacking adequate infrastructure. Additionally, data privacy intricacies arise from the creation and manipulation of training data, presenting ethical dilemmas and regulatory compliance hurdles. Scarcity of suitable training datasets further compounds the challenge, hindering the effectiveness of generative AI models. Amidst these challenges, consistent funding emerges as a vital catalyst. Overcoming the computational intensity obstacle requires substantial financial investment in advanced hardware. Similarly, acquiring and managing diverse and relevant data comes with costs related to data compliance and privacy regulations. Additionally, the scarcity of skilled professionals in this complex field necessitates competitive salaries to attract and retain talent. Ensuring steady funding not only addresses these challenges but also maintains a continuous trajectory of research and development, fostering the creation of effective and ethical generative AI systems. Investment Challenges and Concerns The rapid expansion of generative AI presents a complex landscape fraught with significant apprehensions. 
Amid the potential societal advantages, a looming surge of start-up failures emerges, propelled by a proliferation of generic AI start-ups, spurred by the interests of venture capitalists. A central concern is the conspicuous absence of distinctive product offerings, as a multitude of enterprises plunge into the field devoid of groundbreaking solutions. This dearth in differentiation and value proposition becomes pronounced, especially within text generation, where renowned tools such as OpenAI's offering reign supreme. This predicament poses a formidable challenge to start-ups operating in the Business-to-Consumer (B2C) domain, grappling with feeble customer-solution alignment. Concurrently, this underscores the ascendancy of Business-to-Business (B2B) applications intricately interwoven with enterprise operations. The efficacy of generative AI in content creation is undebatable, yet it cedes ground to classification algorithms in the realm of pattern detection and anomaly identification, casting a shadow of mistrust in production environments. Prioritising start-ups that focus on addressing specific challenges rather than accentuating their technological prowess emerges as a clarion call for emphasizing pragmatic value. Moreover, the establishment and vigilant stewardship of corporate protocols attain paramount significance, safeguarding data privacy and upholding the sanctity of sensitive corporate information. This assumes heightened importance, considering the inherent risks of inadvertently exposing intellectual property while training publicly accessible Large Language Models using proprietary data. Striking an optimal equilibrium between embracing technology's potential and discerning tangible returns warrants meticulous resource allocation, as undue hesitancy risks relegation to the fringes of competitiveness vis-à-vis counterparts leveraging the evolving capacities of AI to reshape industries. 
The Freemium Model and Financial Viability Amidst the surge of investments into generative AI start-ups, a critical concern materializes – the sustainable viability of projects within this burgeoning sphere. A subset of these ventures, though adorned with technological marvels, grapples with challenges of practical applicability and enduring value. This critical juncture necessitates astute project selection, ensuring simultaneous strides in technological advancement and quantifiable gains for investors and society at large. Moreover, the financial robustness of key stakeholders in the generative AI sector remains a cause for vigilance. Despite substantial infusions of capital, these start-ups remain susceptible to the spectre of financial instability, possibly compromising the security of investors. This heightened concern gains significance considering the broad accessibility of generative AI to the public, resulting in the widespread utilization of AI-generated content without the need for premium services. As a result, inquiries arise regarding the long-term financial sustainability of start-ups operating within this domain. Moreover, the dynamic evolution of the AI landscape necessitates a deeper examination of the trajectory of AI-powered models such as ChatGPT, particularly from an economic standpoint. As an increasing number of users veer towards gratuitous usage over premium subscriptions, the conundrum of perpetuating funding streams for advancing AI technologies acquires renewed prominence. This intricate duality accentuates the underlying challenges tied to investment dynamics, resonating not only within the realms of pioneering start-ups but echoing across the broader expanse of the AI ecosystem. Future of AI-Powered Models and Funding The future trajectory of AI-powered models is poised for a paradigm shift, presenting a landscape rich with possibilities and intricate challenges. 
The innovation encapsulated by AI-powered models, exemplified by the likes of ChatGPT, holds the potential to reshape industries and human interaction with technology on an unprecedented scale. However, this potential future is not devoid of uncertainties, particularly in the context of funding models that sustain these ground-breaking advancements. One pivotal concern shaping the future of AI-powered models revolves around the sustenance of funding streams. As these models progress in sophistication and utility, the question of how to secure adequate funding becomes increasingly critical. The traditional funding model that relies on premium subscriptions or paid services encounters hurdles in a landscape where the preference for free access is prevalent among users. The challenge is underscored by the delicate balance between democratizing access to AI-powered capabilities and ensuring the financial viability of the platforms offering them. In light of concerns about a lack of subscribers for premium models of generative AIs, the future lies in innovative monetization strategies that resonate with users, while maintaining the financial health of the AI-powered model ecosystem. The evolving funding landscape for AI-powered models reflects the imperative to adapt to changing user preferences while driving innovation. Beyond traditional venture capital channels, novel funding avenues such as corporate partnerships, government grants, and community-driven initiatives are gaining prominence. These diversified funding mechanisms not only reflect the increasing recognition of AI's transformative potential but also signal a more democratized approach to funding, aiming to align the interests of developers, investors, and users. 
The future of AI-powered models and their funding hinges on the ability to navigate this intricate landscape, where sustainability and innovation are delicately balanced to foster progress in AI while addressing the challenges posed by shifting user dynamics. Sustainable Funding Solutions for Generative AI As the demand for generative AI solutions surges, the quest for sustainable funding avenues gains paramount importance. Exploring innovative strategies can not only mitigate financial challenges but also align with broader environmental and societal goals. Leveraging existing large generative models emerges as a pragmatic solution to streamline resources. Rather than embarking on costly and time-consuming model creation from scratch, companies can capitalize on pre-existing models and fine-tune them to meet specific needs. This approach not only saves time but also taps into the high-quality outputs of models that have been trained on expansive datasets. By building upon the foundations already established, start-ups can channel their resources more efficiently, enabling significant cost reductions while maintaining the quality and efficacy of their generative AI solutions. Efficiency extends beyond output quality to energy conservation. The resource-intensive nature of generative AI can translate into substantial energy consumption, triggering both economic and environmental concerns. Employing energy-conserving computational methods stands as a pivotal solution in this realm. Techniques like pruning, quantization, distillation, and sparsification allow companies to optimise their models, reducing energy consumption and the associated carbon footprint. Such initiatives align with sustainable practices, not only minimising operational costs but also positioning generative AI ventures as environmentally responsible actors. 
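To make the cost argument behind these energy-conserving techniques concrete, here is a minimal, hypothetical sketch of symmetric 8-bit post-training quantisation, one of the methods named above. This is not any specific vendor's toolkit: the function names and the toy weight values are illustrative. Each 32-bit float weight is mapped to an 8-bit integer plus one shared scale factor, shrinking memory traffic (and hence energy use) roughly fourfold at the cost of a small rounding error.

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    # Largest magnitude maps to 127; guard against an all-zero vector.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # each value now fits in 8 bits
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.33]      # toy "model weights"
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# The reconstruction error is bounded by half of one quantisation step.
error = max(abs(a - b) for a, b in zip(weights, approx))
print(q, error <= scale)
```

Real deployments rely on far more sophisticated quantisation toolchains (per-channel scales, calibration data, quantisation-aware training), but the sketch shows why the technique simultaneously reduces operational cost and energy footprint, the dual benefit discussed here.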
This dual benefit extends beyond immediate funding considerations, resonating with stakeholders who prioritize environmentally conscious practices, thereby potentially attracting more support and investment from environmentally-conscious investors. Additionally, the principle of resource optimization can be extended to model and resource reuse. Generative AI inherently possesses the capacity to generate diverse and novel outputs using the same model, reducing the need for constant model creation or data acquisition. By repurposing existing models and resources for various applications, companies can drastically cut costs and expedite development timelines. This reuse-driven approach not only yields financial efficiencies but also supports sustainable development, as it minimizes unnecessary duplication of efforts and resources. Furthermore, aligning generative AI initiatives with Environmental, Social, and Governance (ESG) objectives can act as a strategic approach to sustainable funding. By demonstrating how generative AI contributes to addressing societal challenges, such as waste reduction, healthcare enhancement, or educational empowerment, companies can attract investors and customers who are increasingly attuned to ESG concerns. This alignment not only reflects a commitment to responsible innovation but also widens the pool of potential supporters, fostering financial sustainability that echoes societal impact. Conclusion In the realm of generative AI, the fusion of innovation and investment unfolds a landscape rich with potential and complexities. The surge of generative AI ventures underscores the significance of funding models that sustain their growth and development. The future of AI-powered models holds transformative promise, underscored by their ability to reshape industries and human interactions. However, the path to this future is marked by the need for adaptive funding mechanisms. 
The delicate equilibrium between democratising access to AI capabilities and ensuring financial sustainability necessitates innovative monetisation strategies. The shift towards diversified funding avenues, encompassing corporate partnerships, government grants, and community-driven initiatives, not only acknowledges the transformative potential of AI but also aligns with a more inclusive funding approach. As generative AI journeys towards sustainability and innovation, its challenges and solutions reflect the broader evolution of the AI landscape. By capitalising on existing models, optimising energy consumption, reusing resources, and aligning with ESG goals, sustainable funding avenues can be cultivated. These strategies not only address immediate financial considerations but also echo the commitment to responsible innovation. As generative AI ventures unfold, it is this delicate interplay between funding, innovation, and societal impact that will shape their trajectory, weaving a future where technology thrives in harmony with the needs of society. In the dynamic and ever-evolving landscape of generative AI investments, an adept navigation of challenges and harnessing of opportunities hold the key to unleashing the full potential of this transformative technology.

  • New Report: Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004

This is the first report on legal & policy aspects related to Web3 technologies, developed by VLiGTA, the research & innovation division of Indic Pacific Legal Research. In this report, we have offered a comprehensive overview of state of Web3 policy & governance outlooks in India. The report addresses the state of India’s successful Digital Public Infrastructure, and examines the state of technology governance as well. Further, the report focuses on several kinds of blockchain consensus algorithms, and the issues related to transitioning from using Web2 system infrastructure to Web3 system infrastructure. Sanad Arora’s contributions in Chapter 2 are remarkable. Akash Manwani’s contributions in Chapters 5 and 6 are unique, specific and relevant to the current discourse. It has been my honour to contribute to Chapters 3, 4 and 6, to offer informed perspectives and analyses. We have offered thought models and suggestions in the form of use cases of Web3 in areas such as data portability, voting, supply management, decentralised exchanges and zero-knowledge taxes. With this general technical report, we hope to offer more contributions in India’s Web3 policy space, in future. You can find a glimpse of the report here. You can access the complete report on the VLiGTA App. Price: 400 INR The conclusions and recommendations provided in the report are described here as well. Conclusion The choice between centralized and decentralized technology infrastructures should be made thoughtfully, considering the specific needs and objectives of each application. Decentralized approaches offer greater transparency and data integrity but may require careful scalability planning. On the other hand, centralized models can provide efficiency and centralized control but may face challenges related to transparency and accountability. Now, we already see that at Union and State levels, India is trying to develop and provide scalable and sustainable Web3 solutions. 
For sure, the CDAC proposals of a National Blockchain Service & the Unified Blockchain Network, termed under Blockchain-as-a-Service (BaaS), are ambitious and clear about their objectives. This report concludes with two kinds of recommendations – general and specific. We have offered tailor-made and practical recommendations, which are workable and adaptable. Recommendations from VLiGTA-TR-004 Mitigating Structural Limitations Continuously assess the Digital Public Infrastructure (DPI) to identify and address structural limitations as Web3 technologies are integrated. Adopt a Web 2.5 approach that combines the strengths of both Web2 and Web3 ecosystems to mitigate potential limitations. Collaborate with stakeholders to develop strategies for a seamless transition to Web3 technologies while ensuring the DPI's robustness. Using certain blockchain consensus algorithms could certainly be helpful for the Government to develop taxonomies of governance, compliance and transparency when DPI components are built on chains. Utilizing open source Web3 and Web2 technologies in conjunction would change the economics of infrastructure-related solutions offered by the Government of India under India Stack, and even the proposed National Blockchain Service & the Unified Blockchain Network. The National Blockchain Service (NBS) and Unified Blockchain Network (UBN) proposals present significant opportunities for enhancing data management, tax filing, voting systems, and global supply chain tracking within India. Leveraging blockchain technology can contribute to increased transparency, security, and efficiency across these domains. Governance Clarity and Risk Mitigation Establish clear policies and governance frameworks for the adoption of Web3 infrastructure, encompassing both political and technical aspects. Ensure that decision-making processes are agile and effective, even in the face of complex challenges (polycrisis). 
Safeguard against policy paralysis and potential disruptions to administrative and regulatory systems through proactive risk mitigation measures. Decentralised Exchanges (DEXs) for Government-to-Government Transactions The prospective advantages arising from DEX implementation in the sphere of inter-ministerial and governmental financial operations are substantial. DEXs may be endowed with interoperability capabilities, enabling seamless fund transference amongst diverse governmental departments. This enhancement would serve to elevate the efficiency of financial transactions, concurrently mitigating the specter of fraudulent activities. DEXs could be harnessed as a facilitative medium for the exchange of Indian Central Bank Digital Currencies (CBDCs) among various government entities. This envisioned application stands to foster the adoption of CBDCs and streamline governmental financial management. DEXs hold the potential to abate the inherent risks associated with bureaucratic participation in financial transactions by endowing them with a secure and transparent conduit for fund exchanges. Consequently, governmental personnel could redirect their focus towards policy formulation and execution. DEXs can assume either an open-ended or close-ended configuration, affording governmental authorities the prerogative to select the most pertinent model aligned with their specific requisites. An open-ended DEX would grant unrestricted participation, while a close-ended variant would restrict access to authorized users. The degree of centralization, decentralization, and federalization of DEXs may fluctuate contingent upon the system's unique architectural design. Centralized DEXs would be subject to sole-entity control, whereas decentralized counterparts would operate within a network of nodes. Federalized DEXs would emerge as an amalgamation of these paradigms, featuring government-operated nodes and privately controlled nodes. 
DEXs are adaptable for deployment either in retail contexts, catering to individuals and commercial entities in fund exchange, or in government-to-government scenarios, serving as the conduit for intergovernmental fund transfers. However, judicious scrutiny and meticulous tailoring of the DEX system's specifications are imperative to ensure seamless alignment with the government's distinct exigencies. Consequently, the government should engage in a comprehensive feasibility assessment concerning the prospective integration of such a system in the immediate future. Purposeful Choices for Web3 Adoption Embrace a technology-neutral approach to accommodate various Web3 use cases while aligning with India's policy vision. Focus on development-oriented strategies that leverage Web3 technologies to address societal and economic challenges. Encourage socio-technical mobility by fostering an environment where both public and private sectors can adapt and innovate with Web3 tools. Leverage leapfrogged access points to ensure that Web3 technologies are accessible and beneficial to a broad spectrum of the population. National Blockchain Service (NBS) Implementation Implement the NBS infrastructure as per the proposed four-part regional approach to enhance accessibility and efficiency. Ensure seamless integration of the NBS with existing government systems like India Stack and Web2 DPI solutions for improved service delivery. Data Portability Consider leveraging blockchain technology to enable data portability, following the principles of data fluidity. Explore the use of Decentralized Applications (DApps) or Decentralized Autonomous Organizations (DAOs) for data portability within a blockchain framework. Categorize data based on risk levels to determine the extent of portability. Zero-Knowledge Taxes (ZKT) Develop a secure and tamper-proof blockchain network for ZKT implementation, ensuring data privacy. 
Consider adopting blockchain consensus algorithms for generating and verifying zero-knowledge proofs. Assess the cost and security factors in choosing between trusting third-party applications or a decentralized blockchain network for ZKT. Decentralized Voting Evaluate the potential benefits of blockchain-based decentralized voting, including enhanced transparency and security. Consider the adoption of cryptographic credentials for voters to ensure anonymity and authentication. Weigh the trade-offs between centralized and decentralized voting systems, taking into account specific use cases and the level of trust in centralized entities. Global Supply Chain Tracking Implement blockchain-powered supply chain tracking to improve transparency and traceability. Leverage blockchain to verify product authenticity and ethical certifications, providing consumers with trusted information. Carefully assess the choice between centralized and decentralized supply chain tracking based on the desired level of control and transparency. Get access to the complete report at 400 INR. Access the full report at https://vligta.app/product/reinventing-regulating-policy-use-cases-of-web3-for-india-vligta-tr-004/

  • Zero Knowledge Systems in Law & Policy

Despite the market volatility attributable to cryptocurrencies, the scope of Web3 technologies and their business models is yet unexplored, especially in the Indian context. Few companies like Polygon, Coinbase India, Binance and others are addressing that. In this article, the purpose of Zero Knowledge System as a method to conduct cryptographic proofs is explored, and some policy questions on whether some ideas and assertions of ZKS can be integrated into the domains of law & policy are addressed, considering the role of India as a leader of the Global South. The Essence of Zero Knowledge in Web3 To begin in simple terms, a Zero Knowledge System is based on probabilistic models of proof verification and not deterministic models. It is one of the methods in cryptography used for entity authentication. Let us understand it with the help of a diagram. Imagine for a moment that you are required to prove something to somebody. The obvious assumption is that to prove anything, something has to be revealed. Let us say you have to prove to people that "I have something K in my possession" without showing K. Now, taking this directly into the digital context, it means that you have to prove that you have K without showing K to the person. In that case, you are a prover, and the person who is asking for a proof is a verifier. Such a system, through which you prove something without revealing the key information, is known as a Zero Knowledge System. Now, every Zero Knowledge System (ZKS) has three important features. First, the rules of use of the system must be adhered to, and the statement of proof must be true, so that the verifier does not require any third-party means to establish its validity. Second, the idea is not to achieve a 100% convincing and true statement but to prove to the verifier that the statement has a high probability of being true. 
In many cases of ZKS, it may not be possible to prove a statement of proof to be 100% / exactly true in real life. Third, the verifier would not know the key information behind the proof statement made by the prover. The essence of having such a systemic effort is simple. When public and private blockchains under a distributed ledger system are used, cryptography may help in finding out the relevant details of the people who were involved in the cryptocurrency transactions, in the case of a public blockchain. However, the effort of ZKS is to remove identifiable information as the means of verification. In fact, in July 2022, Polygon, one of the most ambitious Web3 ecosystem companies, based in Bengaluru (and Singapore), announced that it had developed a Zero-Knowledge Scaling Solution, which is fully compatible with Ethereum. In this update, it is explained how the solution works: The ZK proof technology works by batching transactions into groups, which are then relayed to the Ethereum Network as a single, bulk transaction. The 'gas fee' for the single transaction is then split between all the participants involved, dramatically lowering fees. For developers of payment and DeFi applications, Polygon zkEVM's high security and censorship resistance makes it a more attractive option than other Layer 2 scaling solutions. Unlike Optimistic roll-ups where users have to wait for as long as seven days for deposits and withdrawals, zk-Rollups offer faster settlement and far better capital efficiency. [...] Polygon zkEVM is a Layer 2 scaling solution that enables developers to execute arbitrary transactions, including smart contracts off-chain rapidly and inexpensively while keeping all proofs and data provenance on the secure Ethereum blockchain. In addition, Polygon has published a thesis on democratising ZKS. 
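To make the prover-verifier exchange concrete, here is a minimal sketch of a Schnorr-style identification protocol, a classic interactive zero-knowledge proof of knowledge. It is not the mechanism Polygon or Cloudflare use; the parameter choices are illustrative toys, not production-safe cryptography. The prover convinces the verifier that she knows a secret exponent x (the "K in possession") behind a public value y = g^x mod p, without ever transmitting x: the verifier checks only a commitment, a random challenge, and a masked response.

```python
import random

# Toy parameters (illustrative only, NOT cryptographically safe).
p = 2**127 - 1   # a Mersenne prime as the modulus
q = p - 1        # exponents are reduced modulo p - 1 (Fermat's little theorem)
g = 3            # public base

def prover_commit():
    """Step 1: prover picks a random nonce r and sends t = g^r mod p."""
    r = random.randrange(1, q)
    return r, pow(g, r, p)   # t reveals nothing about the secret

def prover_respond(r, x, c):
    """Step 3: prover folds the secret x into a response masked by r."""
    return (r + c * x) % q

def verifier_check(y, t, c, s):
    """Verifier accepts iff g^s == t * y^c (mod p), never learning x."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = random.randrange(1, q)   # prover's secret "K"
y = pow(g, x, p)             # public statement: "I know x such that g^x = y"

r, t = prover_commit()           # 1. commitment
c = random.randrange(1, q)       # 2. verifier's random challenge
s = prover_respond(r, x, c)      # 3. response
print(verifier_check(y, t, c, s))  # True: verified without seeing x
```

Because the verifier sees only (t, c, s), each of which could have been produced by a simulator without the secret, no "key information" leaks, which is precisely the property the three features above describe. Production systems such as the zk-Rollups mentioned here build on far more advanced, non-interactive machinery (zk-SNARKs) rather than this interactive toy.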
Recently, Polygon published an infographic about zkEVM. Now, in terms of probability theory, Zero Knowledge Proofs may be distinguished into three variants, even though their by-products of use can be multiple: Perfect Zero Knowledge (Pzk), Statistical Zero Knowledge (Szk) and Computational Zero Knowledge (Czk). Pzk applies when the proof system's probability distribution is exactly the same as that produced by a simulator. Szk applies when the simulator's and the proof system's probability distributions are only statistically close, not identical. Czk applies when no efficient algorithm can distinguish between the proof system's and the simulator's distributions. This shows that when simulations and proof systems are tested against each other, surfacing identifiable key information is out of the question, while the process of verification remains enabled. Recently, Cloudflare also developed Zero Knowledge Proofs for Private Web Attestation with Cross/Multi-vendor Hardware. The diagram above explains how Cloudflare's WebAuthn feature works within the frame of Zero Knowledge Systems. This is not a public-level use case, because such functionality is suited to closed institutions where trust is high, such as financial institutions. The servers and certificate chains, along with the hardware security key, are also closed-ended. This at least justifies another possibility of using ZKS. Now, the purpose of this article is to propose and check whether the mathematical conception of ZKS could be applicable in law and public policy. A design thinking approach has been applied to address this. In the next sections, the possibilities of integrating ZKS into the law and public policy domains are addressed.

Legal Systems with Zero Knowledge

In law, you can divide the basis of integrating ZKS into two forms - hard law and soft law. Let us address hard law first. 
Hard law systems are defined by the model of positive law, top-down governance and a regulatory landscape that reflects public interests.

Zero Knowledge and Hard Law

Now, the transformation of modern legal instruments shows that top-down governance, justified by rule of law concerns, matters. It may ordinarily be assumed that Zero Knowledge Systems are suited to soft law governance and regulatory propositions and may not fit into the realm of hard law. However, we have to remember that the same was once said about Web2 technologies. Interestingly, in the technology and IP law domains, that integration began with legal reforms in the realm of telecom law. Accepting a definition and some basic understanding of information and communication technologies (ICT) was important, since it created a space for new legal understandings. The concept of cyberspace, an integral aspect of Web1 and Web2, is understood through multiple kinds of legal fiction, which may even be attributed to how international space law evolved. In addition, owing to certain unsustainable Web3 business activities (FTX, for example), there are certain ideas in the realm of Web3 which must be harmonised for good. This is why we have to revisit two things before moving on to soft law: (1) making Web3 habitable to hard law instruments and systems (that is, Law 2.0); and (2) making an enriched and mature pathway to formalise the transition from Web2 to Web3 as an infrastructure as well as a social ecosystem. Now, we could have opted to analyse the integration of distributed ledgers (or blockchains) into the law and public policy domains. However, limiting Law and Web3 research perspectives to crypto would be unnecessarily narrow, since the domain of Web3 offers innumerable possibilities. Also, there are multiple emerging methods of cryptography, from Proof-of-Work to Proof-of-Acceptance. 
Choosing Zero Knowledge Protocols / Proofs / Systems is a unique choice because of their special features and the logical uniqueness of the concept itself.

Making Web3 Habitable to Law 2.0

A Zero Knowledge Proof signifies that nothing identifiable and subject to validation is disclosed to the verifier. In a hard law system, this may be considered contrarian to legal systems and their regulatory and judicial bodies, which assume that proofs must be backed by tangibly disclosed material. Of course, if Zero Knowledge Systems are bluntly imposed on Law 2.0 like this, they would not work. The reason is that ZKS and Law 2.0, as they exist, do not yet build interoperability and compatibility with each other. Now, there is an interesting example from Firozabad, Uttar Pradesh, where the Uttar Pradesh Police has implemented a Public Grievance Management System for the city. Here is a Twitter thread by Sandeep Nailwal, the co-founder of Polygon. The basic premise behind a blockchain-enabled grievance registration system is that FIRs are registered online and police authorities cannot deny that the grievances were registered. No lower-level officer can claim nothing was registered, and this could be regarded as a reformist move. Let us also understand how the Public Grievances Redressal System works, as described by the Firozabad Police. The diagram clearly explains how the FPGMS implemented by the Firozabad Police works. Such innovations, no wonder, are appreciated. However, these solutions are too generic and only address some basic issues in our systems. Although some district / state authorities may prefer such solutions, from a policy perspective they are largely symbolic. Still, if such frugal innovations are preferred, that is appropriate. In addition, it may be assumed that using blockchains as such for these solutions is a direct method to resolve many things, which is not true. 
Now, solutions like those discussed above may also be applied through Zero Knowledge Systems. Suppose certain public-to-government systems of engagement are designed so that an individual (not necessarily in the case of grievance redressal alone, but in various cases) may choose one among a set of defined Zero Knowledge Protocols (ZKP) to engage with the government, while authentication (or entity verification) is done through the ZKP: the government, as the verifier, obtains probabilistic results to check whether the proofs hold. However, such governance solutions may be inapplicable where technical access to data and metadata for evidentiary and internal evaluation becomes necessary. You would also require algorithmic solutions to make this happen, which, thanks to the black box problem, could again make things problematic. This means that Zero Knowledge Protocols cannot be used as outliers like that. Yes, when it comes to government identification documents, such as Aadhaar, PAN and others, then at some critical level of urgency or due diligence, Zero Knowledge Systems can be enforced to ensure parity and the privacy of individuals. However, there is another aspect of Zero Knowledge Systems which may be integrated into the legal domain. Let us say that a regulator has to designate levels of engagement with stakeholders, parties to a regulatory dispute or their counterparts, and wishes to develop certain Zero Knowledge Proofs where verification is essential to the level of engagement. In that case, it could be made possible. Let us break this proposition into three forms: (1) stakeholders; (2) parties to a regulatory dispute; and (3) their counterparts. In case (1), let us say a competition regulator has to designate the level of engagement of the stakeholders. 
The rationale is clear: they would like to optimise engagement levels to designate necessities and priorities (and not to block the stakeholders from engaging at all). If engagement is limited to the analysis of comments and suggestions, then ZKP is not required. However, if the engagement is multi-sectoral, where stakeholders are the same or different, or their focus areas converge only to complicate matters, then ZKP can be applied to designate certain level-playing criteria for the stakeholders, such that multiple horizontal-level ZKPs (whose purposes of use intersect considerably) can be created. It is often stated that public-level stakeholders such as members of the media, civil society and others deliberately leak critical information about negotiations or consultations. Although some level of transparency is good (and even there, ZKPs may be designated), multiple horizontal-level ZKPs can be used to keep stakeholders intimated that their proofs are under consideration and that probabilistic grounds may be internalised accordingly. This might work best for internal and closed engagement; still, it is a proposition that merits further thought. In case (2), it would not be appropriate to use ZKP to hide evidence or necessary information that is subject to disclosure. To make things interesting, we can apply one aspect of ZKP here. Let us say there is critical information which cannot be disclosed by a party. The regulator can then estimate the information which the party concerned has refused to disclose. In such a case, ZKPs may be used to garner certain probabilistic insights indirectly from the party concerned. This may not be useful until the key aspect of validation is clearer, but it could work if thought out well. In case (3), regulators and their counterparts in other countries may, owing to sovereign interests, national security or secrecy concerns, suffer from a deadlock in engaging and sharing relevant knowledge. 
In that case, to address the deadlock, cooperative Zero Knowledge Protocols may be created to generate trust-based engagement. Here, probability may help regulators make decisions and frame their own approaches to take things forward or hold still. Another use of ZKP could be to encapsulate trust as a "channel" of engagement on certain critical issues, such as nuclear deterrence. It is proposed that in a multi-polar world, where trust, metadata, knowledge and information can easily be weaponised, governments, instead of being utterly protectionist or hawkish, may develop a "language" of zero knowledge-based engagement in certain affairs. ZKP could also be workable in the case of "AI Diplomacy". Interestingly, Corneliu Bjola has written on Diplomacy in the Age of Artificial Intelligence for the UAE's Emirates Diplomatic Academy. The diagram above from Bjola's paper on AI and Diplomacy clearly explains how structured and unstructured decisions may be logically dealt with. Here, ZKP can help designate which Zero Knowledge Proofs are designed, and how they are established within a government functionary in a vertical / oblique hierarchy. This can also be understood from Figure 8, referring to the Social Informatics of Knowledge Embodiment. The hierarchy devised to designate an AI Robotic System is interesting: it starts with being a cooperator, until coopetition becomes a reality. Since the knowledge required at multiple levels differs with purpose, and human cognition is extended even at the top level, ZKP may be useful to create indispensable connectivity between kinds of knowledge, their sharing and their evaluation-related viabilities. Here is an interesting diagram from Zorawar Daulet Singh's Power and Diplomacy: India's Foreign Policies during the Cold War (2018), which can also be taken as a reference to see where ZKP can be pushed through. 
If we compare the diagrams in Bjola's paper, ZKP could be applied to mitigate the lack of coherence among behaviour patterns (congruent; consistent yet unlikely; incongruent) that affect decision-making. Validation matters, so building alternative correlations among the kinds of behaviours could be possible.

The Web2-to-Web3 (2to3) Transition through Law 2.0

Achieving and contributing to the Web2-to-Web3 transition in systems and ethics, within the framework of Law 2.0, could be an interesting and pertinent proposition if we can use Zero Knowledge Systems for that purpose. For closed systems, as explained in the Cloudflare case, verification could be considered where the key information required is embedded in the closed systems and institutions. For open systems, for example a digital public square, convergences can be achieved through technological hedging. Legal systems have to recognise the ontological and practical purpose of these multiple horizontal-level efforts and recognise their value. Now, Law 2.0 implies a harmonious and naturalised integration of technologies into the legal fiction. The impact of such integration could be positive, as governance priorities may shape up quite suitably. Balaji Srinivasan's The Network State (2022) discusses the concept of the Network State in that aspect quite clearly.

Zero Knowledge and Soft Law

When it comes to soft law, Zero Knowledge Systems can easily be integrated, owing to the nature of Law 3.0 as a proposed field. Validation can be achieved among self-regulating companies, which can then be addressed by the government at a centralised level. From a theoretical point of view, ZKP may not be needed to achieve complete decentralisation, since centralisation is a part of governance considerations. Now, let us estimate where such validation-requiring Zero Knowledge Proofs can be used. For starters, ZKP can easily be used to build peer-to-peer self-regulating standards. 
Taking a cue from Law 2.0 on a regulator's levels of engagement, while certain critical information is not visible or disclosed, the Protocols can be established to analyse the horizontal-level impact of the self-regulating standards already proposed by the government. Since the legal interpretation exists, ZKP makes it possible to provide peer-to-peer, company-related insights through probability. Obviously, not all standards can be enforced directly, and the interpretive complications caused by an uninformed or unoptimised legislative intent can be mitigated, at least from a procedural aspect. Another use of ZKP could be in the fintech industry, to prevent predatory retail and credit loan offers from being recommended, which depends on the central bank (in India's case, the RBI).

Unbundling Policy Dynamics with Zero Knowledge

Now, compared to law, policy dynamics are amorphous in nature. In addition, while policy dynamics intersect with multiple domains, related or unrelated, political consensus and motivation shape political trust. Zero Knowledge Systems can be used to generate policy innovations beyond governance mechanisms and digital public infrastructure. Let us then address political trust quickly. In politics, people-to-government engagement is a generic aspect of building trust. Political trust can also be built by endorsing public-private partnerships and cooperative societies, since a commercial focus may crystallise the avenues of political consensus. Substantive propositions and solutions can also germinate trust. Since interconnectedness is a tangible element of Web2 technologies and their necessities, ZKP can be used to protect that interconnectedness or interoperability, whichever fits the objective. The reason is that interoperability may not imply absolute cohesion of data and information, while interconnectedness implies that a mesh of counter-dependencies or codependencies exists. 
We can see this aspect behind the proposition of zero-knowledge taxes made by Matthew Niemerg for Yahoo Finance: “Zero-knowledge taxes” describes a situation in which taxes can be filed and verified with zero-knowledge proofs. This could operate through a trusted, third party application that analyzes a user’s wallets and calculates taxable events, resulting in a net summary of the individual’s taxes for the year. That summary tax payment, along with the proof itself, is submitted to the regulating entity, which can verify through the proof that the tax summary is accurate without needing to see every transaction leading up to the summary. There are multiple issues and risks with this model; while privacy matters, you can read Sanad Arora's article on Central Bank Digital Currencies to understand where the privacy concerns lie and how they can be managed. Another example of applying Zero Knowledge Proofs in policy could be protecting information to promote nuclear disarmament talks. Here is how the underlying experiment is described: To avoid revealing information about the composition and configuration of the cubes, bubbles created in this manner were added to those already preloaded into the detectors. The preload was designed so that if a valid object were presented, the sum of the preload and the signal detected with the object present would equal the count produced by firing neutrons directly into the detectors – with no object in front of them. The experiment found that the count for the “true” pattern equaled the sum of the preload and the object when neutrons were beamed with nothing in front of them, while the count for the significantly different “false” arrangements clearly did not.

Conclusion

This article consists of theoretical propositions, because Zero Knowledge Systems require more testing in real life, on a case-to-case basis. 
At least through the efforts of this article, we can revisit our approaches to decentralising and centralising digital governance. Comments and counterpoints are appreciated.

  • The Digital Personal Data Protection Act & Shaping AI Regulation in India

As of August 11, 2023, the President of India has given assent to the Digital Personal Data Protection Act (DPDPA), and the legal instrument, upon its notification in the Official Gazette, stands notified as law. Multiple briefs, insights and infographics have already been reproduced and published by several law firms across India. This article therefore focuses on the key provisions of the Act and explores how it would shape the trajectory of AI regulation in India, especially considering the recent amendments to the Competition Act, 2002 and the trajectory of the upcoming Digital India Act, which is still in the works. You can read the analysis of the Digital India Act as proposed in March 2023 here. You can also find a complete primer on the important provisions of the Digital Personal Data Protection Act here; we urge you to download the file, as the provisions discussed in this article are described in that document.

General Review of the Key Provisions of the DPDPA

Let's begin with the stakeholders under this Act. The Digital Personal Data Protection Act, 2023 (DPDP Act) defines the following stakeholders and their relationships: Data Principal: the individual to whom the personal data relates. Consent Manager: a person registered with the Data Protection Board who acts as a single point of contact to enable a Data Principal to give, manage, review and withdraw consent. Data Protection Board (DPB): a statutory body established under the DPDP Act to regulate the processing of personal data in India. Data Processor: a person or entity who processes personal data on behalf of a Data Fiduciary. Data Fiduciary: a person or entity who, alone or in conjunction with other persons, determines the purpose and means of processing of personal data. Significant Data Fiduciary: a Data Fiduciary that meets certain thresholds, such as, for example, having a turnover of more than INR 100 crores or processing the personal data of more than 50 million data principals. 
However, it is to be noted that no specific threshold has been defined in the Act as of now. The relationships among these stakeholders are as follows: The Data Principal is the owner of their personal data and has the right to control how it is processed. The Consent Manager is responsible for managing consents for the processing of personal data on behalf of the Data Principal. The DPB is responsible for regulating the processing of personal data in India; it has the power to investigate complaints, issue directions and impose penalties. The Data Processor is responsible for processing personal data on behalf of the Data Fiduciary, in accordance with the Data Fiduciary's instructions. The Data Fiduciary is responsible for determining the purpose and means of processing personal data, and must comply with the DPDP Act and the directions of the DPB. A Significant Data Fiduciary has additional obligations under the DPDP Act, such as appointing a Data Protection Officer and conducting data protection impact assessments.

Data Protection Rights

The Act clearly sets out rights for Data Principals and corresponding obligations for Data Fiduciaries, discussed further below. However, many provisions contain the clause "as may be prescribed", which means much will remain subject to delegated legislation. This makes sense: the Government could not integrate every aspect of data regulation and protection into the Act, and could only propose specific, basic provisions that work from a multi-stakeholder and citizen perspective. 
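To make the web of stakeholder relationships described above easier to hold in mind, here is a deliberately simplified toy sketch of the consent flow. Every class and method name is invented for illustration; the Act prescribes duties and roles, not software interfaces, and the "legitimate use" grounds are not modelled.

```python
from dataclasses import dataclass, field

@dataclass
class DataPrincipal:
    """The individual to whom the personal data relates."""
    name: str

@dataclass
class ConsentManager:
    """Registered with the Board; single point of contact for the
    Principal to give, manage, review and withdraw consent."""
    consents: dict = field(default_factory=dict)

    def give(self, principal, fiduciary, purpose):
        self.consents[(principal.name, fiduciary.name, purpose)] = True

    def withdraw(self, principal, fiduciary, purpose):
        self.consents[(principal.name, fiduciary.name, purpose)] = False

    def has_consent(self, principal, fiduciary, purpose):
        # Default is False: absent a positive indication, no consent exists.
        return self.consents.get((principal.name, fiduciary.name, purpose), False)

@dataclass
class DataFiduciary:
    """Determines the purpose and means of processing; answerable under the Act."""
    name: str

    def process(self, principal, purpose, manager):
        # Processing proceeds only against recorded consent
        # (legitimate-use grounds under the Act are omitted here).
        if not manager.has_consent(principal, self, purpose):
            raise PermissionError(f"no consent for {purpose}")
        return f"processed {principal.name}'s data for {purpose}"

p = DataPrincipal("X")
f = DataFiduciary("Y Bank")
cm = ConsentManager()
cm.give(p, f, "KYC")
print(f.process(p, "KYC", cm))   # allowed once consent is recorded
cm.withdraw(p, f, "KYC")
# f.process(p, "KYC", cm) would now raise PermissionError
```

Note how the default-False lookup mirrors the negative obligation discussed later: if consent has not been positively indicated, the data cannot be processed.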
Now, like the General Data Protection Regulation in the European Union, the rights of a Data Principal are clearly defined, in Sections 11-14 of the Act, as follows: the right to access information about personal data, which includes a summary of the personal data, the identities of the Data Fiduciaries and Data Processors with whom it has been shared, and any other related information about the Data Principal and the processing itself; the right to correction, completion, updating and erasure (deletion) of personal data; the right to grievance redressal, which has to be readily available; and the right to nominate someone else to exercise their data protection rights under this Act. There are no specific parameters or factors defined for the Right to be Forgotten (erasure of personal data); hence, we can expect specific guidelines and circulars to address this issue, along with industry-specific interventions, for example by the RBI in the fintech industry. The provisos containing a list of duties of a Data Principal are referred to here for a reflective purpose: to estimate policy and ethical perspectives on the Data Protection Board's internal expectations. Like the Fundamental Duties, these duties have no binding value, nor do they affect data-related jurisprudence in India, especially on matters related to this Act. However, those duties could be invoked by any party to a data protection-related civil dispute for the purposes of interpretation and to elaborate on the purpose of the Act. Nevertheless, invoking the duties of Data Principals has a limited impact.

Legitimate Use of Personal Data

The following are considered "legitimate uses" of personal data by a Data Fiduciary: processing personal data for the Government with respect to any subsidy, benefit, service, certificate, licence or permit prescribed by the Government. 
For example, to let people avail themselves of the benefits of a government scheme or programme through an app, personal data would have to be processed. Other legitimate uses include: processing personal data to fulfil any obligation under any law in force, or to disclose any information to the State or any of its instrumentalities, subject to the condition that the processing is done in accordance with the provisions regarding disclosure of such information in any other law; processing personal data in compliance with any judgment, decree or order issued in India, or any judgment or order relating to claims of a contractual or civil nature under a law in force outside India; and processing where a Data Principal voluntarily provides personal data to the Data Fiduciary (a company, for example). This last ground applies only when it has not been indicated in any way that the Data Fiduciary lacks consent to process the data; it is therefore a negative obligation on the Data Fiduciary: if consent is withheld by indication, the data cannot be processed. There are other broad grounds as well, such as national security, the sovereignty of India, disaster management measures, medical services and others.

Major Policy Dilemmas & Challenges with DPDPA

Now, there are certain aspects of the data protection rights in this Act which must be understood. Publicly available data, as stated in Section 3 of the Act, will not be covered by its provisions. This means that if you post something on social media (for example), or give prompts to generative AI tools, that data is not covered under this Act in India, which is not the case in Western countries, or even in China. 
Since different provisions refer to the Data Protection Board having the powers of a civil court on specific matters under the Civil Procedure Code of 1908, and since the orders of the Appellate Tribunal under this Act are executable as a civil decree, it clearly signifies that most data protection issues would be commercial and civil law issues. In other countries, the element of public duty (emanating from public law) comes in. This also shows clearly that, in the context of public law, India is not yet opening its approach to regulating the use of artificial intelligence technologies at macro and micro scales. I am certain this will be addressed in the context of high-risk and low-risk AI systems in the Digital India Act. On the transnational flow of data and the issue of building bridges and digital connectivity between India and other countries, the Act gives unilateral powers to the Government to restrict the flow of data whenever it finds a ground to do so. This is why nothing specific as to the measures has been described by the Government yet, given the trade negotiations on the information economy between India and stakeholders such as the UK, the European Union and others, which often get stuck. In fact, this is a general problem across the board for companies and governments around the world, for two simple reasons: (1) the trans-border flow of data is a trade law issue, requiring countries to undertake diplomatic negotiations that often fail to reach a consensus, owing to the transactional aspect of the matter; and (2) data protection law, a subset of technology law, has historical roots in telecommunications law, which is why the contractual and commercial questions around trans-border data flows, being tied to telecom law, may not arrive at conclusions easily. This relates to the poignant issue of moratoriums on digital goods and services under WTO law, which is subject to discussion in future WTO Ministerial Conferences. 
Here is an excerpt from India and South Africa's joint submission on 'E-commerce Moratoriums': What about the positive impacts of the digital economy for developing countries? Should these not also be taken into account in the discussion on losses and the impact of the moratorium? After all, it is often said that new digital technologies can provide developing countries with new income generation opportunities, including for their Micro and Small and Medium Sized Enterprises (MSMEs). [...] Further, ownership of platforms is the new critical factor measuring success in the digital economy. The platform has emerged as the new business model, capable of extracting and controlling immense amounts of data. However, with ‘platformisation’, we have seen the rise of large monopolistic firms. UNCTAD’s Digital Economy Report (2019) highlights that the US and East Asia accounts for 90 percent of the market capitalization value of the world’s 70 largest digital platforms. Africa and Latin America’s share together is only 1 percent. Seven ‘super platforms’ – Microsoft, Apple, Amazon, Google, Facebook, Tencent and Alibaba – account for two-thirds of total market value. In particular, Africa and Latin America are trailing far behind. Also, startups have been given exemptions from certain crucial compliances under this Act. This may be justified as a move to promote the Digital India and startup ecosystem, and some may argue that it works against creating a privacy-compliant startup ecosystem. However, one aspect ignored by most critics of this Act (formerly a Bill) is the sluggishness and hawkishness of the bureaucratic mindset behind ensuring compliances. Perhaps this gives some room for a flexible compliance environment, if the provisions are used reasonably. How this would affect fintech companies' data collection-related compliances remains to be seen. 
It is clear, though, that the data protection law, within its own limits, will not supersede fintech regulations and other public & private law systems. This means the fintech regulations on data collection, and the restrictions on its use, will prevail over this data protection law. For Data Fiduciaries, every instance of data collection requires a notice accompanying or preceding the request for consent from a Data Principal. It is rightfully argued that merely having a privacy policy would not suffice, since there would be multiple instances of data collection across multiple locations of an app / website interface. Here is an illustration from the Act, which explains the same: X, an individual, opens a bank account using the mobile app or website of Y, a bank. To complete the Know-Your-Customer requirements under law for opening of bank account, X opts for processing of her personal data by Y in a live, video-based customer identification process. Y shall accompany or precede the request for the personal data with notice to X, describing the personal data and the purpose of its processing. Interestingly, the Act defines obligations for Data Fiduciaries but not for Data Processors, which seems strange. Or it could be argued that the Government would like to keep the legal issues between a Data Fiduciary and its assigned Data Processors subject to contractual terms. We must remember that, for example, under Section 8(1) of the Act, Data Fiduciaries are required to comply with the provisions of the DPDPA "irrespective of any agreement to the contrary or failure of a Data Principal to carry out the duties provided under this Act", with respect to any processing undertaken by the Data Processor. The issues that may then arise are: what happens if the Data Processor makes a shoddy mistake? What if a data breach is caused by the actions of the Data Processor despite due diligence by the Data Fiduciary? 
This makes the role of Data Processors more of a commercial law issue, settled when contracts are agreed upon, rather than a civil or public law issue in the context of the Act. Finally, the Act introduces a new concept known as the "Consent Manager". As argued by Sriya Sridhar, such a conceptual stakeholder can be related to one of the most successful stakeholder systems created under the RBI's fintech regulation framework, that is, Account Aggregators (AAs). Since the DPDPA would not take precedence over the fintech regulations of the Reserve Bank of India, for example, and since the role of data protection itself could be generalised and tailored to the best industry-centric regulatory practices, Consent Managers not being Data Fiduciaries would be helpful for AAs as well. The next section of this article covers the aspects of the Digital Personal Data Protection Act, 2023 related to the use of artificial intelligence, including in the context of Consent Managers. 
Key Definitions & Provisions in the DPDPA on Artificial Intelligence

Here are some definitions in Section 2 of the Act, which must be read and understood, to begin with: (b) “automated” means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data; (f) “child” means an individual who has not completed the age of eighteen years; (g) “Consent Manager” means a person registered with the Board, who acts as a single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform; (h) “data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means; (i) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data; (j) “Data Principal” means the individual to whom the personal data relates and where such individual is — (i) a child, includes the parents or lawful guardian of such a child; (ii) a person with disability, includes her lawful guardian, acting on her behalf; (k) “Data Processor” means any person who processes personal data on behalf of a Data Fiduciary; (n) “digital personal data” means personal data in digital form; (s)(vii) every artificial juristic person, not falling within any of the preceding sub-clauses; (t) “personal data” means any data about an individual who is identifiable by or in relation to such data; (x) “processing” in relation to personal data, means a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as collection, recording, organisation, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise 
making available, restriction, erasure or destruction;

Now, with reference to Figure 2, the most important definitions with respect to artificial intelligence are in Section 2, especially sub-sections (b), (s)(vii) and (x). The definition of the term "automated" states that "automated" means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data. This means that AI systems capable of making decisions without human intervention are considered "automated" for the purposes of the Act. This recognition was already implied, since the integration of AI systems in data processing is a long-known reality, but the wording makes it meticulously clear. The definition is broad enough to encompass a wide range of AI systems, including:

Machine learning systems: These systems are trained on large amounts of data to learn how to make predictions or decisions. Once trained, they can make these decisions without human intervention.
Natural language processing systems: These systems can understand and process human language. They can be used to generate text, translate languages and answer questions.
Computer vision systems: These systems can identify and track objects in images and videos. They can be used for tasks such as facial recognition and object detection.

It would be intriguing to observe how this plays out once the Digital India Act is released, since that proposed law is expected to cover high-risk, medium-risk and low-risk AI systems.

Artificial Juristic Person

Furthermore, the definition of "every artificial juristic person" in sub-section 2(s)(vii) of the Act is interesting, considering that the Act uses the word "person" more than 30 times. The definition is important because it helps to clarify what types of AI systems could be considered "legal persons" for the purposes of the law.
The definition states that "artificial juristic person" means every artificial juristic person, not falling within any of the preceding sub-clauses. This means that AI systems not explicitly covered by the preceding sub-clauses, such as companies, firms and associations of persons, may still be considered "artificial juristic persons" if they have the capacity to acquire rights and incur liabilities. The wording matters because it allows the Act to apply to AI systems that are not traditionally considered "legal persons." This is significant because AI systems are becoming increasingly sophisticated and are capable of making decisions that have a substantial impact on people's lives. By classifying AI systems as "legal persons," the Act helps to ensure that these systems can be held accountable for their actions and are subject to the same legal protections as humans. It could be argued that the definition of "artificial juristic person" in the DPDPA will evolve as AI technology continues to develop and as questions of AI-related legal personhood come up for law and policy stakeholders to address. To add, the definition of an artificial juristic person clearly offers an ad hoc (or specific) understanding of the legal recognition or legal affirmation that could be granted to AI systems. This is in line with the ISAIL Classifications on Artificial Intelligence, especially the CEI Classification Method. As per the classifications defined in the 2020 Handbook on AI and International Law, the CEI Method classifies AI as a Concept, an Entity or an Industry. The reference to "artificial juristic persons" can be directly alluded to the classification of an AI system as a Juristic Entity.
Here is an excerpt from the 2020 Handbook (pages 45 and 47), explaining the concept of a Juristic Entity in the case of artificial intelligence:

On the question of the entitative status of AI, under jurisprudence, there can be 2 distinctions on a prima facie basis: (1) the legal status; and (2) the juristic status. […] In both the cases, it is suitable to establish the substantive attributes of AI both as legal and juristic entities. There can be disagreements on the procedural attributes here due to the simple reasons that there at procedural levels, it is not practically possible to have similar legal standards of different kinds of products and services which involve AI directly or indirectly.

Here are some examples of AI systems that could be considered "artificial juristic persons" under the Act:

Self-driving cars: These cars are capable of making decisions about how to navigate roads and avoid obstacles without human intervention.
Virtual assistants: These assistants can understand and respond to human language, and they can be used to perform a variety of tasks, such as booking appointments, making travel arrangements and playing music.
Chatbots: These bots can engage in conversations with humans, and they can be used to provide customer service, answer questions and even write creative content.

AI as a Consent Manager?

Nevertheless, let's examine where the use of the term "artificial juristic persons" gets intriguing, beginning with the concept of a "Consent Manager". Section 2(g) states that a Consent Manager is a person registered with the Data Protection Board of India who acts as the single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform. This means that a Consent Manager can be any person, including an individual, a company or, arguably, an AI system.
However, in order to be registered with the Board, a Consent Manager must meet certain requirements. In the context of AI, the definition of "Consent Manager" could be interpreted to mean that an AI system could be registered as a Consent Manager. It is important to note, however, that the AI system would have to meet the same requirements as any other Consent Manager, such as having the necessary technical expertise and experience to manage consent effectively. The function of Consent Managers is also explained in the following sub-sections of Section 5 of the Act:

(7) The Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager.
(8) The Consent Manager shall be accountable to the Data Principal and shall act on her behalf in such manner and subject to such obligations as may be prescribed.
(9) Every Consent Manager shall be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed.

Section 5(7) states that the Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager. This means that a Data Principal could use an AI Consent Manager to manage their consent to the processing of their personal data by a Data Fiduciary. It is also interesting to note that, under Section 5(8), the Consent Manager shall be accountable to the Data Principal and shall act on her behalf in such manner and subject to such obligations as may be prescribed. An AI Consent Manager would therefore have to be designed and used in a way that ensures it acts in the best interests of the Data Principal, including being transparent about how it uses personal data and being able to explain its decisions to the Data Principal.
Finally, Section 5(9) states that every Consent Manager shall be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed. This means that any AI Consent Manager that wants to operate in India must be registered with the Data Protection Board (DPB), which will set out the technical, operational, financial and other conditions that AI Consent Managers must meet in order to be registered. Here are some specific ways that AI could be used to support the functions of Consent Managers:

Automating consent management: AI could be used to automate the process of giving, managing, reviewing and withdrawing consent. This would make it easier for Data Principals to control their personal data, and it would also reduce the risk of human error.
Providing personalised consent experiences: AI could be used to personalise the consent experience for each Data Principal. This would involve understanding the Data Principal's individual needs and preferences and tailoring the consent process accordingly.
Ensuring transparency and accountability: AI could be used to ensure that consent is transparent and accountable. This would involve tracking how consent is given, managed, reviewed and withdrawn, and providing Data Principals with clear and concise information about how their personal data is being used.

Additionally, the AI system must be designed in a way that ensures it acts in the best interests of Data Principals: it must be transparent about how it uses personal data, and it must be able to explain its decisions to Data Principals. Now, on the rights of Data Principals (data subjects) to grievance redressal, the role of AI as a Consent Manager could become interesting.
Section 13(1) of the Act states that a Data Principal shall have the right to have readily available means of grievance redressal provided by a Data Fiduciary or Consent Manager in respect of any act or omission of such Data Fiduciary or Consent Manager regarding the performance of its obligations in relation to the personal data of such Data Principal or the exercise of her rights under the provisions of this Act and the rules made thereunder. This means that a Data Principal could use an AI Consent Manager to file a grievance if they are unhappy with the way their personal data is being handled by a Data Fiduciary. Meanwhile, Section 13(2) states that the Data Fiduciary or Consent Manager shall respond to any grievances referred to in sub-section (1) within such period as may be prescribed from the date of its receipt for all or any class of Data Fiduciaries. This means that an AI Consent Manager must be designed and used in a way that ensures it can respond to grievances in a timely and effective manner. Here are some use cases of AI in consent management which could be looked at:

Personalised consent experiences: AI can be used to personalise the consent experience for each individual user, by understanding the user's individual needs and preferences and tailoring the consent process accordingly. For example, AI could be used to suggest relevant consent options to users, or to provide users with more detailed information about how their data will be used.
Automated consent management: AI can be used to automate the process of giving, managing, reviewing and withdrawing consent. This can make it easier for users to control their data, and it can also reduce the risk of human error. For example, AI could be used to send automatic reminders to users about their consent preferences, or to automatically revoke consent when a user no longer uses a particular service.
Ensuring transparency and accountability: AI can be used to ensure that consent is transparent and accountable, by tracking how consent is given, managed, reviewed and withdrawn, and by providing users with clear and concise information about how their data is being used. For example, AI could be used to create an audit trail of consent activity, or to generate reports that show how users' data is being used.
Grievance redressal: AI can be used to support the grievance redressal process, by automating the process of filing and tracking grievances and by providing users with clear and concise information about the status of their grievance. For example, AI could be used to create a chatbot that allows users to file grievances without having to speak to a human representative, or to generate reports that show how grievances are being resolved.
Compliance with regulations: AI can be used to help organisations comply with regulations related to consent management, by tracking consent activity, generating reports and providing users with clear and concise information about how their data is being used. For example, AI could be used to create a dashboard that shows how an organisation is complying with the General Data Protection Regulation (GDPR), or to generate reports that show how users' data is being used in accordance with the California Consumer Privacy Act (CCPA).
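The consent lifecycle and audit-trail ideas described above can be sketched in code. The following is a minimal, hypothetical illustration only: the class, field names and event labels are the author's assumptions for explanation, not anything prescribed by the DPDPA or the Data Protection Board.

```python
# A hypothetical sketch of consent-lifecycle record-keeping:
# give / review / withdraw consent, with a time-stamped audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    data_principal: str
    purpose: str
    granted: bool = False
    audit_trail: list = field(default_factory=list)

    def _log(self, action: str) -> None:
        # Every consent event is time-stamped, so the history can be
        # reviewed later -- the "audit trail" idea discussed above.
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), action)
        )

    def give(self) -> None:
        self.granted = True
        self._log("consent_given")

    def withdraw(self) -> None:
        self.granted = False
        self._log("consent_withdrawn")

    def review(self) -> dict:
        # A Data Principal (or a regulator) can inspect current status
        # and the full event history in one place.
        return {
            "purpose": self.purpose,
            "granted": self.granted,
            "events": list(self.audit_trail),
        }


record = ConsentRecord("data-principal-001", "marketing emails")
record.give()
record.withdraw()
```

After running the snippet, `record.review()` shows consent as withdrawn, with both events preserved in order; a real Consent Manager would need registration, security and interoperability layers far beyond this sketch.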
Processing as Defined in the Act

The term “processing”, in relation to personal data, means a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as collection, recording, organisation, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction.

Now, in the context of digital rights management (DRM), the use of artificial intelligence technology through Data Processors on CMS-based platforms would have to be observed carefully. Certain activities, such as collection, recording, storage and organisation, could easily be covered by automated intelligence systems or by AI systems with narrow use cases. Since nothing is clearly stated on the role of Data Processors and the burden is on the Data Fiduciary to ensure compliance with the Act, it becomes a matter of contract how companies will draft agreements with Data Processors to ensure compliance and redressal of matters between themselves, especially on the limited and specific use of AI. Given the processing capabilities of AI systems, which follow from their computational capabilities (Generative AI systems, for example), it would be necessary to see how the prescribed regulations, bye-laws, circulars and industry-based self-regulatory and regulatory measures would work. Data Processors who use artificial intelligence systems would have to clarify the use of AI in their contracts, to settle liability and accountability issues through the contractual arrangement they would naturally have with Data Fiduciaries.
Role of AI Usage in Shaping Rights of Data Principals

The rights of Data Principals under Sections 11 to 14 of the DPDP Act are important in the context of the commercial and technical use cases of AI applications, especially generative AI applications. Let's decipher that.

Section 11: The right to obtain information about personal data being processed by a Data Fiduciary is essential for Data Principals to understand how their data is used by AI applications. This information can help Data Principals make informed decisions about whether or not to use an AI application, and it can also help them identify and address any potential privacy concerns. However, to complement the earlier discussion of AI in consent management, the technology-enabled and human-monitored elements of personal data processing would have to be explained.

Section 12: The right to correct, complete, update or erase personal data is also important in the context of AI applications, because AI applications can make mistakes when processing personal data, and these mistakes can have a significant impact on Data Principals. For example, an AI application used to make lending decisions could mistakenly deny a loan to a Data Principal who is actually eligible. The Data Principal's right to correct the mistake is essential to ensuring that they are not unfairly discriminated against.

Section 13: The right to have readily available means of grievance redressal is also important, because AI applications can be complex and it can be difficult for Data Principals to understand how their data is being used. If Data Principals believe their rights under the DPDP Act have been violated, they should be able to easily file a complaint with the Data Fiduciary or Consent Manager.
Section 14: The right to nominate another individual to exercise one's rights under the DPDP Act is also important, because AI applications can be used to collect and process personal data about individuals who are not able to exercise their own rights, such as children or persons with disabilities. The right to nominate another individual ensures that these individuals' rights are still protected.

In addition to the rights listed above, Data Fiduciaries that use generative AI applications must also take steps to safeguard the privacy of Data Principals. This includes using appropriate security measures to protect personal data, and ensuring that generative AI applications are not used to create content that is harmful or discriminatory. Here are some specific safeguards that Data Fiduciaries can implement to protect the privacy of Data Principals when using generative AI applications:

Implement access controls to restrict who can access personal data.
Use anonymisation techniques to remove personally identifiable information from personal data.
Monitor generative AI applications for bias and discrimination.
Educate Data Principals about their privacy rights.

Conclusion & Emerging Policy Dilemmas

Overall, this Act is not a disappointing piece of legislation from the Union Government. However, it is not a groundbreaking one either, as India's political and technological viewpoints on data and AI regulation are still emerging. The legislation is emblematic of the fact that merely having data protection laws does not ensure the regulatory malleability and proficiency needed to tackle data-related issues in commercial, technology and public law. Regulatory subterfuge in matters of data law can easily happen when laws are not specific and not rooted enough to be mechanical.
The DPDPA is mechanically suitable as a law, and considering India's digital economy-related trade negotiations at the WTO and beyond, the law will serve its stated and general purpose. Of course, the law will be challenged in the Supreme Court, and upcoming bye-laws, regulations and circulars under its provisions will be subject to transformation. However, the competition law and trade law-centric approach towards data regulation and digital connectivity is not shifting to purely civil law and public law issues anytime soon. In the case of artificial intelligence and law, certain legal and policy dilemmas are surely going to emerge.

The Rise of International Algorithmic Law is Inevitable

No matter what is argued about the perspective that over-focus on data-related trade issues leads to deviation from larger data law issues, the proper way to resolve and quantify problem-solving legal and policy prescriptions in data law could come from developing a soft law approach to a newer field of global governance and international law, i.e., International Algorithmic Law. Here is an excerpt on the definition of International Algorithmic Law from my paper on the same:

The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.

It could easily be argued that data law issues must be addressed by default, and there is no doubt that they should be. However, data protection laws lack the consolidated legal understanding needed to tackle and address the economics behind data colonialism and exploitation.
Domestic regulators would also have to develop economic law tools that are principled and rules-based, because regulation by enforcement and endless reliance on trade negotiations alone would never help if a privacy-centric digital economy has to be achieved at the domestic level. Hence, beyond certain general compliances and issues where data protection laws across the world can have impact, developing regulatory tendencies around the anthropomorphic use of artificial intelligence could surely be the best way forward. I would even argue that AI-related 'treaties' could be possible as well. However, those 'treaties' would not be about some comic book or sci-fi movie utopia of political control. They could be about basic ethics issues, data processing issues, or issues related to the optimal explainability of algorithms, their neural networks and their models. Such an instrument could function like a treaty through its purpose-based legal workflows and use cases.

Blending Legal Prescriptions of Data Jurisprudence & Intellectual Property Law

Now, this is a controversial suggestion, but in VLiGTA-TR-002, our report on Generative AI applications, I proposed that in certain intellectual property issues, the protections offered to proprietary information produced by Generative AI applications could be justified by companies to manufacture the consent of data principals at every stage of prompting, by virtue of the technology by design. In such a case, I proposed that in copyright law, invoking data protection law could be helpful. Considering market trends, I would now add that data protection law could also be invoked in the case of trade secrets. Here is an excerpt from the report (page 127):

Regulators would need to develop a better legal recognition regime, where based on the nature of use cases, copyright-related concerns could be addressed or averted.
In this case, we have to consider the role of data protection and privacy laws when it comes to the data subject. However, invoking the data protection rights of Data Principals to address the justification of IP rights claimed by technology companies over proprietary information would have to be done to achieve specific remedies. AI developers and data scientists, for their part, would have to address the bias-variance tradeoff in their AI models, especially large language models. Here is an excerpt from an article in Analytics India Magazine:

Bias and variance are inversely connected and it is practically impossible to have an ML model with a low bias and a low variance. When we modify the ML algorithm to better fit a given data set, it will in turn lead to low bias but will increase the variance. This way, the model will fit with the data set while increasing the chances of inaccurate predictions. The same applies while creating a low variance model with a higher bias. [...] Models like GPT have billions of parameters, enabling them to process vast amounts of data and learn intricate patterns in language. However, these models are not immune to the bias-variance tradeoff. Moreover, it is possible that the larger the model, the chances of showing bias and variance is higher. [...] To tackle underfitting, especially when the training data contains biases or inaccuracies it is important to include as many examples as possible. [...] On the other hand, over-explanation to models to perfectly align with human values can lead to an overfit model that shows mundane and results that represent only one point of view. This often happens because of RLHF, the key ingredient for LLMs like OpenAI’s GPT, which has often been criticised to be too politically correct when it shouldn’t be. To mitigate overfitting, various techniques are employed, such as regularisation, early stopping, and data augmentation.
LLMs with high bias may struggle to comprehend the complexities and subtleties of human language. They may produce generic and contextually incorrect responses that do not align with human expectations. To conclude, the economics of AI explainability, in the case of India's Digital Personal Data Protection Act, can be developed by the market in India. If we achieve an economics that makes AI explainable, accountable and responsible enough to enable sustainable business models, then a lot could be achieved on the front of data protection ethics and standards, enabling a privacy-compliant ecosystem.
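As a closing technical aside, the bias-variance tradeoff quoted above can be made concrete with a small sketch. The data, polynomial degrees and train/validation split below are purely illustrative assumptions of the author: polynomial fits stand in for "models" of different capacity, with a low-degree fit underfitting (high bias) and a high-degree fit overfitting (high variance).

```python
# Illustrative sketch of the bias-variance tradeoff using polynomial fits
# of increasing capacity on noisy synthetic data. All parameters here are
# assumptions for demonstration, not from the DPDPA or the cited report.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a smooth signal (one period of a sine wave) plus noise.
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

# Hold out every fourth point as a validation set.
train = np.ones(x.size, dtype=bool)
train[::4] = False


def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train_mse, val_mse)."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x)
    train_mse = float(np.mean((pred[train] - y[train]) ** 2))
    val_mse = float(np.mean((pred[~train] - y[~train]) ** 2))
    return train_mse, val_mse


underfit = fit_and_score(1)    # high bias: a line cannot follow the curve
balanced = fit_and_score(4)    # roughly matches the underlying signal
overfit = fit_and_score(15)    # high variance: starts chasing the noise

# The underfit model is worse than the balanced one on both sets, while
# the overfit model achieves the lowest TRAINING error -- the pattern
# that regularisation and early stopping are meant to control.
```

The same pattern, scaled up by many orders of magnitude, is what the excerpt describes for LLMs: added capacity keeps lowering training error while generalisation can degrade, which is why techniques such as regularisation, early stopping and data augmentation are used.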
