


  • [New Report] Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010

    We are excited to announce the publication of our 10th Infographic Report since December 2023, and our 22nd Technical Report since 2021, titled "Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010." The report is available for free for a limited time at this link.

This report holds a special significance for us as it reflects our collective commitment to deeply exploring the complex legal challenges posed by emerging technologies like Generative AI. We would like to extend our heartfelt congratulations to Samyak Deshpande, Sanvi Zadoo, and Alisha Garg, whose dedication and meticulous effort have been instrumental in developing and curating this comprehensive report.

In this report, we’ve included a quote from Carissa Véliz on privacy to emphasize the importance of respecting human autonomy in the creative process. This choice captures the spirit in which we approach the intricate relationship between technology and law, particularly when it comes to safeguarding the creative rights and freedoms of individuals.

Why This Report Matters

The publishing industry is no stranger to disruption, but the advent of Generative AI has introduced a new layer of complexity. While many may rush to declare that Generative AI infringes upon copyright and patent laws, such assertions, though valid, often oversimplify the issues at hand. The real challenge lies in addressing these concerns with the specificity and nuance they require. This report represents not just an analysis of intellectual property law issues related to Generative AI in publishing but also a broader exploration of how these technologies can create legal abnormalities that escalate to points of no return. It is the product of our collective patience, thorough research, and a deep understanding of the legal landscape.

The Broader Implications

Generative AI has been both lauded and criticized for its impact on various industries. In publishing, the effects have been particularly pronounced, leading to a range of legal challenges that must be navigated with care. This report seeks to provide a balanced perspective, offering insights into how these technologies can be regulated and managed without stifling innovation or creativity. As Generative AI continues to evolve, so too must our approach to the legal frameworks that govern it. This report is a step in that direction, aiming to provide both clarity and guidance for those involved in the publishing industry and beyond.

You can access the report for free for a limited time at https://indopacific.app/product/impact-based-legal-problems-around-generative-ai-in-publishing-iplr-ig-010/

Final Thoughts

In a time when AI is making headlines, such as the recent mention of Anil Kapoor in TIME magazine for his connection to AI, our report offers a timely and relevant exploration of the real-world implications of these technologies. We hope it will serve as a valuable resource for those interested in understanding the complexities and challenges of the Generative AI ecosystem. We invite you to read this report and engage with the critical issues it raises. Happy reading!

Thanks for reading this insight. Since May 2024, we have launched specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • [New Report] Risk & Responsibility Issues Around AI-Driven Predictive Maintenance, IPLR-IG-009

    Greetings. I hope this update finds you well. Today, I want to share with you a topic that has captured my imagination for quite some time: the intriguing confluence of space law and artificial intelligence policy. As someone who has always been fascinated by astronomy and the laws of nature, I find this area of study as captivating as it is important.

For the past few months, I have been working on a comprehensive report titled "Risk & Responsibility Issues Around AI-Driven Predictive Maintenance." This report, the 9th Infographic Report by Indic Pacific Legal Research LLP, delves into the complex legal landscape surrounding the use of AI in predictive maintenance for spacecraft. I am excited to announce that the report, IPLR-IG-009, is now available on the IndoPacific App at https://indopacific.app/product/navigating-risk-and-responsibility-in-ai-driven-predictive-maintenance-for-spacecraft-iplr-ig-009-first-edition-2024/

In this report, I have not only explored the legal implications of AI-driven predictive maintenance but also showcased some fascinating case studies that demonstrate the potential of this technology in the space sector. These case studies include:

1️⃣ SPAICE Platform
2️⃣ NASA's Prognostics and Health Management (PHM) Project
3️⃣ ESA's Φ-sat-1 (Phi-sat-1) Project

Each of these projects highlights the innovative ways in which AI is being leveraged to enhance the reliability, efficiency, and safety of spacecraft operations. By examining these real-world examples, we can gain valuable insights into the challenges and opportunities that lie ahead as we continue to push the boundaries of space exploration and AI technology.

Thanks for reading this insight. Since May 2024, we have launched specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Deciphering Australia’s Safe and Responsible AI Proposal

    This insight by Visual Legal Analytica is a response/public submission to the Australian Government's recent Proposals Paper for introducing mandatory guardrails for AI in high-risk settings, published in September 2024. This insight, authored by Mr Abhivardhan, our Founder, is submitted on behalf of Indic Pacific Legal Research LLP.

Key Definitions Used in the Paper

The key definitions provided in the Australian Government's September 2024 High Risk AI regulation proposal reflect a comprehensive and nuanced approach to AI governance:

Broad Scope and Lifecycle Perspective
The definitions cover a wide range of AI systems and models, from narrow AI focused on specific tasks to general-purpose AI (GPAI) capable of being adapted for various purposes. They also consider the entire AI lifecycle, from inception and design to deployment, use, and eventual decommissioning. This broad scope ensures the regulations can be applied to diverse AI applications now and in the future.

Differentiation between AI Models and Systems
The proposal makes a clear distinction between AI models, which are the underlying mathematical engines, and AI systems, which are the ensembles of components designed for practical use. This allows for more targeted governance, recognizing that raw models and end-user applications may require different regulatory approaches.

Emphasis on Supply Chain and Roles
The definitions highlight the complex network of actors involved in the AI supply chain, including developers who create the models and systems, deployers who integrate them into products and services, and end users who interact with or are impacted by the deployed AI. By delineating these roles, the regulations can assign appropriate responsibilities and accountability at each stage.

Recognition of Autonomy and Adaptiveness
The varying levels of autonomy and adaptiveness of AI systems after deployment are acknowledged. This reflects an understanding that more autonomous and self-learning AI may pose different risks and require additional oversight compared to static, narrow AI.

Inclusion of Generative AI
Generative AI models, which can create new content similar to their training data, are specifically defined. Given the rapid advancement and potential impacts of generative AI, such as in generating realistic text, images, and other media, this inclusion demonstrates a forward-looking approach to AI governance.

Overall, the key definitions show that Australia is taking a nuanced, lifecycle-based approach to regulating AI, with a focus on the supply chain, different levels of AI sophistication, and emerging areas like generative AI.

Designated AI Risks in the Proposal

The Australian Government has correctly identified several key risks that AI systems can amplify or create in the "AI amplifies and creates new risks" section of the report:

Amplification of Existing Risks like Bias
The report highlights that AI systems can embed human biases and create new algorithmic biases, leading to systemic impacts on groups based on protected attributes like race or gender. This bias may arise from inaccurate, insufficient, unrepresentative or outdated training data, or from the design and deployment of the AI system itself. Identifying the risk of AI amplifying bias is important, as there are already real-world examples of this occurring, such as in AI resume screening software and facial recognition systems.
New Harms at Multiple Levels
The government astutely recognises that AI misuse and failure can cause harm to individuals (e.g. injury, privacy breaches, exclusion), groups (e.g. discrimination), organisations (e.g. reputational damage, cyber attacks), and society at large (e.g. growing inequality, mis/disinformation, erosion of social cohesion). This multilevel perspective acknowledges the wide-ranging negative impacts AI risks can have.

National Security Threats
The report identifies how malicious actors can leverage AI to threaten Australia's national security through accelerated information manipulation, AI-enabled disinformation campaigns to erode public trust, and lowering barriers for unsophisticated actors to engage in malicious cyber activity. Given the growing use of AI for influence operations and cyberattacks, calling out these risks is prudent and proactive.

Some Concrete Examples of Realised Harms
Importantly, the government provides specific instances where the risks of AI have already resulted in real-world harms, such as AI screening tools discriminating against certain ethnicities and genders in hiring. These concrete examples demonstrate that the identified risks are not just theoretical but are actively materialising.

Response to Questions for Consultation on High-Risk AI Classification

The Australian Government has outlined the following Questions for Consultation on its proposed approach to classifying high-risk artificial intelligence systems. In this insight, we intend to offer a response to some of these questions based on how the Australian Government has designated its regulatory & governance priorities:

- Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove? Please identify any: low-risk use cases that are unintentionally captured; categories of uses that should be treated separately, such as uses for defence or national security purposes.
- Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?
- Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?
- If you prefer a list-based approach (similar to the EU and Canada), what use cases should we include? How can this list capture emerging uses of AI?
- If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?
- Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)? If so, how should we define these?
- Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
- Should mandatory guardrails apply to all GPAI models?
- What are suitable indicators for defining GPAI models as high-risk? For example, is it enough to define GPAI as high-risk against the principles, or should it be based on technical capability such as FLOPS (e.g. 10^25 or 10^26 threshold), advice from a scientific panel, government or other indicators?
Response to Questions 1 & 3

The paper proposes the following definition of General Purpose Artificial Intelligence: "An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems."

Our feedback at Indic Pacific Legal Research LLP is that the definition can be read to cover "potential risks". However, the definition should not be used as a default legal approach, for a few reasons. When we attempt to designate an intended purpose for any AI system, it also helps governments reflect on whether their proposed use cases are substandard or advanced. Whether an AI system is substandard, or advanced as GPAI, depends entirely on how its intended purpose is established with relevant technical parameters. While the definition can be kept as it is, we recommend that the Government reconsider using the definition to bypass the requirement of determining the intended purpose of a GPAI when attributing risk recognition to general-purpose AI systems. Since the consultation question asks whether low-risk use cases may be unintentionally captured, it would be reasonable to determine the intended purpose of a GPAI before attributing high-risk recognition to it. This helps the regulator avoid unintentionally capturing low-risk use cases.

The EU/Canada approaches highlighted in the tables on the containerised recognition of artificial intelligence use cases may be adopted. However, we recommend guidance-based measures to keep up with the international regulatory landscape, instead of adopting the EU/Canada approach. The larger rationale of our feedback is that listing use cases can be achieved by creating a repository of recognised AI use cases. However, the substandard or advanced nature of these use cases, and their outcomes and impacts, require effective documentation over time. That may create some transparency; otherwise, low-risk use cases might be captured in hindsight.

Response to Question 5

The principles need guidance points for further expansion, since the examples provided for specific principles do not give a sufficiently clear picture of the scope of the principles. However, recognising the severity and extent of an AI risk is a substantial way to define the threshold for the purposive application of these principles, provided it is done in a transparent fashion.

Response to Question 7

The FLOPS criterion for designating GPAI as high-risk is deeply counterproductive, and it would be necessary to involve human expertise and technical indicators to designate any GPAI as high-risk. For example, we have proposed aiact.in, India's first artificial intelligence regulation drafted as a private effort, to the Government of India in order to gather public feedback.
You may look at the following parts of aiact.in Version 3 for reference:

Section 5 – Technical Methods of Classification

(1) These methods as designated in clause (b) of sub-section (1) of Section 3 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations such as –
(i) General Purpose Artificial Intelligence Applications with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);
(ii) General Purpose Artificial Intelligence Applications with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);
(iii) Specific-Purpose Artificial Intelligence Applications with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);

(2) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:
(i) Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.
(ii) Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.
(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.
(iv) Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.

Section 7 – Risk-centric Methods of Classification

(4) High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:
(i) Widespread utilization or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;
(ii) Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;
(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;
(iv) High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;
(v) Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.
(vi) The high-risk designation shall apply irrespective of the AI system's scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.

Illustration
An AI system used to control critical infrastructure like a power grid. Regardless of the system's specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.

The approaches outlined in Section 5 and Section 7(4) of the proposed Indian AI regulation (aiact.in Version 3) provide a helpful framework for classifying and regulating AI systems based on their technical characteristics and risk profile.
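To make the risk-centric method in Section 7(4) concrete, the minimal TypeScript sketch below encodes its factors as a simple checklist and contrasts them with a bare compute threshold of the kind raised in Question 7. The field names, the any-factor-present decision rule, and the FLOPS helper are illustrative assumptions made for this sketch, not part of the draft regulation.

```typescript
// Minimal, hypothetical sketch of the risk-centric method in Section 7(4) of
// aiact.in Version 3. Field names and the "any factor present" decision rule
// are illustrative assumptions, not language from the draft text.
interface RiskProfile {
  criticalSectorDeployment: boolean; // 7(4)(i): widespread use across critical sectors or large user groups
  severeHarmPotential: boolean;      // 7(4)(ii): potential for severe harm, discrimination or societal impact
  noMeaningfulOptOut: boolean;       // 7(4)(iii): end users cannot opt out of or control outcomes
  vulnerableUsers: boolean;          // 7(4)(iv): information asymmetry, power imbalance, lack of agency
  irreversibleOutcomes: boolean;     // 7(4)(v): outcomes impractical or impossible to remediate
}

// Per 7(4)(vi), the designation applies irrespective of scale, purpose or
// architecture when the risk factors are present.
function isHighRisk(profile: RiskProfile): boolean {
  return Object.values(profile).some(Boolean);
}

// Contrast: a bare compute threshold (e.g. 10^25 FLOPS) says nothing about
// outcomes or impact, which is the concern raised in the response to Question 7.
function exceedsFlopsThreshold(trainingFlops: number, threshold = 1e25): boolean {
  return trainingFlops >= threshold;
}

// Illustration from the draft: a power-grid controller is high-risk regardless
// of its scale or training compute.
const powerGridController: RiskProfile = {
  criticalSectorDeployment: true,
  severeHarmPotential: true,
  noMeaningfulOptOut: true,
  vulnerableUsers: true,
  irreversibleOutcomes: true,
};

console.log(isHighRisk(powerGridController));   // true
console.log(exceedsFlopsThreshold(1e23));       // false: compute alone misses the risk
```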
Here's why these approaches may be beneficial:

Section 5 – Technical Methods of Classification:
- Categorizing AI systems as GPAIS (General Purpose AI with stable use cases), GPAIU (General Purpose AI with unclear use cases), and SPAI (Specific Purpose AI) allows for tailored regulatory approaches based on the system's inherent capabilities and intended applications.
- Evaluating factors like scale, inherent purpose, technical features, and limitations helps assess an AI system's potential impact and reach, informing appropriate oversight measures.
- Aligning classification with relevant industrial standards promotes consistency and interoperability across sectors.
- Distinguishing between stable and unclear use cases recognizes the evolving nature of AI and the need for adaptable regulatory frameworks.

Section 7(4) – Risk-centric Methods of Classification:
- Focusing on outcome and impact-based risks ensures that the most potentially harmful AI systems are subject to stringent oversight, regardless of their technical characteristics.
- Considering factors like widespread deployment, potential for severe harm, lack of user control, and vulnerability of affected individuals helps identify high-risk applications that warrant additional safeguards.
- Recognizing the difficulty of reversing or remediating adverse outcomes emphasizes the need for proactive risk mitigation measures.
- Applying the high-risk designation irrespective of scale, purpose, or technical limitations acknowledges that even narrow or limited AI systems can pose significant risks in certain contexts.
- The illustrative example of an AI system controlling critical infrastructure highlights the importance of a risk-based approach that prioritizes societal consequences over technical specifications.

However, it's important to note that implementing such a framework would require significant technical expertise, ongoing monitoring, and coordination among regulators and stakeholders. Clear guidelines and standards would need to be developed to ensure consistent application and enforcement.

We have responded to some of the questions from Consultation Questions 8 to 12 as well:

- Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings? Are there any guardrails that we should add or remove?
- How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve ICIP?
- Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately? For example, are the requirements assigned to developers and deployers appropriate?
- Are the proposed mandatory guardrails sufficient to address the risks of GPAI? How could we adapt the guardrails for different GPAI models, for example low-risk and high-risk GPAI models?
- Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Response to Questions 8 & 11

We agree with the rationale proposed behind the mandatory guardrails, and with all 10 required guardrails, from a larger understanding of incident response protocols and of attributing accountability in AI management.

Feedback on Guardrail 1

Wherever it is discerned that GPAIs are being utilised for public-private partnerships, we recommend that the requirements under Guardrail 1 must include mandatory disclosure and publication of the accountability frameworks.
However, this cannot apply to low-risk AI use cases; hence, if intended purpose remains an indicator of evaluation, the obligations under Guardrail 1 must be applied with a sense of proportionality.

Feedback on Guardrails 2, 3, 8 & 9

We agree with the approach in these Guardrails to obligate AI companies to ensure that "where elimination is not possible, organisations must implement strategies to contain or mitigate any residual risks". It is noteworthy that the paper recognises that any type of risk mitigation will be based on the organisation's role in the AI supply chain or AI lifecycle and the circumstances. That makes the implementation of these guardrails effective and focused. Stakeholder participation is also necessary, as outlined in the AI Seoul Summit 2024's International Scientific Interim Report.

Feedback on Guardrail 8

We agree with the approach in this Guardrail, which focuses on transparency between developers and deployers of high-risk AI systems, for several reasons:

Enabling effective risk management: By requiring developers to provide deployers with critical information about the AI system, including its characteristics, training data, key design decisions, capabilities, limitations, and risks, deployers can better understand how the system works and identify potential issues. This transparency allows deployers to proactively respond to risks as they emerge during the deployment and use of the AI system.

Promoting responsible deployment: Deployers need to have a clear understanding of how to properly use the high-risk AI system to ensure it is deployed in a safe and responsible manner. By mandating that developers provide guidance on interpreting the system's outputs, deployers can make informed decisions and avoid misuse or misinterpretation of the AI system's results.

Addressing information asymmetry: There is often a significant knowledge gap between developers, who have an intimate understanding of the AI system's inner workings, and deployers, who may lack technical expertise. This guardrail helps bridge that gap by ensuring that deployers have access to the necessary information to effectively manage the AI system and mitigate risks.

Balancing transparency and intellectual property: The guardrail acknowledges the need to protect commercially sensitive information and trade secrets, which is important to maintain developers' competitive advantage and incentivize innovation. By striking a balance between transparency and protection of proprietary information, the guardrail ensures that deployers receive the information they need to manage risks without compromising developers' intellectual property rights.

Response to Questions 10 & 12

The proposed mandatory guardrails in the Australian Government's paper do a good job of distributing responsibility across the AI supply chain and AI lifecycle between developers and deployers. However, the current approach could be improved in a few ways:

- More guidance may be needed on how responsibilities are divided when an AI system involves multiple developers and deployers. The complexity of modern AI supply chains can make accountability challenging.
- Some guardrails, like enabling human oversight and informing end-users, may require different actions from developers vs. deployers. The requirements could be further tailored to each role.
- Feedback from developers and deployers should be proactively sought to identify any misalignment between the assigned responsibilities and their actual capabilities to manage risks at different lifecycle stages.

To reduce the regulatory burden on small-to-medium enterprises (SMEs), we suggest:

- Providing templates, checklists, and examples to help SMEs efficiently implement the guardrails. Ready-made accountability process outlines, risk assessment frameworks, and testing guidelines would be valuable.
- Offering tiered requirements based on SME size and AI system risk level. Lower-risk systems or smaller businesses could have simplified record-keeping and less frequent conformity assessments.
- Establishing a central AI authority (perhaps on an interlocutory or nodal basis) to provide guidance, tools, and oversight. This one-stop shop would reduce the burden of dealing with multiple regulators.
- Facilitating access to shared testing facilities, data governance tools, and expert advisors. Pooled resources and support would help SMEs meet the guardrails cost-effectively.
- Phasing in guardrail requirements gradually for SMEs. An extended timeline with clear milestones would ease the transition.
- Providing financial support, such as tax incentives or grants, to help SMEs invest in AI governance capabilities. Subsidised training would also accelerate adoption.

Finally, we have responded to some of the Consultation Questions 13-16, provided below, as well:

- Which legislative option do you feel will best address the use of AI in high-risk settings? What opportunities should the government take into account in considering each approach?
- Are there any additional limitations of options outlined in this section which the Australian Government should consider?
- Which regulatory option/s will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?
- Where do you see the greatest risks of gaps or inconsistencies with Australia's existing laws for the development and deployment of AI? Which regulatory option best addresses this, and why?

Response to Question 13

We propose that Option 3 could be a reasonable option and would best address the use of AI in high-risk settings in Australia. Here are the key reasons:

Comprehensive coverage: A dedicated AI Act would provide consistent definitions of high-risk AI and mandatory guardrails across the entire economy. This avoids the gaps and inconsistencies that could arise from a sector-by-sector approach under Option 1. The AI Act could also extend obligations to upstream AI developers, not just deployers, enabling a more proactive and preventative regulatory approach.

Regulatory efficiency: Developing a single AI-specific Act is more efficient than individually amending the multitude of existing laws that touch on AI under Option 1 or Option 2's framework legislation. It allows for a cohesive, whole-of-economy approach.

Dedicated enforcement: An AI Act could establish an independent AI regulator to oversee compliance with the guardrails. This dedicated expertise and enforcement would be more effective than relying on existing sector-specific regulators who may lack AI-specific capabilities.

However, the government should consider the following when designing an AI Act:

Interaction with existing laws: The AI Act should include carve-outs where sector-specific laws already impose equivalent guardrails, to avoid duplication. Close coordination between the AI regulator and existing regulators will be essential.
Compliance burden: The AI Act should incorporate mechanisms to reduce the regulatory burden on SMEs, such as tiered requirements based on risk levels. Practical guidance and shared resources would help SMEs build governance capabilities.

Responsive regulation: The AI Act should be adaptable to the rapid evolution of AI. Regular reviews, expert input, and agile rulemaking will ensure it remains fit-for-purpose.

  • New Report: The Legal and Ethical Implications of Monosemanticity in LLMs, IPLR-IG-008

    🚀 Excited to announce an ambitious wave of reports we plan to release this September 2024! Starting off with our latest infographic report: "The Legal and Ethical Implications of Monosemanticity in LLMs, IPLR-IG-008" by Indic Pacific Legal Research LLP, authored by Abhivardhan, our Founder. It was a pleasure collaborating on this report with Samyak Deshpande, Sanvi Zadoo, and Alisha Garg, former interns at the Indian Society of Artificial Intelligence and Law.

📄 Access the report here: https://indopacific.app/product/monosemanticity-llms-iplr-ig-008/

This report draws inspiration from two significant developments in the global AI landscape:

1️⃣ Anthropic's groundbreaking paper, "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3" (mid-2024), which delves into the complexities of large language models (LLMs) and the extraction of monosemantic neurons.

2️⃣ The International Scientific Report on AI Safety (interim report), released by the UK Government and other stakeholders at the AI Seoul Summit in May 2024, building on the discussions from the 2023 Bletchley Summit.

Our report provides a comprehensive analysis of these developments, exploring Anthropic's work on monosemanticity through technical, economic, and legal-ethical lenses. It also delves into the evolution of neurosymbolic AI, offering pre-regulatory ethical considerations for this emerging technology.

🧠 Notably, Anthropic's paper doesn't overstate its findings on AI risk mapping, allowing us to present our recommendations with a nuanced perspective.

Thanks for reading this insight. Since May 2024, we have launched specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • The John Doe v. GitHub Case, Explained

    This case analysis is co-authored by Sanvi Zadoo and Alisha Garg, along with Samyak Deshpande. The authors recently interned at the Indian Society of Artificial Intelligence and Law.

In a world where artificial intelligence is redefining the way developers write code, Copilot, an AI-powered coding program developed by GitHub in collaboration with OpenAI, was launched in 2021. Copilot promised to revolutionize software development by generating code functions based on the developer's input. However, this 'revolution' soon found itself in the midst of a legal storm.

The now famous GitHub-Copilot case revolves around allegations that the AI-powered coding assistant uses copyrighted code from open-source repositories without proper credit. The lawsuit was initiated by programmer and attorney Matthew Butterick and joined by other developers. They claimed that Copilot's suggestions include exact code from public repositories without adhering to the licenses under which the code was published. Despite efforts by Microsoft, GitHub, and OpenAI to dismiss the lawsuit, the court allowed the case to proceed.

Timeline of the case

- June 2021: GitHub Copilot is publicly launched in a technical preview.
- November 2022: Plaintiffs file a lawsuit against GitHub and OpenAI, alleging DMCA violations and breach of contract.
- December 2022: The court dismisses several of the Plaintiffs' claims, including unjust enrichment, negligence, and unfair competition, with prejudice.
- March 2023: GitHub introduces new features for Copilot, including improved security measures and an AI-based vulnerability prevention system.
- June 2023: The court dismisses the DMCA claim with prejudice.
- July 2024: The California court affirms the dismissal of nearly all the claims.

Overall, the lawsuit includes breach of contract claims based on the terms of open-source licenses, arguing that Copilot's use of their code violates these licenses. Let us further explore this case in detail and what it means for the future of such Artificial Intelligence programs.

The Technical Features of Copilot, Recent Iterations, and Competing Entities

Technical Features of GitHub Copilot

GitHub Copilot is an AI-powered code completion tool, developed in collaboration with OpenAI, that seamlessly integrates into development environments to assist developers with code suggestions, completions, and snippets. The technical features of GitHub Copilot include:

OpenAI Codex Model: Copilot is fueled by OpenAI's Codex, which is a descendant of the GPT-3 model and has been specifically trained on a vast amount of publicly available code from GitHub and other sources. This advanced model comprehends the context of the code being written, including the function or class a developer is working on, comments, and preceding code, allowing it to provide pertinent and contextually appropriate code suggestions.

Diverse Language Support: Copilot is compatible with a wide array of programming languages, encompassing, but not restricted to, Python, JavaScript, TypeScript, Ruby, Go, Java, C#, PHP, and more. This broad compatibility caters to developers utilizing different languages and frameworks. For select languages, Copilot provides language-specific features, such as type hints in TypeScript or docstring generation in Python.

Integration with Visual Studio Code: Copilot seamlessly integrates with Visual Studio Code, a widely used code editor.
This integration enables developers to receive real-time code suggestions and completions as they code. Achieved through an extension that can be effortlessly installed and configured within VS Code, this integration ensures accessibility to a diverse developer community.

Real-time Feedback: Copilot offers real-time code suggestions, diminishing the need for developers to scour code snippets or documentation. This instantaneous feedback accelerates the development process and enhances productivity. Suggestions are presented inline within the code editor, allowing developers to seamlessly review and accept or modify suggestions without disrupting their workflow.

Adaptive Learning: Copilot assimilates feedback from developers to enhance its future suggestions. Whether a developer accepts, rejects, or modifies a suggestion, this information contributes to refining the model's future suggestions. Over time, Copilot adjusts to a developer's coding style and preferences, delivering personalized and precise suggestions.

Function and Class Completion: Copilot can generate entire functions and classes based on concise developer descriptions or comments, significantly expediting the development process, particularly for boilerplate code. It also provides examples demonstrating the use of specific functions or libraries and generates documentation strings for functions and classes.

Duplication Detection: Copilot includes a feature to detect and refrain from suggesting code that exactly matches public code, addressing concerns regarding code plagiarism and copyright infringement. Diligent efforts are made to ensure that the generated code complies with open-source licenses and does not violate copyright laws, employing filtering and other mechanisms to prevent code misuse.

Understanding Code Semantics: Copilot surpasses basic keyword matching by comprehending the semantics of the code. This capability allows it to suggest appropriate variable names, function calls, and entire code blocks relevant to the current context. Copilot effectively handles complex coding scenarios, offering pertinent suggestions in contexts such as nested functions, asynchronous code, and multi-threaded applications.

Error Detection: Copilot aids in detecting potential errors or issues within the code and provides suggested fixes. It can recommend code refactoring to enhance readability, performance, or maintainability.

Recent Iterations and Improvements

GitHub Copilot has undergone multiple iterations to enhance its functionality and user experience. These enhancements include:

- Improved Accuracy and Speed: Upgrades to the underlying Codex model have increased its precision and speed, resulting in more efficient and relevant suggestions.
- Context Understanding Improvements: Copilot can now provide more accurate suggestions, even in complex coding scenarios, due to improved context understanding.
- Addition of New Languages: Support for additional programming languages and frameworks has been integrated, expanding its applicability.
- Seamless Integration: Improvements in the integration with VS Code and other IDEs offer a more seamless and intuitive user experience.
- Stronger Compliance Measures: Enhanced mechanisms for detecting and preventing the suggestion of copyrighted or sensitive code have been implemented. Additionally, better management of open-source licenses ensures compliance and reduces legal risks.
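To give a sense of how an assistant of this kind plugs into the editor and filters out exact matches of public code, here is a minimal, hypothetical TypeScript sketch of a VS Code inline completion provider. The registerInlineCompletionItemProvider API is VS Code's real extension point; the completion endpoint, request shape, public-code index, and global fetch runtime are assumptions made for illustration, not GitHub Copilot's actual implementation.

```typescript
// Minimal, hypothetical sketch of an editor-side inline completion provider.
// The backend endpoint and the duplicate filter are illustrative assumptions;
// this is not GitHub Copilot's actual implementation. Assumes a runtime with
// a global fetch (Node 18+ / recent VS Code extension hosts).
import * as vscode from 'vscode';

// Hypothetical backend call: send the code before the cursor, receive a suggestion.
async function fetchSuggestion(prefix: string): Promise<string> {
  const res = await fetch('https://example.com/v1/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prefix }),
  });
  const data = (await res.json()) as { completion?: string };
  return data.completion ?? '';
}

// Hypothetical duplication filter: suppress suggestions that exactly match
// known public code, mirroring the duplication-detection feature described above.
function matchesPublicCode(suggestion: string, publicIndex: Set<string>): boolean {
  return publicIndex.has(suggestion.trim());
}

export function activate(context: vscode.ExtensionContext) {
  const publicIndex = new Set<string>(); // would be populated from an index of public code

  const provider: vscode.InlineCompletionItemProvider = {
    async provideInlineCompletionItems(document, position) {
      // Context sent to the model: everything in the file up to the cursor.
      const prefix = document.getText(
        new vscode.Range(new vscode.Position(0, 0), position),
      );
      const suggestion = await fetchSuggestion(prefix);
      if (!suggestion || matchesPublicCode(suggestion, publicIndex)) {
        return [];
      }
      // The suggestion is shown inline; the developer can accept, reject or edit it.
      return [new vscode.InlineCompletionItem(suggestion)];
    },
  };

  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider({ pattern: '**' }, provider),
  );
}
```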
Competing Entities

Other companies and tools in the industry provide comparable AI-powered code assistance features, posing competition to GitHub Copilot:

Tabnine
Tabnine employs AI to deliver code completions and suggestions across various IDEs and programming languages. It offers both cloud-based and local models, allowing developers to maintain the privacy of their code. Tabnine can be trained on a team's codebase to offer bespoke and pertinent suggestions tailored to the specific project.

Kite
Kite provides AI-powered code completions for Python and JavaScript, seamlessly integrating with popular editors such as VS Code, PyCharm, and Sublime Text. It furnishes in-line documentation and code examples to facilitate developers' comprehension of using specific functions or libraries. Kite utilizes machine learning models to deliver precise and context-aware code completions.

IntelliCode by Microsoft
IntelliCode delivers AI-assisted code recommendations based on patterns found in open-source projects and a developer's codebase. Integrated into Visual Studio and Visual Studio Code, it supports a wide array of programming languages. It tailors recommendations to specific teams by learning from the code patterns within a team's codebase.

Codota
Codota focuses on providing code completions and suggestions for Java and Kotlin, with recent expansion into other programming languages. It provides both cloud-based and on-premises solutions to accommodate varying privacy needs. Codota learns from the developer's codebase to deliver more accurate and relevant suggestions.

Examining the facts, arguments and verdict of the case

Facts

In the given case of J. Doe vs. GitHub, the plaintiffs, who are developers, brought a legal action against GitHub and its parent company, Microsoft. The primary issue of the case arises around GitHub's Copilot tool, an AI-based code completion tool designed to assist developers by generating code snippets. The plaintiffs alleged that Copilot was trained on publicly available code, including code they had authored, which was protected under open-source licenses. They claimed that Copilot's generated code snippets were identical or substantially similar to their original work, which, according to them, amounted to copyright infringement. Furthermore, they argued that Copilot violated the terms of the open-source licenses by failing to give any statement of attribution to the original authors of the code and by not adhering to the license conditions.

As stated above, there are primarily two concerns raised by the plaintiffs: first, that there is copyright infringement by the defendants, and second, that they have violated the contract. This takes us to the issues posed by the case.

Issues

1. Whether Copilot's generation of code snippets constituted copyright infringement?
2. Whether the plaintiffs had a valid claim for breach of contract due to the alleged violation of open-source licenses?
3. Whether they were entitled to restitution for unjust enrichment?
4. Whether their request for punitive damages was justified?

Arguments

Plaintiffs

The plaintiffs argued that Copilot's operation resulted in the unauthorized use and reproduction of their code, which infringed on their copyrights. They also contended that by not attributing the generated code to the original authors and by not complying with the open-source licenses, GitHub and Microsoft had breached contractual obligations.
The plaintiffs sought restitution for the unjust enrichment of the defendants, who they claimed had profited from Copilot. Additionally, they sought punitive damages, arguing that the defendants' actions warranted such penalties.

Defendants

In response, GitHub and Microsoft countered that Copilot did not produce identical copies of the plaintiffs' code, and any similarities were incidental due to the nature of the tool. They argued that open-source licenses do not create enforceable contract claims in this context. Furthermore, the defendants asserted that the plaintiffs had not sufficiently stated a claim for restitution for unjust enrichment under California law. Regarding the punitive damages, the defendants argued that such damages are not typically recoverable in breach of contract cases.

Decision of the Court

Upon review, the court decided to dismiss the plaintiffs' claim under Section 1202(b) with prejudice, meaning that the claim could not be refiled. This section of the claim pertained to allegations related to the removal or alteration of copyright management information. The court found that the plaintiffs had not provided sufficient grounds to support this claim. However, the court allowed the plaintiffs' breach of contract claim to proceed, specifically regarding the alleged violations of open-source licenses. This meant that the plaintiffs could continue to pursue their argument that the defendants had breached the terms of these licenses.

AI and Skill Security: The Impact of the Judgment on the Developer Community

Source of training data

One of the significant challenges in suing companies that offer generative AI tools lies in identifying the sources of the training data. In the present case, all training data used by Copilot was hosted on GitHub and subject to GitHub's open-source licenses. This made it relatively straightforward for the plaintiffs to pinpoint the specific license terms they believed had been violated. However, for more complex AI models like ChatGPT, which do not disclose their training data sources, proving similar violations could be considerably more difficult. The opaque nature of these models' data origins presents a substantial hurdle for plaintiffs attempting to assert their rights. In the context of evolving AI legislation, potential European laws may soon require companies to disclose the data used to train their AI models. Such regulations could significantly aid in identifying and proving violations related to AI-generated content.

Prompt Engineering Considerations

Additionally, the court's decision on the plaintiffs' standing to seek monetary damages is particularly noteworthy. The court ruled that plaintiffs could seek damages even if they themselves entered the prompts that led to the generation of the allegedly infringing content. This ruling could have far-reaching implications, especially given similar approaches in other ongoing cases involving generative AI. For instance, in Authors Guild v. OpenAI Inc., plaintiffs claimed that by entering prompts, they generated outlines of sequels or derivatives of their original works, which they argued constituted copyright infringement. Similarly, in The New York Times Company v. Microsoft Corporation, the plaintiff entered prompts to recreate previously published articles, claiming this also amounted to infringement. In both cases, the plaintiffs themselves provided the input prompts that led to the generation of the contested content.
The court's decision in J. Doe vs. GitHub aligns with the plaintiffs' approach in these other cases by affirming that the act of entering prompts does not disqualify them from seeking damages. This ruling emphasizes that plaintiffs can argue their content was used inappropriately by the AI, regardless of their role in generating the specific outputs. Moreover, the court in J. Doe vs. GitHub held that plaintiffs do not need to demonstrate, for standing purposes, that other users of the AI model would seek to reproduce their content. This aspect of the ruling is significant because it lowers the bar for establishing standing in copyright infringement cases involving generative AI. Plaintiffs no longer need to prove that their content is commonly used or sought after by other users of the AI tool. This could be a crucial argument in cases where the original content is not widely recognized or utilized but was still allegedly infringed upon by the AI.

The Impact of Development, Maintenance, and Knowledge Management on the Coding Market and Competition

Having understood the above points, it is also pertinent to note that the outcome of this case could reshape the competitive landscape for AI coding assistants. Despite the dismissal, the case has far-reaching implications for how the AI community navigates intellectual property and copyright issues. If GitHub Copilot and similar tools were to be modified to provide more attribution as demanded, it might increase the complexity and cost of developing these tools. This could slow down the adoption of AI in coding, as companies might need to invest more in ensuring compliance with open-source licenses. However, following the dismissal, the current model for AI tools like Copilot has been reinforced, allowing them to continue suggesting code without significant changes to their attribution practices. This legal backing can boost the confidence of AI developers and users, leading to continued innovation and integration of AI in coding. Companies might feel more secure investing in AI tools, knowing that the legal risks associated with copyright infringement are currently manageable under existing laws and this precedent.

Similarly, for maintenance, AI tools can continue to play a significant role in optimizing code and suggesting improvements without the immediate need for new attribution systems. This ensures that maintenance processes remain efficient and cost-effective. Plus, the ruling suggests that AI-generated code does not necessarily infringe on copyright if it does not explicitly replicate large chunks of protected material. This can encourage much broader use of AI tools in knowledge management, enabling organisations to capture and disseminate coding practices and solutions more freely. As the market adjusts to these legal and ethical considerations, we predict that the companies that effectively navigate these challenges may gain a competitive edge.

US Judgment's Implications for the Indian IT Community

Importance of Legal Compliance

The ruling emphasizes the imperative requirement for AI tools and software development practices to strictly comply with copyright laws. It is crucial for Indian IT companies and developers to ensure that their utilization of AI tools, such as GitHub Copilot, complies with these laws to avoid potential legal consequences. Adhering to open-source licenses is of paramount importance.
Indian developers must be vigilant in guaranteeing that AI-generated code does not breach these licenses, which may contain stipulations such as attribution requirements and constraints on commercial usage. Being well-versed in local and international copyright laws and regulations is essential for Indian IT companies to navigate the complexities of legal compliance in a globalized industry. Implementing proactive legal strategies to supervise the use of AI tools within development processes can forestall potential violations and mitigate risks.

Ethical AI Development

Indian IT companies should prioritize the creation of AI systems that operate transparently and are accountable for their outputs. Granting users more control over AI-generated content can foster trust and diminish legal risks. Indian IT firms should formulate and embrace ethical guidelines for AI usage addressing matters such as data privacy, bias in AI models, and the responsible utilization of AI-generated content. Engaging with diverse stakeholders can help ensure the responsible development and usage of AI tools.

Focus on Skill Development

Given the rapid evolution of AI technologies, Indian developers should stay abreast of the latest advancements and invest in training programs covering topics such as copyright laws, open-source licenses, and ethical AI practices to comprehend the legal and ethical implications of utilizing AI tools. As AI takes on routine tasks, developers can concentrate on the more creative and innovative facets of their work. Encouraging developers to acquire new languages, frameworks, and tools can broaden their expertise and adaptability.

Protection of Intellectual Property

It is imperative for Indian developers and companies to be vigilant about their intellectual property rights and seek appropriate redress when their rights are infringed upon by AI tools or other entities. The ruling underscores the necessity for developers whose rights are infringed upon by AI tools to seek compensation. This reinforces the economic worth of intellectual property and the need to safeguard it.

Global Standards and Competitiveness

Aligning with global legal and ethical standards can enhance the competitiveness and reputation of Indian IT companies in the international market. Upholding legal and ethical compliance can facilitate smoother international collaborations and create new opportunities for partnerships and projects.

This case holds substantial implications for the economic rights of the Indian IT community, emphasizing the paramount importance of safeguarding intellectual property (IP), fostering economic opportunities through responsible AI utilization, and ensuring adherence to global standards to protect economic interests.

Safeguarding Intellectual Property

The judgment underscores the necessity for Indian developers and companies to vigilantly protect their intellectual property rights. It is imperative for them to comprehensively understand the legal frameworks safeguarding their code and creations, ensuring that these rights remain inviolate in the presence of AI tools such as GitHub Copilot. Developers and companies are urged to proactively safeguard their IP through the utilization of licensing agreements, patents, and trademarks to formally protect their software and code. Furthermore, regular auditing of their code in AI-generated outputs is recommended to detect and address potential infringements effectively.
The judgment implies that developers whose rights are violated by AI tools are entitled to seek legal recourse and compensation. This emphasizes the economic value of intellectual property and the indispensability of protecting it through lawful channels. Indian developers and companies are advised to advocate for more robust legal frameworks that offer comprehensive protection for intellectual property in the context of AI and software development, including advocating for clearer regulations and more effective enforcement mechanisms.

Economic Opportunities

Responsible usage of AI tools such as GitHub Copilot presents Indian IT companies with the opportunity to enhance productivity and foster innovation. AI's capacity to handle routine coding tasks allows developers to focus on more intricate and creative aspects of software development, thereby leading to the creation of higher-quality software products and services. By demonstrating compliance with legal and ethical standards, Indian IT companies can gain a competitive edge in the global market, attracting more clients and partnerships and consequently enhancing their economic prospects. Investing in skill development pertaining to AI and legal compliance can make Indian developers more competitive on a global scale. This includes training in state-of-the-art AI technologies, comprehension of copyright laws, and adherence to best practices in ethical AI development. As AI continues to evolve, new job opportunities will arise in domains such as AI ethics, legal compliance, and advanced software development. Indian developers equipped with these skills can leverage these opportunities to enhance their career prospects and economic potential.

Global Standards and Competitiveness

The judgment incentivizes Indian IT companies to align with global legal and ethical standards. By adopting best practices in AI development and usage, Indian firms can sustain their competitiveness and reputation in the international market. Adherence to these standards can differentiate companies within a crowded marketplace, attracting international clients and partnerships and thereby enhancing economic growth. Understanding and complying with judgments such as the US GitHub Copilot case can smooth international collaborations, enabling Indian IT firms to engage in cross-border projects and partnerships, thereby expanding their global footprint. Ensuring that Indian IT services are perceived as reliable and legally compliant can cultivate trust with international partners, leading to more collaborative projects, joint ventures, and increased economic opportunities.

Economic Redress and Fair Compensation

The judgment reinforces that developers whose intellectual property is used without authorization by AI tools are entitled to seek economic redress. This underscores the significance of fair compensation for intellectual property usage and the economic rights of creators. Indian developers should seek legal support to comprehend their rights and the available mechanisms for seeking compensation in cases of IP infringement. This involves consulting with legal experts and pursuing litigation when necessary. The judgment highlights the economic value of intellectual property and the contributions of individual developers. Recognizing and fairly compensating these contributions is fundamental for fostering innovation and ensuring the sustainability of the IT industry.
Therefore, Indian IT companies should equip their developers with resources and legal assistance to safeguard their intellectual property rights, allowing creators to focus on innovation without apprehensions about potential infringements.

  • Book Review: Disrupt with Impact by Roger Spitz

    This is a review of "Disrupt with Impact", a book recently authored by Roger Spitz, so this will be a relatively short read, as book reviews go. Disclaimer: My review is limited to my examination and analysis of Chapters 9, 10 and 11 of the book. The most important aspect of risk analysis and estimation is the relevance of any data point, inference, or trend within the context of risk estimation. If we are not clear about estimating the systemic or substantive realities surrounding any proposed data point or inference associated with risks, then our analysis will be clouded by judgments based on unclear and scattered points of assessment. The segment on the future of AI, strategic decision-making, and technology encouraged me to take a deeper look at this book and understand the purpose of writing it. Are these chapters similar to the typical chapters on AI markets, ecosystems, and communities found in other books? It does not seem that way, simply because throughout the book, you may observe that the author, in a cautious yet engaging and neutral tone, addresses certain phenomena and realities based on tangible parameters. For example, some of these parameters, the powerful "distinctive features of technology" the author identifies, are ubiquitous in their own way. I found the attribute of technology being combinatorial and fusion-oriented quite interesting because the compounding effects of technology are indeed underrated. This is because these compounding effects are based on generic and special human-technology relationships and how the progressive or limiting role of technology creates human attributes—or perhaps branches of human attributes (or maybe micro-attributes, who knows). Even if some of these attributes are not clearly discernible as trends or use case correlations, it does not discount the role of any class of technology. I also appreciate that, unlike most authors and AI experts who view technology as supposedly neutral, the book asserts a commonsensical point that no technology is neutral. Superstupidity, technology 'substandardism' and superintelligence The reference to the term 'superstupidity' in this book is both ironic and intriguing. The author is clear and vocal, not mincing words when pointing out how substandard AI use cases or preview-level applications may impact humans through their potential for fostering idleness. Here is an excerpt from the book: 'Maybe the existential risk is not machines taking over the world, but rather the opposite, where humans start responding like idle machines—unable to connect the emerging dots of our UN-VICE world.' This excerpt reflects on a crucial element of the human-technology relationship and even anthropology: the evolution of human autonomy. It is praiseworthy that the author unfolds the simple yet profound point that promoting a culture of substandardism (yes, I've coined this word in this book review) could render the human-technology relationship so inactive that humans might be counter-anthropomorphized into 'idle machines.' The narrative raised by the author is deep. It is distinct from the usual argument that using smartphones or devices makes you lazy in a dependency-related sense when transitioning from older classes of technology to newer versions of the same class. Between the 2000s and the 2010s, the pace of technological transition was exceptionally quick.
However, due to technology winter, the pandemic, the transformation of social media into recommendation platforms, and the lack of channeled funding for large-scale enhanced R&D across countries (among other reasons), we are witnessing the realization of Moore's Law and aspects of the Dunning-Kruger effect from a tech and behavioral economy standpoint. The growth of human dependency has slowed across fields of industrial, digital, and emerging technologies, which, in my view, the author highlights in this excerpt. "For instance, believing that AI can be a proxy for our own understanding and decision-making as we delegate more power to algorithms is superstupid. Perhaps AI is also superstupid and may cause mistakes, wrong decisions or misalignment. Further, consider AI ineptitude. What might appear as incompetence may simply be algorithms acting on bad data." This is why I coined the term 'substandardism' for the purposes of this book review. The author brilliantly points out elements of technology substandardism, the disproportionate human-technology relationship, and how AI tools can indeed be superstupid. This reminds me of a recent call to shift the 'paradigm of Generative AI' by moving away from text-to-speech and text-to-visual toward text-to-action, which brings to mind The Entity from Mission: Impossible 7, Bujji from Kalki 2898, and Jarvis/The Vision from the Marvel Cinematic Universe—if I may reference bits of cinema. That being said, the responsible and pragmatic approach of the author in treating 'substandardized' (another new term I coined for this review) artificial intelligence use cases as a vector for potential risks is noteworthy. The author's sincere writing will help anyone in the risk management or technology industry recognize the reality of technology substandardism. The Black Mirror Effect and Anticipatory Governance Although, since COVID, the Black Mirror Effect has been frequently mentioned in journal articles, industry insights, social media posts and other commentary, usually in the most generalised way, I appreciate that the author has dedicated a section of his book to Anticipatory Governance. For example, the reference to Edward Tenner is quite intriguing to me. I think Tenner's book "Why Things Bite Back" directly addresses the concept of unintended consequences of technology. Although the book is described as "dated," it's still considered "insightful." This suggests that Tenner's observations about technology and its unintended effects have stood the test of time and remain applicable to current technological developments, including AI. Tenner's work on unintended consequences provides a bridge between the existentialist philosophy discussed earlier (Sartre's "existence precedes essence") and the practical realities of technological advancement. It helps to ground the philosophical discussion in real-world examples and consequences. The author remains quite deliberate and cautious in differentiating two nearly distinct policy phenomena: unintended drawbacks and perverse consequences. The author illustrates this point using several examples: Air conditioning was developed to cool buildings but ended up contributing to climate change due to increased energy consumption. Industrialized agriculture aimed to provide affordable food on a large scale but led to obesity and environmental damage. Passenger airbags were introduced to save lives in car accidents but initially caused an increase in child fatalities due to automatic deployment.
The Indian government's bounty program to reduce the cobra population backfired, as people started farming cobras for the reward, and when the program was discontinued, the farmed cobras were released, worsening the problem. This brings us to the Collingridge Dilemma and the 'quandary of time.' Since the hype around regulating artificial intelligence has been in vogue across governments for months now, the author hints at the possibility of regulating or controlling AI communities and developers by subjecting them to an intended form of containment. However, containing a community without estimating the potential impact at early stages is a challenging task. The author honestly points this out as an example of how anticipatory governance is on the rise, which is commendable. Here's an excerpt: To anticipate, we must distinguish between the unintended consequences which may arguably be unavoidable, versus the unanticipated outcomes, those adverse effects which could have been anticipated and avoided. When negative externalities are unavoidable, we can still seek to manage them effectively. The AAA Framework and the Future of Work It seems to me that the author has been quite responsible in writing about the role of artificial intelligence in shaping the future of work, which is not surprising considering his contributions and efforts in ushering in techistentialism in his own way. That being said, the reference to "Radically Human" by Daugherty and Wilson remains interesting to me. The author highlights its vision, mentioning that AI will augment and empower human expertise rather than replace it. The author is also accurate in highlighting the fact that knowledge-intensive tasks have become integral to consulting and other facets of employment and business communities. This is why I find the author's mention of AI's influence "spilling over into complex cognitive functions" praiseworthy. In a thought-provoking excerpt, the author delves into the complex and often oversimplified relationship between artificial intelligence (AI) and the future of work. In the tenth chapter of the book, the author's skepticism towards simplistic slogans that suggest AI will only replace those who cannot use it is insightful for people in risk management and technology. The author argues that such statements fail to capture the intricate interplay between cognification, mass automation, and the evolving nature of work. The author emphasizes the uncertainty surrounding the net impact of AI on employment, acknowledging that while experts predict a surge in opportunities, the lack of data on the future makes it difficult to make definitive predictions. The chapter underscores the need for a deeper understanding of these complex relationships to ensure a future where both humans and technology can thrive harmoniously. The chapter also highlights the author's observations on AI's increasing role in fields that traditionally require extensive education and training, such as law, accounting, insurance, finance, and medicine. The gradual automation and augmentation of these fields through generative AI are noted as significant transformations that require the integration of systems, adjustment of supply and demand, reskilling of workforces, and adaptation of regulations. It is notable that, unlike most "GenAI experts", the author is honest in enumerating the possibilities of a technology winter and the uncertain, unclear impact of AI technologies, let alone GenAI, on the skill economy.
Black Jellyfishes, Elephants & Swans The author presents a compelling typology of risks associated with the development and deployment of artificial intelligence (AI). Drawing on vivid animal metaphors, the author categorizes these risks into three distinct types: Black Jellyfish, Black Elephants, and Black Swans. Each category represents a unique set of challenges and potential consequences that demand our attention and proactive responses. The author begins by introducing the concept of Black Jellyfish, which are low-probability, high-impact events that grow from seemingly predictable situations into far less predictable outcomes. The author highlights several potential Black Jellyfish scenarios, such as info-ruption (the disruptive and potentially dangerous effects of information misuse), scaling bias (the amplification of discrimination and inequality through AI), and the fusion of AI and biotechnology (which could challenge the status of humans as dominant beings). These scenarios underscore the need to consider the cascading effects of AI and how they could spiral out of control. Next, the author turns to Black Elephants, which are obvious and highly likely threats that are often ignored or downplayed due to divergent views and a lack of understanding. The author identifies several critical Black Elephants, including the need to reinvent education to keep pace with AI, the deskilling of decision-making as we delegate more responsibilities to AI systems, the potential for mass technological unemployment, and the double-edged sword of cyber insecurity. The author emphasizes the importance of mobilizing action, aligning stakeholders, and understanding the complex systems in which these risks are embedded. Finally, the author explores the concept of Black Swans, which are unforeseeable, rare, and extremely high-impact events. The author posits several potential Black Swan scenarios, such as the development of artificial general intelligence (AGI) and superintelligent AI systems, extreme catastrophic failures resulting from interacting AI systems, and the magical discovery of cures for incurable diseases. While these events are inherently unpredictable, the author argues that we can still build resilient foundations, monitor for nonobvious signals, and implement guardrails to mitigate the potential consequences. Throughout the tenth chapter, the author's language is both engaging and thought-provoking, drawing the reader into a deeper consideration of the risks and challenges associated with AI. The use of animal metaphors adds a layer of accessibility and memorability to the complex concepts being discussed, while also highlighting the urgency and gravity of the issues at hand. One potential weakness of the sections on Black Jellyfish, Black Elephant and Black Swan is that they do not provide concrete examples or case studies to illustrate the risks and scenarios being discussed. While the animal metaphors are effective in capturing the reader's attention, some readers may desire more tangible evidence to support the author's claims. The Future of Decision-Making: AI's Role and the Risk of Moral Deskilling In the section on 'moral deskilling', the author delves into the complex relationship between artificial intelligence (AI) and human decision-making, particularly in the context of strategic decisions. The author's language is direct and engaging, drawing the reader's attention to the potential consequences of relying too heavily on AI in decision-making. 
By citing the Pew Research Center's cautionary statement, the author emphasizes the risk of humans becoming overly dependent on machine-driven networks, potentially leading to a decline in their ability to think independently and take action without the aid of automated systems. Furthermore, the author introduces the concept of "moral deskilling," as described by the Markkula Center for Applied Ethics. This concept suggests that as humans increasingly rely on AI for decision-making, they may lose the ability to make moral judgments and ethical decisions independently. The author's inclusion of this concept adds depth to the discussion, prompting readers to consider the long-term implications of AI's role in decision-making. Regarding the Pew Research Center, the author cites a report that expresses concern about the potential negative impacts of AI on human agency and capabilities. The report, titled "Concerns about human agency, evolution and survival," is part of a larger study called "Artificial Intelligence and the Future of Humans" conducted by the Pew Research Center in 2018. The study surveyed experts about their views on the potential impacts of AI on society by 2030. The specific section cited highlights concerns that increasing dependence on AI could diminish human cognitive, social, and survival skills. Experts quoted in the report, such as Charles Ess from the University of Oslo and Daniel Siewiorek from Carnegie Mellon University, warn about the potential for "deskilling" as humans offload various tasks and capabilities to machines. As for the Markkula Center for Applied Ethics, the Center has published extensively on the ethical implications of AI, including a report titled "Ethics in the Age of AI". This report, based on a survey of 3,000 Americans, found that a significant majority (86%) believe technology companies should be regulated, and 82% care whether AI is ethical or not. Hence, the author responsibly introduces how AI tools and systems may contribute to decision-making value chains in a pragmatic and straightforward fashion, which is noteworthy. The author does not hype the limited role of AI in value chains, which is really helpful, and maybe eye-opening for some. This excerpt summarises the author's exposition in the tenth chapter of his book: "We prefer a world where human decisions propel our species forward, where we choose the actions that lead to staying relevant. If we do not, our C-suites might find themselves replaced by an A-suite of algorithms." It is also interesting that the author claims 'data is the new oil' of the 21st century, while he also claims that "big data does not predict anything beyond the assumption of an idealized situation in a stable system". I think the realism in the second statement complements the role of data in a multipolar world, and how the data-algorithm relationship shapes risk management and facets of human autonomy. Info-ruption and the Internet of Existence (IoE) The author's reference to a less mainstream term, 'info-ruption', in the eleventh chapter of his book seems as intriguing to me as the introduction of the idea of the Internet of Existence (IoE). Here, the author delves into the rapidly expanding world of data-driven innovations and their profound impact on our lives.
The author's use of the phrase "data byting back" is a clever play on words, alluding to the idea that data is not only shaping our world but also actively influencing and potentially threatening our existence. The author raises a crucial question: should data be treated as a language, as fundamental to our existence as our linguistic substrates? This question highlights the pervasive nature of data in our lives and suggests that our understanding of data is essential to comprehending its impact on our future. The author presents a timeline of how software and data have evolved, starting with the digitization of business, moving to the democratization of software creation through no-code, low-code, and generative AI, and culminating in a digital universe that surpasses our physical space in importance. This timeline effectively illustrates the increasing dominance of data in our lives and the potential for software to "eat" not only the world but also humanity itself. The author's use of the phrase "software eating humanity" is particularly striking, as it suggests that our reliance on data and software could ultimately consume us. This idea is reminiscent of the concept of technological singularity, where artificial intelligence surpasses human intelligence and control. However, the author does not simply present a dystopian view of the future. Instead, he emphasises the importance of understanding data to articulate its impacts and make informed decisions about its governance. The excerpt concludes by highlighting the critical importance of data privacy, ethics, and governance in a world where our bodies and environments are increasingly composed of data. Disinformation-as-a-service There is a section in the eleventh chapter of the book where the author delves into the emerging threat of disinformation-as-a-service (DaaS). The author explains that DaaS is a criminal business model that provides highly customizable, centrally hosted disinformation services for a fee, enabling the commoditization of info-ruption. This concept is particularly alarming as it allows various bad actors, such as conspiracy theorists, political activists, and autocracies, to easily initiate disinformation campaigns that can reinforce each other by magnifying their impact. The author's use of real-world examples, such as the QAnon groups targeting Wayfair, Netflix, Bill Gates, and 5G telecom operators, as well as the defamation lawsuits filed by Dominion Voting Systems and Smartmatic, effectively illustrates the tangible consequences of disinformation attacks on businesses. These examples demonstrate the severity of the threat and the potential for significant financial and reputational damage. I am, however, neither validating nor invalidating the substance of the legal claims in those defamation lawsuits. Nevertheless, the author's use of these real-world examples grounds the discussion in tangible, well-documented incidents. The author's transition from discussing DaaS to the broader topic of cyber insecurity is well-executed, as it highlights the growing vulnerability of our digital world. The author emphasizes that cyberattacks can be launched anonymously and at minimal cost, yet have devastating consequences, affecting critical infrastructure such as power grids, healthcare systems, and government structures. The inclusion of the potential legal ramifications for companies facing lawsuits due to inadequate cybersecurity measures further underscores the urgency of addressing these threats.
The introduction of ransomware-as-a-service (RaaS) as another emerging threat is particularly compelling. The author's comparison of RaaS to enterprise software, complete with customer service for smooth ransom collection, effectively conveys the ease with which cyberattacks can now be launched. The mention of leading ransomware brands such as BlackCat, DarkSide, and LockBit potentially becoming as commonplace as well-known software companies like Microsoft, Oracle, and Adobe is a powerful and unsettling analogy that drives home the severity of the threat. Conclusion Overall, the book is a definitive introductory read for understanding key technological risks around emerging technologies, including artificial intelligence, and the author has been largely responsible in articulating the risks, trends, and phenomena in a well-packaged, well-encapsulated way. I would not regard this book as a form of industry research or an authority on the scholarship of technology policy or technology risk management, but I am clear in saying that it is a promising compendium of the key risks, trends, and phenomena that we see in an emerging multipolar world. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train  and contact us at vligta@indicpacific.com .

  • The Generative AI Patentability Landscape: Examining the WIPO Report

    This insight examines a recently published Report on Generative Artificial Intelligence Patents by the World Intellectual Property Organisation (WIPO), current as of mid-2024. Now, let's address a caveat before delving into the analysis and the WIPO report itself. It's important to note that this report may not serve as the definitive authority on AI patentability within the WIPO's international intellectual property law framework. While the report provides valuable insights, certain sections discussing the substantive features of Generative AI and related aspects might not directly reflect WIPO's official stance on AI patentability. This caveat is crucial for two reasons: Evolving Landscape: The AI patentability landscape is still developing, and individual countries are establishing their own legal frameworks, positions, and case law on the subject. International Framework: The creation of an international intellectual property law framework under WIPO for AI patentability remains uncertain, as aspects related to economic-legal contractual rights and knowledge management may evolve. The Three Perspectives of Analysis in this Report This report is based on three key aspects of analysis, or let us say, three perspectives of analysis, when it examines the Generative AI landscape: – The first perspective covers the GenAI models. Patent filings related to GenAI are analyzed and assigned to different types of GenAI models (autoregressive models, diffusion models, generative adversarial networks (GAN), large language models (LLMs), variational autoencoders (VAE) and other GenAI models). – The second perspective shows the different modes of GenAI. The term "mode" describes the type or mode of input used and the type of output produced by these GenAI models. Based on keywords in the patent titles and abstracts, all patents are assigned to the corresponding modes: image/video, text, speech/voice/music, 3D image models, molecules/genes/proteins, software/code and other modes. – The third perspective analyzes the different applications for modern GenAI technologies. The real-world applications are numerous, ranging from agriculture to life sciences to transportation and many more. Here's an example to illustrate each perspective: GenAI Models Perspective Let's consider a hypothetical company, AICorp, that has filed a patent for a new generative adversarial network (GAN) architecture. The patent describes improvements to the generator and discriminator components of the GAN, enabling it to produce higher-quality synthetic images. In the WIPO report's analysis, this patent would be assigned to the "generative adversarial networks (GAN)" category under the GenAI models perspective. GenAI Modes Perspective Now, suppose AICorp's patented GAN model is specifically designed to generate realistic images and videos. The patent abstract and title mention "high-resolution image synthesis" and "video generation." Based on these keywords, the WIPO report would classify this patent under the "image/video" mode in the second perspective, which looks at the types of input and output data the GenAI model handles. GenAI Applications Perspective Finally, let's say AICorp's patent describes potential applications of their GAN model in creating virtual product demos for e-commerce websites and generating synthetic data for autonomous vehicle perception systems.
In the third perspective, the WIPO report would categorize this patent under both the "business solutions" and "transportation" application areas, based on the real-world use cases mentioned. What is GenAI then? Interestingly, this report takes note of how Generative AI is defined in the European Union's Artificial Intelligence Act. Here is an interesting excerpt from the "Background and Historical Origins" segment: From the point of view of general users, one key aspect is that unlike the traditional "supervised" machine learning models, which require a large amount of task-specific annotated training data, these models can generate new content just by writing natural language prompts. Therefore, using GenAI tools based on these models does not require technical skills. For the first time, modern cutting-edge AI becomes directly accessible to the general public. The report also recognises the role of synthetic content in shaping the effectiveness of Generative AI deliverables. Here is how the related term 'synthetic data' is defined: Synthetic data is annotated information that computer simulations or algorithms generate as an alternative to real-world data. It usually seeks to reproduce the characteristics and properties of existing data or to produce data based on existing knowledge (Deng 2023). It can take the form of all different types of real-world data. For example, synthetic data can be used to generate realistic images of objects or scenes to train autonomous vehicles. This helps for tasks like object detection and image classification. Because of synthetic data, millions of diverse scenarios can be created and tested quickly, overcoming limitations of physical testing. Methodology for patent analysis Now, Appendix A of the Report acknowledges that GenAI itself is a modern concept without any clear definition. In that regard, even the patent classes of Generative AI are not fully established yet. For example, the Cooperative Patent Classification (CPC) sub-group G06N3/045 ("Auto-encoder networks; Encoder-decoder networks"), which falls under G06N3/02 (Neural Networks), is the most relevant patent classification for GenAI. Many of the sub-groups in this area were recently introduced in 2023 and are currently being reclassified. As per the Report, there is no specific class dedicated to GenAI. However, GenAI can be considered a broad concept encompassing the application of various software methods to vast datasets ("modes"), addressing multiple applications. This is evident in the use of numerous generic terms in modern patents, often without defining the underlying technology used, which is assumed to be known by skilled practitioners in the field. Here is an excerpt from Appendix A explaining how the authors developed a specific approach for capturing patents involving 'Generative AI': We had to develop a specific approach for capturing patents, involving the generation of digital entities, such as images, text or data through the use of specific machine learning algorithms. To achieve this, we used a two-stage approach: first, we combined classical patent searches together with prompts using our AI tool (see Appendices A.4 and A.5 for the patent searches and prompts) to retrieve a first patent dataset with high recall. Second, we refined the previous set using a trained BERT classifier to increase precision.
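To make this two-stage approach more tangible, here is a minimal, hypothetical Python sketch of a retrieve-then-refine pipeline: a broad keyword filter over patent titles and abstracts stands in for the high-recall search, and a simple TF-IDF plus logistic-regression classifier stands in for the report's trained BERT classifier. The keyword list, sample records and training labels are illustrative assumptions, not the report's actual queries, data or code.

```python
# Sketch of a two-stage "high recall, then precision" patent capture pipeline,
# loosely mirroring the approach described in Appendix A of the report.
# All keywords, sample records and the TF-IDF + logistic regression classifier
# are illustrative stand-ins (the report itself uses a trained BERT classifier).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical patent records: title + abstract text.
patents = [
    "A generative adversarial network for high-resolution image synthesis",
    "Diffusion model for protein structure generation",
    "A 3D printer nozzle using AI-based temperature control",   # not GenAI
    "Large language model fine-tuning for legal text generation",
    "Camera autofocus system using a neural network",            # not GenAI
]

# Stage 1: broad keyword search -> high recall, low precision.
RECALL_KEYWORDS = ["generative adversarial", "diffusion model",
                   "large language model", "autoencoder", "generative ai",
                   "neural network", "machine learning"]

def stage_one(records):
    return [r for r in records
            if any(k in r.lower() for k in RECALL_KEYWORDS)]

candidates = stage_one(patents)

# Stage 2: a trained text classifier refines the candidate set -> precision.
# Tiny labelled set for illustration only; 1 = GenAI, 0 = not GenAI.
train_texts = [
    "GAN generator and discriminator for synthetic image generation",
    "Variational autoencoder for music generation",
    "Industrial robot arm controlled by a neural network",
    "Camera exposure adjustment using machine learning",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

genai_patents = [p for p, label in zip(candidates, classifier.predict(candidates))
                 if label == 1]
print(genai_patents)
```

The design point mirrors the report's reasoning: the first stage deliberately over-collects so that relevant filings are not missed, and the second stage filters out patents that merely use AI techniques somewhere in a process.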
This two-stage approach helps to avoid patents that are not "GenAI" according to the usually accepted definition, but that might generate products or other things (such as 3D printers or cameras) by using AI techniques somewhere in a process. Here are the methods used to initiate the patent searches: Method 1a: A patent search using generic terms such as "generative AI" to locate patents using specific keyword concepts only. These concepts are searched more broadly or narrowly in different patent classifications. Method 1b: Five patent searches for the specific search concepts: Generative Adversarial Networks (GAN), Autoregressive Models, Diffusion Models, LLMs and Variational Autoencoders. These five concepts are considered almost synonymous with the concept of "generative AI." Method 2: About one hundred prompts covering various concepts of GenAI and its use, run through EconSight's advanced AI search algorithms. With these aspects addressed, let us now examine the contents of the report. Patent Trends in GenAI Models Patentability Economics Landscape China's dominance in all five core GenAI models (diffusion models, autoregressive models, GANs, VAEs, LLMs) in terms of patent families has significant implications for the global AI innovation landscape and the economics of patentability. Diffusion Models: China's lead is most pronounced in diffusion models, with 14 times more patent families than the US since 2014. This suggests Chinese inventors and companies are aggressively seeking to protect their intellectual property in this emerging GenAI technique, which could give them a competitive edge in commercializing related products and services. The high volume of Chinese patents may create barriers for other countries' companies to enter this space without risking infringement. Autoregressive Models: China also has a very high global share of patents in autoregressive models. As these models are key to applications like text generation, China's strong IP position could enable its companies to capture value in emerging markets for GenAI-powered content creation tools and services. GANs: South Korea and India have a relatively high proportion of their GenAI patents in GANs, a widely used technique for generating synthetic images and videos. This focus may align with national industrial strategies around digital content production. However, they may face challenges in global markets given China's overall lead in GAN patents. VAEs and LLMs: The US has strengths in VAEs and LLMs relative to its overall GenAI patent position. This could stem from fundamental research breakthroughs by American universities and companies. Protecting these innovations through patents can help the US capture economic value, but Chinese firms may be able to develop workarounds or alternatives. Japan: The lack of categorization of Japanese GenAI patents suggests Japanese filers may be taking a more exploratory approach, patenting novel architectures beyond the mainstream models. While riskier, this strategy could pay off if they develop superior techniques that become globally adopted standards. AI and Law Perspective The stark disparities in GenAI patenting across countries and models raise important legal and policy questions: Patent Quality: The high volume of Chinese GenAI patent filings, especially in diffusion models, raises potential concerns around patent quality and validity. Less rigorous examination standards could lead to overbroad or dubious patents being granted, fueling uncertainty and litigation.
Disclosure : GenAI models rely heavily on training data and algorithms . Current patent laws requiring disclosure of inventions may not be adequate for AI innovations, where details of datasets and model architectures are key. Lack of transparency could hinder follow-on innovation. Infringement : The complexity of GenAI models makes it difficult to assess infringement . Overly broad patent claims could stifle innovation and competition if companies fear lawsuits. Skilled legal and technical analysis will be needed to determine the scope of protection. Exceptions and Limitations : Countries may consider enacting targeted exceptions and limitations to patent rights for GenAI in order to enable research, interoperability and downstream use . But these need to be carefully crafted to avoid undermining incentives for innovation. Patent Trends in GenAI Modes Patentability Economics Perspective The dominance of China across all GenAI modes in terms of patent families over the last decade has significant economic implications for the global AI innovation landscape. Image/Video GenAI : China's strong focus on image/video-based GenAI, with nearly 13,000 patent families since 2014, positions it to capture significant value in the growing market for AI-generated visual content. This could give Chinese companies a competitive edge in fields like entertainment, gaming, and design. Text and Speech/Voice/Music GenAI : China's high volume of patents in text and speech/voice/music GenAI, key data types for large language models (LLMs), suggests it is well-positioned to monetize AI applications in content creation, virtual assistants, and customer service. Molecules/Genes/Proteins GenAI : The high growth rate of Chinese patents in this area (64% annually from 2021-2023) indicates a strategic focus on applying GenAI in the lucrative biotech and pharmaceutical industries. Securing broad patent protection could enable Chinese firms to extract significant licensing revenues. US Strengths The US has a relatively high global share of patents in software/code and molecules/genes/proteins GenAI. Focusing patenting activity in these complex, high-value areas could help US companies maintain competitiveness and pricing power. Japan and South Korea The emphasis on speech/voice/music GenAI in these countries aligns with their strengths in consumer electronics and suggests potential for capturing value in the growing smart device and IoT markets. However, Japan's recent decline in patenting activity may put it at a disadvantage. AI and Law Perspective The dominance of China across all GenAI modes in terms of patent families has significant implications that need to be considered in light of differing national legal frameworks around intellectual property and artificial intelligence. From a comparative law standpoint, China's patent system has some notable differences from the US and European systems that could impact GenAI inventorship: China has a "first-to-file" system that grants patents to the first party to file an application, regardless of the date of invention. The US recently switched to a first-inventor-to-file system but still provides limited grace periods for inventors' own disclosures. This could incentivize a rush to file GenAI patents in China. China allows a broader scope of patentable subject matter, including software, business methods and AI algorithms. The US and EU have more restrictions on such patents. Chinese GenAI inventors may face fewer eligibility hurdles. 
China has requirements for foreign entities to file patents through a licensed Chinese patent agent and undergo a security review for inventions made in China. This could advantage domestic Chinese GenAI inventors over foreign ones. In terms of AI-specific regulations, China was one of the first to issue dedicated rules on generative AI services in 2023. The Interim Measures require providers to respect IP rights, avoid illegal content, and undergo security assessments - but overall take a relatively permissive approach focused on promoting the GenAI industry. By contrast, the draft EU AI Act proposes a more restrictive, risk-based approach requiring conformity assessments and prohibiting certain high-risk applications. The US has issued guidance like the AI Bill of Rights but not yet binding AI regulations at the federal level. These regulatory divergences, combined with China's first-to-file system and broader patent eligibility, could further reinforce its leading position in GenAI patenting. Chinese inventors may face lower regulatory barriers and have greater incentives to seek patents. However, concerns around explainability, transparency and potential bias in GenAI systems may pose challenges under existing legal principles in multiple jurisdictions: AI inventorship remains a grey area, with the US, EU and UK patent offices denying applications listing AI systems as inventors. So far, only South Africa has granted such a patent, and an early Australian court decision permitting an AI inventor was later overturned on appeal. Attributing GenAI inventions could prove legally tricky. Product liability and tort law principles around foreseeability, control and disclosure duties could be strained by opaque GenAI systems making autonomous decisions. Apportioning liability between GenAI developers, deployers and users is an open question. Data privacy laws like the EU's GDPR and China's PIPL impose restrictions on the use of personal data to train AI that could impact GenAI development. Differing national standards complicate compliance. Connection between GenAI models and GenAI modes The patent analysis reveals a strong interdependence between specific GenAI models and the types of data they process, which has significant implications for the patentability and legal protection of GenAI innovations. Text and Large Language Models (LLMs) The finding that text is the most commonly used data type for LLMs has important economic and legal ramifications: LLMs trained on vast text corpora may raise copyright and fair use issues, as the training data likely includes copyrighted works. The extent to which such use constitutes infringement or is protected under fair use doctrines remains a grey area. The generated text outputs of LLMs could potentially infringe on the intellectual property rights of the original text used in training. Establishing clear guidelines for attributing authorship and ownership of LLM-generated content is crucial. Patenting LLM innovations requires careful consideration of the model's novelty and non-obviousness, as well as the sufficiency of disclosure, given the opacity of the training process. Speech/Voice/Music and GANs/VAEs The importance of speech, voice, and music data for GAN and VAE models presents its own set of legal challenges: Using copyrighted audio data for training GANs and VAEs without permission may constitute infringement. Developers must ensure they have the necessary licenses or rely on public domain or freely licensed datasets.
The generation of synthetic speech or music that closely mimics real individuals or artists could give rise to privacy, publicity rights, and trademark issues. Clear guidelines are needed to balance innovation with protecting individual rights. Patenting speech/voice/music-related GANs and VAEs requires careful drafting to capture the key innovative aspects while avoiding overly broad claims that may be invalid or difficult to enforce. Image/Video and GANs The dominance of GANs in processing image, video, 3D model, and software/code data has significant implications: Training GANs on copyrighted images or videos without authorization may infringe on the rights of content owners. Developing robust licensing frameworks and fair use guidelines is essential. The generation of synthetic media that is indistinguishable from real content raises concerns about deepfakes, misinformation, and fraud. Legal frameworks must adapt to address these risks while enabling legitimate applications. Patenting GAN innovations requires a nuanced understanding of the model architecture, training process, and applications to craft claims that are both novel and adequately described. Molecules/Genes/Proteins and GANs/VAEs The use of GANs and VAEs for processing molecular, genetic, and protein data presents unique legal considerations: The generation of novel molecules or proteins using AI may challenge traditional notions of inventorship and patentability. Clarifying the eligibility of AI-generated innovations is crucial. The use of proprietary or sensitive genetic data for training GANs and VAEs raises privacy and ethical concerns. Robust data governance frameworks and ethical guidelines are needed. Patenting AI-generated molecules or proteins requires a delicate balance between incentivizing innovation and preventing overly broad monopolies that could stifle research. Patent Trends in GenAI Applications Patentability Economics Perspective The strong interdependence between specific GenAI models and the types of data they process has significant economic implications for the patentability and commercialization of GenAI innovations. Text and LLMs : The finding that text is the most commonly used data type for LLMs suggests a high potential for patenting text-based GenAI applications. As LLMs become more sophisticated in generating human-like text, there may be increased opportunities to secure broad patent protection for novel LLM architectures and training techniques. This could give patent holders a competitive edge in the growing market for AI-powered content creation, translation, and analysis tools. Speech/Voice/Music and GANs/VAEs : The importance of speech, voice, and music data for GANs and VAEs highlights the potential for patenting innovations in AI-generated audio content. As these technologies advance, there may be valuable opportunities to patent novel GAN and VAE architectures optimized for generating realistic speech, music, and sound effects. Securing such patents could be particularly lucrative in the entertainment and gaming industries. Image/Video and GANs : The dominance of GANs in processing image, video, 3D model, and software/code data suggests a rich space for patenting visually-focused GenAI techniques. Companies that develop novel GAN architectures or training methods enabling high-quality image, video, or 3D content generation may be able to secure valuable patents. 
These could be leveraged for licensing or to maintain a competitive advantage in fields like digital media, design, and visualization. Molecules/Genes/Proteins and GANs/VAEs : The use of GANs and VAEs for processing molecular, genetic, and protein data indicates significant potential for patenting AI innovations in the biotech and pharmaceutical domains. Novel GAN or VAE techniques that can accurately generate or predict molecular structures, gene sequences, or protein foldings could be immensely valuable. Patenting such methods could help secure market exclusivity and attract investment in AI-powered drug discovery and precision medicine applications. AI & Law Perspective The interdependence between GenAI models and data types raises important legal considerations: Apportionment of Rights in Collaborative GenAI Models The finding that certain GenAI models like GANs and VAEs are particularly well-suited for processing specific data types like images, speech, and molecules highlights the need for clear legal frameworks governing the apportionment of rights in collaborative GenAI development. As different organizations specialize in developing GenAI models optimized for specific data modes, complex questions arise around the ownership and licensing of the resulting IP when these models are combined or built upon. Imagine a startup that develops a state-of-the-art GAN architecture for generating realistic images, which is then integrated by a larger company into a multimodal GenAI system that also processes speech and text using proprietary LLMs and VAEs. How should the IP rights and revenue streams from this combined system be allocated? Collaborative GenAI model development will require adaptive licensing frameworks and revenue sharing models that equitably apportion rights based on the relative value contributed by each component. Smart contracts and blockchain-based systems for tracking model provenance and usage could help enable granular, automated allocation of IP rights. Liability for Harmful Outputs from Multimodal GenAI The use of different GenAI models for different data modes in a single system raises thorny questions around liability when that system produces harmful outputs: A multimodal GenAI system that generates videos with synchronized speech and background music by combining the outputs of different sub-models for each mode (e.g. a GAN for video, LLM for speech, VAE for music) could produce defamatory content. But which model is liable - the one that generated the harmful speech, visuals, or both together? As GenAI systems become more complex and incorporate an increasing variety of specialized models, a more nuanced approach to liability that considers each model's contribution to the final output will be needed. Blanket approaches that assign full liability to the overall system operator may not be suitable. Legal frameworks must evolve to account for the modularity and composability of GenAI systems, perhaps by mandating embedded model-level monitoring and "explanation" capabilities that can help trace the provenance of harmful outputs to specific component models. 
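To illustrate what such model-level provenance tracking might look like in practice, here is a minimal, hypothetical sketch of a provenance record attached to a multimodal output, noting each component model's developer, modality, licence, and an agreed value share used to apportion revenue. The field names, licence labels, and revenue-share logic are assumptions made purely for illustration; they do not reflect an established standard or anything proposed in the report.

```python
# Hypothetical sketch of model-level provenance records for a multimodal GenAI
# system, so that an output (e.g. a video with speech and music) can be traced
# back to the component models that produced each part. Field names, licence
# terms and the revenue-share calculation are illustrative assumptions only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComponentContribution:
    model_name: str          # e.g. the GAN, LLM or VAE sub-model
    developer: str           # organisation owning that component
    modality: str            # "video", "speech", "music", ...
    licence: str             # licence under which the component is used
    value_share: float       # agreed share of value attributed to this component

@dataclass
class OutputProvenance:
    output_id: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    contributions: list[ComponentContribution] = field(default_factory=list)

    def revenue_split(self, revenue: float) -> dict[str, float]:
        """Apportion revenue across developers by their agreed value shares."""
        total = sum(c.value_share for c in self.contributions) or 1.0
        return {c.developer: revenue * c.value_share / total
                for c in self.contributions}

# Example: a synthetic video combining three component models.
record = OutputProvenance(output_id="video-0001", contributions=[
    ComponentContribution("image-gan", "StartupA", "video", "commercial", 0.5),
    ComponentContribution("speech-llm", "VendorB", "speech", "commercial", 0.3),
    ComponentContribution("music-vae", "VendorC", "music", "royalty-bearing", 0.2),
])
print(record.revenue_split(10_000.0))
```

A record like this is only a starting point; in a real deployment the value shares, licence terms, and any registration of the record would themselves be matters of negotiation, contract, and regulation.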
Need for Domain-Specific GenAI Governance Frameworks The gravitational pull between certain GenAI model types and data modes points to the need for domain-specific governance frameworks tailored to the unique risk profiles of each pairing: The use of GANs for generating highly realistic images and videos requires governance frameworks focused on mitigating risks like deepfakes, disinformation, and infringement of image/likeness rights. Mandatory watermarking or "radioactive data" approaches may be needed. LLMs used for high-stakes text generation tasks like medical or legal analysis may require specialized testing and certification regimes, as well as mandatory quality control measures like human-in-the-loop verification of outputs. GenAI systems used for drug discovery and molecular design will need governance frameworks ensuring compliance with safety regulations, clinical trial standards, and IP protections around generated molecules. Connection between core models and applications The strong interdependence between specific GenAI models and application areas raises important legal considerations around intellectual property, liability, and the need for domain-specific governance frameworks: Liability for AI-Generated Outputs in High-Stakes Domains The dominance of certain GenAI models in high-stakes application areas like transportation (GANs), life and medical sciences (diffusion models), and banking/finance (autoregressive models) raises critical questions about liability for harmful or infringing outputs: If a GAN-generated image used to train an autonomous vehicle leads to an accident, who is liable - the GAN developer, the vehicle manufacturer, or the fleet operator? Apportioning fault will require careful analysis of each party's role and the causal chain. Diffusion models used to generate protein sequences for drug discovery may produce harmful molecular structures. Establishing liability will hinge on whether the harm was foreseeable and what safety checks were in place. As GenAI models become more prominent in sectors with major public safety and socioeconomic ramifications, a clear liability framework tailored to the unique risks of each domain will be essential. This may require updates to existing product liability, negligence, and anti-discrimination laws. IP Ownership of Outputs in Collaborative GenAI Applications The use of different GenAI models for different applications in a larger system or workflow raises thorny questions around IP ownership of the ultimate output: If a VAE used for anomaly detection in smart city sensor data feeds into a larger predictive maintenance system involving other GenAI and non-AI components, who owns the IP in the final insights - the VAE developer, the system integrator, or the city? Contracts will need to contemplate the "nesting" of GenAI within larger solutions. LLMs used for creative text generation may be fine-tuned on an enterprise's proprietary data before being deployed in a customer-facing chatbot. Ownership of the generated text may be unclear between the LLM provider, the enterprise, and the end user. Terms of use and licensing agreements must directly address these scenarios. As GenAI enables more "modular" and composable AI development, legal frameworks for IP allocation must evolve beyond the binary paradigm of human vs machine authorship. Nuanced approaches considering the relative contributions of different GenAI and human components will be needed. 
Need for Domain-Specific GenAI Governance Frameworks The clustering of certain GenAI model types around particular application domains highlights the need for tailored governance frameworks attuned to the unique technical and societal challenges of each domain: The prominence of GANs in transportation applications like autonomous driving will require governance frameworks focused on ensuring verifiable safety, security, and interpretability of training data and outputs. Sector-specific standards and validation protocols may be needed. The use of diffusion models for high-stakes life sciences applications like drug and protein design will require close coordination with health regulators to ensure compliance with safety and efficacy standards. Specialized approval pathways and monitoring mechanisms may be required. LLMs powering legal and governmental applications will need robust governance to ensure adherence to due process, explainability, and non-discrimination principles enshrined in public law. Mechanisms for public participation and oversight in GenAI development and deployment will be critical. In short, the strong convergence of certain GenAI models around high-stakes application areas necessitates a shift from one-size-fits-all AI governance to domain-specific approaches. Limitations and future of patent analysis in relation to GenAI The report rightly highlights the challenges traditional patent analysis methods face in keeping pace with the rapid evolution of digital technologies like GenAI. The use of pre-defined patent classification schemes and keyword searches can indeed lag behind the emergence of new concepts and terminology in these fast-moving fields. However, the report could provide more concrete evidence to substantiate these claims. For instance, it would be helpful to see specific examples of GenAI-related patents that were missed or misclassified by conventional search methods. Quantitative data on the percentage of GenAI patents falling outside existing classifications, or the average time lag between a new concept emerging and it being added to patent taxonomies, would lend credibility to the argument. The report cites the rapid user adoption of ChatGPT as an example of GenAI's accelerated development outpacing the patent system's response. While this anecdote is illustrative, it is somewhat misleading to conflate the speed of consumer adoption with the pace of technological change from a patentability perspective. The underlying machine learning techniques behind ChatGPT, like transformer architectures and large language models, have been the subject of patent filings for several years prior to the product's launch. Similarly, the semantic ambiguity between "AI-generated" and "AI-assisted" content creation highlighted in the report, while valid, is not a new phenomenon unique to GenAI. Analogous challenges have long existed in software and business method patents, where the line between automation and human direction can be blurry. The report would benefit from acknowledging this continuity and explaining how GenAI may differ in the degree or implications of such ambiguity. The potential of advanced AI tools like LLMs to enable more agile and intelligent patent search and classification, as alluded to in the report, is indeed exciting. However, the report seems to take an overly optimistic view of the current capabilities and readiness of these techniques. 
While promising, AI-based prior art search and patent landscaping are still nascent and face significant challenges around data quality, interpretability, and legal admissibility. For instance, the report mentions using "patent-trained LLMs" and "fine-tuned models" to collect relevant patents and map development trends. However, it does not address key questions such as: How are these models trained and validated to ensure comprehensive and unbiased coverage? How do they handle the "long-tail" of niche or emerging concepts that may be underrepresented in training data? What mechanisms exist for human experts to scrutinize and contest the outputs of these "black-box" models? The report would be strengthened by a more critical and concrete discussion of the current limitations and development roadmap for AI-based patent analysis. Additionally, while the report understandably focuses on the benefits of AI for patent offices and applicants, it largely ignores the potential risks and challenges. These include issues around algorithmic bias and transparency, data privacy and security, and the need for significant upskilling of patent professionals. A more balanced and forward-looking analysis would consider these factors and highlight the need for proactive governance frameworks. Conclusion In conclusion, while the WIPO report identifies the disruptive potential of GenAI for patent analysis, its claims would be bolstered by more rigorous evidence, acknowledgement of historical continuities, and a critical examination of the current maturity and limitations of AI-based solutions. As GenAI reshapes the patent landscape, a nuanced and interdisciplinary approach that brings together technical, legal, and policy perspectives will be crucial to realizing its benefits while navigating its challenges. The WIPO Report still falls short in some of its estimations, and several sections contain colourable and potentially misleading statements around Generative AI, for example on technical deliverables and on how far patentability actually extends in practice. The honest and reasonable aspect of this report is that WIPO and EconSight acknowledge its limitations, especially on patent analysis, features of patentability, the technical categorisation of Generative AI and the interplay of technical purpose and use cases of Generative AI. This is why it is necessary to read the complete report rather than relying on headline statistics about how many GenAI patents exist in comparison to the number of scientific publications. Many scientific publications, including some on Generative AI, have been found to be fake, yet most readers do not care about such nuances and assume that hype cycles will fix these problems. We hope this insight for Visual Legal Analytica has been helpful to all. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train  and contact us at vligta@indicpacific.com .

  • [New Report] The Indic Pacific - ISAIL Joint Annual Report, 2022-24, Launched

    Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL) are proud to announce their joint achievements in promoting responsible AI development and legal innovation in India. Founded by Abhivardhan, Indic Pacific and ISAIL have been instrumental in shaping the discourse on AI governance in India. Abhivardhan's early research on AI ethics and international law has been recognized by the Council of Europe, setting the stage for groundbreaking initiatives. Key milestones include: India's first privately proposed draft Artificial Intelligence Regulation Bill, authored by Abhivardhan, aimed at establishing a comprehensive legal framework for AI governance. The VLiGTA ecosystem, offering cutting-edge training programs on technology law, AI governance, and intellectual property for professionals and organizations. ISAIL's significant contributions to AI standardization and research, with the 2020 Handbook on AI and International Law featured by the Council of Europe as a notable Indian AI initiative. The launch of AIStandard.io, a platform dedicated to the development and dissemination of AI standardization guidelines. Ongoing research and policy initiatives, including the recent report on "Reimaging and Restructuring MeitY for India" (IPLR-IG-007). You can now access the Joint Annual Report of Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law for the year 2022-24, for free. For more information about Indic Pacific Legal Research and ISAIL, please visit www.indicpacific.com and www.isail.in. The report is also accessible for reading at https://indopacific.app/product/indic-pacific-isail-joint-annual-report-2022-24/

  • Why AI Standardisation & Launching AIStandard.io & Re-introducing IndoPacific.App

    Artificial Intelligence (AI) is widely recognized as a disruptive technology with the potential to transform various sectors globally. However, the economic value of AI technologies remains inadequately quantified. Despite numerous reports on AI ethics and governance, many of these efforts have been inconsistent and reactionary, often failing to address the complexities of regulating AI effectively. Even India's MeitY AI Advisory, which faces constitutional challenges, was a result of knee-jerk reactions. Amidst the rapid advancements in AI technology, the market has been inundated with AI products and services that frequently overpromise and underdeliver, leading to significant hype and confusion about AI's actual capabilities. Many companies are hastily deploying AI without a comprehensive understanding of its limitations, resulting in substandard or half-baked solutions that can cause more harm than good.

In India, several key issues in AI policy remain unaddressed by most organizations and government functionaries. Firstly, there is no settled legal understanding of AI at a socio-economic and juridical level, leading to a lack of clarity on what can be achieved through consistent laws, jurisprudence, and guidelines on AI. Secondly, the science and research community, along with the startup and MSME sectors in India, have not actively participated in addressing holistic and realistic questions around AI policy, compute economics, AI patentability, and productization. Instead, much of the AI discourse is driven by investors and marketing leaders, resulting in half-baked and misleading narratives.

The impact of AI on employment is multifaceted, with varying effects across industries. While AI solutions have demonstrated tangible benefits in B2B sectors such as agriculture, supply chain management, human resources, transportation, healthcare, and manufacturing, the impact on B2C segments like creative, content, education, and entertainment remains unclear. The long-term impact of RoughDraft AI or GenAI should be approached with caution, and governments worldwide should prioritize addressing the risks associated with the misuse of AI, which can affect the professional capabilities of key workers and employees involved with AI systems.

This article aims to explain why AI standardization is necessary and what can be achieved through it in and for India. With the wave of AI hype, legal-ethical risks surrounding substandard AI solutions, and a plethora of AI policy documents, it is crucial to understand the true nature of AI and its significance for the majority of the population. By establishing comprehensive ethics principles for the design, development, and deployment of AI in India, drawing from global initiatives but grounded in the Indian legal and regulatory context, India can harness the potential of AI while mitigating the associated risks, ultimately leading to a more robust and ethical AI landscape.

The Hype and Reality of AI in India

The rapid advancement of Artificial Intelligence (AI) has generated significant excitement and hype in India. However, it is crucial to separate the hype from reality and address the challenges and ethical considerations that come with AI adoption.

The Snoozefest of AI Policy Jargon: Losing Sight of What Matters

In the midst of the AI hype train, we find ourselves drowning in a deluge of policy documents that claim to provide guidance and clarity, but instead leave us more confused than ever.
These so-called "thought leaders" and "experts" seem to have mastered the art of saying a whole lot of nothing, using buzzwords and acronyms that would make even the most seasoned corporate drone's head spin.

Take, for example, the recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) on March 1, 2024. This masterpiece of bureaucratic jargon manages to use vague terms like "undertested" and "unreliable" AI without bothering to define them or provide any meaningful context. It's almost as if they hired a team of interns to play buzzword bingo and then published the results as official policy. Just a few days later, on March 15, the government issued yet another advisory, this time stipulating that AI models should only be accessible to Indian users if they have clear labels indicating potential inaccuracies or unreliability in the output they generate. Because apparently, the solution to the complex challenges posed by AI is to slap a warning label on it and call it a day.

And let's not forget the endless stream of reports, standards, and frameworks that claim to provide guidance on AI ethics and governance. From the IEEE's Ethically Aligned Design initiative to the OECD AI Principles, these documents are filled with high-minded principles and vague platitudes that do little to address the real-world challenges of AI deployment. Meanwhile, the actual stakeholders – the developers, researchers, and communities impacted by AI – are left to navigate this maze of jargon and bureaucracy on their own. Startups and SMEs struggle to keep up with the constantly shifting regulatory landscape, while marginalized communities bear the brunt of biased and discriminatory AI systems.

It's time to cut through the noise and focus on what really matters: developing AI systems that are transparent, accountable, and aligned with human values. We need policies that prioritize the needs of those most impacted by AI, not just the interests of big tech companies and investors. And we need to move beyond the snoozefest of corporate jargon and engage in meaningful, inclusive dialogue about the future we want to build with AI. So let's put aside the TESCREAL frameworks and the buzzword-laden advisories, and start having real conversations about the challenges and opportunities of AI. Because at the end of the day, AI isn't about acronyms and abstractions – it's about people, and the kind of world we want to create together.

Overpromising and Underdelivering

Many companies in India are rushing to deploy AI solutions without fully understanding their capabilities and limitations. This has led to a proliferation of substandard or half-baked AI products that often overpromise and underdeliver, creating confusion and mistrust among consumers. The excessive focus on generative AI and large language models (LLMs) has also overshadowed other vital areas of AI research, potentially limiting innovation.

Ethical and Legal Considerations

The integration of AI in various sectors, including healthcare and the legal system, raises complex ethical and legal questions. Concerns about privacy, bias, accountability, and transparency need to be addressed to ensure the responsible development and deployment of AI. The lack of clear regulations and ethical guidelines around AI in India has created uncertainty and potential risks.

Policy and Regulatory Challenges

India's approach to AI regulation has been reactive rather than strategic, with ad hoc responses and unclear guidelines.
The recent AI advisory issued by the Ministry of Electronics and Information Technology (MeitY) has faced criticism for its vague terms and lack of legal validity. There is a need for a comprehensive legal framework that addresses the unique aspects of AI while fostering innovation and protecting individual rights.

Balancing Innovation and Competition

AI has the potential to drive efficiency and innovation, but it also raises concerns about market concentration and anti-competitive behavior. The Competition Commission of India (CCI) has recognized the need to study the impact of AI on market dynamics and formulate policies that effectively address its implications on competition.

What's Really Happening in the "India" AI Landscape?

Lack of Settled Legal Understanding of AI

India currently lacks a clear legal framework that defines AI and its socio-economic and juridical implications. This absence of settled laws has led to confusion among the judiciary and executive branches regarding what can be achieved through consistent AI regulations and guidelines[1]. A recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) in March 2024 aimed to provide guidelines for AI models under the Information Technology Act. However, the advisory faced criticism for its vague terms and lack of legal validity, highlighting the challenges posed by the current legal vacuum[2]. The ambiguity surrounding AI regulation is exemplified by the case of Ankit Sahni, who attempted to register an AI-generated artwork but was denied by the Indian Copyright Office. The decision underscored the inadequacy of existing intellectual property laws in addressing AI-generated content[3].

Limited Participation from Key Stakeholders

The AI discourse in India is largely driven by investors and marketing leaders, often resulting in half-baked narratives that fail to address holistic questions around AI policy, compute economics, patentability, and productization[1]. The science and research community, along with the startup and MSME sectors, have not actively participated in shaping realistic and effective AI policies. This lack of engagement from key stakeholders has hindered the development of a comprehensive AI ecosystem[4]. Successful multistakeholder collaborations, such as the IEEE's Ethically Aligned Design initiative, demonstrate the value of inclusive policymaking[5]. India must encourage greater participation from diverse groups to foster innovation and entrepreneurship in the AI sector.

Impact of AI on Employment

The impact of AI on employment in India is multifaceted, with varying effects across industries. While AI solutions have shown tangible benefits in B2B sectors like agriculture, supply chain management, and healthcare, the impact on B2C segments such as creative, content, and education remains unclear[1]. A study by NASSCOM estimates that around 9 million people are employed in low-skilled services and BPO roles in India's IT sector[6]. As AI adoption increases, there are concerns about potential job displacement in these segments. However, AI also has the potential to enhance productivity and create new job opportunities. The World Economic Forum predicts that AI will generate specific job roles in the coming decades, such as AI and Machine Learning Specialists, Data Scientists, and IoT Specialists[7]. To harness the benefits of AI while mitigating job losses, India must invest in reskilling and upskilling initiatives.
The government has launched programs like the National Educational Technology Forum (NETF) and the Atal Innovation Mission to promote digital literacy and innovation[8]. As India navigates the impact of AI on employment, it is crucial to approach the long-term implications of RoughDraft AI and GenAI with caution. Policymakers must prioritize addressing the risks associated with AI misuse and its potential impact on the professional capabilities of workers involved with AI systems[1]. By expanding on these key points with relevant examples and trends, this article aims to provide a comprehensive overview of the challenges and considerations surrounding AI policy in India. The next section delves into potential solutions and recommendations to address these issues.

A Proposal to "Regulate" AI in India: AIACT.IN

The Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2, released on March 14, 2024, is an important private regulation proposal developed by yours truly. While not an official government statute, AIACT.IN v2 offers a comprehensive regulatory framework for responsible AI development and deployment in India. AIACT.IN v2 introduces several key provisions that make it a significant contribution to the AI policy discourse in India:

Risk-based approach: The bill adopts a risk-based stratification and technical classification of AI systems, tailoring regulatory requirements to the intensity and scope of risks posed by different AI applications. This approach aligns with global best practices, such as the EU AI Act. Apart from the risk-based approach, there are three other ways to classify AI.

Promoting responsible innovation: AIACT.IN v2 includes measures to support innovation and SMEs, such as regulatory sandboxes and real-world testing. It also encourages the sharing of AI-related knowledge assets through open-source repositories, subject to IP rights.

Addressing ethical and societal concerns: The bill tackles issues such as content provenance and watermarking of AI-generated content, intellectual property protections, and countering AI hype. These provisions aim to foster transparency, accountability, and public trust in AI systems.

Harmonization with global standards: AIACT.IN v2 draws inspiration from international initiatives such as the UNESCO Recommendations on AI and the G7 Hiroshima Principles on AI. By aligning with global standards, the bill promotes interoperability and facilitates India's integration into the global AI ecosystem.

Despite its status as a private bill, AIACT.IN v2 has garnered significant attention and support from the AI community in India. The Indian Society of Artificial Intelligence and Law (ISAIL) has featured the bill on its website, recognizing its potential to shape the trajectory of AI regulation in the country. To disclose: I proposed AIACT.IN in November 2023 and again in March 2024 to promote a democratic discourse, not a blind implementation of this bill in the form of a law. The response has been overwhelming so far, and a third version of the Draft Act is already in the works. However, as I took feedback from advocates, corporate lawyers, legal scholars, technology professionals and even some investors and C-suite professionals in tech companies, the recurring message was that benchmarking AI is itself a hard task, and even this AIACT.IN proposal could prove difficult to implement given the lack of general understanding around AI.

What to Standardise Then?
Before we standardise artificial intelligence in India, let us first work out what exactly can be standardised. To be fair, standardisation of AI in India is contingent upon the nature of the industry itself. As of now, the industry is at a nascent stage despite all the hype and the so-called discourse around "GenAI" training. In other words, we are mostly at the scaling-up and R&D stages of AI and GenAI in India, be it B2B, B2C or D2C.

Second, let's ask: who should be subject to standardisation? In my view, AI standardisation must be neutral of the net worth or economic status of any company in the market. This means that the principles of AI standardisation, both sector-neutral and sector-specific, must apply to all market players in a competitive sense. This is why the Indian Society of Artificial Intelligence and Law has introduced Certification Standards for Online Legal Education (edtech). Nevertheless, AI standards must be developed in a way that remains mindful of the original and credible use cases that are coming up. The biggest risk of AI hype in this decade is that any random company can claim to have a major AI use case, only for it to emerge that they have not tested or effectively built that AI even at the stage of their "solution" being a test case. This is why it becomes necessary to address AI use cases critically.

There are two key ways to standardise AI without regulating it: (1) the legal-ethical way, and (2) the technical way. Neither method should be adopted to the exclusion of the other. In my view, both must be implemented, with caution and sense. The reason is obvious: technical benchmarking enables us to track the evolution of any technology and its sister and daughter use cases, while legal-ethical benchmarking gives us a conscious understanding of how effective AI market practices can be developed. That does not mean the legal-ethical methods of benchmarking on commonsensical principles like privacy, fairness and data quality (most AI standards will naturally begin as data protection principles across sectors) should be applied in a rigid, controllable and absolutist way, because an improperly drafted standardisation approach could also be problematic for a market economy that is still working through the scaling and R&D stages of AI. Fortunately, India already has a full-fledged DPDPA to begin with.

Here's what we have planned for technology professionals, AI & tech startups and MSMEs of Bharat and the Indo-Pacific: the Indian Society of Artificial Intelligence and Law (ISAIL) is launching aistandard.io - a repository of AI-related legal-ethical and policy standards with sector-neutral or sector-specific focus.
Members of ISAIL, and of specific committees, can contribute to AI standardisation by suggesting their inputs on standardising AI use cases, solutions and testing benchmarks (legal, policy, technical, or all of these). The ISAIL Secretariat will define a set of rules of engagement for professionals and businesses contributing to AI standardisation. You can also participate and become part of the aistandard.io community as an ISAIL member, via paid subscription at indian.substack.com or via manual request at executive@isail.co.in. The Indian Society of Artificial Intelligence and Law will also soon invite technology companies, MSMEs and startups to become its Allied Members.

This is why I am glad to state that the Indian Society of Artificial Intelligence and Law, in conjunction with Indic Pacific Legal Research LLP, will come up with relevant standards on AI use cases across certain key sectors in India: banking & finance, health, education, intellectual property management, agriculture and legal technologies. Our aim is to propose industry viability standards, not regulatory standards, to study basic parameters for regulation such as (1) the inherent purpose of AI systems, (2) market integrity (including competition law), (3) risk management, and (4) knowledge management. Indic Pacific will publish the Third Version of the AIACT.IN proposal shortly.

To begin with, we have defined certain principles of AI Standardisation, which may apply in every case. We have termed these the "ISAIL Principles of AI Standardisation, i.e., aistandard.io".

The ISAIL Principles of AI Standardisation

Principle 1: Sector-Neutral and Sector-Specific Applicability

AI standardization guidelines should be applicable across all sectors and industries, regardless of the size or economic status of the companies involved. However, they should also consider sector-specific requirements and use cases to ensure relevance and effectiveness.

Principle 2: Legal-Ethical and Technical Benchmarking

AI standardization should involve both legal-ethical and technical benchmarking. Legal-ethical benchmarking should focus on principles like privacy, fairness, and data quality, while technical benchmarking should enable tracking the evolution of AI technologies and their use cases.

Principle 3: Flexibility and Adaptability

The standardization approach should be flexible and adaptable to the evolving AI landscape in India, which is still in the scaling and R&D stages. The guidelines should not be rigid or absolutist, but should allow room for innovation and growth.

Principle 4: Credible Use Case Focus

The guidelines should prioritize credible and original AI use cases, and critically evaluate claims made by companies to avoid hype and misleading narratives. This will help ensure that the standardization efforts are grounded in practical realities.

Principle 5: Interoperability and Market Integration

AI standardisation should prioritize interoperability to ensure seamless integration of market practices and foster a free economic environment. Standards should be developed with due care to promote healthy competition and innovation while preventing market fragmentation.

Principle 6: Multistakeholder Participation and Engagement Protocols

The development of AI standards should involve active participation and collaboration from diverse stakeholders, including the science and research community, startups, MSMEs, industry experts, policymakers, and civil society.
However, such participation will be subject to well-defined protocols of engagement to ensure transparency, accountability, and fairness. The open-source or proprietary nature of engagement in any initiative will depend on these protocols.

Principle 7: Recording and Quantifying AI Use Cases

To effectively examine the evolution of AI as a class of technology, it is crucial to record and quantify AI use cases for systems, products, and services. This includes documenting the real features and factors associated with each use case. Both legal-ethical and technical benchmarking should be employed to assess and track the development and impact of AI use cases. (A minimal illustrative sketch of what such a use-case record could capture appears at the end of this insight, before the references.)

From VLiGTA App to IndoPacific App

We have transitioned our technology law, and law & policy repository / e-commerce platform, VLiGTA.App, to IndoPacific.App. We are thrilled to announce a significant evolution in our platform's journey. Say hello to indopacific.app, your essential app for mastering legal skills and insights. This change is driven by our commitment to making legal education more comprehensive and accessible to a broader audience, especially those in the tech industry and beyond.

Why the Change?

🔍 Enhanced Focus and Broader Audience
Our previous platform, vligta.app, was primarily focused on legal professionals. With indopacific.app, we are expanding our horizons to make legal knowledge relevant and accessible to tech professionals and other non-legal fields. Learn how legal skills can empower you, no matter your profession.

🌟 Alignment with Our New Vision and Mission
Our new main tagline, "Your essential app for mastering legal skills & insights," underscores our dedication to being the go-to resource for high-quality, practical legal education. Meanwhile, our supporting tagline, "Empower yourself with legal knowledge, tailored for tech and beyond," highlights our commitment to broader applicability and professional growth.

📈 Improved User Experience and Resources
Enjoy a revamped user interface, enhanced features, and a richer resource library. Dive into diverse content such as case studies, interactive modules, and expert talks that bridge the gap between legal concepts and practical application in various fields.

🌏 Reflecting a Global Perspective
The name indopacific.app signifies our goal to cater to a global audience, particularly in the dynamic and rapidly evolving regions of the Indo-Pacific. We aim to provide universally applicable legal education that transcends geographical and professional boundaries.

What to Expect?

All existing URLs from vligta.app will automatically redirect to the corresponding pages on indopacific.app, ensuring a seamless transition with no interruption in access to our resources. Join us on this exciting journey as we continue to empower professionals with essential legal skills and insights tailored for the tech industry and beyond.
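As flagged under Principle 7, here is a minimal, hypothetical sketch of what a recorded AI use-case entry could capture. The field names, categories and example values are purely illustrative assumptions on our part, not an ISAIL-prescribed schema.

```python
# Hypothetical sketch of a minimal AI use-case record, in the spirit of Principle 7.
# Field names and example values are illustrative assumptions, not an ISAIL standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    name: str                      # e.g. "Contract clause classifier"
    sector: str                    # e.g. "legal technologies"
    inherent_purpose: str          # what the system is actually built to do
    deployment_stage: str          # "research", "pilot", or "production"
    legal_ethical_checks: list = field(default_factory=list)   # privacy, fairness, data quality reviews
    technical_benchmarks: dict = field(default_factory=dict)   # metric name -> score
    recorded_on: date = field(default_factory=date.today)

record = AIUseCaseRecord(
    name="Contract clause classifier",
    sector="legal technologies",
    inherent_purpose="Flag indemnity and liability clauses for human review",
    deployment_stage="pilot",
    legal_ethical_checks=["DPDPA data-minimisation review", "bias audit on clause types"],
    technical_benchmarks={"f1_clause_detection": 0.87},
)
print(record)
```

A registry of such records, maintained per sector, is one way both legal-ethical checks and technical benchmarks could be tracked over time, though the exact format would need to be settled through the engagement protocols described above.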
🌐 References

[1] https://www.nature.com/articles/s41599-024-02647-9
[2] https://law.asia/navigating-ai-india/
[3] https://sageuniversity.edu.in/blogs/impact-of-artificial-intelligence-on-employment
[4] https://morungexpress.com/absence-of-dedicated-legal-framework-a-challenge-for-ai-regulations
[5] https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
[6] https://www.pwc.in/assets/pdfs/consulting/technology/data-and-analytics/artificial-intelligence-in-india-hype-or-reality/artificial-intelligence-in-india-hype-or-reality.pdf
[7] https://www.ris.org.in/sites/default/files/Publication/Policy%20brief-104_Amit%20Kumar.pdf
[8] https://www.livelaw.in/articles/artificial-intelligence-india-lacks-clear-ip-laws-around-ai-results-249693
[9] https://juriscentre.com/2023/12/14/lack-of-laws-governing-ai-in-india-focus-on-deepfake/
[10] https://www.barandbench.com/law-firms/view-point/artificial-intelligence-the-need-for-development-of-a-regulatory-framework
[11] https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/
[12] https://www.spotdraft.com/blog/engaging-stakeholders-in-ai-use-policy-development
[13] https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives
[14] https://www.publicissapient.com/insights/AI-hype-or-reality
[15] https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
[16] https://www.sciencedirect.com/science/article/pii/S266672152200028X
[17] https://www.cnbctv18.com/technology/wef-2023-gita-gopinath-says-ai-could-impact-30-of-jobs-in-india-18816901.htm
[18] https://nasscom.in/knowledge-center/publications/ai-beyond-myth-hype
[19] https://accesspartnership.com/the-key-policy-frameworks-governing-ai-in-india/
[20] https://indiaai.gov.in/article/ai-impact-on-india-jobs-and-employment

  • Microsoft's Calculated 'Competition' with OpenAI

    Recent developments have shed new light on the complex relationship between Microsoft and OpenAI, two significant players in the artificial intelligence (AI) sector. While the companies have maintained a collaborative partnership, Microsoft's 2024 annual report reveals a more nuanced dynamic, explicitly acknowledging areas of competition between the two entities. This insight aims to examine the current state of affairs between Microsoft and OpenAI, analyzing their partnership, areas of competition, and the potential implications for the broader AI industry. By exploring official statements, financial reports, and market trends, we can gain a clearer understanding of how these two influential organizations are positioning themselves in the rapidly evolving AI landscape. Key points of discussion include: the nature of Microsoft's investment in OpenAI and their collaborative efforts; specific areas where the companies now compete, as outlined in Microsoft's annual report; the strategic implications for both companies and the AI industry at large; and potential future scenarios for the Microsoft-OpenAI relationship.

The Microsoft-OpenAI Collaboration Timeline

The relationship between Microsoft and OpenAI has been marked by significant investments and collaborations, evolving from a strategic partnership to a more complex dynamic over the years.

Initial Investment and Collaboration (2019-2022)

Microsoft's involvement with OpenAI began in 2019 with a $1 billion investment, aimed at developing artificial general intelligence (AGI) with OpenAI exclusively using Microsoft's Azure cloud services. This initial phase focused on joint research and development, with Microsoft gaining the right to commercialize resulting technologies. In 2020, Microsoft announced an exclusive license to GPT-3, OpenAI's large language model, further cementing their collaboration. This move allowed Microsoft to integrate GPT-3 capabilities into its own products and services.

Expanded Investment and Integration (2023)

In January 2023, Microsoft significantly increased its stake in OpenAI with a reported $10 billion investment. This multi-year agreement expanded their partnership, with Microsoft providing advanced supercomputing systems and cloud infrastructure to support OpenAI's research and products.

Emerging Competitive Dynamics (2024)

Despite their close partnership, Microsoft's 2024 annual report explicitly acknowledged competition with OpenAI in certain AI services and search markets. This admission highlights the complex nature of their relationship, where collaboration and competition coexist.

Current State of the Partnership

As of 2024, the Microsoft-OpenAI partnership remains strategically important for both companies. Microsoft continues to be OpenAI's exclusive cloud provider, while also integrating OpenAI's technologies into its products. However, the acknowledgment of competition suggests a nuanced relationship where both entities are positioning themselves in the rapidly evolving AI market. The partnership faces potential challenges, including regulatory scrutiny and the need to balance collaborative efforts with individual corporate interests. As the AI landscape continues to evolve, the dynamics between Microsoft and OpenAI may further shift, reflecting the high stakes and competitive nature of the AI industry.
Microsoft's Acknowledgment of OpenAI as a Competitor

In a significant shift from their previously collaborative stance, Microsoft has explicitly recognized OpenAI as a competitor in their 2024 annual report. This acknowledgment, found in the company's Form 10-K, marks a pivotal moment in the evolving landscape of artificial intelligence (AI) and cloud services.

Competitive Dynamics in AI and Cloud Services

Microsoft's declaration reflects the rapidly changing dynamics in the AI industry. While the two companies have maintained a strong partnership, with Microsoft investing billions in OpenAI, the acknowledgment of competition suggests a more complex relationship moving forward. This shift is indicative of the high stakes involved in the AI race, where even close collaborators can find themselves vying for market share.

Strategic Implications

Increased AI Investments: Microsoft reported a 9% increase in research and development expenses, reaching $29.5 billion, with a significant portion dedicated to cloud engineering and AI investments. This substantial commitment underscores Microsoft's determination to maintain a competitive edge in AI technologies.

Product Integration: The company is aggressively integrating AI capabilities across its product lines, including Office 365, Bing, and LinkedIn. This strategy aims to differentiate Microsoft's offerings and potentially lock in customers to their AI-enhanced ecosystem.

Cloud Service Differentiation: Microsoft is leveraging AI to set its cloud services apart, particularly in Azure. This focus on AI-driven differentiation could lead to increased customer retention and attraction, directly competing with OpenAI's offerings.

Market Recognition and Challenges

Microsoft's Form 10-K also acknowledges the highly competitive nature of the AI market, with rapid evolution and new entrants constantly emerging. This recognition extends to potential challenges and risks associated with AI development, including unintended use of AI technologies, the need for responsible AI practices, and potential regulatory challenges.

Long-term Commitment to AI

Despite the competitive stance, Microsoft's financial results show strong growth in cloud services, which they expect to further enhance through AI integration. The company's active development of AI infrastructure and training capabilities indicates a long-term commitment to remaining at the forefront of AI technology, even as it navigates a more competitive relationship with OpenAI.

Competition Law Implications

Microsoft's explicit recognition of OpenAI as a competitor in its 2024 annual report marks a significant shift in the artificial intelligence (AI) competitive landscape. This acknowledgment has several important implications from a competition law perspective:

Collaborative Competition: The acknowledgment highlights the complex nature of Microsoft's relationship with OpenAI, which involves both collaboration (through significant investments) and competition. This "coopetition" model may attract scrutiny from competition authorities concerned about potential collusion or market allocation.

Merger and Acquisition Implications: This competitive stance could affect how regulators view any future acquisitions or deeper integrations between Microsoft and OpenAI, potentially raising concerns about market consolidation.
Data and Resource Access: Competition authorities may examine whether Microsoft's dual role as an investor in and competitor to OpenAI provides it with unfair advantages in terms of data access or computational resources.

Vertical Integration Concerns: As Microsoft integrates AI capabilities across its product lines, regulators may scrutinize whether this vertical integration creates barriers to entry for other AI competitors.

However, as pointed out by Matt Trifiro in his LinkedIn post, this development is likely to be just the beginning of a broader trend in the tech industry, particularly in the cloud and AI sectors. Two major shifts are already becoming apparent.

A New Wave of Cloud Differentiation and Lock-In

Microsoft is observing a significant change in how major cloud providers, including itself, are approaching the market. This shift represents a departure from the previous decade's trend of convergence in cloud functionality. Key aspects of this shift include:

Exclusive AI Capabilities: Cloud providers are now focusing on developing and offering unique AI-powered features and services. These exclusive capabilities are designed to set each provider apart in an increasingly competitive market.

Proprietary AI Ecosystems: The goal is to create AI-centric environments that are unique to each cloud provider. This strategy aims to increase customer dependency on specific platforms, making it more challenging for clients to switch providers.

Reversal of Multi-Cloud Trends: Previously, there was a move towards making cloud services more interoperable and supporting multi-cloud strategies. The new approach may make it more difficult for enterprises to maintain a multi-cloud environment.

Impact on Enterprise Customers: Businesses may find themselves increasingly tied to a single cloud provider's AI ecosystem. This could lead to reduced flexibility but potentially deeper integration and more advanced AI capabilities.

Microsoft's Strategy: As evidenced in the Form 10-K, Microsoft is heavily investing in AI across all segments. The company is integrating AI capabilities into products like Azure, Office 365, and Dynamics 365. This aligns with the broader trend of creating a more differentiated and potentially "sticky" cloud ecosystem.

An Explosion of Specialized Cloud Providers

The second major shift involves the emergence of highly specialized AI cloud service providers. This trend is reshaping the competitive landscape in cloud computing and AI services. Key aspects of this shift include:

Niche AI Service Providers: New players are entering the market with highly specialized AI cloud services. Examples mentioned include CoreWeave for AI training and Zero Gap AI for inferencing.

Unique Capabilities: These specialized providers offer capabilities that major cloud platforms may struggle to match quickly. They often focus on specific aspects of AI, such as training models or optimizing inference.

Physical Infrastructure Advantages: Some of these providers have unique physical assets that give them an edge. For instance, Zero Gap AI's urban fiber and Point of Presence (POP) footprint is mentioned as a hard-to-replicate advantage.

Market Impact: These specialized providers are expected to capture significant portions of the growing AI cloud services market. They may pose a challenge to more generalized cloud providers in specific AI-related niches.

Microsoft's Response: The Form 10-K indicates that Microsoft is aware of this trend and its potential impact.
Microsoft is investing heavily in AI infrastructure and training, likely to compete with these specialized providers. The company's strategy includes both broadening its general AI capabilities and developing more specialized services.

Potential for Partnerships or Acquisitions: While not explicitly stated, this trend could lead to partnerships between major cloud providers and specialized AI companies. It might also drive acquisitions as larger companies seek to incorporate specialized AI capabilities.

Conclusion

These two shifts represent a significant evolution in the cloud and AI landscape. Microsoft's Form 10-K reflects an awareness of these changes and outlines strategies to adapt and compete in this new environment. The company's focus on AI integration across its product lines, substantial investments in AI infrastructure, and recognition of the competitive threat from both major cloud providers and specialized AI companies indicate a comprehensive approach to addressing these market shifts.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • AI-Generated Texts and the Legal Landscape: A Technical Perspective

    Artificial Intelligence (AI) has significantly disrupted the competitive marketplace, particularly in the realm of text generation. AI systems like ChatGPT and Bard have been used to generate a wide array of literary and artistic content, including translations, news articles, poetry, and scripts[8]. However, this has led to complex issues surrounding intellectual property rights and copyright laws[8].

Copyright Laws and AI-Generated Content

AI-generated content is produced by an inert entity using an algorithm, and therefore, it does not traditionally fall under copyright protection[8]. However, the U.S. Copyright Office has recently shown openness to granting ownership to AI-generated work on a "case-by-case" basis[5]. The key factor in determining copyright is the extent to which a human had creative control over the work's expression[5]. The AI software code itself is subject to copyright laws, and this includes the copyrights on the programming code, the machine learning model, and other related aspects[8]. However, the classification of AI-generated material, such as writings, text, programming code, pictures, or images, and their eligibility for copyright protection is contentious[8].

Legal Challenges and AI

The New York Times (NYT) has recently sued OpenAI and Microsoft for copyright infringement, contending that millions of its articles were used to train automated chatbots without authorization[2]. OpenAI, however, has argued that using copyrighted works to train its technologies is fair use under the law[6]. This case highlights the ongoing legal battle over the unauthorized use of published work to train AI systems[2].

Paraphrasing and AI

Paraphrasing tools, powered by AI, have become increasingly popular. These tools can rewrite, enhance, and repurpose content while maintaining the original meaning[7]. However, the use of such tools has raised concerns about the potential for copyright infringement and plagiarism. To address this, it is suggested that heuristic and semantic protocols be developed for accepting and rejecting AI-generated texts[3]. AI-based paraphrasing tools, such as Quillbot and SpinBot, offer the ability to rephrase text while preserving the original meaning. These tools can be beneficial for students and professionals alike, aiding in the writing process by providing alternative expressions and avoiding plagiarism. However, the accuracy and ethical use of these tools are concerns. For example, a student might use an AI paraphrasing tool to rewrite an academic paper, but without a deep understanding of the content, the result could be a superficial or misleading representation of the original work. This raises questions about the integrity of the paraphrased content and the student's learning process. It is crucial to develop guidelines for the ethical use of paraphrasing tools, ensuring that users engage with the original material and properly attribute sources to maintain academic and professional standards.

Citation and Referencing in the AI Era

The advent of AI-generated texts has necessitated a change in the concept of citation and referencing. Currently, the American Psychological Association (APA) recommends that text generated from AI be formatted as "Personal Communication," receiving an in-text citation but not an entry on the References list[4]. However, as AI-generated content becomes more prevalent, the nature of primary and secondary sources might change, and the traditional system of citation may need to be permanently altered.
For instance, the Chicago Manual of Style advises treating AI-generated text as personal communication, requiring citations to include the AI's name, the prompt description, and the date accessed. However, this approach may not be sufficient as AI becomes more prevalent in content creation. Hypothetically, consider a scenario where a researcher uses an AI tool to draft a section of a literature review. The current citation standards would struggle to accurately reflect the AI's contribution, potentially leading to issues of intellectual honesty and academic integrity. As AI-generated content becomes more sophisticated, the distinction between human and AI authorship blurs, prompting a need for new citation frameworks that can accommodate these changes.

Content Protection and AI

The rise of AI has also raised concerns about the protection of gated knowledge and content. Publishing entities like NYT and Elsevier may need to adapt to the changing landscape[1]. The protection of original content in the age of AI is a growing concern, especially for publishers and content creators. The New York Times' lawsuit against OpenAI over the use of its articles to train AI models without permission exemplifies the legal challenges in this domain. To safeguard content, publishers might consider implementing open-source standards for data scraping and human-in-the-loop grammatical protocols. Imagine a small online magazine that discovers its articles are being repurposed by an AI without credit or compensation. To combat this, the magazine could employ open-source tools to track the use of its content and ensure that any AI-generated derivatives are properly licensed and attributed, thus maintaining control over its intellectual property. (A rough sketch of one such lightweight check appears at the end of this insight, before the references.)

The rapid advancement of AI technologies has brought about significant changes in the legal and technical landscape. As AI continues to evolve, it is crucial to address the legal implications of AI-generated texts and develop protocols to regulate their use. This will ensure the protection of intellectual property rights while fostering innovation in AI technologies.
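As mentioned in the content-protection discussion above, here is a rough, hedged Python sketch of the kind of lightweight check a small publisher could run over its web-server access logs to see how often known AI crawlers request its pages. The user-agent list and the log path are illustrative assumptions that would need to be kept current against each operator's published crawler documentation; this is a sketch, not a complete content-protection tool.

```python
# Illustrative sketch: count requests from known AI crawlers in an access log.
# The user-agent list is a non-exhaustive assumption and changes over time;
# publishers should verify each operator's published crawler documentation.
import re
from collections import Counter

AI_CRAWLER_AGENTS = ["GPTBot", "CCBot", "ClaudeBot", "PerplexityBot"]

def count_ai_crawler_hits(log_path: str) -> Counter:
    """Scan a plain-text access log and tally hits per known AI crawler agent."""
    hits = Counter()
    pattern = re.compile("|".join(AI_CRAWLER_AGENTS))
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                hits[match.group(0)] += 1
    return hits

if __name__ == "__main__":
    # "access.log" is a placeholder path for a standard web-server log file.
    for agent, count in count_ai_crawler_hits("access.log").most_common():
        print(f"{agent}: {count} requests")
```

Such a tally does not by itself prove training use, but it gives a publisher an evidentiary starting point when deciding whether to restrict crawlers or pursue licensing.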
References

[1] https://builtin.com/artificial-intelligence/ai-copyright
[2] https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
[3] https://www.semrush.com/goodcontent/paraphrasing-tool/
[4] https://dal.ca.libguides.com/CitationStyleGuide/citing-ai
[5] https://mashable.com/article/us-copyright-law-ai-generated-content
[6] https://www.nytimes.com/2024/01/08/technology/openai-new-york-times-lawsuit.html
[7] https://www.hypotenuse.ai/paraphrasing-tool
[8] https://www.legal500.com/developments/thought-leadership/legal-issues-with-ai-generated-content-copyright-and-chatgpt/
[9] https://www.cnbc.com/2024/01/08/openai-responds-to-new-york-times-lawsuit.html
[10] https://www.copy.ai/tools/paraphrase-tool
[11] https://www.techtarget.com/searchcontentmanagement/answer/Is-AI-generated-content-copyrighted
[12] https://www.theverge.com/2024/1/8/24030283/openai-nyt-lawsuit-fair-use-ai-copyright
[13] https://originality.ai/ai-paraphraser
[14] https://www.reddit.com/r/selfpublishing/comments/znlqla/what_is_the_legality_of_ai_generated_text_for/
[15] https://theconversation.com/how-a-new-york-times-copyright-lawsuit-against-openai-could-potentially-transform-how-ai-and-copyright-work-221059
[16] https://ahrefs.com/writing-tools/paraphrasing-tool
[17] https://www.jdsupra.com/legalnews/relying-on-ai-generated-text-and-images-9943106/
[18] https://apnews.com/article/nyt-new-york-times-openai-microsoft-6ea53a8ad3efa06ee4643b697df0ba57
[19] https://quillbot.com
[20] https://crsreports.congress.gov/product/pdf/LSB/LSB10922
[21] https://www.reuters.com/legal/transactional/ny-times-sues-openai-microsoft-infringing-copyrighted-work-2023-12-27/
[22] https://www.paraphraser.io
[23] https://www.pcmag.com/news/ai-generated-content-and-the-law-are-you-going-to-get-sued
[24] https://pressgazette.co.uk/media_law/new-york-times-open-ai-microsoft-lawsuit/
[25] https://textflip.ai

  • Abhivardhan representing Indic Pacific at Startup20 (G20 Brazil 2024) Meeting

    Our Founder and Managing Partner, Abhivardhan, represented Indic Pacific Legal Research LLP at a recent technical meeting held by Startup20 Brasil, an engagement group on innovation, entrepreneurship & collaboration under G20 Brasil 2024. Our position at Indic Pacific Legal Research has been clear since inception: national artificial intelligence strategies across the world need a specific focus so that stakeholders in the public sector implement them with a sense of parity. Abhivardhan was delighted to point out the following integral issues with the state of implementation of many national AI strategies: insufficient focus on capacity building for AI proliferation, commercialization and integration; limited recognition of diverse AI use cases and local contexts; the unclear and incapacitated role of regional and local government institutions in Responsible AI governance; and dichotomous challenges around AI deployment and development. You can find many strategic legal insights on artificial intelligence policy by Indic Pacific at indopacific.app/store.
