

Search Results

119 results found

  • Book Review: Taming Silicon Valley by Gary Marcus

This is a review of "Taming Silicon Valley: How Can We Ensure AI Works for Us", authored by Dr Gary Marcus. Dr Marcus is Professor Emeritus of Psychology and Neural Science at New York University, US, and a leading voice in the global artificial intelligence industry, especially in the United States. One may agree or disagree with his assessments of Generative AI use cases and trends, but his erudite points must be considered to understand how AI trends around Silicon Valley are documented and understood, beyond the book's intrinsic focus on industry & policy issues around artificial intelligence. At its best, the book offers an opportunity to dive into the introductory problems of the global AI ecosystem, of Silicon Valley, and in some instances even beyond.

Mapping the Current State of 'GenAI' / RoughDraft AI

In this part of the book, Dr Marcus provides essential examples of how Generative AI (GenAI) solutions appear appealing yet suffer from significant reliability and trust issues. He begins by demonstrating how most Business-to-Consumer (B2C) GenAI 'solutions' look appealing, allowing readers to explore basic examples of prompts and AI-generated content to understand the 'appealing' element of any B2C GenAI tool, be it in text or visuals. The author compares the 'Henrietta Incident', where a misleading point about Dr Marcus led a GenAI tool to produce a plausible but error-riddled output, with an LLM alleging Elon Musk's 'death' by conflating his ownership of Tesla Motors with Tesla driver fatalities. These examples highlight the shaky ground of GenAI tools in terms of reliability and trust, which many technology experts, lawyers, and policy specialists have not focused on, despite the obvious references to these errors.

The 'Chevy Tahoe' and 'BOMB' examples are fascinating, showing how GenAI tools consume inputs without understanding their own outputs. Even where interpretive issues are patched, ancillary problems persist. The 'BOMB' example demonstrates how masked writing can bypass guardrails, as these tools fail to recognise when their guardrails are being circumvented. The author responsibly avoids treating guardrails around LLMs and GenAI as perfect; many technology lawyers and specialists worldwide have misled people about those guardrails' potential. The UK Government's International Scientific Report at the Seoul AI Summit in May 2024 echoed the author's views, noting the ineffectiveness of existing GenAI guardrails. The book makes it easy to understand the hyped-up expectations associated with GenAI and the consequences that follow. The author's decision not to over-explain or oversimplify the examples makes the content more accessible and engaging for readers.

The Threats Associated with Generative AI

The author provides interesting quotations from the Russian Federation Government's Defence Ministry and Kate Crawford of the AI Now Institute as he offers a breakdown of the 12 biggest immediate threats of Generative AI. One important and underrated area of concern addressed in these sections is medical advice. Apart from deepfakes, the author's reference to how LLM responses to medical questions were highly variable and inaccurate was necessary to discuss.
This reminds us of a trend among influencers of converting their B2C-level content to handle increased consumer/client consulting queries, which could create a misinformed or disinformed engagement loop between the specialist/generalist and the potential client. The author impressively refers to the problem of accidental misinformation, pointing out the 'Garbage-in-Garbage-Out' problem, which could drive internet traffic, especially in technical domains like STEM. The mention of citation loops built on non-existent case law shows how Generative AI can promote a vicious and mediocre citation loop on any topic if not handled correctly. In addition, the author raises an important concern around defamation risks with Generative AI. The fabrication of content used to prove defamation creates a legal dilemma, as courts may struggle to determine who should be subject to legal recourse. The book is a must-read for all major stakeholders in the Bar and Bench to understand the 'substandardism' associated with GenAI and its legal risks. The author's reference to Donald Rumsfeld's "known knowns, known unknowns, and unknown unknowns" quote frames the potential risks associated with AI, particularly those we may not yet be aware of. Interestingly, Dr Marcus debunks myths around 'literal extinction' and 'existential risk', explaining that merely imparting malignant training to ChatGPT-like tools does not give them the ability to develop 'genuine intentions'. He responsibly points out the risks of half-baked ideas like text-to-action engineering second and third-order effects out of algorithmic activities enabled by Generative AI, making this book a fantastic explainer of the 12 threats of Generative AI.

The Silicon Valley Groupthink and What it Means for India

[While the sections covering Silicon Valley in this book do not explicitly discuss the Indian AI ecosystem in depth, I have pointed out some broad parallels, which may be relatable to a limited extent.]

The author refers to the usual hypocrisies associated with the United States-based Silicon Valley. Throughout the book, Dr Marcus refers to the work of Shoshana Zuboff and the problem of surveillance capitalism, largely associated with the FAANG companies of North America, notably Google, Meta, and others. He provides a polite yet critical review of the promises held out by companies like OpenAI and others in the larger AI research and B2C GenAI segments. The Apple-Facebook differences emphasised by Dr Marcus are intriguing. The author highlights a key point made by Frances Haugen, a former Facebook employee turned whistleblower, about the stark contrast between Apple and Facebook in terms of their business practices and transparency. Haugen argues that Apple, selling tangible products like iPhones, cannot easily deceive the public about its offerings' essential characteristics. In contrast, Facebook's highly personalised social network makes it challenging for users to assess the true nature and extent of the platform's issues.

Regarding OpenAI, the author points out how the 'profits, schmofits' problem around high valuations led companies like OpenAI and Anthropic to give up on their safety goals around AI building. Even in the name of AI Safety, the regurgitated 'guardrails' and measures have not necessarily advanced the goals of true AI Safety all that well. This is why building AI Safety Institutes across the world (as well as something along the lines of CERN, as recommended by the author) becomes necessary.
The author makes a reasonable assessment of the over-hyped, messianic narrative built by Silicon Valley players, highlighting how a loop of overpromising has largely guided the narrative so far. He mentions the "Oh no, China will get to GPT-5" myth spread across quarters in Washington DC, which relates to hyped-up conversations on AI and geopolitics in the Indo-Pacific, India, and the United States. The author's relatable points around 'slick video' marketing and the abstract notion that 'money gives them immense power' remind me of the discourse around the Indian Digital Competition Bill. In India, the situation becomes dire because most of the FAAMG companies on the B2C side have invested their resources in such a way that even if they are not profiting enough in some sectors, they earn well by selling Indian data and providing the relevant technology infrastructure. Dr Marcus also points out the intellectual failures of science-popularising movements like effective accelerationism (e/acc). While e/acc can still be a subject of interest and awe, its zero-sum mindset does not make sense in the long run. The author calls out the problems in the larger Valley-based accelerationist movements. To conclude this section, I would recommend going through a sensible response given by the CEO of Honeywell, Vimal Kapur, on how AI tools might affect less noticed domains such as aerospace & energy. I believe readers might feel even more excited to read this incredible book.

Remembering the 19th Century and the Insistence to Regulate AI

The author's reference to quotes by Tom Wheeler and Madeleine Albright reminds me, on a lighter note, of a quote from former UK Prime Minister Tony Blair: "My thesis about modern politics is that the key political challenge today is the technological revolution, the 21st century equivalent of the 19th century Industrial Revolution. And politics has been slow to catch up." While Blair's point is largely political, the quotes by Wheeler and Albright speak to interesting commonalities between the 19th and 21st centuries. The author provides a solid basis for why copyright laws matter when data scraping techniques in the GenAI ecosystem do not respect the autonomy and copyrights of the authors whose content is consumed and ingested. The reference to quotes from Ed Newton-Rex and Pete Dietert on the GenAI-copyright issue highlights the ethical and legal complexities surrounding the use of creative works in training generative AI models. Dr Marcus emphasises the urgent need for a more nuanced and ethical approach to AI development, particularly in the realm of the creative industries. The author uses these examples to underscore a critical point: the current practices of many AI companies in harvesting and using creative works without proper permission or compensation are ethically questionable and potentially exploitative. Pete Dietert's stark warning about "digital replicants" amplifies the urgency of addressing these issues, extending the conversation beyond economic considerations to fundamental human rights, as recognised in the UNESCO Recommendation on the Ethics of AI of 2021. Dr Marcus points out how the 'Data & Trust Alliance' webpage features appealing privacy and data protection-related legal buzzwords, but the details help shield companies more than they protect consumers.
Such attempts at subversion are being made in Western Europe, North America, and even parts of the Indo-Pacific region, including India. The author focuses on algorithmic transparency and source transparency among the list of demands people should make. He refers to the larger black box problem as the core basis for legally justifying why interpretability measures matter. With respect to consumer law and human rights, AI interpretability (Explainable AI) becomes necessary, including a pre-launch gestation phase to assess whether the activities regularly visible in an AI system can actually be interpreted. On source transparency, the author points out the role of content provenance (labelling) in enabling distinguishability between human-created content and synthetic content, so that the tendency to create "counterfeit people" is prevented and discouraged. The author also refers to the problem of anthropomorphism, where many AI systems create a counterfeit perception among human beings and, via impersonation, could potentially degrade their cognitive abilities. Among the eight suggestions made by Dr Marcus on how people can make a difference in improving AI governance, he makes the reasonable point that voluntary guidelines must be negotiated with major technology companies. In the case of India, there have been some self-regulatory attempts, like a non-binding AI Advisory in March 2024, but more consistent efforts could be implemented, starting with voluntary guidelines, with sector-specific & sector-neutral priorities.

Conclusion

Overall, Dr Gary Marcus has written an excellent primer on truly 'taming' Silicon Valley, in the simplest way possible, for anyone unaware of the technical and legal issues around Generative AI. The book also offers a glance at digital competition policy measures and the effective use of consumer law frameworks where competition policy remains ineffective. It is not necessarily a detailed documentation of the state of AI hype; however, the examples and references mentioned in the book are enough for researchers in law, economics and policy to trace the problems associated with the American and global AI ecosystems.

  • The 'Algorithmic' Sophistry of High Frequency Trading in India's Derivatives Market

The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of October 2024.

A recent study conducted by the country's market regulator, the Securities and Exchange Board of India (SEBI), shed light on stark disparities in the equity derivatives market. Per the study, proprietary traders and foreign funds using algorithmic trading earned gross profits totalling ₹588.4 billion ($7 billion) from trading in Indian equity derivatives in the financial year that ended in March 2024. [1] In stark contrast, individual traders faced monumental losses. The study further detailed that almost 93% of individual traders suffered losses in the Equity Futures and Options (F&O) segment over the preceding three years, that is, financial years 2022 to 2024, with aggregate losses exceeding ₹1.8 lakh crore. Notably, in FY 2023-24 alone, the net losses incurred by individual traders approximated ₹75,000 crore. SEBI's findings underscore the challenges individual traders face when competing against more technologically advanced, well-funded entities in the derivatives market. The insight clearly contends that institutional entities that have adopted algo-trading strategies hold a clear competitive edge over those who lack them, i.e., individual traders.

Understanding the Intricacies of Algorithmic Trading

High-frequency trading (HFT) is the latency-sensitive subset of algorithmic trading, executed through automated platforms. It is facilitated by advanced computational systems capable of executing large orders at speeds, and at prices, that human traders cannot match. The dominance of algorithms in global financial markets has grown exponentially over the past decade. HFT algorithms aim to execute trades within fractions of a second. This high-speed computational capability places institutional investors on a higher, more profitable pedestal than individual traders, who typically rely on manual trading strategies and consequently lack access to sophisticated analytics and real-time trading systems. Furthermore, HFT allows traders to trade large volumes of shares frequently by processing marginal price differences within a split second, thereby ensuring accuracy in trade execution and enhancing market liquidity. The same premise plays out in the Indian equity derivatives market, with HFT firms reaping substantial profits. The study conducted by India's market regulator sheds light on the contrasting gains and losses of institutional and individual traders respectively. This insight examines the sophistries in the competitive dynamics of the country's derivatives market and its superficial regulation of manual and computational trading.
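To make the latency point concrete, here is a minimal, purely illustrative Python sketch of the kind of decision loop an HFT system automates: compare quotes across venues, act only on a sufficient price gap, and only if the decision is reached within a sub-millisecond budget. The quote feed is simulated, and the venue names, thresholds, and functions are invented for illustration; real systems involve exchange connectivity, risk checks, and co-location far beyond this sketch.

```python
import random
import time

# Purely illustrative sketch of latency-sensitive trading logic.
# The quote source is simulated; none of this reflects a real exchange API.

LATENCY_BUDGET = 0.001   # seconds: HFT decisions happen in sub-millisecond windows
MIN_EDGE = 0.05          # smallest cross-venue price gap considered worth trading


def get_quote(venue: str, symbol: str) -> float:
    """Simulated market data feed returning a price near 100."""
    return 100.0 + random.uniform(-0.1, 0.1)


def check_for_arbitrage(symbol: str) -> str:
    start = time.perf_counter()
    bid_a = get_quote("VENUE_A", symbol)   # best bid on one venue
    ask_b = get_quote("VENUE_B", symbol)   # best ask on another venue
    elapsed = time.perf_counter() - start

    # Trade only if the gap is large enough AND the decision was reached
    # within the latency budget; a slower (manual) trader misses the window.
    if bid_a - ask_b >= MIN_EDGE and elapsed <= LATENCY_BUDGET:
        return f"BUY {symbol} on VENUE_B at {ask_b:.2f}, SELL on VENUE_A at {bid_a:.2f}"
    return "no opportunity"


if __name__ == "__main__":
    print(check_for_arbitrage("NIFTY-FUT"))
```

The point of the sketch is the time budget: a manual trader cannot evaluate and act within such a window, which is the structural advantage the SEBI study describes.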
The Competitive Landscape of the Derivatives Market: The Odds Stacked Against Individual Traders

The study revealed the disadvantageous plight of retail traders, with nine out of every ten retail traders having incurred losses over the preceding three financial years. This raises a contentious debate about the viability of individual traders within the market dynamics of the derivatives landscape. Without the requisite support and resources, individual traders will find it difficult to sustain themselves, especially against the backdrop of the growing trend of algorithmic trading. HFT has been criticised by several professionals for unbalancing the playing field of the derivatives market. Other disadvantages brought forth by such trading mechanisms include:

Market noise
Price volatility
The need to strengthen surveillance mechanisms
Heavier imposition of costs
Market manipulation and consequent disruption of the structure of capital markets

The Need to Regulate the Technological 'Arms Race' in Trading

Given the evident differences between these trading mechanisms, there is a pressing need to improve trading tools and ensure easier access to related educational resources for individual investors. SEBI, India's capital market regulator, has both the prerogative and the obligation to address such disparities. In 2016, SEBI released a discussion paper that attempted to address various issues relating to HFT mechanisms, with the aim of instituting an equitable and fair marketplace for every stakeholder involved. SEBI proposed a shared 'co-location facility' that would not allow the installation of individual servers. This proposal aims to reduce the latency of access to the trading system and to provide a tick-by-tick data feed free of cost to all trading stakeholders. SEBI further proposed a review of trading requirements concerning the usage of algo-trading software, mandating stock exchanges to strengthen the regulatory framework for algo-trading and to institute a simulated market environment for initial testing of such software prior to its real-time application. [2] In addition, SEBI has undertaken a slew of measures to regulate algo-trading and HFT. These include [3]:

A minimum resting time for stock orders
A maximum order-message-to-trade ratio
Randomisation of stock orders and a review system for the tick-by-tick data feed
Congestion charges to reduce the load on the market

Thus, despite the largely unregulated stride of HFT in India, SEBI has overarching authority over it through the provisions of the SEBI Act, 1992. However, that authority remains rudimentary in practice and continues to permit an age of unhealthy competitiveness among traders in the capital market.

References

[1] Newsdesk, 'High speed traders reap $7bn profit from India's options market', https://www.thenews.com.pk/print/1233452-high-speed-traders-reap-7bn-profit-from-india-s-options-market (last visited 6 October 2024).
[2] Amit K Kashyap et al., 'Legality and issues relating to HFT in India', Taxmann, https://www.taxmann.com/research/company-and-sebi/top-story/105010000000017103/legality-and-issues-related-to-high-frequency-trading-in-india-experts-opinion (last visited 6 October 2024).
[3] Id.

  • New Report: Legal-Economic Issues in Indian AI Compute and Infrastructure [IPLR-IG-011]

We are thrilled to announce the release of our latest report, "Legal-Economic Issues in Indian AI Compute and Infrastructure" [IPLR-IG-011], authored by the talented duo of Abhivardhan (our Founder) and Rasleen Kaur Dua (former Research Intern, the Indian Society of Artificial Intelligence and Law). This comprehensive study delves into the intricate challenges and opportunities that shape India's AI ecosystem. Our aim is to provide valuable insights for policymakers, entrepreneurs, and researchers navigating this complex landscape. 🧭💡 Read the complete report at https://indopacific.app/product/iplr-ig-011/

Key Highlights

🖥️ Impact of Compute Costs on AI Development in India: Examining the AI compute landscape and the role of compute costs in India's AI development.
🏗️ AI-associated Challenges to Fair Competition in the Startup Ecosystem: Analysing how compute costs and access to public computing infrastructure influence AI development, particularly for startups and small enterprises.
🌐 Addressing Tech MNCs under Indian Competition Policy: Exploring how Indian competition and industrial policy on digital technology MNCs affects regulation and innovation.
🤝 India's Role in Global AI Trade and WTO Agreements: Investigating India's stance on international trade policies related to AI, the impact of WTO agreements, and the potential for sector-specific AI trade agreements.
📈 Key Recommendations and Strategies for India's AI Development: Offering actionable recommendations for enhancing India's AI development both domestically and in the context of global trade policies.

As India strives to become a global AI powerhouse, it is crucial to address the legal and economic implications of this transformative technology. Our report aims to contribute to this important discourse and provide a roadmap for inclusive and sustainable AI development. 🌍💫 We invite you to download the full report from our website and join the conversation. Share your thoughts, experiences, and insights on the challenges and opportunities facing India's AI ecosystem. Together, we can shape a future where AI benefits all. 🙌💬 Stay tuned for more cutting-edge research and analysis from Indic Pacific Legal Research. 📣🔍

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Easing Regulatory Search in a Project with People+AI

    I’m excited to share a major update on a positive note! For the past few months, I’ve been contributing as a Working Group Partner for Indic Pacific Legal Research LLP, collaborating with People+ai  on a transformative project involving AI and Knowledge Graphs . This initiative aims to streamline the search for legal instruments, specifically subordinate legislations, from Indian regulators. Today, I’m thrilled to share the first outcomes of this ongoing project. I’ve co-authored an introductory article with Saanjh Shekhar , alongside contributions from Varsha Yogish  and Praveen Kasmas . It explores how AI-powered knowledge graph technology can potentially answer legal queries by analyzing complex frameworks, such as those from the RBI. Navigating India’s legal landscape through public search platforms can be a daunting task due to fragmented documentation and inconsistent categorization. This project offers a solution to these challenges, making legal research more efficient for professionals and the public alike. Check out the article here: https://shorturl.at/B5gVW . A huge thanks to Kapil Naresh, Anshul Chaudhary, Sonali Patil Pawar, Aniket Raj , and Varsha  for their valuable testimonials and support. Your feedback is always welcome!
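To illustrate the knowledge-graph idea behind the project, here is a minimal Python sketch (using networkx) of how regulatory instruments, regulators, and topics can be linked so that a simple legal query becomes a graph traversal. The instruments and relations below are invented placeholders for illustration and are not drawn from the actual project or from the RBI's corpus.

```python
import networkx as nx

# Minimal illustration of the knowledge-graph idea: regulations, regulators and
# topics become nodes; edges capture relationships ("issued_by", "applies_to").
# The specific instruments below are hypothetical placeholders.

g = nx.DiGraph()

g.add_node("RBI", type="regulator")
g.add_node("Master Direction on Digital Lending (hypothetical)", type="subordinate_legislation")
g.add_node("Circular on KYC Updates (hypothetical)", type="subordinate_legislation")
g.add_node("digital lending", type="topic")
g.add_node("KYC", type="topic")

g.add_edge("Master Direction on Digital Lending (hypothetical)", "RBI", relation="issued_by")
g.add_edge("Circular on KYC Updates (hypothetical)", "RBI", relation="issued_by")
g.add_edge("Master Direction on Digital Lending (hypothetical)", "digital lending", relation="applies_to")
g.add_edge("Circular on KYC Updates (hypothetical)", "KYC", relation="applies_to")


def instruments_on_topic(graph: nx.DiGraph, topic: str) -> list[str]:
    """Answer a simple query: which instruments apply to a given topic?"""
    return [
        src
        for src, dst, data in graph.edges(data=True)
        if dst == topic and data.get("relation") == "applies_to"
    ]


print(instruments_on_topic(g, "digital lending"))
```

In a production setting, the nodes and edges would be extracted from the regulators' own documents and the queries would be far richer, but the underlying structure, and the reason it eases fragmented regulatory search, is the same.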

  • The French-German Report on AI Coding Assistants, Explained

    The rapid advancements in generative artificial intelligence (AI) have led to the development of AI coding assistants, which are increasingly being adopted in software development processes. In September 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly published a report titled "AI Coding Assistants" to provide recommendations for the secure use of these tools. This legal insight aims to analyse the key findings from the ANSSI and BSI report. By examining the opportunities, risks, and recommendations outlined in the document, we can understand how India should approach the regulation of AI coding assistants to ensure their safe and responsible use in the software industry. The article highlights the main points from the ANSSI and BSI report, including the potential benefits of AI coding assistants, such as increased productivity and employee satisfaction, as well as the associated risks, like lack of confidentiality, automation bias, and the generation of insecure code. The recommendations provided by the French and German agencies for management and developers are also discussed. Potential Use Cases for AI Coding Assistants While AI coding assistants are generating significant buzz, their practical use cases and impact on developer productivity are still being actively studied and debated. Some potential areas where these tools may offer benefits include: Code Generation and Autocompletion AI assistants can help developers write code faster by providing intelligent suggestions and autocompleting common patterns. This can be especially helpful for junior developers or those working in new languages or frameworks. However, the quality and correctness of the generated code can vary, so developer oversight is still required. Refactoring and Code Translation Studies suggest AI tools may help complete refactoring tasks 20-30% faster by identifying issues and suggesting improvements. They can also assist in translating code between languages. However, the refactoring suggestions may not always preserve the original behavior and can introduce subtle bugs, so caution is needed. Test Case Generation AI assistants have shown promise in automatically generating unit test cases based on code analysis. This could improve test coverage, especially for well-maintained codebases. However, the practical usefulness of the generated tests can be hit-or-miss, and they may be less suitable for test-driven development approaches. Documentation and Code Explanation By analysing code and providing natural language explanations, AI tools can help with generating documentation and getting up to speed on unfamiliar codebases. This may be valuable for onboarding and knowledge sharing. The quality and accuracy of the explanations still need scrutiny though. While these use cases demonstrate potential, the actual productivity gains seem to vary significantly based on factors like the complexity of the codebase, the skill level of the developer, and how the AI assistant is applied. Careful integration and a focus on augmenting rather than replacing developers is advised. Studies have shown productivity improvements ranging from 0-45% in certain scenarios, but have also highlighted challenges like the introduction of bugs, security vulnerabilities, and maintainability issues in the AI-generated code. Overly relying on AI assistants and blindly accepting their output can be counterproductive. 
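As a concrete illustration of the test-case-generation use case discussed above, here is the kind of draft unit test an assistant might propose for a trivial helper function, together with the gaps a human reviewer still has to catch. The function and tests are invented for illustration and are not taken from the ANSSI/BSI report.

```python
# Illustrative only: a simple function and the kind of unit tests an AI coding
# assistant might draft for it. A human must still review the suggested tests,
# since assistants can miss edge cases or assert the wrong behaviour.

def normalise_pan(pan: str) -> str:
    """Uppercase a PAN-style identifier and strip surrounding spaces."""
    return pan.strip().upper()


import unittest


class TestNormalisePan(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(normalise_pan("  abcde1234f "), "ABCDE1234F")

    def test_already_normalised(self):
        self.assertEqual(normalise_pan("ABCDE1234F"), "ABCDE1234F")

    # A reviewer should notice what is NOT covered: empty strings, None inputs,
    # or malformed identifiers. Generated tests often look complete but are not.


if __name__ == "__main__":
    unittest.main()
```

The caveat in the comments reflects the broader point above: generated tests can look complete while missing edge cases, so quality assurance effort does not disappear simply because the tests were produced faster.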
Overall, while AI coding assistants demonstrate promising potential, their use cases and benefits are still a mixed bag in practice as of 2024. More research and refinement of the technology is needed to unlock their full value in real-world software engineering. Merits of the Report Thorough Coverage of Opportunities The report does a commendable job of highlighting the various ways AI coding assistants can benefit the software development process: Code Generation : The report cites studies showing AI assistants can correctly implement basic algorithms with optimal runtime performance, demonstrating their potential to automate repetitive coding tasks and enhance productivity. Debugging and Test Case Generation : It discusses how AI can reduce debugging workload by automatically detecting and fixing errors, as well as generating test cases to improve code coverage. Specific examples like JavaScript debugging and test-driven development (TDD) are provided. Code Explanation and Documentation : The report explains how AI assistants can help developers understand unfamiliar codebases by providing natural language explanations and generating automated comments/documentation. This can aid in code comprehension and maintainability. Increased Productivity and Satisfaction : While noting the difficulty of quantifying productivity, the report references survey data indicating developers feel more productive and satisfied when using AI coding assistants, mainly due to the reduction of repetitive tasks. Balanced Discussion of Risks The report provides a balanced perspective by thoroughly examining the risks and challenges associated with AI coding assistants: Confidentiality of Inputs : It highlights the risk of sensitive information like login credentials and API keys unintentionally flowing into the AI's training data, depending on the provider's contract conditions. Clear mitigation measures are suggested, such as prohibiting uncontrolled cloud access and carefully examining usage terms. Automation Bias : The report warns of the danger of developers placing excessive trust in AI-generated code, even when it contains flaws. It cites studies showing a cognitive bias where many developers perceive AI assistants as secure, despite the regular presence of vulnerabilities. Lack of Output Quality and Security : Concrete data is provided on the high rates of incorrect answers (50%) and security vulnerabilities (40%) in AI-generated code. The report attributes this partly to the use of outdated, insecure practices in training data. Supply Chain Attacks : Various attack vectors are explained in detail, such as package hallucinations leading to confusion attacks, indirect prompt injections to manipulate AI behavior, and data poisoning to generate insecure code. Specific examples and mitigation strategies are given for each. Recommendations in the Report One of the key strengths of the report is the actionable recommendations it provides for both management and developers: Management : Key suggestions include performing systematic risk analysis before adopting AI tools, establishing security guidelines, scaling quality assurance teams to match productivity gains, and providing employee training and clear usage policies. Developers : The report emphasises the importance of responsible AI use, checking and reproducing generated code, protecting sensitive information, and following company guidelines. It also encourages further training and knowledge sharing among colleagues. 
Research Agenda: The report goes a step further by outlining areas for future research, such as improving training data quality, creating datasets for code translation, advancing automated security control, and conducting independent studies on productivity impact.

Limits in the Report

Limited Scope and Depth

While the report covers a wide range of topics related to AI coding assistants, it may not delve deeply enough into certain areas. The discussion on productivity and employee satisfaction is relatively brief and lacks concrete data or case studies to support the claims; more comprehensive research is needed to quantify the impact of AI coding assistants on developer productivity. The report mentions the potential for AI to assist in code translation and legacy code modernisation but does not provide a detailed analysis of the current state of the art or the specific challenges involved. The research agenda proposed in the report is also quite broad and could benefit from more specific recommendations and prioritisation of key areas.

Lack of Practical Implementation Guidance

Although the report offers high-level recommendations for management and developers, it may not provide enough practical guidance for organisations looking to implement AI coding assistants. It suggests performing a systematic risk analysis before introducing AI tools but does not provide a framework or template for conducting such an analysis. While the report emphasises the importance of establishing security guidelines and training employees, it does not offer specific examples or best practices for doing so. The recommendations for developers, such as checking and reproducing generated code, could be supplemented with more concrete steps and tools to facilitate this process.

Limited Discussion of Ethical Considerations

The report focuses primarily on the technical aspects of AI coding assistants and does not extensively address the ethical implications of this technology. The potential for AI coding assistants to perpetuate biases present in the training data is not thoroughly explored. The report does not delve into the broader societal impact of AI coding assistants, such as the potential for job displacement or the need for reskilling of developers. Ethical considerations around the use of AI-generated code, such as issues of intellectual property and attribution, are not discussed in detail.

Analysis in the Indian Context

The ANSSI and BSI report on AI coding assistants provides valuable insights that can inform the development of AI regulation in India, particularly in the context of the software industry. Here are some key inferences and recommendations based on the report's findings:

Establishing Guidelines for Responsible Use: The report emphasises the importance of responsible use of AI coding assistants by developers. Indian regulatory bodies should consider developing clear guidelines and best practices for using these tools, including checking and reproducing generated code, protecting sensitive information, and following company policies. These guidelines should be communicated effectively to the software development community.

Mandating Risk Analysis and Security Measures: As highlighted in the report, organisations should conduct a systematic risk analysis before adopting AI coding assistants and establish appropriate security measures.
Indian regulators could consider mandating such risk assessments and requiring companies to implement specific security controls, such as secure management of API keys and sensitive data, to mitigate risks associated with these tools.

Scaling Quality Assurance and Security Teams: The report notes that the productivity gains from AI coding assistants must be matched by appropriate scaling of quality assurance and security teams. Indian policymakers should encourage and incentivise organisations to invest in expanding their AppSec and DevSecOps capabilities to keep pace with the increased code output enabled by AI tools. This could involve providing funding, training programmes, or tax benefits for such initiatives.

Promoting Awareness and Training: The ANSSI and BSI report stresses the need for employee awareness and training on the risks and proper usage of AI coding assistants. Indian regulatory bodies should collaborate with industry associations, academic institutions, and tech companies to develop and disseminate educational materials, conduct workshops, and offer certifications related to the secure use of these tools. This will help build a skilled workforce capable of leveraging AI responsibly.

Encouraging Research and Innovation: The report outlines a research agenda to advance the quality, security, and productivity impact of AI coding assistants. Indian policymakers should allocate resources and create a supportive ecosystem for research and development in this area. This could involve funding academic research, establishing innovation hubs, and fostering collaboration between industry and academia to address challenges specific to the Indian software development landscape.

Conclusion

In conclusion, while the French-German report on AI coding assistants has some limitations in terms of scope, depth, practical guidance, and coverage of ethical considerations, it remains a valuable and commendable endeavour. By proactively examining the implications of this rapidly evolving technology, the French and German agencies have taken an important step towards understanding and addressing the potential impact of AI coding assistants on the software development industry. The report provides a solid foundation for further research, discussion, and policy development in this area. It highlights the need for ongoing collaboration between governments, industry leaders, and researchers to study the effects of AI coding assistants, establish best practices for their use, and tackle the ethical and societal challenges they present.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Supreme Court of Singapore’s Circular on Using RoughDraft AI, Explained

The Supreme Court of Singapore has issued an intriguing circular on the use of Generative AI, or RoughDraft AI (a term coined by AI expert Gary Marcus), by stakeholders in courts, and its guidance merits a careful breakdown. To begin with, the circular itself shows that the Court does not regard GenAI tools as the ultimate means of improving court tasks; it treats them as mere productivity enhancement tools, unlike what many AI companies in India and abroad have tried to claim. This insight covers the circular in detail.

On 23rd September 2024, the Supreme Court of Singapore issued Registrar's Circular No. 1 of 2024, providing a detailed guide on the use of Generative Artificial Intelligence (AI) tools in court proceedings. The guide takes effect from 1st October 2024 and applies across various judicial levels, including the Supreme Court, State Courts, and Family Justice Courts. The document provides nuanced principles for court users regarding the integration of Generative AI in preparing court documents, but it also places a heavy emphasis on maintaining traditional legal obligations, ensuring data accuracy, and protecting intellectual property rights.

Scope and Application

The circular begins by defining its scope. It applies to all matters within the Supreme Court, State Courts (including tribunals such as the Small Claims Tribunals, Employment Claims Tribunals, and the Community Disputes Resolution Tribunals), and Family Justice Courts. All categories of court users, including prosecutors, legal professionals, litigants-in-person, and witnesses, fall within the ambit of the circular. It clarifies that while the use of Generative AI tools is not outrightly prohibited, users remain bound by existing legislation, professional codes, and practice directions.

Key Definitions

Several key definitions are provided to frame the guide:

Artificial Intelligence: Defined broadly, it encompasses technology that can perform tasks requiring intelligence, such as problem-solving, learning, and reasoning. However, it excludes more basic tools such as grammar-checkers, which do not generate content.

Court Documents: Includes all written, visual, auditory, and other materials submitted during court proceedings. This extends to written submissions, affidavits, and pleadings, placing emphasis on accurate and responsible content generation.

Generative AI: Described as software that generates content based on user prompts. This can encompass text, audio, video, and images. Examples include AI-powered chatbots and tools using Large Language Models (LLMs).

General Principles on the Use of Generative AI Tools

The Supreme Court maintains a neutral stance on the use of Generative AI tools. The circular is clear that Generative AI is merely a tool, and court users assume full responsibility for any content generated using such tools. Notably, the Court does not require pre-emptive declarations about the use of Generative AI unless the content is questioned. However, court users are encouraged to independently verify any AI-generated content before submitting it.

Responsibility and Verification: Court users, whether they are legal professionals or self-represented litigants, are required to ensure that AI-generated content is accurate, appropriate, and independently verified. For lawyers, this falls under their professional duty of care.
Similarly, self-represented individuals are reminded of their obligation to provide truthful and reliable content. Neutral Stance : The court clarifies that its stance on Generative AI remains neutral. While users may employ these tools for drafting court documents, the onus of the content lies solely on the user. This emphasizes that Generative AI tools are not infallible and could generate inaccurate or misleading content. Users must ensure that all submissions are factual and comply with court protocols. Generative AI: Functional Explanation The document goes further to explain how Generative AI tools  work, outlining their reliance on LLMs  to generate responses that appear contextually appropriate based on user prompts. It compares the technology to a sophisticated form of predictive text but highlights that it lacks true human intelligence. While these tools may produce outputs that appear tailored, they do not engage in genuine understanding, posing risks of inaccuracies, especially in the legal context. The circular provides a cautious explanation of the limitations of these tools: Accuracy issues : It warns that Generative AI chatbots  may hallucinate , i.e., generate fabricated or inaccurate responses, including non-existent legal cases or incorrect citations. Inability to provide legal advice : Court users are reminded that Generative AI cannot serve as a substitute for legal expertise, especially in matters requiring legal interpretation or advice . The circular advises caution in using such tools for legal research, as they may not incorporate the latest developments in the law. Use in Court Documents Generative AI tools can assist in the preparation of court documents , but the court mandates careful oversight. The following guidelines are provided: Fact-checking and Proofreading : Users are instructed to fact-check  and proofread AI-generated content. Importantly, users cannot solely rely on AI outputs for accuracy and must verify all references independently. Relevance and IP Considerations : The court stresses that all content, whether generated by AI or not, must be relevant to the case  and should not infringe on intellectual property rights . The guide cautions users against submitting material that lacks attribution or infringes copyright. Prohibited Uses : While the use of AI for drafting preliminary documents, such as a first draft of an affidavit, is allowed, the circular strictly prohibits using AI for generating evidence . It also emphasizes that AI-generated content should not be fabricated, altered, or tampered with to mislead the court. Accuracy and Verification A major focus of the circular is the need for court users to ensure accuracy  in their submissions. The following are key responsibilities outlined for users: Fact-checking : AI-generated legal research or citations must be fact-checked using trusted and verified sources. Self-represented litigants are provided guidance on using resources like Singapore Statutes Online  and the eLitigation GD Viewer  for such verification. Accountability : If questioned by the court, users must be able to explain and verify  the content generated by AI. They are expected to provide details on how the content was produced and how it was verified. The court retains the authority to question submissions and demand further explanations if the content raises doubts. 
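To illustrate the fact-checking duty described above, here is a minimal Python sketch of a pre-filing check that flags citation-like strings in a draft which cannot be matched against a verified list. The citation pattern and the "verified" entries are simplified, hypothetical placeholders; a real workflow would check against authoritative sources such as Singapore Statutes Online rather than a hard-coded set.

```python
import re

# Illustrative sketch: extract citation-like strings from a draft and flag any
# that cannot be matched against a verified list. The pattern and the entries
# below are simplified placeholders, not a real citation parser or database.

VERIFIED_CITATIONS = {
    "[2023] SGHC 123",   # hypothetical entries a firm would populate from
    "[2022] SGCA 45",    # trusted, official sources
}

CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+SG[A-Z]+\s+\d+")


def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citations found in the draft that are not in the verified set."""
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]


draft = "As held in [2023] SGHC 123 and [2021] SGHC 999, the duty of care applies."
print(flag_unverified_citations(draft))   # ['[2021] SGHC 999'] -> must be checked manually
```

Whatever tooling is used, the circular's point stands: the responsibility to verify AI-generated references, and to explain how that verification was done, remains with the court user.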
Intellectual Property Concerns One of the key concerns when using Generative AI tools is ensuring that any content generated does not infringe upon the intellectual property rights of third parties. This involves adhering to copyright, trademark, and patent laws , especially when AI tools generate text, images, or other content based on user prompts. Proper Attribution and Compliance with Copyright Laws The circular mandates that court users must ensure proper attribution  of sources when using AI-generated content. This includes accurately citing the original source  of any material referenced or used in court documents. For instance, if a passage from a legal article or a textbook is included in an AI-generated draft, the user must provide the author’s name, title of the work , and year of publication . Failure to do so may not only lead to copyright infringement but can also affect the credibility of the court submissions. The circular further clarifies that Generative AI tools  should not be relied upon to generate evidence or content meant to represent factual claims, as AI can potentially fabricate information. If AI-generated content includes case law, statutes, or quotes , it is the responsibility of the court user to ensure the accuracy and proper citation  of such references. This applies to both lawyers and self-represented litigants. Generative AI and Copyright Infringement Risks A key issue with Generative AI tools is that they are trained on vast datasets, which may include copyrighted material  without proper licensing. While the AI itself may generate new content, the underlying data on which it is trained may pose risks of copyright violations  if not properly addressed. For example, AI-generated text could inadvertently reproduce language from a copyrighted source, which may lead to legal disputes if the original source is not acknowledged. Court users must be vigilant about verifying that the content generated by AI does not infringe on existing copyright protections. This is especially important when submitting legal documents  to the court, as any infringement could lead to penalties, legal action , and damage to professional reputations . The circular reminds users that the responsibility for checking these issues lies with them, not with the AI tool. Confidentiality Concerns The circular also highlights the importance of maintaining confidentiality  and safeguarding sensitive information  when using Generative AI tools. This concern is particularly pressing because AI platforms may not always guarantee that the data inputted will remain confidential. In fact, many AI tools store user inputs for training purposes, which could result in unintentional disclosure of private information. Risks of Inputting Confidential Data The court warns that entering personal, confidential , or sensitive information  into Generative AI platforms can lead to unintended consequences. Since most AI tools are cloud-based and developed by third-party providers, any data inputted could potentially be accessed or stored by the AI provider. This raises several issues, particularly with respect to legal privilege, client confidentiality , and data protection . For example, if a lawyer inputs sensitive case details into an AI tool to draft a legal document, those details could be stored by the AI provider. This storage may inadvertently lead to the exposure of confidential information, potentially breaching data privacy laws  or client confidentiality agreements . 
This is particularly concerning in cases where non-disclosure agreements (NDAs)  are in place, or where the data falls under privileged communication between a lawyer and their client. Compliance with Data Protection Laws The circular emphasises that court users must comply  with the relevant personal data protection laws  and any confidentiality orders  issued by the court. In Singapore, this would involve adhering to the provisions of the Personal Data Protection Act (PDPA) , which regulates the collection, use, and disclosure of personal data. Failure to safeguard confidential data may lead to legal consequences, including fines, civil lawsuits , and disciplinary actions . Legal Privilege and Sensitive Information Additionally, the court reminds users that documents obtained through court orders must not be used for any purposes beyond the proceedings for which the order was granted. This reinforces the need for discretion when handling privileged documents  and ensures that such documents are not exposed to Generative AI platforms, which could compromise their confidentiality. The circular advises court users to refrain from sharing confidential case details  with AI tools. Instead, users should take extra caution when deciding what information to include in AI prompts. The document acknowledges the potential for unauthorised disclosure , noting that information input into Generative AI tools could be stored or misused. Therefore, users must take proactive steps to avoid breaching confidentiality obligations, particularly in cases involving sensitive personal data , trade secrets , or other proprietary information. Intellectual Property Rights and Legal Implications Court users are also reminded that existing laws on intellectual property rights , including provisions related to court proceedings, remain fully applicable. This means that while Generative AI tools can be used to generate drafts of legal documents, any content included in those documents must comply with IP laws. Court Order Documents : If a court has granted a production order  for specific documents, these materials must not be shared with Generative AI tools or used outside the proceedings for which they were obtained. Respect for Privilege : Users must ensure that any data shared with Generative AI tools does not violate legal privilege . This includes ensuring that privileged communications between lawyers and clients remain confidential and are not disclosed to third-party AI providers. Enforcement of IP and Confidentiality Rules Failure to comply with the guidelines set out in the circular can result in significant penalties, including: Cost orders : Users may be ordered to pay costs to the opposing party, particularly if AI-generated content is found to infringe IP rights or violate confidentiality rules. Disciplinary actions : Lawyers who fail to comply with these rules could face disciplinary measures, including reprimands, suspensions, or fines. Reduction in evidentiary weight : The court may also choose to disregard  AI-generated submissions or reduce their evidentiary weight if they fail to meet accuracy, attribution, or confidentiality standards. Conclusion The Singapore Supreme Court's Registrar's Circular No. 1 of 2024 provides a pragmatic yet cautious approach to the use of Generative AI in court proceedings. While the court acknowledges the utility of such tools, it emphasises that responsibility for accuracy, relevance, and appropriateness remains squarely with the court user. 
Generative AI is positioned as a useful aid, but not a replacement for human judgment, legal expertise, or verification processes. Users of Generative AI are held to the same standards of accuracy, truthfulness, and integrity as in any other court submission. It is clear that even the Supreme Court of Singapore does not deify Generative AI tools and remains quite cautious, which only increases trust in its judicial system.

This cautious approach is further validated by recent findings from an Australian government regulator, which found that generative AI text solutions can actually increase workload rather than reduce it. In a trial conducted by the regulator, AI-generated summaries of information were often less accurate and comprehensive than those produced by human analysts, requiring additional time and effort to correct and verify. This underlines the importance of the Singapore Supreme Court's emphasis on human oversight and responsibility when using generative AI in legal proceedings. While these tools may offer some efficiency gains, they are not a panacea and can introduce new challenges and risks if not used judiciously.

At the same time, it would be unreasonable to dismiss the likelihood that Generative AI / RoughDraft AI tools will be used ad nauseam and could displace a significant amount of work. As the technology continues to evolve and improve, generative AI is likely to play an increasingly significant role in various aspects of legal practice, from research and document preparation to predictive analytics and decision support. The key, as emphasised in the Singapore Supreme Court's circular, is to strike a balance between leveraging the capabilities of these tools and maintaining the human expertise, judgment, and accountability that are essential to the integrity of the legal system. By setting clear guidelines and expectations for the responsible use of generative AI, the Singapore Supreme Court seems to have laid the groundwork for a future in which these technologies can be harnessed to enhance, rather than replace, the work of legal professionals.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • [New Report] Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010

We are excited to announce the publication of our 10th Infographic Report since December 2023, and our 22nd Technical Report since 2021, titled "Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010." The report is available for free for a limited time at this link. This report holds a special significance for us as it reflects our collective commitment to deeply exploring the complex legal challenges posed by emerging technologies like Generative AI. We would like to extend our heartfelt congratulations to Samyak Deshpande, Sanvi Zadoo, and Alisha Garg, whose dedication and meticulous effort have been instrumental in developing and curating this comprehensive report. In this report, we have included a quote from Carissa Véliz on privacy to emphasise the importance of respecting human autonomy in the creative process. This choice captures the spirit in which we approach the intricate relationship between technology and law, particularly when it comes to safeguarding the creative rights and freedoms of individuals.

Why This Report Matters

The publishing industry is no stranger to disruption, but the advent of Generative AI has introduced a new layer of complexity. While many may rush to declare that Generative AI infringes upon copyright and patent laws, such assertions, though valid, often oversimplify the issues at hand. The real challenge lies in addressing these concerns with the specificity and nuance they require. This report represents not just an analysis of intellectual property law issues related to Generative AI in publishing but also a broader exploration of how these technologies can create legal abnormalities that escalate to points of no return. It is the product of our collective patience, thorough research, and a deep understanding of the legal landscape.

The Broader Implications

Generative AI has been both lauded and criticised for its impact on various industries. In publishing, the effects have been particularly pronounced, leading to a range of legal challenges that must be navigated with care. This report seeks to provide a balanced perspective, offering insights into how these technologies can be regulated and managed without stifling innovation or creativity. As Generative AI continues to evolve, so too must our approach to the legal frameworks that govern it. This report is a step in that direction, aiming to provide both clarity and guidance for those involved in the publishing industry and beyond. You can access the report for free for a limited time at https://indopacific.app/product/impact-based-legal-problems-around-generative-ai-in-publishing-iplr-ig-010/

Final Thoughts

In a time when AI is making headlines, such as the recent mention of Anil Kapoor in TIME magazine for his connection to AI, our report offers a timely and relevant exploration of the real-world implications of these technologies. We hope it will serve as a valuable resource for those interested in understanding the complexities and challenges of the Generative AI ecosystem. We invite you to read this report and engage with the critical issues it raises. Happy reading!

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk.
Feel free to choose your training programme at indicpacific.com/train  and contact us at vligta@indicpacific.com .

  • [New Report] Risk & Responsibility Issues Around AI-Driven Predictive Maintenance, IPLR-IG-009

    Greetings. I hope this update finds you well. Today, I want to share with you a topic that has captured my imagination for quite some time: the intriguing confluence of space law and artificial intelligence policy. As someone who has always been fascinated by astronomy and the laws of nature, I find this area of study as captivating as it is important. For the past few months, I have been working on a comprehensive report titled "Risk & Responsibility Issues Around AI-Driven Predictive Maintenance." This report, the 9th Infographic Report by Indic Pacific Legal Research LLP, delves into the complex legal landscape surrounding the use of AI in predictive maintenance for spacecraft. I am excited to announce that the report, IPLR-IG-009, is now available on the IndoPacific App at https://indopacific.app/product/navigating-risk-and-responsibility-in-ai-driven-predictive-maintenance-for-spacecraft-iplr-ig-009-first-edition-2024/. In this report, I have not only explored the legal implications of AI-driven predictive maintenance but also showcased some fascinating case studies that demonstrate the potential of this technology in the space sector. These case studies include:

1️⃣ SPAICE Platform
2️⃣ NASA's Prognostics and Health Management (PHM) Project
3️⃣ ESA's Φ-sat-1 (Phi-sat-1) Project

Each of these projects highlights the innovative ways in which AI is being leveraged to enhance the reliability, efficiency, and safety of spacecraft operations. By examining these real-world examples, we can gain valuable insights into the challenges and opportunities that lie ahead as we continue to push the boundaries of space exploration and AI technology.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Deciphering Australia’s Safe and Responsible AI Proposal

    This insight by Visual Legal Analytica is a response/ public submission to the Australian Government's recent Proposals Paper for introducing mandatory guardrails for AI in high-risk settings published in September 2024. This insight, authored by Mr Abhivardhan, our Founder, is submitted on behalf of Indic Pacific Legal Research LLP. Key Definitions Used in the Paper The key definitions provided in the Australian Government's September 2024 High Risk AI regulation proposal reflect a comprehensive and nuanced approach to AI governance: Broad Scope and Lifecycle Perspective The definitions cover a wide range of AI systems and models, from narrow AI focused on specific tasks to general-purpose AI (GPAI) capable of being adapted for various purposes. They also consider the entire AI lifecycle, from inception and design to deployment, use, and eventual decommissioning. This broad scope ensures the regulations can be applied to diverse AI applications now and in the future. Differentiation between AI Models and Systems The proposal makes a clear distinction between AI models, which are the underlying mathematical engines, and AI systems, which are the ensembles of components designed for practical use. This allows for more targeted governance, recognizing that raw models and end-user applications may require different regulatory approaches. Emphasis on Supply Chain and Roles The definitions highlight the complex network of actors involved in the AI supply chain, including developers who create the models and systems, deployers who integrate them into products and services, and end users who interact with or are impacted by the deployed AI. By delineating these roles, the regulations can assign appropriate responsibilities and accountability at each stage. Recognition of Autonomy and Adaptiveness The varying levels of autonomy and adaptiveness of AI systems after deployment are acknowledged. This reflects an understanding that more autonomous and self-learning AI may pose different risks and require additional oversight compared to static, narrow AI. Inclusion of Generative AI Generative AI models, which can create new content similar to their training data, are specifically defined. Given the rapid advancement and potential impacts of generative AI, such as in generating realistic text, images, and other media, this inclusion demonstrates a forward-looking approach to AI governance. Overall, the key definitions show that Australia is taking a nuanced, lifecycle-based approach to regulating AI, with a focus on the supply chain, different levels of AI sophistication, and emerging areas like generative AI. Designated AI Risks in the Proposal The Australian Government has correctly identified several key risks that AI systems can amplify or create in the "AI amplifies and creates new risks" section of the report: Amplification of Existing Risks like Bias The report highlights that AI systems can embed human biases and create new algorithmic biases, leading to systemic impacts on groups based on protected attributes like race or gender. This bias may arise from inaccurate, insufficient, unrepresentative or outdated training data, or from the design and deployment of the AI system itself. Identifying the risk of AI amplifying bias is important, as there are already real-world examples of this occurring, such as in AI resume screening software and facial recognition systems. 
New Harms at Multiple Levels

The government astutely recognises that AI misuse and failure can cause harm to individuals (e.g. injury, privacy breaches, exclusion), groups (e.g. discrimination), organisations (e.g. reputational damage, cyber attacks), and society at large (e.g. growing inequality, mis/disinformation, erosion of social cohesion). This multilevel perspective acknowledges the wide-ranging negative impacts AI risks can have.

National Security Threats

The report identifies how malicious actors can leverage AI to threaten Australia's national security through accelerated information manipulation, AI-enabled disinformation campaigns to erode public trust, and lowered barriers for unsophisticated actors to engage in malicious cyber activity. Given the growing use of AI for influence operations and cyberattacks, calling out these risks is prudent and proactive.

Some Concrete Examples of Realised Harms

Importantly, the government provides specific instances where the risks of AI have already resulted in real-world harms, such as AI screening tools discriminating against certain ethnicities and genders in hiring. These concrete examples demonstrate that the identified risks are not just theoretical but are actively materialising.

Response to Questions for Consultation on High-Risk AI Classification

The Australian Government has outlined the following Questions for Consultation on its proposed approach to classifying high-risk artificial intelligence systems. In this insight, we offer a response to some of these questions, based on how the Australian Government has designated its regulatory & governance priorities:
- Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove? Please identify any: low-risk use cases that are unintentionally captured; categories of uses that should be treated separately, such as uses for defence or national security purposes.
- Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?
- Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?
- If you prefer a list-based approach (similar to the EU and Canada), what use cases should we include? How can this list capture emerging uses of AI?
- If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?
- Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)? If so, how should we define these?
- Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?
- Should mandatory guardrails apply to all GPAI models?
- What are suitable indicators for defining GPAI models as high-risk? For example, is it enough to define GPAI as high-risk against the principles, or should it be based on technical capability such as FLOPS (e.g. 10^25 or 10^26 threshold), advice from a scientific panel, government or other indicators?
Response to Questions 1 & 3

The paper proposes the following definition of General Purpose Artificial Intelligence:

An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems

Our feedback at Indic Pacific Legal Research LLP is that this definition can plausibly be read to cover "potential risks". However, it should not be used as a default legal approach, for a few reasons. Designating the intended purpose of an AI system also helps governments reflect on whether the proposed use cases are substandard or advanced. Whether an AI system turns out to be substandard, or advanced enough to qualify as GPAI, depends entirely on how its intended purpose is established against relevant technical parameters. The definition can be kept as it is, but we recommend that the Government reconsider using it to bypass the requirement of determining the intended purpose of GPAI when attributing risk recognition to general-purpose AI systems. Since the consultation question asks whether low-risk use cases may be unintentionally captured, it would be reasonable to determine the intended purpose of a GPAI before attributing high-risk status to it. This helps the regulator avoid low-risk use cases being unintentionally captured. The EU/Canada approaches highlighted in the tables, with their containerised recognition of artificial intelligence use cases, could be adopted. However, we recommend guidance-based measures as a way of keeping up with the international regulatory landscape, instead of adopting the EU/Canada approach. The larger rationale of our feedback is that listing use cases can be achieved by creating a repository of recognised AI use cases; the substandard or advanced nature of those use cases, and their outcomes and impacts, require effective documentation over time. That may create some transparency; otherwise, low-risk use cases might end up being captured in hindsight.

Response to Question 5

The principles need guidance points for further expansion, since the examples given for specific principles do not provide a clear enough picture of the principles' scope. However, recognising the severity and extent of an AI risk is a substantial way to define the threshold for the purposive application of these principles, provided it is done in a transparent fashion.

Response to Question 7

Using a FLOPS criterion to designate GPAI as high-risk is deeply counterproductive; human expertise and a broader set of technical indicators are needed to designate any GPAI as high-risk. For example, we proposed India's first artificial intelligence regulation, called aiact.in, as a private effort submitted to the Government of India for public feedback. Before turning to the relevant provisions of aiact.in, a short illustrative sketch of why a bare compute threshold is a poor proxy is set out below.
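To make the point concrete, here is a minimal Python sketch contrasting a FLOPS-only rule with an assessment that also weighs outcome- and impact-based factors. It is purely illustrative: the threshold mirrors the figure floated in the consultation question, the 6 × parameters × tokens estimate is a common rough heuristic from the scaling literature, and the example model and factor names are hypothetical, not drawn from the Proposals Paper or from aiact.in.

```python
# Illustrative sketch only: contrasts a compute-threshold rule with a
# multi-factor assessment. Figures and factor names are hypothetical.

FLOPS_THRESHOLD = 1e25  # the kind of compute cut-off discussed in the paper


def training_flops_estimate(parameters: float, training_tokens: float) -> float:
    """Rough heuristic often used in the scaling literature: ~6 * N * D FLOPs."""
    return 6 * parameters * training_tokens


def high_risk_by_flops(parameters: float, training_tokens: float) -> bool:
    """FLOPS-only rule: high-risk if estimated training compute crosses the threshold."""
    return training_flops_estimate(parameters, training_tokens) >= FLOPS_THRESHOLD


def high_risk_by_factors(deployed_in_critical_sector: bool,
                         severe_harm_potential: bool,
                         meaningful_opt_out: bool,
                         outcomes_reversible: bool) -> bool:
    """One possible way of combining outcome- and impact-based factors instead."""
    return (deployed_in_critical_sector or severe_harm_potential) and \
           (not meaningful_opt_out or not outcomes_reversible)


# A hypothetical mid-sized model used to triage medical claims: well below the
# compute threshold, yet plainly high-risk on an impact-based assessment.
print(high_risk_by_flops(parameters=7e9, training_tokens=2e12))       # False (~8.4e22 FLOPs)
print(high_risk_by_factors(deployed_in_critical_sector=True,
                           severe_harm_potential=True,
                           meaningful_opt_out=False,
                           outcomes_reversible=False))                # True
```

On this rough arithmetic, a 7-billion-parameter model trained on 2 trillion tokens sits around 8.4 × 10^22 FLOPs, far below a 10^25 threshold, yet the impact-based check still flags it. That gap is precisely what the response above is concerned with.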
You may look at the following parts of aiact.in Version 3 for reference:

Section 5 – Technical Methods of Classification

(1) These methods as designated in clause (b) of sub-section (1) of Section 3 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations such as –
(i) General Purpose Artificial Intelligence Applications with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);
(ii) General Purpose Artificial Intelligence Applications with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);
(iii) Specific-Purpose Artificial Intelligence Applications with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);

(2) General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:
(i) Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.
(ii) Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.
(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.
(iv) Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.

Section 7 – Risk-centric Methods of Classification

(4) High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:
(i) Widespread utilization or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;
(ii) Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;
(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;
(iv) High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;
(v) Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.
(vi) The high-risk designation shall apply irrespective of the AI system's scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.

Illustration: An AI system used to control critical infrastructure like a power grid. Regardless of the system's specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.

The approaches outlined in Section 5 and Section 7(4) of the proposed Indian AI regulation (aiact.in Version 3) provide a helpful framework for classifying and regulating AI systems based on their technical characteristics and risk profile.
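Read together, Section 5 and Section 7(4) describe a two-step structure: first a conceptual and technical classification of the system, then an outcome- and impact-based check that can override it. The sketch below is our own illustrative restatement of that structure in code, not part of the draft text; the class and field names are paraphrases of the statutory language, and since the draft leaves the weighing of factors to the risk-centric method, the sketch simply flags a system for high-risk treatment whenever any factor is present.

```python
from dataclasses import dataclass, fields
from enum import Enum


class TechnicalClass(Enum):
    """Paraphrase of the Section 5 categories in aiact.in Version 3."""
    GPAIS = "general-purpose AI with multiple stable use cases"
    GPAIU = "general-purpose AI with short-run or unclear use cases"
    SPAI = "specific-purpose AI with standalone use or test cases"


@dataclass
class RiskFactors:
    """Paraphrase of the Section 7(4) outcome- and impact-based factors."""
    critical_sector_or_large_user_base: bool = False
    severe_harm_or_societal_impact: bool = False
    no_meaningful_opt_out_or_control: bool = False
    vulnerable_or_information_asymmetric_users: bool = False
    outcomes_practically_irreversible: bool = False


def flag_high_risk(technical_class: TechnicalClass, factors: RiskFactors) -> bool:
    # Section 7(4)(vi): the designation applies irrespective of scale, purpose
    # or technical class if the risk factors are present, so the technical
    # classification informs oversight but cannot rule high-risk status out.
    _ = technical_class
    return any(getattr(factors, f.name) for f in fields(factors))


# The power-grid illustration from the draft: flagged high-risk whether the
# controller is classified as GPAIS, GPAIU or SPAI.
grid_controller = RiskFactors(
    critical_sector_or_large_user_base=True,
    severe_harm_or_societal_impact=True,
    outcomes_practically_irreversible=True,
)
print(flag_high_risk(TechnicalClass.SPAI, grid_controller))   # True
print(flag_high_risk(TechnicalClass.GPAIS, RiskFactors()))    # False
```

The only point of the sketch is that the technical classification feeds into, but does not cap, the risk designation, which is the same reading we rely on in the responses above and below.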
Here's why these approaches may be beneficial:

Section 5 – Technical Methods of Classification:
- Categorizing AI systems as GPAIS (General Purpose AI with stable use cases), GPAIU (General Purpose AI with unclear use cases), and SPAI (Specific Purpose AI) allows for tailored regulatory approaches based on the system's inherent capabilities and intended applications.
- Evaluating factors like scale, inherent purpose, technical features, and limitations helps assess an AI system's potential impact and reach, informing appropriate oversight measures.
- Aligning classification with relevant industrial standards promotes consistency and interoperability across sectors.
- Distinguishing between stable and unclear use cases recognizes the evolving nature of AI and the need for adaptable regulatory frameworks.

Section 7(4) – Risk-centric Methods of Classification:
- Focusing on outcome and impact-based risks ensures that the most potentially harmful AI systems are subject to stringent oversight, regardless of their technical characteristics.
- Considering factors like widespread deployment, potential for severe harm, lack of user control, and vulnerability of affected individuals helps identify high-risk applications that warrant additional safeguards.
- Recognizing the difficulty of reversing or remediating adverse outcomes emphasizes the need for proactive risk mitigation measures.
- Applying the high-risk designation irrespective of scale, purpose, or technical limitations acknowledges that even narrow or limited AI systems can pose significant risks in certain contexts.
- The illustrative example of an AI system controlling critical infrastructure highlights the importance of a risk-based approach that prioritizes societal consequences over technical specifications.

However, implementing such a framework would require significant technical expertise, ongoing monitoring, and coordination among regulators and stakeholders. Clear guidelines and standards would need to be developed to ensure consistent application and enforcement.

We have responded to some of the questions from Consultation Questions 8 to 12 as well:
- Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings? Are there any guardrails that we should add or remove?
- How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve ICIP?
- Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately? For example, are the requirements assigned to developers and deployers appropriate?
- Are the proposed mandatory guardrails sufficient to address the risks of GPAI? How could we adapt the guardrails for different GPAI models, for example low-risk and high-risk GPAI models?
- Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?

Response to Questions 8 & 11

We agree with the rationale proposed behind the mandatory guardrails, and with all 10 required guardrails, from the broader standpoint of incident response protocols and attributing accountability in AI management.

Feedback on Guardrail 1

Wherever it is discerned that GPAIs are being utilised for public-private partnerships, we recommend that the requirements under Guardrail 1 include mandatory disclosure and publication of the accountability frameworks.
However, this cannot apply to low-risk AI use cases; hence, if intended purpose remains an indicator of evaluation, the obligations under Guardrail 1 must be applied with a sense of proportionality.

Feedback on Guardrails 2, 3, 8 & 9

We agree with the approach in these Guardrails of obligating AI companies to ensure that "where elimination is not possible, organisations must implement strategies to contain or mitigate any residual risks". It is noteworthy that the paper recognises that any type of risk mitigation will depend on the organisation's role in the AI supply chain or AI lifecycle and on the circumstances. That makes the implementation of these guardrails effective & focused. Stakeholder participation is also necessary, as outlined in the AI Seoul Summit 2024's International Scientific Interim Report.

Feedback on Guardrail 8

We agree with the approach in this Guardrail, which focuses on transparency between developers and deployers of high-risk AI systems, for several reasons:
- Enabling effective risk management: By requiring developers to provide deployers with critical information about the AI system, including its characteristics, training data, key design decisions, capabilities, limitations, and risks, deployers can better understand how the system works and identify potential issues. This transparency allows deployers to proactively respond to risks as they emerge during the deployment and use of the AI system.
- Promoting responsible deployment: Deployers need a clear understanding of how to properly use the high-risk AI system to ensure it is deployed in a safe and responsible manner. By mandating that developers provide guidance on interpreting the system's outputs, deployers can make informed decisions and avoid misuse or misinterpretation of the AI system's results.
- Addressing information asymmetry: There is often a significant knowledge gap between developers, who have an intimate understanding of the AI system's inner workings, and deployers, who may lack technical expertise. This guardrail helps bridge that gap by ensuring that deployers have access to the necessary information to effectively manage the AI system and mitigate risks.
- Balancing transparency and intellectual property: The guardrail acknowledges the need to protect commercially sensitive information and trade secrets, which is important to maintain developers' competitive advantage and incentivize innovation. By striking a balance between transparency and protection of proprietary information, the guardrail ensures that deployers receive the information they need to manage risks without compromising developers' intellectual property rights.

Response to Questions 10 & 12

The proposed mandatory guardrails in the Australian Government's paper do a good job of distributing responsibility across the AI supply chain and AI lifecycle between developers and deployers. However, the current approach could be improved in a few ways:
- More guidance may be needed on how responsibilities are divided when an AI system involves multiple developers and deployers. The complexity of modern AI supply chains can make accountability challenging.
- Some guardrails, like enabling human oversight and informing end-users, may require different actions from developers vs. deployers. The requirements could be further tailored to each role.
- Feedback from developers and deployers should be proactively sought to identify any misalignment between the assigned responsibilities and their actual capabilities to manage risks at different lifecycle stages.

To reduce the regulatory burden on small-to-medium enterprises (SMEs), we suggest:
- Providing templates, checklists, and examples to help SMEs efficiently implement the guardrails. Ready-made accountability process outlines, risk assessment frameworks, and testing guidelines would be valuable.
- Offering tiered requirements based on SME size and AI system risk level. Lower-risk systems or smaller businesses could have simplified record-keeping and less frequent conformity assessments.
- Establishing a central AI authority (perhaps on an interlocutory or nodal basis) to provide guidance, tools, and oversight. This one-stop shop would reduce the burden of dealing with multiple regulators.
- Facilitating access to shared testing facilities, data governance tools, and expert advisors. Pooled resources and support would help SMEs meet the guardrails cost-effectively.
- Phasing in guardrail requirements gradually for SMEs. An extended timeline with clear milestones would ease the transition.
- Providing financial support, such as tax incentives or grants, to help SMEs invest in AI governance capabilities. Subsidised training would also accelerate adoption.

Finally, we have responded to some of the Consultation Questions 13-16, provided below, as well:
- Which legislative option do you feel will best address the use of AI in high-risk settings? What opportunities should the government take into account in considering each approach?
- Are there any additional limitations of options outlined in this section which the Australian Government should consider?
- Which regulatory option/s will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?
- Where do you see the greatest risks of gaps or inconsistencies with Australia's existing laws for the development and deployment of AI? Which regulatory option best addresses this, and why?

Response to Question 13

We propose that Option 3 would be a reasonable choice and would best address the use of AI in high-risk settings in Australia. Here are the key reasons:
- Comprehensive coverage: A dedicated AI Act would provide consistent definitions of high-risk AI and mandatory guardrails across the entire economy. This avoids the gaps and inconsistencies that could arise from a sector-by-sector approach under Option 1. The AI Act could also extend obligations to upstream AI developers, not just deployers, enabling a more proactive and preventative regulatory approach.
- Regulatory efficiency: Developing a single AI-specific Act is more efficient than individually amending the multitude of existing laws that touch on AI under Option 1 or Option 2's framework legislation. It allows for a cohesive, whole-of-economy approach.
- Dedicated enforcement: An AI Act could establish an independent AI regulator to oversee compliance with the guardrails. This dedicated expertise and enforcement would be more effective than relying on existing sector-specific regulators who may lack AI-specific capabilities.

However, the government should consider the following when designing an AI Act:
- Interaction with existing laws: The AI Act should include carve-outs where sector-specific laws already impose equivalent guardrails, to avoid duplication. Close coordination between the AI regulator and existing regulators will be essential.
- Compliance burden: The AI Act should incorporate mechanisms to reduce the regulatory burden on SMEs, such as tiered requirements based on risk levels. Practical guidance and shared resources would help SMEs build governance capabilities.
- Responsive regulation: The AI Act should be adaptable to the rapid evolution of AI. Regular reviews, expert input, and agile rulemaking will ensure it remains fit-for-purpose.

  • New Report: The Legal and Ethical Implications of Monosemanticity in LLMs, IPLR-IG-008

    🚀 Excited to announce an ambitious wave of reports we plan to release this September 2024! Starting off with our latest infographic report, "The Legal and Ethical Implications of Monosemanticity in LLMs, IPLR-IG-008", authored by Abhivardhan, our Founder, for Indic Pacific Legal Research LLP. It was a pleasure collaborating on this report with Samyak Deshpande, Sanvi Zadoo, and Alisha Garg, former interns at the Indian Society of Artificial Intelligence and Law.

📄 Access the report here: https://indopacific.app/product/monosemanticity-llms-iplr-ig-008/

This report draws inspiration from two significant developments in the global AI landscape:
1️⃣ Anthropic's groundbreaking paper, "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3" (mid-2024), which delves into the complexities of large language models (LLMs) and the extraction of monosemantic, interpretable features.
2️⃣ The International Scientific Report on AI Safety (interim report), released by the UK Government and other stakeholders at the AI Seoul Summit in May 2024, building on the discussions from the 2023 Bletchley Summit.

Our report provides a comprehensive analysis of these developments, exploring Anthropic's work on monosemanticity through technical, economic, and legal-ethical lenses. It also delves into the evolution of neurosymbolic AI, offering pre-regulatory ethical considerations for this emerging technology. 🧠 Notably, Anthropic's paper doesn't overstate its findings on AI risk mapping, allowing us to present our recommendations with a nuanced perspective. For readers new to the underlying technique, a brief illustrative sketch follows at the end of this entry.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.
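As promised above, a brief aid for readers new to the technique: Anthropic's monosemanticity work, as we understand it, rests on dictionary learning with sparse autoencoders trained over model activations, so that each activation vector is reconstructed from a small number of more interpretable features. The minimal Python sketch below is ours and purely illustrative; it only shows the shape of that objective (reconstruction error plus a sparsity penalty), not Anthropic's code, scale, or training procedure.

```python
import numpy as np

# Illustrative sketch of the sparse-autoencoder objective behind
# monosemanticity research. Dimensions and data are made up; a real setup
# trains on activations captured from an LLM, at far larger scale.

rng = np.random.default_rng(0)

d_model, d_features, n_samples = 64, 256, 1024        # overcomplete feature dictionary
activations = rng.normal(size=(n_samples, d_model))   # stand-in for LLM activations

W_enc = rng.normal(scale=0.1, size=(d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(scale=0.1, size=(d_features, d_model))


def encode(x: np.ndarray) -> np.ndarray:
    """ReLU encoder: sparse, non-negative feature activations."""
    return np.maximum(x @ W_enc + b_enc, 0.0)


def sae_loss(x: np.ndarray, l1_coeff: float = 1e-3) -> float:
    """Reconstruction error plus an L1 penalty that pushes features toward sparsity."""
    features = encode(x)
    reconstruction = features @ W_dec
    mse = float(np.mean((x - reconstruction) ** 2))
    sparsity = float(np.mean(np.abs(features)))
    return mse + l1_coeff * sparsity


print(sae_loss(activations))  # the quantity a real training loop would minimise
```

The relevance for the report is simply that features recovered this way are a step towards more interpretable model behaviour, which is what gives the monosemanticity line of work its legal and ethical significance.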

  • The John Doe v. GitHub Case, Explained

    This case analysis is co-authored by Sanvi Zadoo and Alisha Garg, along with Samyak Deshpande. The authors of this case analysis recently interned at the Indian Society of Artificial Intelligence and Law. In a world where artificial intelligence is redefining the way developers write code, Copilot, an AI-powered coding program developed by GitHub in collaboration with OpenAI, was launched in 2021. Copilot promised to revolutionize software development by generating code functions based on the developer's input. However, this 'revolution' soon found itself in the midst of a legal storm.

The now famous GitHub-Copilot case revolves around allegations that the AI-powered coding assistant uses copyrighted code from open-source repositories without proper credit. The lawsuit was initiated by programmer and attorney Matthew Butterick and joined by other developers. They claimed that Copilot's suggestions include exact code from public repositories without adhering to the licenses under which the code was published. Despite efforts by Microsoft, GitHub, and OpenAI to dismiss the lawsuit, the court allowed the case to proceed.

Timeline of the case
- June 2021: GitHub Copilot is publicly launched in a technical preview.
- November 2022: Plaintiffs file a lawsuit against GitHub and OpenAI, alleging DMCA violations and breach of contract.
- December 2022: The court dismisses several of the Plaintiffs' claims, including unjust enrichment, negligence, and unfair competition, with prejudice.
- March 2023: GitHub introduces new features for Copilot, including improved security measures and an AI-based vulnerability prevention system.
- June 2023: The court dismisses the DMCA claim with prejudice.
- July 2024: The California court affirms the dismissal of nearly all the claims.

Overall, the lawsuit includes breach of contract claims based on the terms of open-source licenses, arguing that Copilot's use of the plaintiffs' code violates these licenses. Let us further explore this case in detail and what it means for the future of such artificial intelligence programs.

The Technical Features of Copilot, Recent Iterations, and Competing Entities

Technical Features of GitHub Copilot

GitHub Copilot is an AI-powered code completion tool, developed in collaboration with OpenAI, that seamlessly integrates into development environments to assist developers with code suggestions, completions, and snippets. The technical features of GitHub Copilot include:
- OpenAI Codex Model: Copilot is fueled by OpenAI's Codex, a descendant of the GPT-3 model that has been specifically trained on a vast amount of publicly available code from GitHub and other sources. This advanced model comprehends the context of the code being written, including the function or class a developer is working on, comments, and preceding code, allowing it to provide pertinent and contextually appropriate code suggestions.
- Diverse Language Support: Copilot is compatible with a wide array of programming languages, including, but not restricted to, Python, JavaScript, TypeScript, Ruby, Go, Java, C#, PHP, and more. This broad compatibility caters to developers utilizing different languages and frameworks. For select languages, Copilot provides language-specific features, such as type hints in TypeScript or docstring generation in Python.
- Integration with Visual Studio Code: Copilot seamlessly integrates with Visual Studio Code, a widely used code editor. This integration enables developers to receive real-time code suggestions and completions as they code. Achieved through an extension that can be effortlessly installed and configured within VS Code, this integration ensures accessibility to a diverse developer community.
- Real-time Feedback: Copilot offers real-time code suggestions, diminishing the need for developers to scour code snippets or documentation. This instantaneous feedback accelerates the development process and enhances productivity. Suggestions are presented inline within the code editor, allowing developers to seamlessly review and accept or modify suggestions without disrupting their workflow.
- Adaptive Learning: Copilot assimilates feedback from developers: whether a developer accepts, rejects, or modifies a suggestion, this information contributes to refining the model's future suggestions. Over time, Copilot adjusts to a developer's coding style and preferences, delivering personalized and precise suggestions.
- Function and Class Completion: Copilot can generate entire functions and classes based on concise developer descriptions or comments, significantly expediting the development process, particularly for boilerplate code. It also provides examples demonstrating the use of specific functions or libraries and generates documentation strings for functions and classes.
- Duplication Detection: Copilot includes a feature to detect and refrain from suggesting code that exactly matches public code, addressing concerns regarding code plagiarism and copyright infringement. Diligent efforts are made to ensure that the generated code complies with open-source licenses and does not violate copyright laws, employing filtering and other mechanisms to prevent code misuse.
- Understanding Code Semantics: Copilot surpasses basic keyword matching by comprehending the semantics of the code. This capability allows it to suggest appropriate variable names, function calls, and entire code blocks relevant to the current context. Copilot effectively handles complex coding scenarios, offering pertinent suggestions in contexts such as nested functions, asynchronous code, and multi-threaded applications.
- Error Detection: Copilot aids in detecting potential errors or issues within the code and provides suggested fixes. It can recommend code refactoring to enhance readability, performance, or maintainability.

Recent Iterations and Improvements

GitHub Copilot has undergone multiple iterations to enhance its functionality and user experience. These enhancements include:
- Improved Accuracy and Speed: Upgrades to the underlying Codex model have increased its precision and speed, resulting in more efficient and relevant suggestions.
- Context Understanding Improvements: Copilot can now provide more accurate suggestions, even in complex coding scenarios, due to improved context understanding.
- Addition of New Languages: Support for additional programming languages and frameworks has been integrated, expanding its applicability.
- Seamless Integration: Improvements in the integration with VS Code and other IDEs offer a more seamless and intuitive user experience.
- Stronger Compliance Measures: Enhanced mechanisms for detecting and preventing the suggestion of copyrighted or sensitive code have been implemented. Additionally, better management of open-source licenses ensures compliance and reduces legal risks.
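Because the duplication-detection feature sits at the heart of the plaintiffs' verbatim-copying allegations, a toy Python sketch of the general idea may help non-technical readers. GitHub has not published the internals of its filter, so everything below, including the normalisation step and the example corpus, is hypothetical; it only illustrates the concept of suppressing suggestions that exactly match known public code.

```python
import hashlib
import re

# Toy illustration of an exact-match duplication filter. This is NOT GitHub's
# implementation; it only sketches the idea of blocking suggestions that
# reproduce known public snippets verbatim.


def normalise(code: str) -> str:
    """Strip line comments and collapse whitespace so trivial edits don't evade the check."""
    code = re.sub(r"#.*", "", code)   # crude removal of Python-style comments
    return " ".join(code.split())


def fingerprint(code: str) -> str:
    """Hash of the normalised snippet, used as a cheap lookup key."""
    return hashlib.sha256(normalise(code).encode("utf-8")).hexdigest()


class DuplicationFilter:
    def __init__(self, public_snippets: list[str]):
        self._known = {fingerprint(s) for s in public_snippets}

    def allow(self, suggestion: str) -> bool:
        """Return False if the suggestion matches a known public snippet."""
        return fingerprint(suggestion) not in self._known


public_corpus = ["def add(a, b):\n    return a + b  # from a copyleft-licensed repo"]
filt = DuplicationFilter(public_corpus)

print(filt.allow("def add(a, b): return a + b"))              # False: matches after normalisation
print(filt.allow("def multiply(a, b):\n    return a * b"))    # True: no match
```

A production system would have to work at vastly larger scale and catch near-duplicates rather than only exact matches, which is part of why attribution and licence compliance remain harder problems than a simple filter suggests.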
Competing Entities

Other companies and tools in the industry provide comparable AI-powered code assistance features, posing competition to GitHub Copilot:

Tabnine
- Tabnine employs AI to deliver code completions and suggestions across various IDEs and programming languages.
- It offers both cloud-based and local models, allowing developers to maintain the privacy of their code.
- Tabnine can be trained on a team's codebase to offer bespoke and pertinent suggestions tailored to the specific project.

Kite
- Kite provides AI-powered code completions for Python and JavaScript, seamlessly integrating with popular editors such as VS Code, PyCharm, and Sublime Text.
- It furnishes in-line documentation and code examples to help developers understand how to use specific functions or libraries.
- Kite utilizes machine learning models to deliver precise and context-aware code completions.

IntelliCode by Microsoft
- IntelliCode delivers AI-assisted code recommendations based on patterns found in open-source projects and a developer's codebase.
- Integrated into Visual Studio and Visual Studio Code, it supports a wide array of programming languages.
- It tailors recommendations to specific teams by learning from the code patterns within a team's codebase.

Codota
- Codota focuses on providing code completions and suggestions for Java and Kotlin, with recent expansion into other programming languages.
- It provides both cloud-based and on-premises solutions to accommodate varying privacy needs.
- Codota learns from the developer's codebase to deliver more accurate and relevant suggestions.

Examining the facts, arguments and verdict of the case

Facts

In J. Doe vs. GitHub, the plaintiffs, who are developers, brought a legal action against GitHub and its parent company, Microsoft. The case centres on GitHub's Copilot tool, an AI-based code completion tool designed to assist developers by generating code snippets. The plaintiffs alleged that Copilot was trained on publicly available code, including code they had authored, which was protected under open-source licenses. They claimed that Copilot's generated code snippets were identical or substantially similar to their original work, which, according to them, amounted to copyright infringement. Furthermore, they argued that Copilot violated the terms of the open-source licenses by failing to give any statement of attribution to the original authors of the code and by not adhering to the license conditions.

As stated above, the plaintiffs raised two primary concerns: first, that the defendants committed copyright infringement, and second, that they violated the contract. This takes us to the issues posed by the case.

Issues
1. Whether Copilot's generation of code snippets constituted copyright infringement?
2. Whether the plaintiffs had a valid claim for breach of contract due to the alleged violation of open-source licenses?
3. Whether they were entitled to restitution for unjust enrichment?
4. Whether their request for punitive damages was justified?

Arguments

Plaintiffs

The plaintiffs argued that Copilot's operation resulted in the unauthorized use and reproduction of their code, which infringed on their copyrights. They also contended that by not attributing the generated code to the original authors and by not complying with the open-source licenses, GitHub and Microsoft had breached contractual obligations.
The plaintiffs sought restitution for the unjust enrichment of the defendants, who they claimed had profited from Copilot. Additionally, they sought punitive damages, arguing that the defendants' actions warranted such penalties.   Defendants In response, GitHub and Microsoft countered that Copilot did not produce identical copies of the plaintiffs' code, and any similarities were incidental due to the nature of the tool. They argued that open-source licenses do not create enforceable contract claims in this context. Furthermore, the defendants asserted that the plaintiffs had not sufficiently stated a claim for restitution for unjust enrichment under California law. Regarding the punitive damages, the defendants argued that such damages are not typically recoverable in breach of contract cases.   Decision of the Court Upon review, the court decided to dismiss the plaintiffs' claim under Section 1202(b) with prejudice, meaning that the claim could not be refiled. This section of the claim pertained to allegations related to the removal or alteration of copyright management information. The court found that the plaintiffs had not provided sufficient grounds to support this claim. However, the court allowed the plaintiffs' breach of contract claim to proceed, specifically regarding the alleged violations of open-source licenses. This meant that the plaintiffs could continue to pursue their argument that the defendants had breached the terms of these licenses.   AI and Skill Security: The Impact of the Judgment on the Developer Community   Source of training data One of the significant challenges in suing companies that offer generative AI tools lies in identifying the sources of the training data. In the present case, all training data used by Copilot was hosted on GitHub and subject to GitHub's open-source licenses. This made it relatively straightforward for the plaintiffs to pinpoint the specific license terms they believed had been violated. However, for more complex AI models like ChatGPT, which do not disclose their training data sources, proving similar violations could be considerably more difficult. The opaque nature of these models' data origins presents a substantial hurdle for plaintiffs attempting to assert their rights.   In the context of evolving AI legislation, potential European laws may soon require companies to disclose the data used to train their AI models. Such regulations could significantly aid in identifying and proving violations related to AI-generated content.   Prompt Engineering Considerations Additionally, the court's decision on the plaintiffs' standing to seek monetary damages is particularly noteworthy. The court ruled that plaintiffs could seek damages even if they themselves entered the prompts that led to the generation of the allegedly infringing content. This ruling could have far-reaching implications, especially given similar approaches in other ongoing cases involving generative AI.   For instance, in Authors Guild v. OpenAI Inc., plaintiffs claimed that by entering prompts, they generated outlines of sequels or derivatives of their original works, which they argued constituted copyright infringement. Similarly, in The New York Times Company v. Microsoft Corporation, the plaintiff entered prompts to recreate previously published articles, claiming this also amounted to infringement. In both cases, the plaintiffs themselves provided the input prompts that led to the generation of the contested content.   The court’s decision in J. Doe vs. 
GitHub aligns with the plaintiffs' approach in these other cases by affirming that the act of entering prompts does not disqualify them from seeking damages. This ruling emphasizes that plaintiffs can argue their content was used inappropriately by the AI, regardless of their role in generating the specific outputs. Moreover, the court in J. Doe vs. GitHub held that plaintiffs do not need to demonstrate, for standing purposes, that other users of the AI model would seek to reproduce their content. This aspect of the ruling is significant because it lowers the bar for establishing standing in copyright infringement cases involving generative AI. Plaintiffs no longer need to prove that their content is commonly used or sought after by other users of the AI tool. This could be a crucial argument in cases where the original content is not widely recognized or utilized but was still allegedly infringed upon by the AI.

The Impact of Development, Maintenance, and Knowledge Management on the Coding Market and Competition

Having understood the above points, it is also pertinent to note that the outcome of this case could reshape the competitive landscape for AI coding assistants. Despite the dismissal, the case has far-reaching implications for how the AI community navigates intellectual property and copyright issues. If GitHub Copilot and similar tools were to be modified to provide more attribution, as demanded, it might increase the complexity and cost of developing these tools. This could slow down the adoption of AI in coding, as companies might need to invest more in ensuring compliance with open-source licenses. However, following the dismissal, the current model for AI tools like Copilot has been reinforced, allowing them to continue suggesting code without significant changes to their attribution practices. This legal backing can boost the confidence of AI developers and users, leading to continued innovation and integration of AI in coding. Companies might feel more secure investing in AI tools, knowing that the legal risks associated with copyright infringement are currently manageable under existing laws and this precedent. Similarly, for maintenance, AI tools can continue to play a significant role in optimizing code and suggesting improvements without the immediate need for new attribution systems. This ensures that maintenance processes remain efficient and cost-effective. Moreover, the ruling suggests that AI-generated code does not necessarily infringe copyright if it does not explicitly replicate large chunks of protected material. This can encourage much broader use of AI tools in knowledge management, enabling organisations to capture and disseminate coding practices and solutions more freely. As the market adjusts to these legal and ethical considerations, we predict that the companies that effectively navigate these challenges may gain a competitive edge.

US Judgment's Implications for the Indian IT Community

Importance of Legal Compliance

The ruling emphasizes the need for AI tools and software development practices to strictly comply with copyright laws. It is crucial for Indian IT companies and developers to ensure that their use of AI tools, such as GitHub Copilot, complies with these laws to avoid potential legal consequences. Adhering to open-source licenses is of paramount importance.
Indian developers must be vigilant in ensuring that AI-generated code does not breach these licenses, which may include stipulations such as attribution requirements and constraints on commercial usage. Being well-versed in local and international copyright laws and regulations is essential for Indian IT companies to navigate the complexities of legal compliance in a globalized industry. Implementing proactive legal strategies to supervise the use of AI tools within development processes can forestall potential violations and mitigate risks.

Ethical AI Development

Indian IT companies should prioritize the creation of AI systems that operate transparently and are accountable for their outputs. Granting users more control over AI-generated content can foster trust and diminish legal risks. Indian IT firms should formulate and embrace ethical guidelines for AI usage addressing matters such as data privacy, bias in AI models, and the responsible utilization of AI-generated content. Engaging with diverse stakeholders can help ensure the responsible development and usage of AI tools.

Focus on Skill Development

Given the rapid evolution of AI technologies, Indian developers should stay abreast of the latest advancements and invest in training programs encompassing topics such as copyright laws, open-source licenses, and ethical AI practices to comprehend the legal and ethical implications of utilizing AI tools. As AI takes on routine tasks, developers can concentrate on the more creative and innovative facets of their work. Encouraging developers to acquire new languages, frameworks, and tools can broaden their expertise and adaptability.

Protection of Intellectual Property

It is imperative for Indian developers and companies to be vigilant about their intellectual property rights and seek appropriate redress when their rights are infringed upon by AI tools or other entities. The ruling underscores the necessity for developers whose rights are infringed upon by AI tools to seek compensation. This reinforces the economic worth of intellectual property and the need to safeguard it.

Global Standards and Competitiveness

Aligning with global legal and ethical standards can enhance the competitiveness and reputation of Indian IT companies in the international market. Upholding legal and ethical compliance can facilitate smoother international collaborations and create new opportunities for partnerships and projects.

This case holds substantial implications for the economic rights of the Indian IT community, emphasizing the paramount importance of safeguarding intellectual property (IP), fostering economic opportunities through responsible AI utilization, and ensuring adherence to global standards to protect economic interests.

Safeguarding Intellectual Property

The judgment underscores the necessity for Indian developers and companies to vigilantly protect their intellectual property rights. It is imperative for them to comprehensively understand the legal frameworks safeguarding their code and creations, ensuring that these rights remain inviolate in the presence of AI tools such as GitHub Copilot. Developers and companies are urged to proactively safeguard their IP through licensing agreements, patents, and trademarks to formally protect their software and code. Furthermore, regular auditing of their code in AI-generated outputs is recommended to detect and address potential infringements effectively.
The judgment implies that developers whose rights are violated by AI tools are entitled to seek legal recourse and compensation, emphasizing the economic value of intellectual property and the indispensability of protecting it through lawful channels. Indian developers and companies are advised to advocate for more robust legal frameworks that offer comprehensive protection for intellectual property in the context of AI and software development, including clearer regulations and more effective enforcement mechanisms.

Economic Opportunities

Responsible usage of AI tools such as GitHub Copilot presents Indian IT companies with the opportunity to enhance productivity and foster innovation. AI's capacity to handle routine coding tasks allows developers to focus on more intricate and creative aspects of software development, thereby leading to the creation of higher-quality software products and services. By demonstrating compliance with legal and ethical standards, Indian IT companies can gain a competitive edge in the global market, attracting more clients and partnerships and consequently enhancing their economic prospects. Investing in skill development pertaining to AI and legal compliance can make Indian developers more competitive on a global scale. This includes training in state-of-the-art AI technologies, comprehension of copyright laws, and adherence to best practices in ethical AI development. As AI continues to evolve, new job opportunities will arise in domains such as AI ethics, legal compliance, and advanced software development. Indian developers equipped with these skills can leverage these opportunities to enhance their career prospects and economic potential.

Global Standards and Competitiveness

The judgment incentivizes Indian IT companies to align with global legal and ethical standards. By adopting best practices in AI development and usage, Indian firms can sustain their competitiveness and reputation in the international market. Adherence to these standards can differentiate companies within a crowded marketplace, attracting international clients and partnerships and thereby enhancing economic growth. Understanding and complying with judgments such as the US GitHub Copilot case can smooth international collaborations, enabling Indian IT firms to engage in cross-border projects and partnerships, thereby expanding their global footprint. Ensuring that Indian IT services are perceived as reliable and legally compliant can cultivate trust with international partners, leading to more collaborative projects, joint ventures, and increased economic opportunities.

Economic Redress and Fair Compensation

The judgment reinforces that developers whose intellectual property is used without authorization by AI tools are entitled to seek economic redress. This underscores the significance of fair compensation for intellectual property usage and the economic rights of creators. Indian developers should seek legal support to comprehend their rights and the available mechanisms for seeking compensation in cases of IP infringement. This involves consulting with legal experts and pursuing litigation when necessary. The judgment highlights the economic value of intellectual property and the contributions of individual developers. Recognizing and fairly compensating these contributions is fundamental for fostering innovation and ensuring the sustainability of the IT industry.
Therefore, Indian IT companies should equip their developers with resources and legal assistance to safeguard their intellectual property rights, allowing creators to focus on innovation without apprehensions about potential infringements.

  • Book Review: Disrupt with Impact by Roger Spitz

    This is a review of a book recently authored by Roger Spitz, entitled "Disrupt with Impact". This may therefore be a short read, as book reviews go. Disclaimer: My review of this book is limited to my examination & analysis of Chapters 9, 10 and 11 of the book.

The most important aspect of risk analysis and estimation is the relevance of any data point, inference, or trend within the context of risk estimation. If we are not clear about estimating the systemic or substantive realities surrounding any proposed data point or inference associated with risks, then our analysis will be clouded by judgments based on unclear and scattered points of assessment. The segment on the future of AI, strategic decision-making, and technology encouraged me to take a deeper look at this book and understand the purpose of writing it. Are these chapters similar to the typical chapters on AI markets, ecosystems, and communities found in other books? It does not seem that way, simply because throughout the book, you may observe that the author, in a cautiously interesting and neutral tone, addresses certain phenomena and realities based on tangible parameters. For example, some parameters or the powerful "distinctive features of technology" are ubiquitous in their own way. I found the attribute of technology being combinatorial and fusion-oriented quite interesting because the compounding effects of technology are indeed underrated. This is because these compounding effects are based on generic and special human-technology relationships and how the progressive or limiting role of technology creates human attributes—or perhaps branches of human attributes (or maybe micro-attributes, who knows). Even if some of these attributes are not clearly discernible as trends or use case correlations, it does not discount the role of any class of technology. I also appreciate that, unlike most authors and AI experts who view technology as supposedly neutral, the book asserts a commonsensical point that no technology is neutral.

Superstupidity, technology 'substandardism' and superintelligence

The reference to the term 'superstupidity' in this book is both ironic and intriguing. The author is clear and vocal about not mincing words when pointing out how substandard AI use cases or preview-level applications may impact humans through their potential for fostering idleness. Here is an excerpt from the book:

'Maybe the existential risk is not machines taking over the world, but rather the opposite, where humans start responding like idle machines—unable to connect the emerging dots of our UN-VICE world.'

This excerpt reflects on a crucial element of the human-technology relationship and even anthropology: the evolution of human autonomy. It is praiseworthy that the author unfolds the simple yet profound point that promoting a culture of substandardism (yes, I've coined this word in this book review) could render the human-technology relationship so inactive that humans might be counter-anthropomorphized into 'idle machines.' The narrative raised by the author is deep. It is distinct from the usual argument that using smartphones or devices makes you lazy in a dependency-related sense when transitioning from older classes of technology to newer versions of the same class. Between the 2000s and the 2010s, the tech transition has been exceptionally quick.
However, due to technology winter, the pandemic, the transformation of social media into recommendation platforms, and the lack of channeled funding for large-scale enhanced R&D across countries (among other reasons), we are witnessing the realization of Moore's Law and aspects of the Dunning-Kruger effect from a tech and behavioral economy standpoint. The spectrum of human dependency has slowed across fields of industrial, digital, and emerging technologies, which, in my view, the author highlights in this excerpt:

"For instance, believing that AI can be a proxy for our own understanding and decision-making as we delegate more power to algorithms is superstupid. Perhaps AI is also superstupid and may cause mistakes, wrong decisions or misalignment. Further, consider AI ineptitude. What might appear as incompetence may simply be algorithms acting on bad data."

This is why I coined the term 'substandardism' for the purposes of this book review. The author brilliantly points out elements of technology substandardism, the disproportionate human-technology relationship, and how AI tools can indeed be superstupid. This reminds me of a recent call to shift the 'paradigm of Generative AI' by moving away from text-to-speech and text-to-visual toward text-to-action, which brings to mind The Entity from Mission: Impossible 7, Bujji from Kalki 2898, and Jarvis/The Vision from the Marvel Cinematic Universe—if I may reference bits of cinema. That being said, the responsible and pragmatic approach of the author in treating 'substandardized' (another new term I coined for this review) artificial intelligence use cases as a vector for potential risks is noteworthy. The author's sincere writing will help anyone in the risk management or technology industry recognize the reality of technology substandardism.

The Black Mirror Effect and Anticipatory Governance

Although, since COVID, the Black Mirror Effect has been frequently mentioned in journal articles, industry insights, social media posts and other forms of insights, mostly in a generalised way, I appreciate that the author has dedicated a section of his book to Anticipatory Governance. For example, the reference to Edward Tenner is quite intriguing to me. I think Tenner's book "Why Things Bite Back" directly addresses the concept of unintended consequences of technology. Although the book is described as "dated," it's still considered "insightful." This suggests that Tenner's observations about technology and its unintended effects have stood the test of time and remain applicable to current technological developments, including AI. Tenner's work on unintended consequences provides a bridge between the existentialist philosophy discussed earlier (Sartre's "existence precedes essence") and the practical realities of technological advancement. It helps to ground the philosophical discussion in real-world examples and consequences. The author remains quite deliberate and cautious in differentiating two nearly distinct policy phenomena: unintended drawbacks and perverse consequences. The author illustrates this point using several examples:
- Air conditioning was developed to cool buildings but ended up contributing to climate change due to increased energy consumption.
- Industrialized agriculture aimed to provide affordable food on a large scale but led to obesity and environmental damage.
- Passenger airbags were introduced to save lives in car accidents but initially caused an increase in child fatalities due to automatic deployment.
- The cobra bounty program in India backfired: people started farming cobras for the reward, and when the program was discontinued the farmed cobras were released, worsening the problem.

This brings us to the Collingridge Dilemma and the 'quandary of time': a technology's consequences are hard to predict before it is widely adopted, and hard to control once it is. Since the push to regulate artificial intelligence across governments has been in vogue for months now, the author hints at the possibility of regulating or controlling AI communities and developers by subjecting them to an intended form of containment. However, containing a community without estimating the potential impact at an early stage is a challenging task. The author honestly points this out as an example of how anticipatory governance is on the rise, which is commendable. Here is an excerpt:

"To anticipate, we must distinguish between the unintended consequences which may arguably be unavoidable, versus the unanticipated outcomes, those adverse effects which could have been anticipated and avoided. When negative externalities are unavoidable, we can still seek to manage them effectively."

The AAA Framework and the Future of Work

It seems to me that the author has been quite responsible in writing about the role of artificial intelligence in shaping the future of work, which is not surprising considering his contributions and efforts in ushering in techistentialism in his own way. That being said, the reference to "Radically Human" by Daugherty and Wilson remains interesting to me. The author highlights their vision that AI will augment and empower human expertise rather than replace it. He is also accurate in highlighting that knowledge-intensive tasks have become integral to consulting and other facets of the employment and business communities, which is why I find his observation that AI's influence is "spilling over into complex cognitive functions" praiseworthy.

In a thought-provoking excerpt, the author delves into the complex and often oversimplified relationship between artificial intelligence and the future of work. In the tenth chapter, his skepticism towards simplistic slogans suggesting that AI will only replace those who cannot use it is insightful for people in risk management and technology. He argues that such statements fail to capture the intricate interplay between cognification, mass automation, and the evolving nature of work. He emphasizes the uncertainty surrounding the net impact of AI on employment, acknowledging that while experts predict a surge in opportunities, the lack of data about the future makes definitive predictions difficult. The chapter underscores the need for a deeper understanding of these relationships to ensure a future in which both humans and technology can thrive.

The chapter also highlights the author's observations on AI's increasing role in fields that traditionally require extensive education and training, such as law, accounting, insurance, finance, and medicine. The gradual automation and augmentation of these fields through generative AI are noted as significant transformations that require integrating systems, adjusting supply and demand, reskilling workforces, and adapting regulations. It is notable that, unlike most "GenAI experts", the author is honest enough to acknowledge the possibility of a technology winter and the uncertain, unclear impact of AI technologies, let alone GenAI, on the skill economy.
Black Jellyfishes, Elephants & Swans

The author presents a compelling typology of risks associated with the development and deployment of artificial intelligence. Drawing on vivid animal metaphors, he categorizes these risks into three distinct types: Black Jellyfish, Black Elephants, and Black Swans. Each category represents a distinct set of challenges and potential consequences that demand attention and proactive responses.

The author begins with Black Jellyfish: low-probability, high-impact events that grow from seemingly predictable situations into far less predictable outcomes. He highlights several potential Black Jellyfish scenarios, such as info-ruption (the disruptive and potentially dangerous effects of information misuse), scaling bias (the amplification of discrimination and inequality through AI), and the fusion of AI and biotechnology (which could challenge the status of humans as dominant beings). These scenarios underscore the need to consider the cascading effects of AI and how they could spiral out of control.

Next come Black Elephants: obvious and highly likely threats that are often ignored or downplayed because of divergent views and a lack of understanding. The author identifies several critical Black Elephants, including the need to reinvent education to keep pace with AI, the deskilling of decision-making as we delegate more responsibilities to AI systems, the potential for mass technological unemployment, and the double-edged sword of cyber insecurity. He emphasizes the importance of mobilizing action, aligning stakeholders, and understanding the complex systems in which these risks are embedded.

Finally, the author explores Black Swans: unforeseeable, rare, and extremely high-impact events. He posits several potential Black Swan scenarios, such as the development of artificial general intelligence (AGI) and superintelligent AI systems, extreme catastrophic failures resulting from interacting AI systems, and the almost magical discovery of cures for incurable diseases. While these events are inherently unpredictable, the author argues that we can still build resilient foundations, monitor for non-obvious signals, and implement guardrails to mitigate the potential consequences.

Throughout the tenth chapter, the author's language is engaging and thought-provoking, drawing the reader into a deeper consideration of the risks and challenges associated with AI. The animal metaphors add accessibility and memorability to complex concepts while highlighting the urgency and gravity of the issues at hand. One potential weakness of the sections on the Black Jellyfish, Black Elephant and Black Swan is that they do not provide concrete examples or case studies to illustrate the risks and scenarios being discussed. While the metaphors are effective in capturing the reader's attention, some readers may want more tangible evidence to support the author's claims.
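To make the distinctions easier to compare, here is a minimal Python sketch of the typology as I read it. It is not taken from the book: the attribute names, the numeric thresholds, and the classify_risk helper are my own assumptions, used only to restate the definitions above in a structured form.

```python
from dataclasses import dataclass

# Illustrative encoding of the Black Jellyfish / Elephant / Swan typology as
# described in the chapter. Attributes and thresholds are my own assumptions,
# not the author's framework.

@dataclass
class RiskScenario:
    name: str
    likelihood: float   # rough probability of occurrence, 0.0 to 1.0
    impact: float       # rough severity if it occurs, 0.0 to 1.0
    foreseeable: bool   # can the event plausibly be anticipated today?
    acknowledged: bool  # is it openly acknowledged rather than downplayed?

def classify_risk(r: RiskScenario) -> str:
    """Map a scenario onto the three categories discussed in the chapter."""
    if not r.foreseeable and r.likelihood < 0.1 and r.impact > 0.8:
        return "Black Swan"       # unforeseeable, rare, extremely high impact
    if r.foreseeable and r.likelihood > 0.6 and not r.acknowledged:
        return "Black Elephant"   # obvious and likely, yet ignored or downplayed
    if r.foreseeable and r.likelihood < 0.3 and r.impact > 0.6:
        return "Black Jellyfish"  # predictable start that escalates unpredictably
    return "Unclassified"

if __name__ == "__main__":
    scenarios = [
        RiskScenario("Scaling bias through AI", 0.2, 0.7, True, True),
        RiskScenario("Mass technological unemployment", 0.7, 0.8, True, False),
        RiskScenario("Catastrophic failure of interacting AI systems", 0.05, 0.95, False, False),
    ]
    for s in scenarios:
        print(f"{s.name}: {classify_risk(s)}")
```

The toy classifier makes only one point: the three categories differ along more axes than impact alone, namely foreseeability, likelihood, and whether the threat is acknowledged at all.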
The Future of Decision-Making: AI's Role and the Risk of Moral Deskilling

In the section on 'moral deskilling', the author delves into the complex relationship between artificial intelligence and human decision-making, particularly in the context of strategic decisions. His language is direct and engaging, drawing attention to the potential consequences of relying too heavily on AI in decision-making. By citing the Pew Research Center's cautionary statement, the author emphasizes the risk of humans becoming overly dependent on machine-driven networks, potentially leading to a decline in their ability to think independently and act without the aid of automated systems.

Furthermore, the author introduces the concept of "moral deskilling," as described by the Markkula Center for Applied Ethics. The concept suggests that as humans increasingly rely on AI for decision-making, they may lose the ability to make moral judgments and ethical decisions independently. Including this concept adds depth to the discussion, prompting readers to consider the long-term implications of AI's role in decision-making.

Regarding the Pew Research Center, the author cites a report that expresses concern about the potential negative impacts of AI on human agency and capabilities. The report, titled "Concerns about human agency, evolution and survival," is part of a larger 2018 Pew Research Center study called "Artificial Intelligence and the Future of Humans," which surveyed experts about the potential impacts of AI on society by 2030. The section cited highlights concerns that increasing dependence on AI could diminish human cognitive, social, and survival skills. Experts quoted in the report, such as Charles Ess of the University of Oslo and Daniel Siewiorek of Carnegie Mellon University, warn about the potential for "deskilling" as humans offload various tasks and capabilities to machines.

As for the Markkula Center for Applied Ethics, the Center has published extensively on the ethical implications of AI, including a report titled "Ethics in the Age of AI". That report, based on a survey of 3,000 Americans, found that a significant majority (86%) believe technology companies should be regulated, and that 82% care whether AI is ethical or not.

Hence, the author responsibly introduces how AI tools and systems may contribute to decision-making value chains in a pragmatic and straightforward fashion, which is noteworthy. He does not hype the limited role of AI in value chains, which is helpful, and perhaps eye-opening for some. This excerpt summarises the author's exposition in the tenth chapter: "We prefer a world where human decisions propel our species forward, where we choose the actions that lead to staying relevant. If we do not, our C-suites might find themselves replaced by an A-suite of algorithms."

It is also interesting that the author calls data 'the new oil' of the 21st century while also asserting that "big data does not predict anything beyond the assumption of an idealized situation in a stable system". The realism of the second statement complements the role of data in a multipolar world, and the way the data-algorithm relationship shapes risk management and facets of human autonomy.
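The 'stable system' caveat is worth making concrete. The sketch below is mine, not the author's: it fits a simple least-squares trend to data generated under one regime and then shows the same fit breaking down after a structural shift, which is essentially what the quoted claim about big data and idealized, stable systems is getting at.

```python
import random

# Toy illustration (my own assumption-laden example, not from the book):
# a model fitted on data from a stable regime extrapolates poorly once
# the underlying regime changes.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

random.seed(0)

# Regime 1 (stable): an indicator grows roughly linearly over 50 periods.
xs = list(range(50))
ys = [10 + 0.5 * x + random.gauss(0, 1) for x in xs]
a, b = fit_line(xs, ys)

# Regime 2 (structural break at t = 50): the trend reverses.
future_xs = list(range(50, 60))
future_ys = [35 - 2.0 * (x - 50) + random.gauss(0, 1) for x in future_xs]

for x, actual in zip(future_xs, future_ys):
    predicted = a + b * x
    print(f"t={x}: predicted={predicted:5.1f}, actual={actual:5.1f}")
```

More data from the first regime would not fix the divergence; the failure comes from assuming the system stays stable, which is the point of the author's caveat.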
Info-ruption and the Internet of Existence (IoE)

The author's reference to a somewhat less mainstream term, info-ruption, in the eleventh chapter of his book is as intriguing to me as his introduction of the idea of the Internet of Existence (IoE). Here, the author delves into the rapidly expanding world of data-driven innovations and their profound impact on our lives. His use of the phrase "data byting back" is a clever play on words, alluding to the idea that data is not only shaping our world but also actively influencing, and potentially threatening, our existence. The author raises a crucial question: should data be treated as a language, as fundamental to our existence as our linguistic substrates? The question highlights the pervasive nature of data in our lives and suggests that understanding data is essential to comprehending its impact on our future.

The author presents a timeline of how software and data have evolved: beginning with the digitization of business, moving to the democratization of software creation through no-code, low-code, and generative AI, and culminating in a digital universe that surpasses our physical space in importance. The timeline effectively illustrates the increasing dominance of data in our lives and the potential for software to "eat" not only the world but also humanity itself. The phrase "software eating humanity" is particularly striking, as it suggests that our reliance on data and software could ultimately consume us; the idea is reminiscent of the technological singularity, where artificial intelligence surpasses human intelligence and control. However, the author does not simply present a dystopian view of the future. Instead, he emphasises the importance of understanding data so that we can articulate its impacts and make informed decisions about its governance. The section concludes by highlighting the critical importance of data privacy, ethics, and governance in a world where our bodies and environments are increasingly composed of data.

Disinformation-as-a-service

In a section of the eleventh chapter, the author delves into the emerging threat of disinformation-as-a-service (DaaS): a criminal business model that provides highly customizable, centrally hosted disinformation services for a fee, enabling the commoditization of info-ruption. The concept is particularly alarming because it allows various bad actors, such as conspiracy theorists, political activists, and autocracies, to initiate disinformation campaigns with ease, campaigns that can reinforce one another and magnify their impact.

The author's use of real-world examples, such as the QAnon groups targeting Wayfair, Netflix, Bill Gates, and 5G telecom operators, as well as the defamation lawsuits filed by Dominion Voting Systems and Smartmatic, effectively illustrates the tangible consequences of disinformation attacks on businesses, including the potential for significant financial and reputational damage. I am neither validating nor invalidating the legal claims in those defamation lawsuits; the point is that the author grounds the threat in concrete cases.

The author's transition from DaaS to the broader topic of cyber insecurity is well executed, as it highlights the growing vulnerability of our digital world. He emphasizes that cyberattacks can be launched anonymously and at minimal cost yet have devastating consequences, affecting critical infrastructure such as power grids, healthcare systems, and government structures. The discussion of the potential legal ramifications for companies facing lawsuits over inadequate cybersecurity measures further underscores the urgency of addressing these threats.
The introduction of ransomware-as-a-service (RaaS) as another emerging threat is particularly compelling. The author's comparison of RaaS to enterprise software, complete with customer service for smooth ransom collection, conveys how easily cyberattacks can now be launched. The suggestion that leading ransomware brands such as BlackCat, DarkSide, and LockBit could become as commonplace as well-known software companies like Microsoft, Oracle, and Adobe is a powerful and unsettling analogy that drives home the severity of the threat.

Conclusion

Overall, the book is a definitive introductory read for understanding key technological risks around emerging technologies, including artificial intelligence, and the author has been largely responsible in articulating those risks, trends, and phenomena in a well-packaged, well-encapsulated way. I would not regard this book as a piece of industry research or an authority on the scholarship of technology policy or technology risk management, but I am clear in saying that it is a promising compendium of the key risks, trends, and phenomena we see in an emerging multipolar world.

Thanks for reading this insight. Since May 2024, we have launched specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.
