

Search Results

106 items found

  • [New Report] The Indic Pacific - ISAIL Joint Annual Report, 2022-24, Launched

    Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL) are proud to announce their joint achievements in promoting responsible AI development and legal innovation in India. Founded by Abhivardhan, Indic Pacific and ISAIL have been instrumental in shaping the discourse on AI governance in India. Abhivardhan's early research on AI ethics and international law has been recognized by the Council of Europe, setting the stage for groundbreaking initiatives. Key milestones include:

    • India's first privately proposed draft Artificial Intelligence Regulation Bill, authored by Abhivardhan, aimed at establishing a comprehensive legal framework for AI governance.
    • The VLiGTA ecosystem, offering cutting-edge training programs on technology law, AI governance, and intellectual property for professionals and organizations.
    • ISAIL's significant contributions to AI standardization and research, with the 2020 Handbook on AI and International Law featured by the Council of Europe as a notable Indian AI initiative.
    • The launch of AIStandard.io, a platform dedicated to the development and dissemination of AI standardization guidelines.
    • Ongoing research and policy initiatives, including the recent report "Reimagining and Restructuring MeitY for India" (IPLR-IG-007).

    You can now access the Joint Annual Report of Indic Pacific Legal Research & the Indian Society of Artificial Intelligence and Law for 2022-24, for free. For more information about Indic Pacific Legal Research and ISAIL, please visit www.indicpacific.com and www.isail.in. The report is also accessible for reading at https://indopacific.app/product/indic-pacific-isail-joint-annual-report-2022-24/

  • Why AI Standardisation & Launching AIStandard.io & Re-introducing IndoPacific.App

    Artificial Intelligence (AI) is widely recognized as a disruptive technology with the potential to transform various sectors globally. However, the economic value of AI technologies remains inadequately quantified. Despite numerous reports on AI ethics and governance, many of these efforts have been inconsistent and reactionary, often failing to address the complexities of regulating AI effectively. Even India's MeitY AI Advisory, which faces constitutional challenges, was the product of knee-jerk reactions.

    Amidst the rapid advancements in AI technology, the market has been inundated with AI products and services that frequently overpromise and underdeliver, leading to significant hype and confusion about AI's actual capabilities. Many companies are hastily deploying AI without a comprehensive understanding of its limitations, resulting in substandard or half-baked solutions that can cause more harm than good.

    In India, several key issues in AI policy remain unaddressed by most organizations and government functionaries. Firstly, there is no settled legal understanding of AI at a socio-economic and juridical level, leading to a lack of clarity on what can be achieved through consistent laws, jurisprudence, and guidelines on AI. Secondly, the science and research community, along with the startup and MSME sectors in India, have not actively participated in addressing holistic and realistic questions around AI policy, compute economics, AI patentability, and productization. Instead, much of the AI discourse is driven by investors and marketing leaders, resulting in half-baked and misleading narratives.

    The impact of AI on employment is multifaceted, with varying effects across industries. While AI solutions have demonstrated tangible benefits in B2B sectors such as agriculture, supply chain management, human resources, transportation, healthcare, and manufacturing, the impact on B2C segments like creative, content, education, and entertainment remains unclear. The long-term impact of RoughDraft AI or GenAI should be approached with caution, and governments worldwide should prioritize addressing the risks associated with the misuse of AI, which can affect the professional capabilities of key workers and employees involved with AI systems.

    This article aims to explain why AI standardization is necessary and what can be achieved through it in and for India. With the wave of AI hype, the legal-ethical risks surrounding substandard AI solutions, and a plethora of AI policy documents, it is crucial to understand the true nature of AI and its significance for the majority of the population. By establishing comprehensive ethics principles for the design, development, and deployment of AI in India, drawing from global initiatives but grounded in the Indian legal and regulatory context, India can harness the potential of AI while mitigating the associated risks, ultimately leading to a more robust and ethical AI landscape.

    The Hype and Reality of AI in India
    The rapid advancement of Artificial Intelligence (AI) has generated significant excitement and hype in India. However, it is crucial to separate the hype from reality and address the challenges and ethical considerations that come with AI adoption.

    The Snoozefest of AI Policy Jargon: Losing Sight of What Matters
    In the midst of the AI hype train, we find ourselves drowning in a deluge of policy documents that claim to provide guidance and clarity, but instead leave us more confused than ever.
    These so-called "thought leaders" and "experts" seem to have mastered the art of saying a whole lot of nothing, using buzzwords and acronyms that would make even the most seasoned corporate drone's head spin. Take, for example, the recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) on March 1, 2024. This masterpiece of bureaucratic jargon manages to use vague terms like "undertested" and "unreliable" AI without bothering to define them or provide any meaningful context. It's almost as if they hired a team of interns to play buzzword bingo and then published the results as official policy. Just a few days later, on March 15, the government issued yet another advisory, this time stipulating that AI models should only be accessible to Indian users if they carry clear labels indicating potential inaccuracies or unreliability in the output they generate. Because apparently, the solution to the complex challenges posed by AI is to slap a warning label on it and call it a day.

    And let's not forget the endless stream of reports, standards, and frameworks that claim to provide guidance on AI ethics and governance. From the IEEE's Ethically Aligned Design initiative to the OECD AI Principles, these documents are filled with high-minded principles and vague platitudes that do little to address the real-world challenges of AI deployment. Meanwhile, the actual stakeholders – the developers, researchers, and communities impacted by AI – are left to navigate this maze of jargon and bureaucracy on their own. Startups and SMEs struggle to keep up with the constantly shifting regulatory landscape, while marginalized communities bear the brunt of biased and discriminatory AI systems.

    It's time to cut through the noise and focus on what really matters: developing AI systems that are transparent, accountable, and aligned with human values. We need policies that prioritize the needs of those most impacted by AI, not just the interests of big tech companies and investors. And we need to move beyond the snoozefest of corporate jargon and engage in meaningful, inclusive dialogue about the future we want to build with AI. So let's put aside the TESCREAL frameworks and the buzzword-laden advisories, and start having real conversations about the challenges and opportunities of AI. Because at the end of the day, AI isn't about acronyms and abstractions – it's about people, and the kind of world we want to create together.

    Overpromising and Underdelivering
    Many companies in India are rushing to deploy AI solutions without fully understanding their capabilities and limitations. This has led to a proliferation of substandard or half-baked AI products that often overpromise and underdeliver, creating confusion and mistrust among consumers. The excessive focus on generative AI and large language models (LLMs) has also overshadowed other vital areas of AI research, potentially limiting innovation.

    Ethical and Legal Considerations
    The integration of AI in various sectors, including healthcare and the legal system, raises complex ethical and legal questions. Concerns about privacy, bias, accountability, and transparency need to be addressed to ensure the responsible development and deployment of AI. The lack of clear regulations and ethical guidelines around AI in India has created uncertainty and potential risks.

    Policy and Regulatory Challenges
    India's approach to AI regulation has been reactive rather than strategic, with ad hoc responses and unclear guidelines.
    The recent AI advisory issued by the Ministry of Electronics and Information Technology (MeitY) has faced criticism for its vague terms and lack of legal validity. There is a need for a comprehensive legal framework that addresses the unique aspects of AI while fostering innovation and protecting individual rights.

    Balancing Innovation and Competition
    AI has the potential to drive efficiency and innovation, but it also raises concerns about market concentration and anti-competitive behavior. The Competition Commission of India (CCI) has recognized the need to study the impact of AI on market dynamics and formulate policies that effectively address its implications on competition.

    What's Really Happening in the "India" AI Landscape?

    Lack of Settled Legal Understanding of AI
    India currently lacks a clear legal framework that defines AI and its socio-economic and juridical implications. This absence of settled laws has led to confusion among the judiciary and executive branches regarding what can be achieved through consistent AI regulations and guidelines[1]. A recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) in March 2024 aimed to provide guidelines for AI models under the Information Technology Act. However, the advisory faced criticism for its vague terms and lack of legal validity, highlighting the challenges posed by the current legal vacuum[2]. The ambiguity surrounding AI regulation is exemplified by the case of Ankit Sahni, who attempted to register an AI-generated artwork but was denied by the Indian Copyright Office. The decision underscored the inadequacy of existing intellectual property laws in addressing AI-generated content[3].

    Limited Participation from Key Stakeholders
    The AI discourse in India is largely driven by investors and marketing leaders, often resulting in half-baked narratives that fail to address holistic questions around AI policy, compute economics, patentability, and productization[1]. The science and research community, along with the startup and MSME sectors, have not actively participated in shaping realistic and effective AI policies. This lack of engagement from key stakeholders has hindered the development of a comprehensive AI ecosystem[4]. Successful multistakeholder collaborations, such as the IEEE's Ethically Aligned Design initiative, demonstrate the value of inclusive policymaking[5]. India must encourage greater participation from diverse groups to foster innovation and entrepreneurship in the AI sector.

    Impact of AI on Employment
    The impact of AI on employment in India is multifaceted, with varying effects across industries. While AI solutions have shown tangible benefits in B2B sectors like agriculture, supply chain management, and healthcare, the impact on B2C segments such as creative, content, and education remains unclear[1]. A study by NASSCOM estimates that around 9 million people are employed in low-skilled services and BPO roles in India's IT sector[6]. As AI adoption increases, there are concerns about potential job displacement in these segments. However, AI also has the potential to enhance productivity and create new job opportunities. The World Economic Forum predicts that AI will generate specific job roles in the coming decades, such as AI and Machine Learning Specialists, Data Scientists, and IoT Specialists[7]. To harness the benefits of AI while mitigating job losses, India must invest in reskilling and upskilling initiatives.
    The government has launched programs like the National Educational Technology Forum (NETF) and the Atal Innovation Mission to promote digital literacy and innovation[8]. As India navigates the impact of AI on employment, it is crucial to approach the long-term implications of RoughDraft AI and GenAI with caution. Policymakers must prioritize addressing the risks associated with AI misuse and its potential impact on the professional capabilities of workers involved with AI systems[1]. By expanding on these key points with relevant examples and trends, this article aims to provide a comprehensive overview of the challenges and considerations surrounding AI policy in India. The next section delves into potential solutions and recommendations to address these issues.

    A Proposal to "Regulate" AI in India: AIACT.IN
    The Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2, released on March 14, 2024, is an important private regulation proposal developed by yours truly. While not an official government statute, AIACT.IN v2 offers a comprehensive regulatory framework for responsible AI development and deployment in India. AIACT.IN v2 introduces several key provisions that make it a significant contribution to the AI policy discourse in India:

    • Risk-based approach: The bill adopts a risk-based stratification and technical classification of AI systems, tailoring regulatory requirements to the intensity and scope of risks posed by different AI applications. This approach aligns with global best practices, such as the EU AI Act. Apart from the risk-based approach, the bill sets out three other ways to classify AI.
    • Promoting responsible innovation: AIACT.IN v2 includes measures to support innovation and SMEs, such as regulatory sandboxes and real-world testing. It also encourages the sharing of AI-related knowledge assets through open-source repositories, subject to IP rights.
    • Addressing ethical and societal concerns: The bill tackles issues such as content provenance and watermarking of AI-generated content, intellectual property protections, and countering AI hype. These provisions aim to foster transparency, accountability, and public trust in AI systems.
    • Harmonization with global standards: AIACT.IN v2 draws inspiration from international initiatives such as the UNESCO Recommendations on AI and the G7 Hiroshima Principles on AI. By aligning with global standards, the bill promotes interoperability and facilitates India's integration into the global AI ecosystem.

    Despite its status as a private bill, AIACT.IN v2 has garnered significant attention and support from the AI community in India. The Indian Society of Artificial Intelligence and Law (ISAIL) has featured the bill on its website, recognizing its potential to shape the trajectory of AI regulation in the country. Now, to disclose: I proposed AIACT.IN in November 2023 and again in March 2024 to promote democratic discourse, not the blind implementation of this bill in the form of a law. The response has been overwhelming so far, and a third version of the Draft Act is already in the works. However, having taken feedback from advocates, corporate lawyers, legal scholars, technology professionals and even some investors and C-suite professionals in tech companies, the recurring message was that benchmarking AI itself is a hard task, and that even the AIACT.IN proposal could prove difficult to implement due to the lack of a general understanding of AI.

    What to Standardise Then?
    Before we standardise artificial intelligence in India, let us first understand what exactly can be standardised. To be fair, standardisation of AI in India is contingent upon the nature of the industry itself. As of now, the industry is at a nascent stage despite all the hype and the so-called discourse around "GenAI" training. This means we are mostly at the scaling-up and R&D stages around AI & GenAI in India, be it B2B, B2C or D2C.

    Second, let's ask: who should be subject to standardisation? In my view, AI standardisation must be neutral of the net worth or economic status of any company in the market. This means that the principles of AI standardisation, both sector-neutral and sector-specific across the aisle, must apply to all market players, in a competitive sense. This is why the Indian Society of Artificial Intelligence and Law has introduced Certification Standards for Online Legal Education (edtech). Nevertheless, AI standards must be developed with a sense of distinction, remaining mindful of the original and credible use cases that are coming up. The biggest risk of AI hype in this decade is that any random company starts claiming it has a major AI use case, only for it to turn out that it has not tested or effectively built that AI even at the stage of its "solution" being a test case. This is why it becomes necessary to address AI use cases critically.

    There are two key ways to standardise AI without regulating it: (1) the legal-ethical way; and (2) the technical way. Neither method should be adopted at the expense of the other. In my view, both methods must be implemented, with caution and sense. The reason is obvious: technical benchmarking enables us to track the evolution of any technology and its sister and daughter use cases, while legal-ethical benchmarking gives us a conscious understanding of how effective AI market practices can be developed. This does not mean that legal-ethical benchmarking on commonsensical principles like privacy, fairness, data quality, etc. (most AI standards will naturally begin with data protection principles across sectors) must be applied in a rigid, controlling and absolutist way, because an improperly drafted standardisation approach could also be problematic for a market economy that is still working through the scaling and R&D stages of AI. Fortunately, India already has a full-fledged DPDPA to begin with.

    Here's what we have planned for technology professionals, AI & tech startups & MSMEs of Bharat and the Indo-Pacific:

    • The Indian Society of Artificial Intelligence and Law (ISAIL) is launching aistandard.io - a repository of AI-related legal-ethical and policy standards with sector-neutral or sector-specific focus.
    • Members of ISAIL and of specific committees can wholeheartedly contribute to AI standardisation by suggesting their inputs on standardising AI use cases, solutions, and testing benchmarks (legal / policy / technical / all).
    • The ISAIL Secretariat will define a set of rules of engagement for professionals and businesses to contribute to AI standardisation.
    • You can also participate and become a part of the aistandard.io community as an ISAIL member, via paid subscription at indian.substack.com or via manual request at executive@isail.co.in.
    • The Indian Society of Artificial Intelligence and Law will soon invite technology companies, MSMEs and startups to become its Allied Members.
    • This is why I am glad to state that the Indian Society of Artificial Intelligence and Law, in conjunction with Indic Pacific Legal Research LLP, will come up with relevant standards on AI use cases across certain key sectors in India - banking & finance, health, education, intellectual property management, agriculture and legal technologies. Our aim is to propose industry viability standards, not regulatory standards, to study basic parameters for regulation, such as (1) the inherent purpose of AI systems, (2) market integrity (including competition law), (3) risk management and (4) knowledge management. Indic Pacific will publish the Third Version of the AIACT.IN proposal shortly.
    • To begin with, we have defined certain principles of AI Standardisation, which may apply in every case. We have termed these the "ISAIL Principles of AI Standardisation, i.e., aistandard.io".

    The ISAIL Principles of AI Standardisation

    Principle 1: Sector-Neutral and Sector-Specific Applicability
    AI standardization guidelines should be applicable across all sectors and industries, regardless of the size or economic status of the companies involved. However, they should also consider sector-specific requirements and use cases to ensure relevance and effectiveness.

    Principle 2: Legal-Ethical and Technical Benchmarking
    AI standardization should involve both legal-ethical and technical benchmarking. Legal-ethical benchmarking should focus on principles like privacy, fairness, and data quality, while technical benchmarking should enable tracking the evolution of AI technologies and their use cases.

    Principle 3: Flexibility and Adaptability
    The standardization approach should be flexible and adaptable to the evolving AI landscape in India, which is still in the scaling and R&D stages. The guidelines should not be rigid or absolutist, but should allow room for innovation and growth.

    Principle 4: Credible Use Case Focus
    The guidelines should prioritize credible and original AI use cases, and critically evaluate claims made by companies to avoid hype and misleading narratives. This will help ensure that standardization efforts are grounded in practical realities.

    Principle 5: Interoperability and Market Integration
    AI standardisation should prioritize interoperability to ensure seamless integration of market practices and foster a free economic environment. Standards should be developed with due care to promote healthy competition and innovation while preventing market fragmentation.

    Principle 6: Multistakeholder Participation and Engagement Protocols
    The development of AI standards should involve active participation and collaboration from diverse stakeholders, including the science and research community, startups, MSMEs, industry experts, policymakers, and civil society.
    However, such participation will be subject to well-defined protocols of engagement to ensure transparency, accountability, and fairness. The open-source or proprietary nature of engagement in any initiative will depend on these protocols.

    Principle 7: Recording and Quantifying AI Use Cases
    To effectively examine the evolution of AI as a class of technology, it is crucial to record and quantify AI use cases for systems, products, and services. This includes documenting the real features and factors associated with each use case. Both legal-ethical and technical benchmarking should be employed to assess and track the development and impact of AI use cases (a minimal sketch of what such a record could look like appears at the end of this item).

    From VLiGTA App to IndoPacific App
    We have transitioned our technology law and policy repository and e-commerce platform, VLiGTA.App, to IndoPacific.App. We are thrilled to announce a significant evolution in our platform’s journey. Say hello to indopacific.app, your essential app for mastering legal skills and insights. This change is driven by our commitment to making legal education more comprehensive and accessible to a broader audience, especially those in the tech industry and beyond.

    Why the Change?

    🔍 Enhanced Focus and Broader Audience
    Our previous platform, vligta.app, was primarily focused on legal professionals. With indopacific.app, we are expanding our horizons to make legal knowledge relevant and accessible to tech professionals and other non-legal fields. Learn how legal skills can empower you, no matter your profession.

    🌟 Alignment with Our New Vision and Mission
    Our new main tagline, "Your essential app for mastering legal skills & insights," underscores our dedication to being the go-to resource for high-quality, practical legal education. Meanwhile, our supporting tagline, "Empower yourself with legal knowledge, tailored for tech and beyond," highlights our commitment to broader applicability and professional growth.

    📈 Improved User Experience and Resources
    Enjoy a revamped user interface, enhanced features, and a richer resource library. Dive into diverse content such as case studies, interactive modules, and expert talks that bridge the gap between legal concepts and practical application in various fields.

    🌏 Reflecting a Global Perspective
    The name indopacific.app signifies our goal to cater to a global audience, particularly in the dynamic and rapidly evolving regions of the Indo-Pacific. We aim to provide universally applicable legal education that transcends geographical and professional boundaries.

    What to Expect?
    All existing URLs from vligta.app will automatically redirect to the corresponding pages on indopacific.app, ensuring a seamless transition with no interruption in access to our resources. Join us on this exciting journey as we continue to empower professionals with essential legal skills and insights tailored for the tech industry and beyond.
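    As flagged under Principle 7 above, here is a minimal, purely illustrative sketch of what a use-case record might look like in code. The field names and example values are my own assumptions for illustration; they are not part of any published ISAIL standard:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIUseCaseRecord:
    """Hypothetical record for documenting and quantifying one AI use case."""
    name: str                    # e.g. "resume-screening assistant"
    sector: str                  # e.g. "human resources"
    inherent_purpose: str        # what the system is actually built to do
    deployment_stage: str        # "R&D", "pilot", or "production"
    legal_ethical_benchmarks: List[str] = field(default_factory=list)
    technical_benchmarks: List[str] = field(default_factory=list)
    recorded_on: date = field(default_factory=date.today)

# Example entry capturing both benchmarking tracks the principles describe.
record = AIUseCaseRecord(
    name="resume-screening assistant",
    sector="human resources",
    inherent_purpose="shortlist candidates against stated job criteria",
    deployment_stage="pilot",
    legal_ethical_benchmarks=["DPDPA data-minimisation check", "fairness audit"],
    technical_benchmarks=["precision/recall on a held-out resume set"],
)
print(record)
```

    Keeping such records across sectors would let both legal-ethical and technical benchmarks be tracked against the same documented use case over time.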
    🌐 References
    [1] https://www.nature.com/articles/s41599-024-02647-9
    [2] https://law.asia/navigating-ai-india/
    [3] https://sageuniversity.edu.in/blogs/impact-of-artificial-intelligence-on-employment
    [4] https://morungexpress.com/absence-of-dedicated-legal-framework-a-challenge-for-ai-regulations
    [5] https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
    [6] https://www.pwc.in/assets/pdfs/consulting/technology/data-and-analytics/artificial-intelligence-in-india-hype-or-reality/artificial-intelligence-in-india-hype-or-reality.pdf
    [7] https://www.ris.org.in/sites/default/files/Publication/Policy%20brief-104_Amit%20Kumar.pdf
    [8] https://www.livelaw.in/articles/artificial-intelligence-india-lacks-clear-ip-laws-around-ai-results-249693
    [9] https://juriscentre.com/2023/12/14/lack-of-laws-governing-ai-in-india-focus-on-deepfake/
    [10] https://www.barandbench.com/law-firms/view-point/artificial-intelligence-the-need-for-development-of-a-regulatory-framework
    [11] https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/
    [12] https://www.spotdraft.com/blog/engaging-stakeholders-in-ai-use-policy-development
    [13] https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives
    [14] https://www.publicissapient.com/insights/AI-hype-or-reality
    [15] https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
    [16] https://www.sciencedirect.com/science/article/pii/S266672152200028X
    [17] https://www.cnbctv18.com/technology/wef-2023-gita-gopinath-says-ai-could-impact-30-of-jobs-in-india-18816901.htm
    [18] https://nasscom.in/knowledge-center/publications/ai-beyond-myth-hype
    [19] https://accesspartnership.com/the-key-policy-frameworks-governing-ai-in-india/
    [20] https://indiaai.gov.in/article/ai-impact-on-india-jobs-and-employment

  • Microsoft's Calculated 'Competition' with OpenAI

    Recent developments have shed new light on the complex relationship between Microsoft and OpenAI, two significant players in the artificial intelligence (AI) sector. While the companies have maintained a collaborative partnership, Microsoft's 2024 annual report reveals a more nuanced dynamic, explicitly acknowledging areas of competition between the two entities. This insight aims to examine the current state of affairs between Microsoft and OpenAI, analyzing their partnership, areas of competition, and the potential implications for the broader AI industry. By exploring official statements, financial reports, and market trends, we can gain a clearer understanding of how these two influential organizations are positioning themselves in the rapidly evolving AI landscape. Key points of discussion include:

    • The nature of Microsoft's investment in OpenAI and their collaborative efforts
    • Specific areas where the companies now compete, as outlined in Microsoft's annual report
    • The strategic implications for both companies and the AI industry at large
    • Potential future scenarios for the Microsoft-OpenAI relationship

    The Microsoft-OpenAI Collaboration Timeline
    The relationship between Microsoft and OpenAI has been marked by significant investments and collaborations, evolving from a strategic partnership to a more complex dynamic over the years.

    Initial Investment and Collaboration (2019-2022)
    Microsoft's involvement with OpenAI began in 2019 with a $1 billion investment, aimed at developing artificial general intelligence (AGI) with OpenAI exclusively using Microsoft's Azure cloud services. This initial phase focused on joint research and development, with Microsoft gaining the right to commercialize resulting technologies. In 2020, Microsoft announced an exclusive license to GPT-3, OpenAI's large language model, further cementing their collaboration. This move allowed Microsoft to integrate GPT-3 capabilities into its own products and services.

    Expanded Investment and Integration (2023)
    In January 2023, Microsoft significantly increased its stake in OpenAI with a reported $10 billion investment. This multi-year agreement expanded their partnership, with Microsoft providing advanced supercomputing systems and cloud infrastructure to support OpenAI's research and products.

    Emerging Competitive Dynamics (2024)
    Despite their close partnership, Microsoft's 2024 annual report explicitly acknowledged competition with OpenAI in certain AI services and search markets. This admission highlights the complex nature of their relationship, where collaboration and competition coexist.

    Current State of the Partnership
    As of 2024, the Microsoft-OpenAI partnership remains strategically important for both companies. Microsoft continues to be OpenAI's exclusive cloud provider, while also integrating OpenAI's technologies into its products. However, the acknowledgment of competition suggests a nuanced relationship where both entities are positioning themselves in the rapidly evolving AI market. The partnership faces potential challenges, including regulatory scrutiny and the need to balance collaborative efforts with individual corporate interests. As the AI landscape continues to evolve, the dynamics between Microsoft and OpenAI may further shift, reflecting the high stakes and competitive nature of the AI industry.
    Microsoft's Acknowledgment of OpenAI as a Competitor
    In a significant shift from its previously collaborative stance, Microsoft has explicitly recognized OpenAI as a competitor in its 2024 annual report. This acknowledgment, found in the company's Form 10-K, marks a pivotal moment in the evolving landscape of artificial intelligence (AI) and cloud services.

    Competitive Dynamics in AI and Cloud Services
    Microsoft's declaration reflects the rapidly changing dynamics in the AI industry. While the two companies have maintained a strong partnership, with Microsoft investing billions in OpenAI, the acknowledgment of competition suggests a more complex relationship moving forward. This shift is indicative of the high stakes involved in the AI race, where even close collaborators can find themselves vying for market share.

    Strategic Implications
    • Increased AI Investments: Microsoft reported a 9% increase in research and development expenses, reaching $29.5 billion, with a significant portion dedicated to cloud engineering and AI investments. This substantial commitment underscores Microsoft's determination to maintain a competitive edge in AI technologies.
    • Product Integration: The company is aggressively integrating AI capabilities across its product lines, including Office 365, Bing, and LinkedIn. This strategy aims to differentiate Microsoft's offerings and potentially lock in customers to its AI-enhanced ecosystem.
    • Cloud Service Differentiation: Microsoft is leveraging AI to set its cloud services apart, particularly in Azure. This focus on AI-driven differentiation could lead to increased customer retention and attraction, directly competing with OpenAI's offerings.

    Market Recognition and Challenges
    Microsoft's Form 10-K also acknowledges the highly competitive nature of the AI market, with rapid evolution and new entrants constantly emerging. This recognition extends to potential challenges and risks associated with AI development, including:
    • Unintended use of AI technologies
    • The need for responsible AI practices
    • Potential regulatory challenges

    Long-term Commitment to AI
    Despite the competitive stance, Microsoft's financial results show strong growth in cloud services, which it expects to further enhance through AI integration. The company's active development of AI infrastructure and training capabilities indicates a long-term commitment to remaining at the forefront of AI technology, even as it navigates a more competitive relationship with OpenAI.

    Competition Law Implications
    Microsoft's explicit recognition of OpenAI as a competitor in its 2024 annual report marks a significant shift in the artificial intelligence (AI) competitive landscape. This acknowledgment has several important implications from a competition law perspective:
    • Collaborative Competition: The acknowledgment highlights the complex nature of Microsoft's relationship with OpenAI, which involves both collaboration (through significant investments) and competition. This "coopetition" model may attract scrutiny from competition authorities concerned about potential collusion or market allocation.
    • Merger and Acquisition Implications: This competitive stance could affect how regulators view any future acquisitions or deeper integrations between Microsoft and OpenAI, potentially raising concerns about market consolidation.
    • Data and Resource Access: Competition authorities may examine whether Microsoft's dual role as an investor in and competitor to OpenAI provides it with unfair advantages in terms of data access or computational resources.
    • Vertical Integration Concerns: As Microsoft integrates AI capabilities across its product lines, regulators may scrutinize whether this vertical integration creates barriers to entry for other AI competitors.

    However, as pointed out by Matt Trifiro in his LinkedIn post, this development is likely to be just the beginning of a broader trend in the tech industry, particularly in the cloud and AI sectors. Two major shifts are already becoming apparent.

    A New Wave of Cloud Differentiation and Lock-In
    A significant change is underway in how major cloud providers, Microsoft included, are approaching the market. This shift represents a departure from the previous decade's trend of convergence in cloud functionality. Key aspects of this shift include:

    • Exclusive AI Capabilities: Cloud providers are now focusing on developing and offering unique AI-powered features and services. These exclusive capabilities are designed to set each provider apart in an increasingly competitive market.
    • Proprietary AI Ecosystems: The goal is to create AI-centric environments that are unique to each cloud provider. This strategy aims to increase customer dependency on specific platforms, making it more challenging for clients to switch providers.
    • Reversal of Multi-Cloud Trends: Previously, there was a move towards making cloud services more interoperable and supporting multi-cloud strategies. The new approach may make it more difficult for enterprises to maintain a multi-cloud environment.
    • Impact on Enterprise Customers: Businesses may find themselves increasingly tied to a single cloud provider's AI ecosystem. This could lead to reduced flexibility but potentially deeper integration and more advanced AI capabilities.
    • Microsoft's Strategy: As evidenced in the Form 10-K, Microsoft is heavily investing in AI across all segments. The company is integrating AI capabilities into products like Azure, Office 365, and Dynamics 365. This aligns with the broader trend of creating a more differentiated and potentially "sticky" cloud ecosystem.

    An Explosion of Specialized Cloud Providers
    The second major shift involves the emergence of highly specialized AI cloud service providers. This trend is reshaping the competitive landscape in cloud computing and AI services. Key aspects of this shift include:

    • Niche AI Service Providers: New players are entering the market with highly specialized AI cloud services. Examples include CoreWeave for AI training and Zero Gap AI for inferencing.
    • Unique Capabilities: These specialized providers offer capabilities that major cloud platforms may struggle to match quickly. They often focus on specific aspects of AI, such as training models or optimizing inference.
    • Physical Infrastructure Advantages: Some of these providers have unique physical assets that give them an edge. For instance, Zero Gap AI's urban fiber and Point of Presence (POP) footprint is cited as a hard-to-replicate advantage.
    • Market Impact: These specialized providers are expected to capture significant portions of the growing AI cloud services market. They may pose a challenge to more generalized cloud providers in specific AI-related niches.
    • Microsoft's Response: The Form 10-K indicates that Microsoft is aware of this trend and its potential impact.
    Microsoft is investing heavily in AI infrastructure and training, likely to compete with these specialized providers. The company's strategy includes both broadening its general AI capabilities and developing more specialized services.
    • Potential for Partnerships or Acquisitions: While not explicitly stated, this trend could lead to partnerships between major cloud providers and specialized AI companies. It might also drive acquisitions as larger companies seek to incorporate specialized AI capabilities.

    Conclusion
    These two shifts represent a significant evolution in the cloud and AI landscape. Microsoft's Form 10-K reflects an awareness of these changes and outlines strategies to adapt and compete in this new environment. The company's focus on AI integration across its product lines, substantial investments in AI infrastructure, and recognition of the competitive threat from both major cloud providers and specialized AI companies indicate a comprehensive approach to addressing these market shifts.

    Thanks for reading this insight. Since May 2024, we have launched specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • AI-Generated Texts and the Legal Landscape: A Technical Perspective

    Artificial Intelligence (AI) has significantly disrupted the competitive marketplace, particularly in the realm of text generation. AI systems like ChatGPT and Bard have been used to generate a wide array of literary and artistic content, including translations, news articles, poetry, and scripts[8]. However, this has led to complex issues surrounding intellectual property rights and copyright laws[8].

    Copyright Laws and AI-Generated Content
    AI-generated content is produced by an inert entity using an algorithm, and therefore it does not traditionally fall under copyright protection[8]. However, the U.S. Copyright Office has recently shown openness to granting ownership to AI-generated work on a "case-by-case" basis[5]. The key factor in determining copyright is the extent to which a human had creative control over the work's expression[5]. The AI software code itself is subject to copyright laws, including the copyrights on the programming code, the machine learning model, and other related aspects[8]. However, the classification of AI-generated material, such as writings, text, programming code, pictures, or images, and its eligibility for copyright protection, remains contentious[8].

    Legal Challenges and AI
    The New York Times (NYT) has recently sued OpenAI and Microsoft for copyright infringement, contending that millions of its articles were used to train automated chatbots without authorization[2]. OpenAI, however, has argued that using copyrighted works to train its technologies is fair use under the law[6]. This case highlights the ongoing legal battle over the unauthorized use of published work to train AI systems[2].

    Paraphrasing and AI
    Paraphrasing tools, powered by AI, have become increasingly popular. These tools can rewrite, enhance, and repurpose content while maintaining the original meaning[7]. However, the use of such tools has raised concerns about the potential for copyright infringement and plagiarism. To address this, it is suggested that heuristic and semantic protocols be developed for accepting and rejecting AI-generated texts[3]. AI-based paraphrasing tools, such as Quillbot and SpinBot, offer the ability to rephrase text while preserving the original meaning. These tools can be beneficial for students and professionals alike, aiding in the writing process by providing alternative expressions and avoiding plagiarism. However, the accuracy and ethical use of these tools are concerns. For example, a student might use an AI paraphrasing tool to rewrite an academic paper, but without a deep understanding of the content, the result could be a superficial or misleading representation of the original work. This raises questions about the integrity of the paraphrased content and the student's learning process. It is crucial to develop guidelines for the ethical use of paraphrasing tools, ensuring that users engage with the original material and properly attribute sources to maintain academic and professional standards.

    Citation and Referencing in the AI Era
    The advent of AI-generated texts has necessitated a change in the concept of citation and referencing. Currently, the American Psychological Association (APA) recommends that text generated from AI be formatted as "Personal Communication," receiving an in-text citation but not an entry on the References list[4]. However, as AI-generated content becomes more prevalent, the nature of primary and secondary sources might change, and the traditional system of citation may need to be permanently altered.
    For instance, the Chicago Manual of Style advises treating AI-generated text as personal communication, requiring citations to include the AI's name, the prompt description, and the date accessed. However, this approach may not be sufficient as AI becomes more prevalent in content creation. Hypothetically, consider a scenario where a researcher uses an AI tool to draft a section of a literature review. Current citation standards would struggle to accurately reflect the AI's contribution, potentially leading to issues of intellectual honesty and academic integrity. As AI-generated content becomes more sophisticated, the distinction between human and AI authorship blurs, prompting a need for new citation frameworks that can accommodate these changes.

    Content Protection and AI
    The rise of AI has also raised concerns about the protection of gated knowledge and content. Publishing entities like NYT and Elsevier may need to adapt to the changing landscape[1]. The protection of original content in the age of AI is a growing concern, especially for publishers and content creators. The New York Times' lawsuit against OpenAI over the use of its articles to train AI models without permission exemplifies the legal challenges in this domain. To safeguard content, publishers might consider implementing open-source standards for data scraping and human-in-the-loop grammatical protocols. Imagine a small online magazine that discovers its articles are being repurposed by an AI without credit or compensation. To combat this, the magazine could employ open-source tools to track the use of its content and ensure that any AI-generated derivatives are properly licensed and attributed, thus maintaining control over its intellectual property (a minimal sketch of this kind of content tracking appears below).

    The rapid advancement of AI technologies has brought about significant changes in the legal and technical landscape. As AI continues to evolve, it is crucial to address the legal implications of AI-generated texts and develop protocols to regulate their use. This will ensure the protection of intellectual property rights while fostering innovation in AI technologies.
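    As a concrete illustration of the content tracking mentioned above, here is a minimal sketch of one widely used technique: measuring how many word n-grams of an original article reappear in a suspect text. The function names and the review threshold are illustrative assumptions, not a reference to any specific publisher's tooling:

```python
import re

def word_ngrams(text: str, n: int = 5) -> set:
    """Set of lowercase word n-grams ("shingles") appearing in a document."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(original: str, suspect: str, n: int = 5) -> float:
    """Fraction of the original's n-grams that reappear in the suspect text."""
    source, candidate = word_ngrams(original, n), word_ngrams(suspect, n)
    return len(source & candidate) / len(source) if source else 0.0

article = ("The city council approved the new transit plan on Monday "
           "after months of public debate over its funding model.")
repost = ("After months of public debate over its funding model, the city "
          "council approved the new transit plan on Monday.")

score = containment(article, repost)
print(f"n-gram containment: {score:.0%}")
if score > 0.3:  # an illustrative threshold; tuned in practice
    print("flag for human review")
```

    Light paraphrasing lowers such lexical scores, which is exactly why the heuristic-plus-semantic protocols suggested earlier in this piece would pair a check like this with semantic similarity measures.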
    References
    [1] https://builtin.com/artificial-intelligence/ai-copyright
    [2] https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
    [3] https://www.semrush.com/goodcontent/paraphrasing-tool/
    [4] https://dal.ca.libguides.com/CitationStyleGuide/citing-ai
    [5] https://mashable.com/article/us-copyright-law-ai-generated-content
    [6] https://www.nytimes.com/2024/01/08/technology/openai-new-york-times-lawsuit.html
    [7] https://www.hypotenuse.ai/paraphrasing-tool
    [8] https://www.legal500.com/developments/thought-leadership/legal-issues-with-ai-generated-content-copyright-and-chatgpt/
    [9] https://www.cnbc.com/2024/01/08/openai-responds-to-new-york-times-lawsuit.html
    [10] https://www.copy.ai/tools/paraphrase-tool
    [11] https://www.techtarget.com/searchcontentmanagement/answer/Is-AI-generated-content-copyrighted
    [12] https://www.theverge.com/2024/1/8/24030283/openai-nyt-lawsuit-fair-use-ai-copyright
    [13] https://originality.ai/ai-paraphraser
    [14] https://www.reddit.com/r/selfpublishing/comments/znlqla/what_is_the_legality_of_ai_generated_text_for/
    [15] https://theconversation.com/how-a-new-york-times-copyright-lawsuit-against-openai-could-potentially-transform-how-ai-and-copyright-work-221059
    [16] https://ahrefs.com/writing-tools/paraphrasing-tool
    [17] https://www.jdsupra.com/legalnews/relying-on-ai-generated-text-and-images-9943106/
    [18] https://apnews.com/article/nyt-new-york-times-openai-microsoft-6ea53a8ad3efa06ee4643b697df0ba57
    [19] https://quillbot.com
    [20] https://crsreports.congress.gov/product/pdf/LSB/LSB10922
    [21] https://www.reuters.com/legal/transactional/ny-times-sues-openai-microsoft-infringing-copyrighted-work-2023-12-27/
    [22] https://www.paraphraser.io
    [23] https://www.pcmag.com/news/ai-generated-content-and-the-law-are-you-going-to-get-sued
    [24] https://pressgazette.co.uk/media_law/new-york-times-open-ai-microsoft-lawsuit/
    [25] https://textflip.ai

  • Abhivardhan representing Indic Pacific at Startup20 (G20 Brazil 2024) Meeting

    Our Founder and Managing Partner, Abhivardhan, represented Indic Pacific Legal Research LLP in a recent technical meeting held by Startup20 Brasil, an engagement group on innovation, entrepreneurship & collaboration under G20 Brasil 2024. Our position at Indic Pacific Legal Research, since inception, has been clear: national artificial intelligence strategies across the world need a specific focus so that stakeholders in the public sector implement them with a sense of parity. Abhivardhan was delighted to point out the following integral issues with the state of implementation of many national AI strategies:

    • Insufficient focus on capacity building for AI proliferation, commercialization and integration
    • Limited recognition of diverse AI use cases and local contexts
    • The unclear & incapacitated role of regional and local government institutions in Responsible AI governance
    • Dichotomous challenges around AI deployment and development

    You can find many strategic legal insights on artificial intelligence policy by Indic Pacific at indopacific.app/store.

  • AI Seoul Summit 2024: Decoding the International Scientific Report on AI Safety

    The AI Seoul Summit on AI Safety, held in South Korea in 2024, has released a comprehensive international scientific report on AI safety. This report stands out from the myriad of AI policy and technology reports due to its depth and actionable insights. Here, we break down the key points from the report to understand the risks and challenges associated with general-purpose AI systems.

    1. The Risk Surface of General-Purpose AI
    "The risk surface of a technology consists of all the ways it can cause harm through accidents or malicious use. The more general-purpose a technology is, the more extensive its risk exposure is expected to be. General-purpose AI models can be fine-tuned and applied in numerous application domains and used by a wide variety of users [...], leading to extremely broad risk surfaces and exposure, challenging effective risk management."

    General-purpose AI models, due to their versatility, have a broad risk surface. This means they can be applied in various domains, increasing the potential for both accidental and malicious harm. Managing these risks effectively is a significant challenge due to the extensive exposure these models have.

    Illustration
    Imagine a general-purpose AI model used in both healthcare and financial services. In healthcare, it could misdiagnose patients, leading to severe health consequences. In finance, it could be exploited for fraudulent activities. The broad applicability increases the risk surface, making it difficult to manage all potential harms.

    2. Challenges in Risk Assessment
    "When the scope of applicability and use of an AI system is narrow (e.g., consider spam filtering as an example), salient types of risk (e.g., the likelihood of false positives) can be measured with relatively high confidence. In contrast, assessing general-purpose AI models’ risks, such as the generation of toxic language, is much more challenging, in part due to a lack of consensus on what should be considered toxic and the interplay between toxicity and contextual factors (including the prompt and the intention of the user)."

    Narrow AI systems, like spam filters, have specific and measurable risks. However, general-purpose AI models pose a greater challenge in risk assessment due to the complexity and variability of their applications. Determining what constitutes toxic behavior and understanding the context in which it occurs adds layers of difficulty.

    Illustration
    Consider an AI model used for content moderation on a social media platform. The model might flag certain words or phrases as toxic. However, the context in which these words are used can vary widely. For example, the word "kill" could be flagged as toxic, but in the context of a video game discussion, it might be perfectly acceptable. This variability makes it difficult to create a standardized risk assessment.

    3. Limitations of Current Methodologies
    "Current risk assessment methodologies often fail to produce reliable assessments of the risk posed by general-purpose AI systems, [because] Specifying the relevant/high-priority flaws and vulnerabilities is highly influenced by who is at the table and how the discussion is organised, meaning it is easy to miss or mis-define areas of concern. [...] Red teaming, for example, only assesses whether a model can produce some output, not the extent to which it will do so in real-world contexts nor how harmful doing so would be. Instead, they tend to provide qualitative information that informs judgments on what risk the system poses."
    Existing methodologies for risk assessment are often inadequate for general-purpose AI systems. These methods can miss critical flaws and vulnerabilities due to biases in the discussion process. Techniques like red teaming provide limited insights, focusing on whether a model can produce certain outputs rather than the real-world implications of those outputs.

    Illustration
    A red-teaming exercise might show that an AI can generate harmful content, but it doesn't quantify how often this would happen in real-world use or the potential impact. For instance, an AI chatbot might generate offensive jokes during testing, but the frequency and context in which these jokes appear in real-world interactions remain unknown.

    4. Nascent Quantitative Risk Assessments
    "Quantitative risk assessment methodologies for general-purpose AI are very nascent and it is not yet clear how quantitative safety guarantees could be obtained. [...] If quantitative risk assessments are too uncertain to be relied on, they may still be an important complement to inform high-stakes decisions, clarify the assumptions used to assess risk levels and evaluate the appropriateness of other decision procedures (e.g. those tied to model capabilities). Further, “risk” and “safety” are contentious concepts."

    Quantitative risk assessments for general-purpose AI are still in their early stages. While these assessments are currently uncertain, they can still play a crucial role in informing high-stakes decisions and clarifying assumptions. The concepts of "risk" and "safety" remain contentious and require further exploration.

    Illustration
    A quantitative risk assessment might show a 5% chance of an AI system making a critical error in a high-stakes environment like autonomous driving. However, the uncertainty in these assessments makes it hard to rely on them exclusively for regulatory decisions.

    5. Testing and Thresholds
    "It is common practice to test models for some dangerous capabilities ahead of release, including via red-teaming and benchmarking, and publishing those results in a ‘model card’ [...]. Further, some developers have internal decision-making panels that deliberate on how to safely and responsibly release new systems. [...] However, more work is needed to assess whether adhering to some specific set of thresholds indeed does keep risk to an acceptable level and to assess the practicality of accurately specifying appropriate thresholds in advance."

    Testing for dangerous capabilities before releasing AI models is a standard practice. However, more work is needed to determine whether these tests and thresholds effectively manage risks. Accurately specifying appropriate thresholds in advance remains a challenge.

    Illustration
    An AI model might pass pre-release tests for dangerous capabilities, but once deployed, it could still exhibit harmful behaviors not anticipated during testing. For example, an AI chatbot might generate harmful content in response to unforeseen user inputs.

    6. Specifying Objectives for AI Systems
    "It is challenging to precisely specify an objective for general-purpose AI systems in a way that does not unintentionally incentivise undesirable behaviours. Currently, researchers do not know how to specify abstract human preferences and values in a way that can be used to train general-purpose AI systems. Moreover, given the complex socio-technical relationships embedded in general-purpose AI systems, it is not clear whether such specification is possible."
    Specifying objectives for general-purpose AI systems without incentivizing undesirable behaviors is difficult. Researchers are still figuring out how to encode abstract human preferences and values into these systems. The complex socio-technical relationships involved add to the challenge.

    Illustration
    An AI system designed to maximize user engagement might inadvertently promote sensationalist or harmful content because it interprets engagement as the primary objective, ignoring the quality or safety of the content.

    7. Machine Unlearning
    "‘Machine unlearning’ can help to remove certain undesirable capabilities from general-purpose AI systems. [...] Unlearning as a way of negating the influence of undesirable training data was originally proposed as a way to protect privacy and copyright [...] Unlearning methods to remove hazardous capabilities [...] include methods based on fine-tuning [...] and editing the inner workings of models [...]. Ideally, unlearning should make a model unable to exhibit the unwanted behaviour even when subject to knowledge-extraction attacks, novel situations (e.g. foreign languages), or small amounts of fine-tuning. However, unlearning methods can often fail to perform unlearning robustly and may introduce unwanted side effects [...] on desirable model knowledge."

    Machine unlearning aims to remove undesirable capabilities from AI systems, initially proposed to protect privacy and copyright. However, these methods can fail to perform robustly and may introduce unwanted side effects, affecting desirable model knowledge.

    Illustration
    An AI system trained on biased data might be subjected to machine unlearning to remove discriminatory behaviors. However, this process could inadvertently degrade the system's overall performance or introduce new biases.

    8. Mechanistic Interpretability
    "Understanding a model’s internal computations might help to investigate whether they have learned trustworthy solutions. ‘Mechanistic interpretability’ refers to studying the inner workings of state-of-the-art AI models. However, state-of-the-art neural networks are large and complex, and mechanistic interpretability has not yet been useful and competitive with other ways to analyse models for practical applications."

    Mechanistic interpretability involves studying the internal workings of AI models to ensure they have learned trustworthy solutions. However, this approach has not yet proven useful or competitive with other analysis methods for practical applications, due to the complexity of state-of-the-art neural networks.

    Illustration
    A complex neural network used in financial trading might make decisions that are difficult to interpret. Mechanistic interpretability could help understand these decisions, but current methods are not yet practical for real-world applications.

    9. Watermarks for AI-Generated Content
    "Watermarks make distinguishing AI-generated content easier, but they can be removed. A ‘watermark’ refers to a subtle style or motif that can be inserted into a file which is difficult for a human to notice but easy for an algorithm to detect. Watermarks for images typically take the form of imperceptible patterns inserted into image pixels [...], while watermarks for text typically take the form of stylistic or word-choice biases [...]. Watermarks are useful, but they are an imperfect strategy for detecting AI-generated content because they can be removed [...]. However, this does not mean that they are not useful.
10. Mitigating Bias and Improving Fairness

"Researchers deploy a variety of methods to mitigate or remove bias and improve fairness in general-purpose AI systems [...], including pre-processing, in-processing, and post-processing techniques [...]. Pre-processing techniques analyse and rectify data to remove inherent bias existing in datasets, while in-processing techniques design and employ learning algorithms to mitigate discrimination during the training phase of the system. Post-processing methods adjust general-purpose AI system outputs once deployed."

To mitigate bias and improve fairness in AI systems, researchers use various techniques across different stages. Pre-processing addresses biases in datasets, in-processing mitigates discrimination during training, and post-processing adjusts outputs after deployment.

Illustration: An AI hiring tool might use pre-processing techniques to remove biases from training data, in-processing techniques to ensure fair decision-making during training, and post-processing techniques to adjust outputs for fairness after deployment.
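As a concrete, hypothetical illustration of the post-processing stage described in point 10: the model's scores are left untouched, and group-specific decision thresholds are applied after deployment. The applicants, scores, and thresholds below are invented; real deployments would derive such thresholds from audited validation data.

```python
# Toy post-processing fairness step: adjust decision thresholds per group
# after deployment so approval rates come out comparable. All data invented.

applicants = [
    {"group": "A", "score": 0.72}, {"group": "A", "score": 0.55},
    {"group": "B", "score": 0.64}, {"group": "B", "score": 0.49},
]

# Hypothetical thresholds chosen (e.g. on a validation set) to equalise
# approval rates across groups; a single global threshold of 0.6 would not.
GROUP_THRESHOLDS = {"A": 0.60, "B": 0.52}

def decide(applicant: dict) -> bool:
    return applicant["score"] >= GROUP_THRESHOLDS[applicant["group"]]

for a in applicants:
    print(a["group"], a["score"], "approved" if decide(a) else "declined")
# Each group ends up with one approval and one decline: the model is
# unchanged, only the post-deployment decision rule is adjusted.
```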
Legal Perspective on the Report

In many ways, this report is a truth-seeker: amidst the wave of hyped-up AI policy content, it addresses AI safety in a practical way. The acknowledgements offered in the report are candid, and from each of the ten points I have developed the following inferences, suggesting how technology professionals and companies should prepare for the legal-ethical implications of half-baked AI regulations.

1. The Risk Surface of General-Purpose AI

The extensive risk surface of general-purpose AI necessitates a flexible and adaptive legal framework. Regulators must consider the diverse applications and potential harms of these technologies. This could lead to the development of sector-specific regulations and cross-sectoral oversight bodies to ensure comprehensive risk management. Legal systems may need to incorporate dynamic regulatory mechanisms that can evolve with technological advancements, ensuring that all potential risks are adequately addressed.

2. Challenges in Risk Assessment

The difficulty in assessing risks for general-purpose AI due to contextual variability and cultural differences implies that legal standards must be adaptable and context-sensitive. This could involve creating guidelines for context-specific evaluations and establishing international cooperation to harmonize standards. Legal frameworks may need to incorporate mechanisms for continuous learning and adaptation, ensuring that risk assessments remain relevant and effective across different contexts and cultures.

3. Limitations of Current Methodologies

The inadequacy of current risk assessment methodologies for general-purpose AI suggests that legal frameworks should mandate comprehensive risk assessments that include both qualitative and quantitative analyses. This might involve setting standards for risk assessment methodologies and requiring transparency in the assessment process. Legal systems may need to ensure diverse stakeholder involvement in risk assessment discussions to capture a wide range of perspectives and concerns, thereby improving the reliability and comprehensiveness of risk assessments.

4. Nascent Quantitative Risk Assessments

The nascent and uncertain nature of quantitative risk assessments for general-purpose AI indicates that regulators should use these assessments as one of several tools in decision-making processes. Legal standards should require the use of multiple assessment methods to provide a more comprehensive understanding of risks. This could lead to the development of hybrid regulatory approaches that combine quantitative and qualitative assessments, ensuring that high-stakes decisions are informed by a robust and multi-faceted understanding of risks.

5. Testing and Thresholds

The need for ongoing monitoring and post-deployment testing of AI systems implies that legal frameworks should require continuous risk assessment and incident reporting. This could involve mandatory reporting of incidents and continuous risk assessment to ensure that thresholds remain relevant and effective. Legal systems may need to incorporate mechanisms for adaptive regulation, allowing for the adjustment of thresholds and standards based on real-world performance and emerging risks.

6. Specifying Objectives for AI Systems

The challenge of specifying objectives for general-purpose AI without incentivizing undesirable behaviors suggests that regulations should require AI developers to consider the broader social and ethical implications of their systems. This might involve creating guidelines for ethical AI design and requiring impact assessments that consider potential unintended consequences. Legal frameworks may need to incorporate principles of ethical AI development, ensuring that AI systems align with societal values and do not inadvertently cause harm.

7. Machine Unlearning

The potential for machine unlearning methods to introduce new issues implies that legal standards should require rigorous testing and validation of these methods. This might involve setting benchmarks for unlearning efficacy and monitoring for unintended side effects. Legal systems may need to ensure that unlearning processes are robust and do not compromise the overall performance or safety of AI systems, thereby maintaining trust and reliability in AI technologies.

8. Mechanistic Interpretability

The current impracticality of mechanistic interpretability for real-world applications suggests that regulations should promote research into this area and require transparency in AI decision-making processes. This could involve mandating explainability standards and supporting the development of practical interpretability tools. Legal frameworks may need to ensure that AI systems are transparent and accountable, enabling stakeholders to understand and trust the decisions made by these systems.

9. Watermarks for AI-Generated Content

The potential for watermarks to be removed implies that legal frameworks should require the use of robust watermarking techniques and establish penalties for their removal. This could involve creating standards for watermarking methods and ensuring they are resistant to tampering.
Legal systems may need to incorporate mechanisms for the verification and traceability of AI-generated content, ensuring that the origins and authenticity of such content can be reliably determined.

10. Mitigating Bias and Improving Fairness

The need for comprehensive bias mitigation strategies across all stages of AI development suggests that regulations should mandate the use of pre-processing, in-processing, and post-processing techniques. This might involve setting standards for these techniques and requiring regular audits to ensure compliance. Legal frameworks may need to ensure that AI systems are fair and non-discriminatory, promoting equity and justice in the deployment and use of AI technologies.

Conclusion

In short, the AI Seoul Summit 2024 Report offers a technical study that was necessary to address basic questions around the various contours of artificial intelligence regulation. It is a valuable study because, while governments around the world are still scrambling for ways to regulate AI, they have yet to understand how key technical challenges and socio-technical realities could shape the legal, ethical and socio-economic underpinnings on which they adjudicate and regulate AI. Through our efforts at Indic Pacific Legal Research, I developed India's first privately proposed artificial intelligence regulation, called AIACT.IN, or the Draft Artificial Intelligence (Development & Regulation) Act, 2023. As of March 14, 2024, the second version of AIACT.IN is already available for public scrutiny and comments. In that context, I am glad to announce that a third version of AIACT.IN is currently underway; we will launch it in the coming weeks, once some internal scrutiny is complete. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • [New Publication] Artificial Intelligence and Policy in India, Volume 5

    Indic Pacific Legal Research is proud to announce the publication of "Artificial Intelligence and Policy in India, Volume 5", a cutting-edge research collection that delves into the most pressing policy challenges and opportunities presented by AI in the Indian context. This volume is the result of a fruitful collaboration between Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL). It brings together meticulously researched papers from ISAIL's talented research interns, offering diverse and insightful perspectives on critical issues at the intersection of AI, law, and policy. Under the expert editorship of Abhivardhan and Pratejas Tomar, the collection features three key contributions: Bhavya Singh's paper tackles the complex interplay between the groundbreaking EU AI Act and India's evolving data protection framework, offering valuable insights for policymakers navigating the global AI governance landscape. Purnima Sihmar's research investigates the potential risks surrounding AI integration in public infrastructure projects, providing a roadmap for responsible AI deployment in the public interest. Harinandana V's work sheds light on the subtle ways AI is influencing the advertising landscape, often without consumer awareness, highlighting the urgent need for transparency and accountability measures. At Indic Pacific Legal Research, we are committed to advancing rigorous, timely, and actionable research on the most pressing legal and policy issues facing India and the wider region. This volume exemplifies our mission to bridge the gap between academic insights and policy impact. We are grateful to our partners at ISAIL for their dedication to nurturing the next generation of AI policy leaders, and to the exceptional research interns whose contributions made this volume possible. As AI continues to rapidly transform every aspect of society, it is crucial that we critically examine its implications and proactively shape its development in line with our shared values and aspirations. We believe this collection makes a significant contribution to that vital ongoing conversation. We invite policymakers, legal professionals, AI researchers, and the wider public to explore the collection at https://indopacific.app/product/artificial-intelligence-and-policy-in-india-volume-5-aipi-v5/ For media inquiries or partnership opportunities, please contact [EMAIL]. Together, let us work towards a future where AI serves as a force for good, promoting justice, equality, and the well-being of all.

  • [New Report] Reimaging and Restructuring MeitY for India, IPLR-IG-007

    We are thrilled to announce the release of our latest infographic report, "Reimaging and Restructuring MeitY for India", in collaboration with VLA.Digital! 🎉 As India stands at the cusp of a digital revolution, it is imperative that our technology governance structures evolve to meet the challenges of this dynamic landscape. This report takes a deep dive into the reforms needed at the Ministry of Electronics and Information Technology (MeitY) to unlock India's tech potential. 🔓 🔍 Here's a sneak peek into the key issues covered: 💸 Reducing regulatory burden and compliance costs for startups & SMEs 💪 Strengthening MeitY's institutional capacity to regulate effectively 🌞 Ensuring transparency & ethics in technology policymaking 🆕 Enhancing regulatory approaches for AI, blockchain & emerging tech 🏗️ Proposing new models to restructure MeitY for the digital age 👥 Congratulations to our team led by Abhivardhan, Bhavana J Sekhar & Pratejas Tomar, and contributing authors Bhavya Singh & Harinandana V. 📥 Download the full report here: https://indopacific.app/product/reimaging-and-restructuring-meity-for-india-iplr-ig-007/ Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • TESCREAL and AI-Related Risks

    TESCREAL serves as a lens through which we can examine the motivations and potential implications of cutting-edge technological developments, particularly in the field of artificial intelligence (AI). As these ideologies gain traction among tech leaders and innovators, they are increasingly shaping the trajectory of AI research and development. This insight brief explores the potential risks and challenges associated with the TESCREAL framework, focusing on anticompetitive concerns, the impact on skill estimation and workforce dynamics, and the need for sensitisation measures. By understanding these issues, we can better prepare for the societal and economic changes & risks that advanced & substandard AI technologies may bring. It is crucial to consider not only the promises but also the pitfalls of the hype around rapid advancement. This brief aims to provide a balanced perspective on the TESCREAL ideologies and their intersection with AI development, offering insights into proactive measures that can be taken before formal regulations are implemented.

Introduction to TESCREAL

The emergence of TESCREAL as a conceptual framework marks a significant milestone in our understanding of the ideological underpinnings driving technological innovation, particularly in the realm of artificial intelligence. This acronym, coined by computer scientist Timnit Gebru and philosopher Émile P. Torres, encapsulates a constellation of interconnected philosophies that have profoundly shaped the trajectory of AI development and the broader tech landscape. TESCREAL stands for:

Transhumanism
Extropianism
Singularitarianism
Cosmism
Rationalism
Effective Altruism
Longtermism

These ideologies, while distinct, share common threads and historical roots that can be traced back to the 20th century. They collectively represent a techno-optimistic worldview that envisions a future where humanity transcends its current limitations through technological advancement. The origins of TESCREAL can be understood as a natural evolution of human aspirations in the face of rapid technological progress. Transhumanism, for instance, emerged in the mid-20th century as a philosophy advocating for the use of technology to enhance human physical and cognitive capabilities. Extropianism, a more optimistic offshoot of transhumanism, emphasizes continuous improvement and the expansion of human potential. Singularitarianism, popularized by figures like Ray Kurzweil, posits the eventual emergence of artificial superintelligence that will radically transform human civilization. This concept has gained significant traction in Silicon Valley and has been a driving force behind many AI research initiatives. Cosmism, with its roots in Russian philosophy, adds a cosmic dimension to these ideas, envisioning humanity's future among the stars. This aligns closely with the ambitions of tech entrepreneurs like Elon Musk, who are actively pursuing space exploration and colonization. Rationalism, as incorporated in TESCREAL, emphasizes the importance of reason and evidence-based decision-making. This philosophical approach has been particularly influential in shaping the methodologies employed in AI research and development. Effective Altruism and Longtermism, the more recent additions to this ideological bundle, bring an ethical dimension to technological pursuits. These philosophies encourage considering the long-term consequences of our actions and maximizing positive impact on a global and even cosmic scale.
The significance of TESCREAL lies in its ability to provide a comprehensive framework for understanding the motivations and goals driving some of the most influential figures and companies in the tech industry. Consider the following example. A major tech company announces its ambitious goal to develop artificial general intelligence (AGI) within the next decade, framing it as a breakthrough that will "solve humanity's greatest challenges." The company's leadership, steeped in TESCREAL ideologies, envisions this AGI as a panacea for global issues ranging from climate change to economic inequality.

From Dr. Gebru's perspective, this scenario raises several critical concerns:

Ethical Implications: The pursuit of AGI, driven by TESCREAL ideologies, often overlooks immediate ethical concerns in favor of speculative future benefits. This approach may neglect pressing issues of bias, fairness, and accountability in current AI systems.
Power Centralization: The development of AGI by a single company or a small group of tech elites could lead to an unprecedented concentration of power, potentially exacerbating existing social and economic inequalities.
Marginalization of Diverse Perspectives: The TESCREAL framework, rooted in a particular cultural and philosophical tradition, may not adequately represent or consider the needs and values of marginalized communities globally.
Lack of Accountability: By framing AGI development as an unquestionable good for humanity, companies may evade responsibility for the potential negative consequences of their technologies.
Neglect of Present-Day Issues: The focus on long-term, speculative outcomes may divert resources and attention from addressing immediate societal challenges that AI could help solve.
Eugenics-Adjacent Thinking: There are concerning parallels between some TESCREAL ideologies and historical eugenics movements, particularly in their techno-optimistic approach to human enhancement and societal progress.
Inadequate Safety Measures: The undefined nature of AGI makes it impossible to develop comprehensive safety protocols, potentially putting society at risk.

In this view, the TESCREAL bundle of ideologies represents a problematic framework for guiding AI development. Instead, Dr. Gebru advocates for a more grounded, ethical, and inclusive approach to AI research and development. This approach prioritizes addressing current societal issues, ensuring diverse representation in AI development, and implementing robust accountability measures for AI systems and their creators.

The Legal, Economic and Policy Risks around TESCREALism

This section explores the anticompetitive risks, the challenges in skill estimation due to AI, and the potential sensitisation measures that can be implemented before formal regulation, with examples to illustrate each point.

Anticompetitive Risks

The rapid development of AI technologies, driven by TESCREAL ideologies, can lead to several anticompetitive risks.

Market Concentration: Companies with significant resources and access to vast amounts of data may gain an unfair advantage in AI development, potentially leading to monopolistic practices. Example: A large tech company develops an advanced AI system for healthcare diagnostics, leveraging its extensive user data. This could make it difficult for smaller companies or startups to compete, even if they have innovative ideas.
Algorithmic Collusion: AI systems might inadvertently facilitate price-fixing or other anticompetitive behaviors without explicit agreements between companies. Example: The RealPage case, where multiple landlords are accused of using the same price-setting algorithm to artificially inflate rental prices, demonstrates how AI can potentially lead to collusive behavior without direct communication between competitors[2].

Risks Around Skill "Census" and Estimation

AI's impact on the job market and skill requirements poses challenges for accurate workforce planning:

Rapid Skill Obsolescence: AI may accelerate the pace at which certain skills become outdated, making it difficult for workers and organizations to keep up. Example: As AI takes over routine coding tasks, software developers may need to quickly shift their focus to more complex problem-solving and AI integration skills.
Skill Gap Identification: While AI can help identify skill gaps, there is a risk of over-reliance on AI-driven assessments without considering human factors. Example: An AI system might identify a need for data analysis skills in a company but fail to recognize the importance of domain expertise or soft skills that are crucial for interpreting and communicating the results effectively.

Sensitisation Measures Before Regulation

To address these challenges before formal regulation is implemented, several sensitisation measures can be considered:

Promote Explainable AI (XAI): Encourage the development of AI systems that can provide clear explanations for their decisions. This can help identify potential biases or anticompetitive behaviors. Example: Implement a requirement for AI-driven hiring systems to provide explanations for candidate rankings or rejections, allowing for human oversight and intervention (a toy sketch of such an explanation appears after this list).
Foster Multi-stakeholder Dialogue: Create forums for discussion between industry leaders, policymakers, academics, and civil society to address potential risks and develop best practices. Example: Organize regular roundtable discussions or conferences where AI developers, ethicists, and labor representatives can discuss the impact of AI on workforce dynamics and potential mitigation strategies.
Encourage Voluntary Ethical Guidelines: Promote the adoption of voluntary ethical guidelines for AI development and deployment within industries. Example: Develop an industry-wide code of conduct for AI use in financial services, addressing issues such as algorithmic trading and credit scoring.
Invest in AI Literacy Programs: Develop educational initiatives to improve public understanding of AI capabilities, limitations, and potential impacts. Example: Create online courses or workshops for employees and the general public to learn about AI basics, its applications, and ethical considerations.
Support Adaptive Learning and Reskilling Initiatives: Encourage companies to invest in continuous learning programs that help employees adapt to AI-driven changes in the workplace. Example: Implement AI-powered adaptive learning platforms that personalize training content based on individual skill gaps and learning speeds[7].
Promote Transparency in AI Development: Encourage companies to be more transparent about their AI development processes and potential impacts on the workforce and market dynamics. Example: Implement voluntary reporting mechanisms where companies disclose their AI use cases, data sources, and potential societal impacts.
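Here is a toy sketch of the kind of explanation the XAI measure above contemplates for a hiring tool. The features, weights, and threshold are invented for illustration; a real system would rely on audited models and established attribution methods rather than this hand-rolled linear score.

```python
# Toy explainable scoring for a hypothetical hiring tool: every decision is
# accompanied by per-feature contributions a human reviewer can inspect.
# Features, weights, and the threshold are all invented for illustration.

CANDIDATE = {"years_experience": 4, "skills_matched": 6, "assessment_score": 71}
WEIGHTS = {"years_experience": 2.0, "skills_matched": 3.0, "assessment_score": 0.5}
THRESHOLD = 60.0  # hypothetical shortlisting cut-off

def score_with_explanation(candidate: dict):
    contributions = {f: WEIGHTS[f] * v for f, v in candidate.items()}
    total = sum(contributions.values())
    decision = "shortlist" if total >= THRESHOLD else "reject"
    return total, decision, contributions

total, decision, contributions = score_with_explanation(CANDIDATE)
print(f"Decision: {decision} (score {total:.1f} vs threshold {THRESHOLD})")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: contributed {value:.1f} points")
# Surfacing per-feature contributions is what lets a human reviewer spot a
# suspect factor and intervene, i.e. the oversight the measure calls for.
```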
How does our AIACT.IN proposal address AI hype and the effects of TESCREALism?

Here are some key features related to sensitisation measures, anticompetitive risks, and skill estimation:

Enhanced Classification Methods: The draft introduces more nuanced and precise evaluation methods for AI systems, considering conceptual, technical, commercial, and risk-centric approaches. This allows for better risk management and tailored regulatory responses.
National Registry for AI Use Cases: A comprehensive framework for tracking both untested and stable AI applications across India, promoting transparency and accountability.
AI-Generated Content Regulation: Balances innovation with protection of individual rights and societal interests, including content provenance requirements like watermarking.
Advanced AI Insurance Policies: Manages risks associated with high-risk AI systems to ensure adequate protection for stakeholders.
AI Pre-classification: Enables early assessment of potential risks and benefits of AI systems.
Guidance on AI-related Contracts: Provides principles for responsible AI practices within organizations, addressing potential anticompetitive concerns.
National AI Ethics Code: Establishes a flexible yet robust ethical foundation for AI development and deployment.
Interoperability and Open Standards: Encourages adoption of open standards and interoperability in AI systems, potentially lowering entry barriers and promoting competition.
Algorithmic Transparency: Requires maintaining records of algorithms and data used to train AI systems, aiding in detecting bias and anti-competitive practices.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

References
[1] https://www.ftc.gov/news-events/news/speeches/antitrust-enforcement-high-technology-markets
[2] https://www.forbes.com/sites/aldenabbott/2024/03/13/why-antitrust-regulators-are-focused-on-problematic-ai-algorithms/
[3] https://www.its-her-factory.com/2024/05/vibes-and-the-tescreal-bundle-as-corporate-phenomenology/
[4] https://www.researchgate.net/publication/376795973_The_Impact_of_Artificial_Intelligence_on_Employment_and_Workforce_Dynamics_in_Contemporary_Society_Authors
[5] https://techwolf.com/blog/bridging-the-skill-gap-with-strategic-workforce-planning
[6] https://www.innopharmaeducation.com/our-blog/the-impact-of-ai-on-job-roles-workforce-and-employment-what-you-need-to-know
[7] https://www.elev8me.com/insights/ai-impact-on-workforce-transfromation
[8] https://www.chathamhouse.org/2023/06/ai-governance-must-balance-creativity-sensitivity
[9] https://transcend.io/blog/ai-ethics
[10] https://www.holisticai.com/blog/what-is-ethical-ai

  • [New Report] The Indic Approach to Artificial Intelligence Policy, IPLR-IG-006

    Dear Reader, We are thrilled to present "The Indic Approach to Artificial Intelligence Policy," a seminal report that reimagines AI governance through the lens of Indian philosophical traditions. Authored by Abhivardhan, this report is a must-read for anyone interested in the intersection of AI, ethics, and cultural context. It introduces the Permeable Indigeneity in Policy (PIP) framework, which ensures that AI strategies align with India's unique socio-cultural landscape and development goals. The report covers a range of topics, including algorithmic sovereignty, context-specific AI governance, AI knowledge management protocols, and anticipatory sector-specific strategies. It also offers practical recommendations for key stakeholders, including government bodies, startups, MSMEs, and large enterprises, to develop AI systems that are ethically grounded, culturally resonant, and socially beneficial. With engaging infographics and mind maps, the report makes complex concepts accessible to a broad audience. Don't miss this opportunity to gain a fresh perspective on AI governance and join the conversation on India's role as a global leader in inclusive AI development. Get your copy now at https://indopacific.app/product/the-indic-approach-to-artificial-intelligence-policy-iplr-ig-006/. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

  • Why the Indian Bid to Make GPAI an AI Regulator is Unprepared

    India's recent proposal to elevate the Global Partnership on Artificial Intelligence (GPAI) to an intergovernmental body on AI has garnered significant attention in the international community. This move, while ambitious, raises important questions about the future of AI governance and regulation on a global scale. This brief examines and comments upon India's bid to enable the Global Partnership on Artificial Intelligence as an AI regulator, with special emphasis on the Global South, and outlines the key challenges associated with GPAI, MeitY and the AI landscape we have today.

India's Leadership in GPAI

India, as the current chair of GPAI, has been instrumental in expanding the initiative to include more countries, aiming to transform it into a central body for global AI policy-making. The GPAI, which started with 15 nations, has now expanded to 29 and aims to include 65 countries by next year. India's leadership in GPAI was further solidified when it was elected as the Incoming Council Chair in November 2022, securing a significant majority of the first-preference votes. Throughout the 2022-23 term, India served as the Incoming Support Chair, and on December 12, 2023, it assumed the Lead Chair position for the 2023-24 term. India is also set to serve as the Outgoing Support Chair in the subsequent year, showcasing its continued dedication to GPAI's mission. India's commitment to advancing GPAI's goals was prominently displayed when it hosted the GPAI Annual Summit from December 12th to 14th, 2023. This summit brought together representatives from all 29 GPAI member countries under India's guidance to discuss a wide range of AI-related topics. The event, organized by MeitY, was inaugurated by Prime Minister Shri Narendra Modi, who reiterated India's commitment to leveraging AI for societal betterment and equitable growth while emphasizing the importance of responsible, human-centric AI governance.

The Role of MeitY

The Ministry of Electronics and Information Technology (MeitY) has been pivotal in negotiating the inclusion of OECD nations and advocating for greater participation from the Global South in AI regulation. MeitY has been at the forefront of India's efforts in GPAI, organizing key events such as the GPAI Annual Summit in 2023. However, MeitY's approach to AI governance has faced criticism for its reactive and arbitrary nature. The recent advisory issued by MeitY on the use of AI in elections was met with strong backlash from the AI community. The advisory required platforms offering "under-testing/unreliable" AI systems or large language models (LLMs) to Indian users to explicitly seek prior permission from the central government. This was seen as regulatory overreach that could stifle innovation in the nascent AI industry. While the government later clarified that the advisory was aimed at significant platforms and not startups, the incident highlighted the need for a more proactive and consultative approach to AI regulation. Moreover, the complexity and breadth of AI policy suggest that a single ministry may not be sufficient to handle all aspects of AI governance. A more integrated, inter-ministerial approach could enhance India's capacity to lead effectively in this domain. The inter-ministerial committee formed by MeitY with secretaries from DoT, DSIR, DST, DPIIT, and NITI Aayog as members is a step in this direction. The Principal Scientific Advisor's Office taking over AI policy in April/May 2024 was another step.
However, the composition of such bodies, including the proposed National Research Foundation (NRF), has been criticized for having too many bureaucrats and fewer specialists. The NRF, which aims to provide high-level strategic direction for scientific research in India, will be governed by a Governing Board consisting of eminent researchers and professionals across disciplines. To truly foster responsible and inclusive AI development, MeitY and other government bodies must adopt a more collaborative and transparent approach. This should involve engaging with a wide range of stakeholders, including AI experts, civil society organizations, and industry representatives, to develop a comprehensive and balanced regulatory framework. Additionally, capacity building within the government, including training officials in AI technologies and their implications, is crucial for effective governance. Timing and Nature of AI Regulation The AI regulation debate spans a wide spectrum of views, from those who believe that the current "moral panic" about AI is overblown and irrational[2], to those who advocate for varying degrees of regulation to address the risks posed by AI. The European Union (EU) is at the forefront of AI regulation, with its proposed AI Act that classifies AI systems into four tiers based on their perceived risk[2]. The EU's approach is seen as more interventionist compared to the hands-off approach favored by some venture capitalists and tech companies. However, even within the EU, there are differing opinions on the timing and scope of AI regulation, with some arguing that premature regulation could stifle innovation in the nascent AI industry[1]. Many experts propose a risk-based approach to AI regulation, where higher-risk AI applications that can cause greater damage are subject to proportionately greater regulation, while lower-risk applications have less[2]. However, implementing such an approach is challenging, as it requires defining and measuring risk, setting minimum requirements for AI services, and determining which AI uses should be deemed illegal. Given the challenges in establishing comprehensive AI regulations at this stage, some experts like Gary Marcus have proposed the creation of an International AI Agency, akin to CERN, which conducts independent research free from market influences[3]. This approach would allow for the development of AI in a responsible and ethical manner without premature regulatory constraints. The proposed agency would focus on groundbreaking research to address technical challenges in developing AI with secure and ethical objectives, and on establishing robust AI safety measures to mitigate potential risks[3]. Advocates for AI safety stress the importance of initiatives like a CERN for AI safety and Global AI governance to effectively manage risks[3]. They emphasize the need to balance the focus on diverse risks within the AI landscape, from immediate concerns around bias, transparency, and security, to long-term risks such as the potential loss of control over future advanced machines[3]. In navigating the complexities of AI governance, ongoing dialogue underscores the critical role of research in understanding and addressing AI risks[3]. While some argue that AI-related harm has been limited to date, the evolving landscape highlights the need for proactive measures to avert potential misuse of AI technologies[3]. As of February 2024, the global AI regulatory landscape continues to evolve rapidly. 
The EU's AI Act has been signed by the Committee of Permanent Representatives, and the consolidated text has been published by the European Parliament[4]. The European Commission has also adopted its own approach to AI, focusing on fostering the development and use of lawful, safe, and trustworthy AI systems[4]. In Asia, Singapore's AI Verify Foundation and Infocomm Media Development Authority have released a draft Model AI Governance Framework for Generative AI, which is currently open for consultation[4]. The Monetary Authority of Singapore has also concluded the first phase of Project MindForge, which seeks to develop a risk framework for the use of Generative AI in the financial sector[4]. These developments underscore the ongoing efforts to establish effective AI governance frameworks at both regional and global levels. As the AI landscape continues to evolve rapidly, finding the right balance between innovation and risk mitigation will be crucial in shaping the future of AI regulation. GPAI and the Global South India's commitment to representing the interests of the Global South in AI governance is commendable, but it also faces several challenges and criticisms. One of the primary concerns is the ongoing debate around the moratorium on customs duties on electronic transmissions at the World Trade Organization (WTO)[5]. Many developing countries, including India, argue that the moratorium disproportionately benefits developed countries and limits the ability of developing nations to generate revenue and promote digital industrialization. India's position is that all policy options, including the imposition of customs duties on e-commerce trade, should be available to WTO members to promote digital industrialization[5]. It has highlighted the potential tariff revenue losses of around $10 billion annually for developing countries due to the moratorium[5]. This revenue could be crucial for developing countries to invest in digital infrastructure and capacity building, which are essential for harnessing the benefits of AI. However, navigating this complex issue will require careful diplomacy and a nuanced approach that balances the interests of developing and developed countries. India will need to work closely with other countries in the Global South to build consensus around a common position on the moratorium and advocate for a more equitable global trade framework that supports the digital industrialization aspirations of developing nations. Another criticism faced by India in its advocacy for the Global South is the unequal access to AI research and development (R&D) among developing nations[6]. The AI Index Fund 2023 reveals that private investments in AI from 2013-22 in the United States ($250 billion) significantly outpace those of other economies, including India and most other G20 nations[6]. This disparity in AI R&D access could lead to extreme outcomes for underdeveloped nations, such as economic threats, political instability, and compromised sovereignty[6]. To address this challenge, India must focus on building partnerships and sharing best practices in AI development and governance with other countries in the Global South[6]. Collaborations aimed at developing AI solutions tailored to the specific needs of these regions, such as in agriculture, healthcare, and education, can help ensure that AI benefits are more equitably distributed[6]. 
Historical Context and Capacity Building

Historically, other nations have employed similar strategies to gain influence in international organizations. For instance, China has been actively involved in the World Intellectual Property Organization (WIPO) and the International Telecommunication Union (ITU). As of March 2020, China led four of the 15 UN specialized agencies and was aiming for a fifth[11]. In the case of WIPO, China has used its influence to shape global intellectual property rules in its favor, such as by pushing for the adoption of the Beijing Treaty on Audiovisual Performances in 2012[7]. Similarly, the United States and the Soviet Union played significant roles in shaping Space Law. The Outer Space Treaty of 1967, which forms the basis of international space law, was largely a result of negotiations between the US and the Soviet Union during the Cold War era[8]. This treaty set the framework for the peaceful use of outer space and prohibited the placement of weapons of mass destruction in orbit. France has also been a key player in international organizations, particularly in WIPO and the International Civil Aviation Organization (ICAO). France is one of the most represented countries in WIPO, with a strong presence in various committees and working groups[9]. In ICAO, France has been a member of the Council since the organization's inception in 1947 and has played a significant role in shaping international aviation standards and practices. However, unlike these countries, India faces significant challenges in terms of capacity building, particularly in the field of artificial intelligence (AI). While India has made notable progress in developing its AI ecosystem, it still lags behind countries like China and the United States in terms of investment, research output, and talent pool. According to a report by the Observer Research Foundation, India faces several key challenges in driving its AI ecosystem, including a lack of quality data, inadequate funding for research and development, and a shortage of skilled AI professionals[10]. To effectively lead in global AI governance, India must address these capacity building challenges by investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations. Only by strengthening its domestic AI capabilities can India play a more influential role in shaping the future of AI governance on the international stage. Strengthening domestic AI infrastructure and capabilities is crucial for India to effectively lead in global AI governance. India's approach to AI development has been distinct from the tightly controlled government-led model of China and the laissez-faire venture capital-funded hyper-growth model of the US. Instead, India has taken a deliberative approach to understand and implement supportive strategies to develop its AI ecosystem. This involves balancing the need to develop indigenous AI capabilities while creating an enabling environment for innovation through strategic partnerships. However, India faces several challenges in building its AI capacity. One major hurdle is the shortage of skilled professionals in data science and AI. According to a NASSCOM report, India faces a demand-supply gap of 140,000 in AI and Big Data analytics roles. Investing in talent development and fostering partnerships with academia is crucial to address this talent gap [10].
Another challenge is the quality and accessibility of data. Many organizations face issues with data standardization and inconsistencies, which can hinder AI model training and accuracy. Investing in technologies like graph and vector databases can help enhance the reliability, performance, and scalability of AI systems. A further challenge is the Government's limited interest in supporting Indian MSMEs and research labs, so that they can build AI solutions without fearing a shortage of funds to buy compute.

Proposing GPAI as an International AI Agency

Given the considerations discussed earlier, the best course of action for India might be to propose transforming GPAI into an international AI agency rather than a regulatory body. This approach would align with India's strengths in Digital Public Infrastructure (DPI) and allow for a more collaborative and inclusive approach to AI development and governance. India's success in building DPI, such as the Unified Payments Interface (UPI), Aadhaar, and the Open Network for Digital Commerce (ONDC), has been widely recognized. The UNGA President recently praised India's trajectory, stating that it exemplifies how DPI facilitates equal opportunities. India can leverage its expertise in DPI to shape the future of AI governance through GPAI. Transforming GPAI into an international AI agency would enable it to focus on fostering international cooperation and independent research. This approach is crucial given the rapid evolution of AI technologies and the need for a collaborative, multi-stakeholder approach to AI governance. Otherwise, a regulator built on half-baked interests could stifle AI innovation in India and the Global South, and the risk of regulatory subterfuge, sabotage and capture by vested interest groups would loom large. An international AI agency could bring together experts from various fields, including AI, ethics, law, and social sciences, to address the complex challenges posed by AI. India's proposal to transform GPAI into an international AI agency was discussed at the 6th meeting of the GPAI Ministerial Council held on 3rd July 2024 in New Delhi. The proposal received support from several member countries, who recognized the need for a more collaborative and research-focused approach to AI governance [11]. To effectively shape the future of AI governance, India must also focus on building domestic AI capabilities and infrastructure. The National Strategy for Artificial Intelligence, released by NITI Aayog, outlines a comprehensive plan to develop India's AI ecosystem. The strategy focuses on five key areas: research and development, skilling and reskilling, data and computing infrastructure, standards and regulations, and international collaboration. Implementing the National Strategy for Artificial Intelligence will be crucial for India to effectively lead in global AI governance. This includes investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations.

How can GPAI draw inspiration from AIACT.IN Version 3?

AIACT.IN Version 3, released on June 17, 2024, is India's first privately proposed comprehensive regulatory framework for artificial intelligence. This groundbreaking legislation introduces several key features designed to ensure the safe, ethical, and responsible development and deployment of AI technologies in India.
Hopefully, the proposals of AIACT.IN v3 may be helpful for our intergovernmental stakeholders at GPAI. Here are some key ways GPAI can draw inspiration from AIACT.IN Version 3 in its efforts:

Enhanced AI Classification Methods: GPAI can adopt AIACT.IN V3's nuanced approach to classifying AI systems based on conceptual, technical, commercial, and risk-centric methods. This would enable GPAI to better evaluate and regulate AI technologies according to their inherent purpose, features, and potential risks on a global scale.
National AI Use Case Registry: GPAI can establish an international registry for AI use cases, similar to the National Registry proposed in AIACT.IN V3. This would provide a clear framework for tracking and certifying both untested and stable AI applications across member countries, promoting transparency and accountability.
Balancing Innovation and Risk Mitigation: AIACT.IN V3 aims to balance the need for AI innovation with the protection of individual rights and societal interests. GPAI can adopt a similar approach in its global efforts, fostering responsible AI development while safeguarding against potential misuse.
AI Insurance Policies: Drawing from AIACT.IN V3's mandate for insurance coverage of high-risk AI systems, GPAI can develop international guidelines for AI risk assessment and insurance. This would help manage the risks associated with advanced AI technologies and ensure adequate protection for stakeholders worldwide.
AI Pre-classification: GPAI can implement an early assessment mechanism for AI systems, inspired by the AI pre-classification proposed in AIACT.IN V3. This would enable proactive evaluation of potential risks and benefits, allowing for timely interventions and policy adjustments.
Guidance Principles for AI Governance: AIACT.IN V3 provides guidance on AI-related contracts and corporate governance to promote responsible practices. GPAI can develop similar international principles and best practices to guide AI governance across member countries, fostering consistency and cooperation.
Global AI Ethics Code: Building on the National AI Ethics Code in AIACT.IN V3, GPAI can work towards establishing a flexible yet robust global ethical framework for AI development and deployment. This would provide a common foundation for responsible AI practices worldwide.
Collaborative Approach: AIACT.IN V3 was developed through a collaborative effort involving experts from various domains. GPAI can strengthen its multi-stakeholder approach, engaging AI practitioners, policymakers, industry leaders, and civil society representatives to develop comprehensive and inclusive AI governance frameworks.

Conclusion

In conclusion, India's proactive stance in AI governance is commendable, but the path forward requires careful consideration of domestic capabilities, international dynamics, and the evolving nature of AI. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.

References
[1] EY, 'How to navigate global trends in Artificial Intelligence regulation' (EY, 2023) accessed 3 July 2024.
[2] James Andrew Lewis, 'AI Regulation is Coming: What is the Likely Outcome?'
(Center for Strategic and International Studies, 18 May 2023) accessed 3 July 2024. [3] Gary Marcus, 'A CERN for AI and the Global Governance of AI' (Marcus on AI, 2 June 2023) accessed 3 July 2024. [4] Eversheds Sutherland, 'Global AI Regulatory Update - February 2024' (Eversheds Sutherland, 26 February 2024) accessed 3 July 2024. [5] Murali Kallummal, 'WTO's E-commerce Moratorium: Will India Betray the Interests of the Global South?' (The Wire, 10 June 2023) accessed 3 July 2024. [6] Business Insider India, 'India to host Global India AI Summit 2024 in New Delhi on July 3-4' (Business Insider India, 1 July 2024) accessed 3 July 2024. [7] Yeling Tan, 'China and the UN System – the Case of the World Intellectual Property Organization' (Carnegie Endowment for International Peace, 3 March 2020) accessed 3 July 2024. [8] Jérôme Sgard, 'Bretton Woods and the Reconstruction of Europe' (2018) 44(4) The Journal of Economic History 1136 accessed 3 July 2024. [9] WIPO, 'Information by Country: France' (WIPO) accessed 3 July 2024. [10] Trisha Ray and Akhil Deo, 'Digital Dreams, Real Challenges: Key Factors Driving India's AI Ecosystem' (Observer Research Foundation, 12 April 2023) accessed 3 July 2024. [11] Courtney J. Fung, 'China already leads 4 of the 15 U.N. specialized agencies — and is aiming for a 5th' (The Washington Post, 3 March 2020) accessed 3 July 2024.

  • [AIACT.IN V3] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 3

    The rapid advancement of artificial intelligence (AI) technologies necessitates a robust regulatory framework to ensure their safe and ethical deployment. AIACT.IN, India's first privately proposed AI regulation, has been at the forefront of this effort. Released on June 17, 2024, AIACT.IN Version 3 introduces several groundbreaking features that make it a comprehensive and forward-thinking framework for AI regulation in India. You can also download AIACT.IN V3 from below. In the rapidly evolving landscape of artificial intelligence (AI), the need for robust, forward-thinking regulation has never been more critical. As AI technologies continue to advance at an unprecedented pace, they bring with them both immense opportunities and significant risks. I have been a vocal advocate for a balanced approach to AI regulation, one that harnesses the transformative potential of AI while safeguarding against its inherent risks and protecting the nascent Indian AI ecosystem. AIACT.IN Version 3 represents a significant leap forward in this endeavour. This latest version of India's pioneering AI regulatory framework is designed to address the complexities and nuances of the AI ecosystem, ensuring that the development and deployment of AI technologies are both innovative and responsible. Some of the notable features of AIACT.IN Version 3 include:

Enhanced classification methods for AI systems, providing a more nuanced and precise evaluation of their capabilities and potential risks.
The establishment of a National Registry for AI Use Cases in India, covering both untested and stable AI applications, to ensure transparency and accountability.
A comprehensive approach to regulating AI-generated content, balancing the need for innovation with the protection of individual rights and societal interests.
Advanced-level AI insurance policies to manage the risks associated with high-risk AI systems and ensure adequate protection for stakeholders.
The introduction of AI pre-classification, enabling early assessment of potential risks and benefits.
Guidance principles on AI-related contracts and corporate governance, promoting responsible AI practices within organizations.
A flexible yet robust National AI Ethics Code, providing a strong ethical foundation for AI development and deployment.

This is a long read, explaining the core features of AIACT.IN Version 3 in detail.

Key Features and Improvements in AIACT.IN Version 3

Enhanced Classification Methods

Drastically Improved and Nuanced: The classification methods in Version 3 have been significantly enhanced to provide a more nuanced and precise evaluation of AI systems. This improvement ensures better risk management and tailored regulatory responses, addressing the diverse capabilities and potential risks associated with different AI applications. Sections 3 to 7 introduce various methods of classification, including conceptual, technical, commercial, and risk-centric approaches. For example, Section 4 outlines the conceptual methods of classification, which consider factors such as the intended purpose, the level of human involvement, and the degree of autonomy of the AI system. This nuanced approach allows for a more precise evaluation of AI systems based on their conceptual characteristics.
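To make these conceptual axes concrete, here is a purely illustrative sketch of how such a classification might be recorded in practice. The field names, categories, and the small heuristic at the end are invented for this insight; they are not the schema or criteria defined in AIACT.IN itself.

```python
# Illustrative only: recording a conceptual classification along the axes
# the section names (intended purpose, human involvement, autonomy).
# Nothing here is the Act's actual schema.

from dataclasses import dataclass
from enum import Enum

class HumanInvolvement(Enum):
    IN_THE_LOOP = "a human reviews every output"
    ON_THE_LOOP = "a human monitors and can intervene"
    OUT_OF_THE_LOOP = "no routine human oversight"

@dataclass
class ConceptualProfile:
    system_name: str
    intended_purpose: str
    human_involvement: HumanInvolvement
    autonomy: str  # free-text description of the degree of autonomy

def needs_closer_scrutiny(profile: ConceptualProfile) -> bool:
    # Invented heuristic: less human oversight warrants a closer look.
    return profile.human_involvement is HumanInvolvement.OUT_OF_THE_LOOP

profile = ConceptualProfile(
    system_name="TriageAssist (hypothetical)",
    intended_purpose="support radiologists in prioritising scans",
    human_involvement=HumanInvolvement.IN_THE_LOOP,
    autonomy="recommends a priority; the clinician decides",
)
print(profile.system_name, "needs closer scrutiny:", needs_closer_scrutiny(profile))
```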
Section 5 introduces technical methods of classification, which take into account the underlying algorithms, data sources, and computational resources used in the development of the AI system. This technical evaluation can help identify potential risks and tailor regulatory responses accordingly.

National Registry for AI Use Cases

Nuanced and Comprehensive: AIACT.IN Version 3 introduces a National Registry for AI Use Cases in India, covering both untested and stable AI applications and providing a clear, organised framework for tracking AI use cases across the country. The registry, introduced in Section 12, is a significant step towards standardizing and certifying AI applications in India. For instance, the registry could include an AI-powered medical diagnostic tool that is still in the testing phase (an untested AI use case) alongside a widely adopted AI-based chatbot for customer service (a stable AI use case). By maintaining a centralized registry, the Indian Artificial Intelligence Council (IAIC) can monitor the development and deployment of AI systems, ensuring compliance with safety and ethical standards. Furthermore, Section 11 mandates that all AI systems operating in India must be registered with the National Registry, providing a comprehensive overview of the AI ecosystem in the country. This requirement could help identify potential risks or overlaps in AI use cases, enabling the IAIC to take proactive measures to mitigate any potential issues. For example, if multiple organisations are developing AI-powered recruitment tools, the registry could reveal potential biases or inconsistencies in the algorithms used, prompting the IAIC to issue guidelines or standards to ensure fairness and non-discrimination in the hiring process.

Inclusive AI-Generated Content Regulation

Comprehensive and Balanced: The approach to regulating AI-generated content has been made more inclusive and holistic, addressing the diverse ways AI can create and influence content and promoting a balanced and fair regulatory environment. Section 23, on "Content Provenance and Identification", gives this approach its statutory form. Here is an example: A news organization uses an AI system to generate articles on current events. Under Section 23, the organization would be required to clearly label these articles as "AI-generated" or provide a similar disclosure, allowing readers to understand the source of the content and make informed decisions about its credibility and potential biases.
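A minimal sketch of the kind of disclosure Section 23 contemplates might look as follows. The field names and the label format are invented for illustration; the provision describes the obligation, not a particular schema.

```python
# Toy sketch: a publisher attaches provenance metadata to an AI-generated
# article and renders a plain-language disclosure label. The field names
# and label wording are invented; Section 23 does not prescribe a schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Provenance:
    generator: str      # which AI system produced the content
    human_edited: bool  # whether a person reviewed or revised it
    generated_on: str   # date of generation

def disclosure(p: Provenance) -> str:
    label = f"AI-generated by {p.generator} on {p.generated_on}"
    if p.human_edited:
        label += "; reviewed and edited by a human editor"
    return label

meta = Provenance(
    generator="NewsDraft-1 (hypothetical)",
    human_edited=True,
    generated_on=datetime.now(timezone.utc).date().isoformat(),
)
print(disclosure(meta))
# Prints something like:
# "AI-generated by NewsDraft-1 (hypothetical) on 2024-07-03; reviewed and
#  edited by a human editor"
```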
Inclusive AI-Generated Content Regulation

Comprehensive and Balanced: The approach to regulating AI-generated content has been made more inclusive and holistic. Section 23, on "Content Provenance and Identification," addresses the diverse ways in which AI can create and influence content, promoting a fair and balanced regulatory environment.

Here's an example. A news organisation uses an AI system to generate articles on current events. Under Section 23, the organisation would be required to clearly label these articles as "AI-generated" or provide a similar disclosure, allowing readers to understand the source of the content and make informed decisions about its credibility and potential biases.
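Here is a minimal sketch of what such a disclosure could look like in practice, assuming a JSON provenance record attached to the content. Section 23 requires provenance and identification but does not, to my reading, prescribe a particular format, so the field names and label text below are illustrative.

    # Illustrative provenance wrapper; field names and label text are assumptions.
    import json
    from datetime import datetime, timezone

    def label_ai_generated(article_text: str, generator: str) -> str:
        """Attach a machine-readable provenance record and a human-readable
        'AI-generated' notice to a piece of content."""
        record = {
            "content": article_text,
            "provenance": {
                "ai_generated": True,
                "generator": generator,
                "generated_at": datetime.now(timezone.utc).isoformat(),
                "disclosure": "This article was generated by an AI system.",
            },
        }
        return json.dumps(record, indent=2)

    print(label_ai_generated("Markets closed higher today...", "newsroom-model-v1"))

Industry efforts such as the C2PA content-credentials standard take a broadly similar labelled-metadata approach, which suggests a disclosure duty like Section 23's is practical to implement.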
Advanced AI Insurance Policies

Robust Risk Management: Section 25 of AIACT.IN Version 3 introduces advanced-level AI insurance policies to better manage the risks associated with high-risk AI systems. The provision requires developers and deployers of high-risk AI systems to maintain adequate insurance coverage, ensuring that stakeholders are safeguarded and that compensation is available in case of harm or losses.

Here is an example. A healthcare provider implements a high-risk AI system for medical diagnosis. Under Section 25, the provider would be required to maintain a minimum level of insurance coverage, as determined by the IAIC, to protect patients and the healthcare system from harm or losses resulting from errors or biases in the AI system's diagnoses.

AI-Pre Classification

Early Risk and Benefit Assessment: The concept of AI pre-classification has been introduced to help stakeholders understand potential risks and benefits early in the development process. This proactive approach allows for better planning and risk mitigation strategies.

Section 6(8) of the Draft Artificial Intelligence (Development & Regulation) Act, 2023, introduces the classification method known as "Artificial Intelligence for Preview" (AI-Pre). This classification pertains to AI technologies that companies make available for testing, experimentation, or early access prior to their wider commercial release, and it encompasses AI products, services, components, systems, platforms, and infrastructure at various stages of development. The key characteristics of AI-Pre technologies include:

Limited Access: The AI technology is made available to a limited set of end-users or participants in a preview program.

Special Agreements: Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality.

Development Stage: The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.

User Feedback: Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.

Cost and Pricing: The AI-Pre technology may be provided free of charge or under a pricing model separate from the company's standard commercial offerings.

Post-Preview Release: After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.

Here's an illustration. A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:

The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.

The AI system's capabilities are not yet fully tested, documented, or supported, and the company provides no warranties or guarantees.

The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.

After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service-level guarantees, and standard commercial terms.
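To tie the illustration together, here is a small Python sketch of how a company might record a preview program against the six AI-Pre characteristics above. Everything here, including the field names and the undecided release flag, is an assumed convenience for illustration, not statutory language.

    # Hypothetical record of an AI-Pre preview program (Section 6(8) characteristics).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class AIPreviewProgram:
        technology: str
        participants: list[str]                  # Limited Access: named partners only
        agreement_terms: dict[str, str]          # Special Agreements: usage, data, IP, confidentiality
        warranties_offered: bool = False         # Development Stage: no guarantees by default
        feedback_log: list[str] = field(default_factory=list)  # User Feedback
        pricing: str = "free of charge"          # Cost and Pricing: may differ from commercial terms
        commercial_release: Optional[bool] = None  # Post-Preview Release: may be undecided

    preview = AIPreviewProgram(
        technology="general-purpose dialogue system",
        participants=["academic-lab-a", "industry-partner-b"],
        agreement_terms={
            "data_handling": "no retention of partner data",
            "confidentiality": "NDA applies",
            "ip": "feedback may inform the commercial release",
        },
    )
    preview.feedback_log.append("partner-b: citation errors in legal queries")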
Importance for AI Regulation in India

The AI-Pre classification method is significant for AI regulation in India for several reasons:

Innovation and Experimentation: AI-Pre allows companies to innovate and experiment with new AI technologies in a controlled environment, fostering creativity and the development of cutting-edge AI solutions without the immediate pressure of full commercial deployment.

Risk Mitigation: By classifying AI technologies as AI-Pre, companies can identify and address potential risks, technical issues, and ethical concerns during the preview phase, before the technology is widely released.

Feedback and Improvement: The feedback loop created by AI-Pre enables companies to gather valuable insights from early users. This feedback is crucial for refining the technology, improving its performance, and ensuring it meets user needs and regulatory standards.

Regulatory Compliance: AI-Pre provides a framework for companies to comply with regulatory requirements while still in the development phase, ensuring that AI technologies are developed in line with legal and ethical standards from the outset.

Market Readiness: The AI-Pre classification helps companies gauge market readiness and demand for their AI technologies, allowing them to make informed decisions about commercial viability and potential success.

Transparency and Accountability: The special agreements and documentation required for AI-Pre technologies promote transparency and accountability. Companies must clearly set out the terms of use, data handling practices, and intellectual property rights, ensuring that all stakeholders are aware of their responsibilities and rights.

Guidance Principles on AI-Related Contracts

Clarity and Adoption: AIACT.IN Version 3 introduces a comprehensive approach to guidance principles on AI-related contracts in Section 15. These principles ensure that agreements involving AI are clear, fair, and aligned with best practices, fostering trust and transparency in AI transactions.

Consider a scenario where a healthcare provider enters into a contract with an AI company to implement an AI-based diagnostic tool. Under the guidance principles in Section 15, the contract would need to include clear provisions on the responsibilities of both parties, the transparency of the AI system's decision-making process, and the accountability mechanisms in place in case of errors or biases in the AI's diagnoses. This gives the healthcare provider and the AI company a mutual understanding of their roles and responsibilities, fostering trust and reducing the risk of disputes.

Here are some other features of AIACT.IN Version 3, described in brief:

AI and Corporate Governance

Ethical Practices: New guidance principles around AI and corporate governance emphasise the importance of ethical AI practices within corporate structures, promoting responsible AI use, accountability, and transparency at the organisational level.

National AI Ethics Code

Flexible and Non-Binding: The National AI Ethics Code introduced in Version 3 is non-binding yet flexible, providing a strong ethical foundation for AI development and deployment. The code encourages adherence to high ethical standards without stifling innovation.

Intellectual Property and AI-Generated Content

Special Substantive Approach: A special substantive approach to intellectual property rights for AI-generated content has been introduced, ensuring that creators and innovators are fairly recognised and protected in the AI landscape.

Updated Principles on AI and Open Source Software

Collaboration and Innovation: The principles on AI and open source software in Section 13 have been updated to reflect our commitment to fostering collaboration and innovation in the open-source community. These principles promote responsible AI development alongside transparency and accessibility.

Conclusion

AIACT.IN Version 3 is a testament to our dedication to creating a forward-thinking, inclusive, and robust regulatory framework for AI in India. By addressing the diverse capabilities and potential risks associated with AI technologies, this version seeks to ensure that AI development and deployment are safe, ethical, and beneficial for all stakeholders.

We invite developers, policymakers, business leaders, and engaged citizens to read the full document and contribute to shaping the future of AI in India by sending feedback, including anonymous public comments, to vligta@indicpacific.com. Together, let's embrace these advancements and work towards a bright future for AI.
