Search Results
- [AIACT.IN V3] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 3
The rapid advancement of artificial intelligence (AI) technologies necessitates a robust regulatory framework to ensure their safe and ethical deployment. AIACT.IN, India's first privately proposed AI regulation, has been at the forefront of this effort. Released on June 17, 2024, AIACT.IN Version 3 introduces several groundbreaking features that make it a comprehensive and forward-thinking framework for AI regulation in India. You can also download AIACT.IN V3 below.

In the rapidly evolving landscape of artificial intelligence, the need for robust, forward-thinking regulation has never been more critical. As AI technologies advance at an unprecedented pace, they bring with them both immense opportunities and significant risks. I have been a vocal advocate for a balanced approach to AI regulation: one that harnesses the transformative potential of AI while safeguarding against its inherent risks and protecting the nascent Indian AI ecosystem. AIACT.IN Version 3 represents a significant leap forward in this endeavour. This latest version of India's pioneering AI regulatory framework is designed to address the complexities and nuances of the AI ecosystem, ensuring that the development and deployment of AI technologies are both innovative and responsible.

Some of the notable features of AIACT.IN Version 3 include:
- Enhanced classification methods for AI systems, providing a more nuanced and precise evaluation of their capabilities and potential risks.
- The establishment of a National Registry for AI Use Cases in India, covering both untested and stable AI applications, to ensure transparency and accountability.
- A comprehensive approach to regulating AI-generated content, balancing the need for innovation with the protection of individual rights and societal interests.
- Advanced-level AI insurance policies to manage the risks associated with high-risk AI systems and ensure adequate protection for stakeholders.
- The introduction of AI pre-classification, enabling early assessment of potential risks and benefits.
- Guidance principles on AI-related contracts and corporate governance, promoting responsible AI practices within organizations.
- A flexible yet robust National AI Ethics Code, providing a strong ethical foundation for AI development and deployment.

This is a long read, explaining the core features of AIACT.IN Version 3 in detail.

Key Features and Improvements in AIACT.IN Version 3

Enhanced Classification Methods
Drastically Improved and Nuanced: The classification methods in Version 3, outlined in Sections 3 to 7, have been significantly enhanced to provide a more nuanced and precise evaluation of AI systems, ensuring better risk management and tailored regulatory responses across the diverse capabilities and potential risks of different AI applications. These sections introduce various methods of classification, including conceptual, technical, commercial, and risk-centric approaches. For example, Section 4 outlines the conceptual methods of classification, which consider factors such as the intended purpose, the level of human involvement, and the degree of autonomy of the AI system. This nuanced approach allows for a more precise evaluation of AI systems based on their conceptual characteristics.
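To make the conceptual classification factors concrete, here is a minimal illustrative sketch in Python. The enum values, field names, and mapping rules below are hypothetical assumptions made for illustration; the draft Act does not prescribe any of them.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1   # a human approves every consequential output
    HUMAN_ON_THE_LOOP = 2   # a human supervises and can intervene
    FULLY_AUTONOMOUS = 3    # the system acts without human review

@dataclass
class AISystemProfile:
    intended_purpose: str   # e.g. "medical diagnosis", "customer service"
    autonomy: Autonomy
    affects_rights: bool    # whether outputs touch legal or safety interests

def conceptual_class(profile: AISystemProfile) -> str:
    """Map conceptual factors (purpose, human involvement, autonomy)
    to a coarse classification band. Thresholds are illustrative only."""
    if profile.affects_rights and profile.autonomy is Autonomy.FULLY_AUTONOMOUS:
        return "high-risk"
    if profile.affects_rights or profile.autonomy is not Autonomy.HUMAN_IN_THE_LOOP:
        return "medium-risk"
    return "low-risk"

profile = AISystemProfile("medical diagnosis", Autonomy.FULLY_AUTONOMOUS, affects_rights=True)
print(conceptual_class(profile))  # high-risk
```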
Section 5 introduces technical methods of classification, which take into account the underlying algorithms, data sources, and computational resources used in the development of the AI system. This technical evaluation can help identify potential risks and tailor regulatory responses accordingly.

National Registry for AI Use Cases
Nuanced and Comprehensive: AIACT.IN Version 3 introduces a National Registry for AI Use Cases in India, covering both untested and stable AI applications and providing a clear and organised framework for tracking AI use cases across the country. The introduction of the National Registry in Section 12 is a significant step towards standardising and certifying AI applications in India, ensuring their safe and effective deployment. For instance, the registry could include an AI-powered medical diagnostic tool that is still in the testing phase (an untested AI use case) and a widely adopted AI-based chatbot for customer service (a stable AI use case). By maintaining a centralised registry, the Indian Artificial Intelligence Council (IAIC) can monitor the development and deployment of AI systems, ensuring compliance with safety and ethical standards. Furthermore, Section 11 mandates that all AI systems operating in India must be registered with the National Registry, providing a comprehensive overview of the AI ecosystem in the country. This requirement could help identify potential risks or overlaps in AI use cases, enabling the IAIC to take proactive measures to mitigate any potential issues. For example, if multiple organisations are developing AI-powered recruitment tools, the registry could reveal potential biases or inconsistencies in the algorithms used, prompting the IAIC to issue guidelines or standards to ensure fairness and non-discrimination in the hiring process.

Inclusive AI-Generated Content Regulation
Comprehensive and Balanced: The approach to regulating AI-generated content has been made more inclusive and holistic, ensuring that the diverse ways AI can create and influence content are addressed. Section 23 of AIACT.IN Version 3, "Content Provenance and Identification", establishes a comprehensive and balanced approach to regulating AI-generated content, promoting a fair and inclusive regulatory environment. Here's an example: a news organisation uses an AI system to generate articles on current events. Under Section 23, the organisation would be required to clearly label these articles as "AI-generated" or provide a similar disclosure, allowing readers to understand the source of the content and make informed decisions about its credibility and potential biases.
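As a minimal sketch of what such a disclosure might look like in practice (the field names and label text below are assumptions made for illustration, not terms prescribed by Section 23):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ContentDisclosure:
    """Illustrative provenance record attached to a published article."""
    title: str
    generated_by_ai: bool
    model_name: str       # hypothetical field: which system produced the text
    human_reviewed: bool  # whether an editor reviewed the output
    label: str = "AI-generated"

article = ContentDisclosure(
    title="Markets close higher on tech rally",
    generated_by_ai=True,
    model_name="newsroom-llm-v1",
    human_reviewed=True,
)
# The disclosure can be published as metadata alongside the article body.
print(json.dumps(asdict(article), indent=2))
```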
Advanced AI Insurance Policies
Robust Risk Management: Section 25 of AIACT.IN Version 3 introduces advanced-level AI insurance policies to better manage the risks associated with high-risk AI systems. These policies are designed to provide comprehensive coverage and protection, ensuring that stakeholders are adequately safeguarded against potential risks. This provision ensures that developers and deployers of high-risk AI systems maintain adequate insurance coverage to mitigate potential risks and provide compensation in case of harm or losses. Here is an example: a healthcare provider implements a high-risk AI system for medical diagnosis. Under Section 25, the provider would be required to maintain a minimum level of insurance coverage, as determined by the IAIC, to protect patients and the healthcare system from potential harm or losses resulting from errors or biases in the AI system's diagnoses.

AI-Pre Classification
Early Risk and Benefit Assessment: The concept of AI pre-classification has been introduced to help stakeholders understand potential risks and benefits early in the development process, allowing for better planning and risk mitigation strategies. Section 6(8) of the Draft Artificial Intelligence (Development & Regulation) Act, 2023, introduces the classification method known as "Artificial Intelligence for Preview" (AI-Pre). This classification pertains to AI technologies that are made available by companies for testing, experimentation, or early access prior to their wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms, and infrastructure at various stages of development. The key characteristics of AI-Pre technologies include:
- Limited Access: The AI technology is made available to a limited set of end-users or participants in a preview program.
- Special Agreements: Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality.
- Development Stage: The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose.
- User Feedback: Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology.
- Cost and Pricing: The AI-Pre technology may be provided free of charge or under a separate pricing model from the company's standard commercial offerings.
- Post-Preview Release: After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release.
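A minimal sketch of how a company or regulator might record these characteristics follows; every field name here is a hypothetical assumption for illustration, not drawn from the draft Act.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIPreRecord:
    """Hypothetical structured record for an AI-Pre (preview) technology."""
    name: str
    preview_start: date
    participants: list            # limited set of end-users in the programme
    agreement_signed: bool        # special agreement on usage, data, IP, confidentiality
    warranties_offered: bool      # typically False at the development stage
    feedback_channel: str         # how users report issues and share feedback
    pricing_model: str            # e.g. "free" or a separate preview tier
    commercial_release: Optional[date] = None  # set only if a post-preview release happens

record = AIPreRecord(
    name="dialogue-assistant-preview",
    preview_start=date(2024, 7, 1),
    participants=["academic-partner-a", "industry-partner-b"],
    agreement_signed=True,
    warranties_offered=False,
    feedback_channel="api-feedback-endpoint",
    pricing_model="free",
)
print(record)
```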
Here's an illustration: a technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics:
- The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality.
- The AI system's capabilities are not yet fully tested, documented, or supported, and the company provides no warranties or guarantees.
- The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications.
- After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms.

Importance for AI Regulation in India
The AI-Pre classification method is significant for AI regulation in India for several reasons:
- Innovation and Experimentation: AI-Pre allows companies to innovate and experiment with new AI technologies in a controlled environment. This fosters creativity and the development of cutting-edge AI solutions without the immediate pressure of full commercial deployment.
- Risk Mitigation: By classifying AI technologies as AI-Pre, companies can identify and address potential risks, technical issues, and ethical concerns during the preview phase. This helps in mitigating risks before the technology is widely released.
- Feedback and Improvement: The feedback loop created by AI-Pre enables companies to gather valuable insights from early users. This feedback is crucial for refining the technology, improving its performance, and ensuring it meets user needs and regulatory standards.
- Regulatory Compliance: AI-Pre provides a framework for companies to comply with regulatory requirements while still in the development phase. This ensures that AI technologies are developed in line with legal and ethical standards from the outset.
- Market Readiness: The AI-Pre classification helps companies gauge market readiness and demand for their AI technologies, allowing them to make informed decisions about the commercial viability and potential success of their products.
- Transparency and Accountability: The special agreements and documentation required for AI-Pre technologies promote transparency and accountability. Companies must clearly outline the terms of use, data handling practices, and intellectual property rights, ensuring that all stakeholders are aware of their responsibilities and rights.

Guidance Principles on AI-Related Contracts
Clarity and Adoption: AIACT.IN Version 3 introduces a comprehensive approach to guidance principles on AI-related contracts in Section 15. These principles ensure that agreements involving AI are clear, fair, and aligned with best practices, fostering trust and transparency in AI transactions. Consider a scenario where a healthcare provider enters into a contract with an AI company to implement an AI-based diagnostic tool. Under the guidance principles outlined in Section 15, the contract would need to include clear provisions regarding the responsibilities of both parties, the transparency of the AI system's decision-making process, and the accountability mechanisms in place in case of errors or biases in the AI's diagnoses. This would ensure that the healthcare provider and the AI company have a mutual understanding of their roles and responsibilities, fostering trust and reducing the risk of disputes.

Here are some other features of AIACT.IN Version 3, described in brief:

AI and Corporate Governance
Ethical Practices: New guidance principles around AI and corporate governance emphasize the importance of ethical AI practices within corporate structures. This promotes responsible AI use at the organizational level, ensuring accountability and transparency.
National AI Ethics Code
Flexible and Non-Binding: The National AI Ethics Code introduced in Version 3 is non-binding yet flexible, providing a strong ethical foundation for AI development and deployment. This code encourages adherence to high ethical standards without stifling innovation.

Intellectual Property and AI-Generated Content
Special Substantive Approach: A special substantive approach to intellectual property rights for AI-generated content has been introduced. This ensures that creators and innovators are fairly recognized and protected in the AI landscape.

Updated Principles on AI and Open Source Software
Collaboration and Innovation: The principles on AI and open source software in Section 13 have been updated to reflect our commitment to fostering collaboration and innovation in the open-source community. These principles ensure responsible AI development while promoting transparency and accessibility.

Conclusion
AIACT.IN Version 3 is a testament to our dedication to creating a forward-thinking, inclusive, and robust regulatory framework for AI in India. By addressing the diverse capabilities and potential risks associated with AI technologies, this version ensures that AI development and deployment are safe, ethical, and beneficial for all stakeholders. We invite developers, policymakers, business leaders, and engaged citizens to read the full document and contribute to shaping the future of AI in India by sending their feedback (anonymous public comments) to vligta@indicpacific.com. Together, let's embrace these advancements and work towards a bright future for AI.
- The Legal Impact of USPTO AI Patentability Guidelines in Indian Industry Segments
This article is authored by Ankit Verma and Shreyansh Gupta, affiliated to Law Centre 1, University of Delhi.

The US Patent Office recently ignited a global conversation by issuing guidance on artificial intelligence's (AI) role in patents. The USPTO's directions offer a crucial map for navigating these uncharted waters. This article delves into the jurisprudential aspects of AI patents in India, analysing the implications of this evolving landscape for Indian innovators and the future of AI-driven inventions[1]. Artificial intelligence (AI) refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as speech recognition, decision-making, and problem-solving.

What's a Patent?
A patent provides inventors with exclusive rights to their inventions for a specified period. To file a patent application, individuals or entities must meet specific criteria set by each respective office. In simple words, a patent is a legal property right granted by a government to an inventor or assignee, providing exclusive rights to exploit and profit from their invention for a defined period.

A Regulatory Background behind AI Patentability
United States of America
US-based patents are governed by the Patent Act (U.S. Code: Title 35), which established the statutory body, the United States Patent and Trademark Office (USPTO), which is subject to the policy direction of the Secretary of Commerce (the US Department of Commerce).

India
The Controller General of Patents, Designs and Trade Marks (CGPDTM), generally known as the Indian Patent Office, is an agency under the Department for Promotion of Industry and Internal Trade (DPIIT). It administers the Indian law of patents, designs and trade marks under the Patents Act, 1970, in compliance with international treaties such as the Patent Cooperation Treaty (PCT) and the Budapest Treaty under Section 2(1)(aba) of the Patents Act, 1970.

Impact of AI on Traditional Concepts of Inventorship
Traditionally, inventorship has been attributed to human intellect and creativity alone. However, with AI, the lines blur as machines contribute significantly to the inventive process. AI inventorship requires a nuanced approach that considers both the contributions of AI systems and human inventors, ultimately shaping the future of intellectual property law and innovation[2].

The Global Situation
Europe
The European Patent Office (EPO) has stated that an inventor must be a natural person. However, it has also recognized the necessity of AI implementation in future contexts.

China
The Chinese National Intellectual Property Administration (CNIPA) has clarified that Article 13 of the Rules for the Implementation of the Patent Law defines an "inventor" or "creator". However, the Guidelines for Patent Examination further specify that the inventor must be an individual. Currently, a machine or device, including AI, cannot be recognized as an "inventor" in China.

UK
The UK Intellectual Property Office (UKIPO) has emphasized that the law requires an inventor to be a natural person. In terms of existing legal regulations in most countries or regions, the current internationally applicable standard is that an inventor must be a natural person.

South Africa
The South African Patent Office became the world's first IP office to grant a patent for an invention developed by the AI machine DABUS. However, it is pertinent to note that South African patent law does not define "inventor"[3].
Japan
The Japan Patent Office (JPO) has taken a relatively progressive stance regarding the recognition of AI as an inventor. The JPO considers that AI systems can be named as inventors provided certain legal obligations are fulfilled, which may include:
- Human Representatives: A human is required to submit the patent application, providing the necessary information and representing the interests of the AI inventor throughout the application process.
- Ownership & Rights: The human representative or the AI must hold ownership rights to the invention, and it is essential to clarify the ownership and rights associated with the invention in case disputes arise.
- Disclosure Requirement: The human representative must disclose relevant information and the contribution of the AI to the invention, including AI algorithms, data sets, and other relevant technical information.
- Ethical and Legal Considerations: The human representative must ensure compliance with applicable laws, regulations, and ethical guidelines governing AI technology and intellectual property rights.
- International Harmonization: The JPO collaborates with international organizations to promote harmonization and consistency, and the invention must align with international standards.

Patent-worthy Industry Use Cases of AI in India: A Perspective
Agriculture Sector
An AI system such as landscape monitoring can have a big positive impact on the agriculture sector as a whole. By providing comprehensive insights into the performance of specific fields and their future requirements, this AI technology helps farmers optimize crop yield, minimize waste, and improve sustainability. The AI-powered landscape monitoring system created by Google's AnthroKrishi and Google Partner Innovation teams in India is one practical application of this technology. To establish a cohesive "landscape understanding", this system uses satellite imagery and machine learning to pinpoint field boundaries, acreage, irrigation systems, and other crucial information for efficient farm management. This AI system's technology is patent-worthy because of its creative use of AI to solve important agricultural problems. It enables farmers to make data-driven decisions, optimize resource use, and raise overall production by giving them precise information about their farms, crop varieties, water availability, and historical data. The AI system is a useful tool for sustainable agricultural operations because of its capacity to provide customized insights at a fine level. When considering patentability under Indian law, factors such as novelty, inventive step, industrial applicability, and non-obviousness are crucial. Evaluating Google's AnthroKrishi AI-driven landscape monitoring system for patentability requires a thorough analysis of its technological innovations, algorithms, and methodologies. Furthermore, the patent application must sufficiently disclose the inventive aspects and demonstrate how the invention addresses significant agricultural challenges in a manner not obvious to experts in the field.

Defence
Many AI inventions in the Indian defence industry are eligible for patents because of their distinctive uses and their influence on national security. One example is the creation of AI-based surveillance robots, such as the entirely 3D-printed, rail-mounted robot called Silent Sentry, which is intended to improve border security and surveillance capabilities.
This robot provides the Indian military with real-time monitoring and situational awareness by using AI algorithms to navigate over metal rails mounted on fences, together with Automated Integrated Observation Systems (AIOS). The integration of AI-powered surveillance technologies and the autonomous operation of the Silent Sentry within predetermined boundaries make it a powerful instrument for augmenting the surveillance grid and deterrence capacities of the Indian armed forces. The novel use of AI in a defence setting, notably in the fields of surveillance and border protection, makes the Silent Sentry patent-worthy. The robot is a ground-breaking technology that has the potential to greatly improve the Indian military's threat monitoring and response capabilities due to its autonomous operation, AI-driven surveillance capabilities, and integration with existing systems. Its patent-worthy nature is further supported by the possibility that other countries might duplicate or reverse engineer this technology, which could completely transform border security. Overall, this application offers strong potential for obtaining patent protection under Indian patent law, provided it meets the requirements of novelty, inventive step, industrial applicability, non-obviousness, and sufficient disclosure. A thorough examination of its technical aspects and contributions to defence and surveillance is essential to determine its eligibility for patentability accurately.

Sports Sector
Certain AI-based inventions in the sports industry have the potential to transform athletic performance analysis and improve sports training, making them patent-worthy. The application of AI to predictive analysis and individualized training recommendations is one noteworthy innovation in this space. To forecast player performance, injury risks, and even game outcomes, AI systems can analyse enormous datasets, providing coaches, teams, and players with important information. Based on player fatigue and in-game performance, these AI algorithms can suggest optimal player rotations, offering a data-driven method for making decisions in sports. The novelty of the recommendation algorithms, the techniques for integrating real-time game data, and the potential influence on enhancing player performance and team tactics make these AI breakthroughs patent-worthy. Sports organizations can improve player development, optimize training programs, and obtain a competitive edge by utilizing AI for predictive analysis and individualized training recommendations, because AI can offer customized insights and recommendations suited to specific players. From a patent law perspective, the unique application of AI in sports analysis and training, along with its potential impact on athletic performance, supports the eligibility of these inventions for patent protection.

Future Outlooks
The integration of artificial intelligence into the patent world[4] will be a transformative step towards the future of AI. The exponential rise of AI may lead to cognitive capabilities in AI-based systems, subsequently enabling AI to invent solutions that address future needs. Collaboration platforms facilitate communication among international organizations, such as national patent offices and WIPO.
AI integration can revolutionise patent drafting, prosecution, and management, fostering innovation and economic growth. In the near future, AI may design complex chemical structures for new drugs, optimize engineering designs, and even compose music or create art. Owing to this inventive capacity, granting AI the title of inventor may arguably be justified.

Conclusion
According to the eminent jurist Salmond, "A person is any being whom the law regards as capable of rights and bound by legal duties". AI has no rights of its own and also lacks legal duties, whereas a person is endowed with both. In the Indian legal context, Section 11 of the Indian Penal Code, 1860[5] and Section 2(1)(s) of the Patents Act, 1970[6] define "person". These are non-exhaustive definitions, and the word "includes" in each provision also incorporates the original notion of the natural human being. The patent inventor must, therefore, be a natural person. So far, no amendment has been instituted to the definition of "person" to incorporate AI. If AI were to fall under the definition of "person", it could be designated as a "person interested"[7]. In the history of the Republic of India, the Constitution was itself framed under the guiding light of the constitutions of foreign states. In 2016, inspiration for GST (Goods and Services Tax) was taken from the Canadian dual GST model, although France was the first country to implement GST, in 1954. Similarly, we could frame structured guidelines for granting the inventorship title to AI, in the spirit of Article 51A(h) of the Constitution of India[8]. However, the emergence of AI introduces new capabilities and complexities which must be addressed by the legal framework. Policymakers and stakeholders must scrutinize such incorporation and ensure a delicate balance between fostering creativity and safeguarding legal and ethical principles, so that we can unlock the full potential of AI-driven invention while upholding the integrity of intellectual property rights. For the time being, AI is not considered an "inventor" under Indian law.

References
[1] https://www.indicpacific.com/post/uspto-inventorship-guidance-on-ai-patentability-for-indian-stakeholders
[2] https://www.borsamip.com/Policyfocus/2572.html
[3] https://www.michalsons.com/blog/ai-listed-as-inventor-for-first-time-ever-south-africa/51248
[4] http://surl.li/rogxx
[5] Section 11, IPC: the word "person" includes any Company or Association or body of persons, whether incorporated or not.
[6] Section 2(1)(s), The Patents Act, 1970: "person" includes the Government.
[7] Section 2(1)(t), The Patents Act, 1970.
[8] Article 51A(h) of the Constitution of India, 1949 imparts the duty to develop scientific temper, humanism and the spirit of inquiry and reform.
- New Report: Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-005
We are glad to release "Legal Strategies for Open Source Artificial Intelligence Practices". This infographic report could not have been possible without the contributions of Sanad Arora, Vaishnavi Singh, Shresh Narang, Krati Bhadouriya and Harshitha Reddy Chukka.

Acknowledgements
Special thanks to Rohan Shiralkar for motivating me to come up with a paper on such a critical issue. Also, thanks to Akash Manwani and the ISAIL Advisory Council experts for their insights.

This paper serves as a compendium and a unique report offering perspectives on the legal dilemmas and issues around enabling #artificialintelligence practices which are open-source. Read the complete work at https://vligta.app/product/legal-strategies-for-open-source-artificial-intelligence-practices-iplr-ig-004/

This is an infographic report on building legal strategies for open source-related artificial intelligence practices. It also serves as a compendium of the key legal issues that companies in the AI industry in India may face when they go open-source.

Contents
1 | Open Source Systems, Explained
A broader introduction to open source systems, their kinds, and the features widely discussed throughout the infographic report.
2 | Regulatory Questions on OSS in India
An extended analysis of some regulatory dilemmas around the acceptance and invocation of open source systems & practices in India:
- The Digital Personal Data Protection Act & relevant Non-Personal Data Protection Frameworks
- Consumer Law Regulations in India
- The Digital India Act Proposal
- The Competition Act and the draft Digital Competition Bill, 2024
3 | Legal Dilemmas around Open Source Artificial Intelligence Practices
What are the key legal dilemmas associated with artificial intelligence technologies that make open source practices hard to achieve?
- Intellectual Property Issues: Copyright Protections, Patent & Design Protections, Trade Secret Issues
- Licensing Ambiguities: Licensing Compatibility, Licensing Proliferation, Modifications & Derivatives
- Industrial Viability
4 | Making Open Source Feasible for AI Start-ups & MSMEs
What kind of sector-neutral, sector-specific, industrially viable and privacy-friendly practices may be feasibly adopted by AI start-ups and MSMEs?
5 | Key Challenges & Recommendations for Open Source AI Practices
We offer recommendations on enabling better open-source practices for AI companies that remain legally viable despite the absence of regulatory clarity and the risk of regulatory capture & regulatory subterfuge.

You can access the complete paper at https://vligta.app/product/legal-strategies-for-open-source-artificial-intelligence-practices-iplr-ig-004/
- AI, CX & Telemarketing: Insights on Legal Safeguards
The author of this insight is a Research Intern at the Indian Society of Artificial Intelligence and Law as of March 2024.

The rise of Artificial Intelligence (AI) has brought about significant changes in industries like telemarketing, telesales, and customer service. People are discussing the idea of using AI instead of human agents in these fields. In this insight, we will dive into whether it is doable and what ethical concerns we must consider, especially regarding putting legal protections in place.

AI in Customer Services & Telemarketing So Far
Using AI in telemarketing and customer service seems like a great way to make things smoother and more effective when dealing with customers. Thanks to advances in AI technologies like natural language processing (NLP) and speech recognition, AI systems can now handle customer questions and sales tasks remarkably well. They can even chat in different languages, which is a promising tool for customer convenience. AI integration seems feasible because it can automate monotonous tasks, analyze loads of data, and give customers personalized experiences. Take chatbots, for example. They can chat with customers, figure out what they like, and suggest products they might want to buy. This can make customers happier and even lead to more sales. Also, AI can predict what customers might need next, so companies can be proactive about helping them.

Nevertheless, there are some significant ethical concerns with using AI in telemarketing and customer service that we cannot ignore. One issue is that AI might lack the human touch. Sure, it can chat like a human, but it cannot truly understand emotions the way a human can. This might make customers feel like they are not being listened to or understood. Another worry is about keeping customer data safe and private. AI needs a ton of data to work well, which could be risky if it is not appropriately protected. Companies need to make sure they are following strict rules, like the GDPR, to keep customer information safe from hackers. Plus, there is a risk that AI might make unfair decisions, like treating some customers differently because of biases in the data it is trained on. To solve this problem, companies need to be open about how their AI works and make sure it is treating everyone fairly.

So, to tackle these ethical issues, we need some legal rules in place. We could set clear standards for how AI should be developed and used in telemarketing and customer service. This means making sure it is transparent, fair, and accountable. Regulators also need to keep a close eye on how companies handle customer data. They should ensure everyone follows the rules to protect people's privacy. Companies might have to do assessments to see if using AI might put people's data at risk, and they should ask for permission before collecting any personal information. In addition, companies need to train their employees on how to use AI responsibly. This means teaching them how to spot biases, make ethical decisions, and use AI in a way that's fair to everyone. Ultimately, using AI in telemarketing, telesales, and customer service could improve things for everyone. Nevertheless, we must be careful and make sure we are doing it in a way that respects people's rights and security.
The US FCC's Notice of Inquiry as an Exemplar
The recent Notice of Inquiry (NOI) [1] by the Federal Communications Commission (FCC) of the United States, on how AI affects telemarketing and tele-calling under the Telephone Consumer Protection Act (TCPA), is a significant step by a governmental organisation, and it makes it imperative for governments worldwide to formulate legislation regulating the usage of AI in telemarketing and customer service. It shows that regulators are taking a serious look at how technology is changing the way we communicate. As businesses use AI more in things like customer service and marketing, it is crucial to understand the rules and protections that need to be in place.

The TCPA was initially enacted to curb intrusive telemarketing calls, but now it has to deal with the challenge of regulating AI-powered communication systems. With AI getting better at sounding like humans and holding natural conversations, there is worry about whether these interactions are authentic or lawful. The FCC's inquiry is all about figuring out how AI fits into the rules of the TCPA and what kind of impact it might have, both good and bad.

One big thing the FCC is looking into is how genuine AI-generated voices sound in telemarketing calls. Unlike old-style robocalls that sound fairly robotic, AI calls can sound just like real people, which could trick people into thinking they are talking to a person. This means we need rules to make sure AI calls are honest and accountable. Things like adding watermarks or disclaimers could help people know they are talking to a machine. The FCC is also thinking about how AI chatbots fit into the rules. These are small computer programs that can chat with customers through text. As more businesses use these chatbots, we need to know if they fall under the same rules as voice calls. Getting clear on this is essential for making sure customers are protected.

However, it is not all bad news. The FCC knows that AI can also make things better for consumers. It can help send personalised messages, ensure companies do not call people who do not want to be called, and even help people with disabilities access services more efficiently. Still, there is a risk of activities like scams or deception. To figure all this out, startups and the government must work together to make reasonable rules. This means deciding what counts as AI, defining what it can and cannot do, and ensuring it is used correctly. It is also essential to teach people, especially those who might be more vulnerable, like elderly citizens, those who do not speak English well, or those who are less literate, how to spot and deal with AI communications.

The FCC's Notice of Inquiry on how AI affects the TCPA has indeed got people talking about using AI in telemarketing. Since AI can sound just like humans, we need to update the rules to keep up. Some ideas include ensuring trusted sources are clearly marked, adding disclaimers to AI calls, and figuring out exactly how AI fits into the TCPA. It is all about finding a balance between letting new tech like AI grow and ensuring people are safe. Startups and governments need to work together to ensure AI is used in telemarketing fairly and ethically, and that it does not get used to trick or scam people. By working together, we can ensure tele-calling services keep improving without risking people's trust or safety.
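As a toy illustration of the disclosure idea discussed above (the wording and the function are hypothetical; the FCC has not prescribed any specific text), an AI calling system could prepend a machine disclosure to every generated script:

```python
AI_DISCLOSURE = "This call uses an automated artificial-intelligence voice system. "

def with_disclosure(script: str) -> str:
    """Prepend a machine-disclosure notice to an AI-generated call script.

    The disclosure wording above is illustrative only; in practice it would
    be dictated by the applicable rules, not by this sketch.
    """
    return AI_DISCLOSURE + script

print(with_disclosure("Hi, I'm calling about your recent order."))
```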
AI Use Cases in Telemarketing, Telesales & Customer Service
The launch of Krutrim by Ola CEO Bhavish Aggarwal's AI startup, Krutrim Si Designs, marks a significant step in integrating AI into telemarketing and tele-calling. With its multilingual capabilities and personalized responses, the chatbot demonstrates the potential of AI to revolutionise customer service in diverse linguistic contexts. However, the development of AI-powered chatbots also raises ethical considerations, particularly regarding biases in AI models [2]. Union Minister Ashwini Vaishnaw's statements on the recently issued AI Advisory by the Ministry of Electronics and Information Technology underscore the importance of addressing biases in AI models to ensure fair and unbiased interactions with users. In the context of telemarketing and tele-calling, where AI systems may interact directly with customers, it becomes crucial to implement legal safeguards and guardrails to prevent biases and discrimination. Legal solutions could include mandates for rigorous testing and validation of AI algorithms to detect and mitigate biases, and regulations requiring transparency and accountability in AI deployment. Additionally, government entities could collaborate with startups and industry stakeholders to establish ethical guidelines and standards for AI integration in customer service, promoting fairness, inclusivity, and ethical conduct in AI-driven interactions. By proactively addressing ethical considerations and implementing legal safeguards, businesses and government entities can harness the benefits of AI in telemarketing and tele-calling while upholding fundamental principles of fairness and non-discrimination.

Also, in July 2023, the news of Dukaan (a Bengaluru-based startup founded by Sumit Shah) replacing its customer support roles with an AI chatbot called Lina came to light, highlighting the growing trend of AI integration in customer service functions, including telemarketing and tele-calling. While AI-driven solutions offer efficiency and cost savings for startups like Dukaan, they also raise ethical considerations and potential legal challenges. As AI technology advances, concerns about job displacement and the impact on human workers become increasingly relevant [3]. Legal safeguards and guardrails must be established to ensure fairness, transparency, and accountability in deploying AI in telemarketing and customer service. These safeguards may include regulations governing the responsible use of AI, guidelines for ethical AI deployment, and mechanisms for addressing biases and discrimination in AI algorithms. Additionally, collaboration between startups, government entities, and industry stakeholders is essential to develop comprehensive legal frameworks that balance the benefits of AI innovation with the protection of workers' rights and consumer interests. By proactively addressing these ethical and legal considerations, startups can harness the benefits of AI while mitigating potential risks and ensuring compliance with regulatory requirements.

The increasing adoption of AI and automation in the retail sector underscores the transformative potential of these technologies in enhancing customer experiences and operational efficiency. However, as retailers integrate AI into telemarketing, telesales, and customer service functions, it is imperative to consider the ethical and legal implications [4].
Legal safeguards and guardrails must be established to ensure AI-powered systems adhere to regulatory frameworks governing customer privacy, data protection, and fair practices. This includes implementing mechanisms to safeguard personally identifiable information (PII) and ensuring transparent communication with customers about the use of AI in customer interactions. Moreover, ethical considerations such as algorithmic bias and discrimination need to be addressed through responsible AI governance frameworks. Companies should prioritize fairness, accountability, and transparency in AI deployment and establish protocols for addressing biases and ensuring equitable treatment of customers. Additionally, regulations may need to be updated or expanded to address the unique challenges posed by AI in customer service contexts. This could involve mandates for AI transparency, algorithmic accountability, and mechanisms for auditing and oversight. By addressing these ethical and legal considerations, startups and government entities can harness the benefits of AI while ensuring that customer interactions remain ethical, fair, and compliant with regulatory requirements.

Possible Legal Solutions, Suggested
The idea of employing Artificial Intelligence (AI) in telemarketing and tele-calling brings both excitement and apprehension for businesses. While AI-powered chatbots have the potential to revolutionize customer service by enhancing efficiency and personalization, concerns persist regarding data privacy, bias, and potential job displacement. In this rapidly evolving landscape, it is imperative for businesses to strike a balance between innovation and responsibility by integrating legal safeguards and ethical considerations.

Data privacy and security stand out as primary concerns in utilizing AI for telemarketing. To address this, businesses must ensure compliance with the data protection regulations applicable in their respective countries. This entails transparent communication with customers regarding data collection, processing, and storage, along with obtaining consent for AI-driven interactions. By implementing robust measures to safeguard customer data, businesses can foster trust and mitigate the risk of data breaches [4].

Another critical consideration is the presence of bias in AI systems. AI algorithms can inadvertently reflect biases inherent in the data they are trained on, resulting in unfair treatment of specific demographic groups. To address this, businesses should integrate bias detection and correction tools into their AI systems. Regular audits conducted by third-party organizations can help identify and rectify biases, while ongoing training can enhance the accuracy and fairness of AI responses. By tackling bias in AI, businesses can ensure that their tele-calling operations are impartial and equitable for all customers.

Job displacement is also a concern associated with AI in telemarketing. While AI has the potential to automate various tasks, businesses must ensure that it complements human capabilities rather than replacing human workers. This could involve fostering collaboration between AI and human agents, offering training and upskilling initiatives for call center agents, and establishing guidelines for responsible AI deployment in the workplace. By empowering employees to embrace new technologies and roles, businesses can alleviate the impact of AI on jobs and foster a more inclusive workforce.
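To make the bias-audit idea discussed above concrete, here is a minimal sketch of one widely used fairness check, the disparate-impact ("four-fifths") ratio; the groups and numbers are invented for illustration:

```python
def selection_rate(selected: int, total: int) -> float:
    """Share of a group that received the favourable outcome."""
    return selected / total

def disparate_impact(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's favourable-outcome rate to the reference group's.
    Values below roughly 0.8 (the 'four-fifths' rule of thumb) flag
    possible adverse impact and warrant closer review."""
    return rate_group / rate_reference

# Invented audit numbers: favourable outcomes per group in an AI-driven flow.
rate_a = selection_rate(selected=90, total=200)  # reference group: 0.45
rate_b = selection_rate(selected=54, total=180)  # comparison group: 0.30

print(f"Disparate impact ratio: {disparate_impact(rate_b, rate_a):.2f}")
# -> 0.67, below 0.8, so the system would be flagged for review.
```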
In addition to legal safeguards, ethical considerations should guide the integration of AI into telemarketing and tele-calling operations. Businesses must prioritize ethical AI development and deployment practices, ensuring that their AI systems uphold principles such as transparency, accountability, and fairness. This may entail establishing ethical guidelines for AI use, conducting regular ethical assessments, and involving stakeholders in decision-making processes. By embedding ethical considerations into their AI strategies, businesses can build trust with customers and stakeholders and demonstrate their commitment to responsible innovation.

Conclusion
To conclude, the adoption of AI in telemarketing and tele-calling holds promise for enhancing customer service and operational efficiency. However, businesses must implement robust legal safeguards and ethical considerations to harness these benefits while mitigating risks. By prioritizing data privacy, addressing bias, mitigating job displacement, and integrating ethical principles into their AI strategies, businesses can navigate the complexities of AI integration and drive positive outcomes for both customers and employees.

References
[1] Frank Nolan et al., Tech & Telecom, Professional Perspective - FCC Issues Notice of Inquiry for AI's Changing Impact on the TCPA, https://www.bloomberglaw.com/external/document/XC9VATGG000000/tech-telecom-professional-perspective-fcc-issues-notice-of-inqui (last visited Mar 12, 2024).
[2] Amazon Pay secures payment aggregator licence; Krutrim AI's chatbot, The Economic Times, https://economictimes.indiatimes.com/tech/newsletters/tech-top-5/amazon-pay-gets-payment-aggregator-licence-krutrim-launches-chatgpt-rival/articleshow/108016633.cms (last visited Mar 15, 2024).
[3] Asmita Dey, AI Coming for Our Jobs? Dukaan Replaces Customer Support Roles with AI Chatbot, The Times of India, Jul. 11, 2023, https://timesofindia.indiatimes.com/india/ai-coming-for-our-jobs-dukaan-replaces-customer-support-roles-with-ai-chatbot/articleshow/101675374.cms.
[4] Sujit John & Shilpa Phadnis, How AI & Automation Are Making Retail Come Alive for the New Gen, The Times of India, Feb. 7, 2024, https://timesofindia.indiatimes.com/business/india-business/how-ai-automation-are-making-retail-come-alive-for-the-new-gen/articleshow/107475869.cms.
- New Report: Draft Digital Competition Bill, 2024 for India: Feedback Report, IPLR-IG-003
We are delighted to present IPLR-IG-003, a feedback report on the recently proposed Digital Competition Bill, 2024, and on the complete report of the Committee on Digital Competition Law submitted to the Ministry of Corporate Affairs, Government of India. This feedback report was made possible thanks to the support and efforts of Vaishnavi Singh, Shresh Narang and Krati Bhadouriya, Research Interns at the Indian Society of Artificial Intelligence and Law. We express special thanks to the Distinguished Experts at the ISAIL Advisory Council for their insights, and to Akash Manwani for his insights & support. You can access the complete feedback report at https://vligta.app/product/draft-digital-competition-bill-2024-for-india-feedback-report-iplr-ig-003/

This report offers feedback on the Digital Competition Bill, 2024 (from page 69 onwards), but also offers a proper breakdown of the whole CDCL Report, from the stakeholder consultations to the DPDPA, consumer laws, and even the key international practices that may have inspired the current draft of the Bill. A general reading suggests that the initial chapters of the Bill draw heavy inspiration from the Digital Markets Act of the European Union, but there is no doubt that the Bill offers unique Indian approaches to digital competition law, especially in Sections 3, 4, 7 and 12-15. We have also offered some recommendations, based on AIACT.IN Version 2, on how the use of #artificialintelligence may promote anti-competitive practices in matters related to intellectual property and knowledge management.

Here are all the points of feedback, summarised:

General Recommendations
- Expand the definition of "non-public data" (Section 12): The current section covers data generated by business users and end-users. However, it should also explicitly include data generated by the platforms themselves through their operations, analytics, and user tracking mechanisms. This would prevent circumvention by claiming platform-generated data is not covered.
- Enable data portability for platform-generated data: While Section 12 enables portability of user data, it should also mandate portability of inferred data, user profiles, and analytics generated by the platforms based on user activities. This levels the playing field for new entrants. If that's not feasible within the mandate of the CCI, perhaps the Ministry of Consumer Affairs must incorporate data portability guidelines, since this might become a latent consumer law issue.
- Expand anti-steering to cover all marketing channels: Section 14 should prohibit restrictions on business users promoting through any channel (email, in-app notifications, etc.), not just direct communications with end-users.
- Tighten the definition of "integral" products/services (Section 15): Clear objective criteria should define what constitutes an "integral" tied/bundled product to prevent over-broad interpretations that could undermine the provision's intent.
- Incorporate a principle of Fair, Reasonable and Non-Discriminatory (FRAND) treatment: A general FRAND obligation could prevent discriminatory treatment of business users by dominant platforms across various practices.

Recommendations based on AIACT.IN V2
In this segment, we have offered a set of recommendations based on a draft of the proposed Artificial Intelligence (Development & Regulation) Act, 2023, Version 2, as proposed by the first author of this report.
The recommendations in this segment may be largely associated with any core digital services or SSDEs in which the involvement of AI technologies is deeply integrated or attributable.
- Establish AI-specific Merger Control Guidelines: Develop specific guidelines or considerations for evaluating mergers and acquisitions involving companies with significant AI capabilities or data assets. These guidelines could address issues such as data concentration, algorithmic biases, and the potential for leveraging AI to foreclose competition or engage in self-preferencing practices.
- Shared Sector-Neutral Standards: The Digital Competition Bill should consider adopting shared sector-neutral standards for AI systems, as mentioned in Section 16 of AIACT.IN Version 2. This would promote interoperability and fair competition among AI-driven digital services.
- Interoperability and Open Standards: The Digital Competition Bill should encourage the adoption of open standards and interoperability in AI systems deployed by Systemically Significant Digital Enterprises (SSDEs). This aligns with Section 16(5) of AIACT.IN V2, which promotes open source and interoperability in AI development. Fostering interoperability can lower entry barriers and promote competition in digital markets.
- AI Explainability Obligations: Drawing from the AI Explainability Agreement mentioned in Section 10(1)(d) of AIACT.IN V2, the Digital Competition Bill could mandate SSDEs to provide clear explanations for the outputs of their AI systems. This can enhance transparency and accountability, allowing users to better understand how these systems impact competition.
- Algorithmic Transparency: Drawing from the content provenance provisions in Section 17 of AIACT.IN V2, the Digital Competition Bill could require SSDEs to maintain records of the algorithms and data used to train their AI systems. This can aid in detecting algorithmic bias and anti-competitive practices.
- Interoperability considerations for IP protections (Section 15): The AIACT.IN draft recognizes the need to balance IP protections for AI systems with promoting interoperability and preventing undue restrictions on access to data and knowledge assets. The Digital Competition Bill could similarly mandate that IP protections for dominant digital platforms should not unduly hinder interoperability or access to key data/knowledge assets needed for competition.
- Sharing of AI-related knowledge assets (Section 8(8)): The AIACT.IN draft encourages sharing of datasets, models and algorithms through open source repositories, subject to IP rights. The Digital Competition Bill could similarly promote voluntary sharing of certain non-sensitive datasets and tools by dominant platforms to spur innovation, while respecting their legitimate IP interests.
- IP implications of content provenance requirements (Section 17): The AIACT.IN draft's content provenance provisions, including watermarking of AI-generated content, have IP implications that need to be considered. Likewise, any content attribution or transparency measures in the Digital Competition Bill should be designed in a manner compatible with IP laws.

While the AIACT.IN Version 2 draft and the Digital Competition Bill have distinct objectives, selectively drawing upon the AI-specific IP and knowledge management provisions in the former could enrich and future-proof the competition framework for digital markets.
We hope the feedback report will be helpful for the Ministry of Corporate Affairs, Government of India, and the Competition Commission of India. We express our heartfelt gratitude to the authors for writing such an important paper on digital competition policy from an Indian standpoint. Should you wish to discuss any of the feedback points, please feel free to reach out at vligta@indicpacific.com.
- AI & AdTech: Examining the Role of Intermediaries
The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of March 2024.

In today's digital age, advertising has undergone a significant transformation with the integration of Artificial Intelligence (AI) technology. This high-tech wizardry has completely changed the game for businesses by transforming how they connect with their target audiences, manage their advertising budgets, and supercharge their marketing strategies on social media platforms. This insight delves into the profound impact of AI technology on advertising budgetary issues on social media platforms, exploring how intermediaries and third parties play a crucial role in leveraging AI for effective advertising campaigns, and scrutinising their pivotal role in shaping and optimising AI-driven campaigns.

The Evolution of Advertising in the Digital Era
Advertising has evolved from traditional methods to digital platforms, with social media becoming a prominent channel for businesses to connect with their customers. The vast user base and engagement levels on platforms like Facebook, Instagram, Twitter, and LinkedIn have made them ideal spaces for targeted advertising. However, managing advertising budgets effectively on these platforms can be challenging without the right tools and strategies. AI technology has emerged as a game-changer in the advertising landscape, offering advanced capabilities for data analysis, audience targeting, ad personalization, and performance optimization. By harnessing the power of AI algorithms and machine learning models, businesses can make data-driven decisions to maximize the impact of their advertising campaigns while minimizing costs. Social media platforms have become central hubs for advertising, offering diverse audience demographics and sophisticated targeting options. As advertisers flock to these platforms, the need for efficient budget management becomes paramount.

Dynamic Budget Allocation
Artificial Intelligence (AI) is like a magic wand for advertisers, especially when it comes to managing budgets in the dynamic world of social media. With AI, advertisers gain the ability to adjust budgets on the fly based on how well their ads are doing. If an ad is hitting the bullseye, AI suggests putting more money into it, but if something isn't quite clicking, it advises scaling back. This dynamic approach ensures that every penny spent on advertising is a wise investment, maximizing returns. But the AI magic doesn't stop there. Predictive analytics, powered by AI, takes the guesswork out of budget planning. By crunching numbers from past campaigns and spotting market trends, AI algorithms help advertisers predict how ads will perform in the future, allowing businesses to plan their budgets with precision. It's like having a financial advisor for your advertising dollars, guiding you to spend where it matters most.
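As a minimal sketch of the kind of performance-based reallocation described above (the metric and the proportional weighting are assumptions for illustration, not any platform's actual algorithm):

```python
def reallocate_budget(total_budget: float, roas: dict) -> dict:
    """Split a fixed budget across campaigns in proportion to their return
    on ad spend (ROAS). Illustrative only: real systems add constraints
    such as minimum spend, pacing, and exploration of new campaigns."""
    total_score = sum(roas.values())
    return {name: total_budget * score / total_score for name, score in roas.items()}

# Invented performance numbers for three campaigns.
campaign_roas = {"video_ads": 3.2, "carousel_ads": 1.1, "story_ads": 2.0}
print(reallocate_budget(10_000.0, campaign_roas))
# -> video_ads receives the largest share; carousel_ads is scaled back.
```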
While AI brings substantial benefits to budget management in social media advertising, it is not all smooth sailing. Businesses may face challenges that can disrupt their budget strategies, including the following.

Ad Fraud and Click Fraud

Ad fraud remains a significant concern in digital advertising, with malicious actors engaging in click fraud to inflate ad engagement metrics artificially. Businesses need to implement robust fraud detection mechanisms powered by AI to identify and mitigate fraudulent activities that can drain advertising budgets without delivering genuine results.

Budget Overruns

Advertisers face the risk of going over their budgets if they do not have effective monitoring and optimisation strategies. AI tools can offer real-time insights into how ads are performing and make automatic adjustments to keep spending within planned limits. This helps avoid unexpected costs and ensures efficient campaign management.

Competitive Bidding

In highly competitive social media ad spaces, bidding wars can escalate costs and strain advertisers' budgets. AI-powered bidding strategies become invaluable in such scenarios. These tools optimise bid prices by considering factors such as the target audience, ad relevance, and the likelihood of conversion, ensuring that businesses achieve cost-effective results even in fiercely competitive environments.

The Role of Intermediaries and Third Parties in AI-Driven Advertising

Intermediaries and third parties are pivotal players in AI-driven advertising, helping businesses make the most of AI technologies and improve their advertising strategies on social media platforms. These entities offer specialised knowledge, tools, and resources that empower businesses to use AI effectively in targeted advertising campaigns. In simpler terms, they act as valuable partners, helping companies navigate the complex landscape of AI-powered advertising, and their expertise and resources contribute to the success of businesses in reaching their advertising goals through smart, targeted campaigns.

A major advantage of using AI technologies and third-party data is the capability to improve customer targeting through precise segmentation and personalised experiences. Intermediaries play a crucial role in helping businesses enrich their audience segments with third-party data, allowing for highly personalised customer interactions across different channels. Through the strategic use of AI algorithms and third-party data, advertisers can pinpoint specific characteristics of their audience, enabling personalisation on a larger scale. This leads to more tailored and effective marketing efforts that resonate with individual customers, ultimately boosting engagement and satisfaction.

AI's predictive modelling is a powerful tool for understanding audience intent and focusing on those with a higher likelihood of converting. By examining patterns in data, demographics, past behaviours, and characteristics, AI helps identify valuable customers and build lookalike audiences; a toy sketch of this kind of scoring follows.
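As a toy illustration of lookalike scoring, the sketch below ranks hypothetical prospects by how closely their feature vectors resemble the average profile of existing customers. The features and records are invented stand-ins for real CRM data; production systems would use far richer features and learned models rather than a simple centroid.

```python
# A minimal sketch of lookalike-audience scoring: rank prospects by
# similarity to the average existing customer. Data is hypothetical.
import math

def centroid(vectors: list[list[float]]) -> list[float]:
    """Average the seed customers' feature vectors column by column."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each vector: [sessions per week, orders per month, ad clicks per month]
seed_customers = [[5.0, 2.0, 3.0], [7.0, 3.0, 4.0], [6.0, 2.0, 5.0]]
prospects = {"user_a": [6.0, 2.0, 4.0], "user_b": [1.0, 0.0, 0.0]}

profile = centroid(seed_customers)
scores = {u: cosine_similarity(v, profile) for u, v in prospects.items()}
for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{user}: similarity {score:.3f}")  # user_a scores well above user_b
```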
Intermediaries also play a key role in implementing predictive analytics strategies, aiding businesses in refining their marketing methods, boosting return on investment (ROI), and making well-informed decisions about brand partnerships and customer experience. This collaboration ensures that businesses can optimise their marketing efforts, better connect with their target audience, and ultimately achieve more successful outcomes. Establishing a well-thought-out strategy for managing relationships with third-party intermediaries is likewise essential for enhancing performance, creating value, and minimising risks within the broader business network. Businesses frequently collaborate with these intermediaries for functions such as logistics, sales, distribution, marketing, and human resources. These intermediaries play a crucial role in helping companies handle the risks associated with such partnerships, including adhering to regulations, managing financial risks, sustaining business operations in tough times, safeguarding reputation, addressing operational disruptions, countering cyber threats, and ensuring alignment with strategic objectives. With a structured plan in place, companies can navigate challenges more effectively, capitalise on opportunities, and foster successful collaborations with their third-party partners.

Intermediaries are also valuable partners in meeting regulatory requirements concerning AI use in advertising. They play a crucial role in ensuring compliance with data privacy regulations and operational resilience standards, and they help companies navigate the complexities of third-party dependencies in AI models. They offer guidance on protecting data privacy, understanding how AI models function, addressing issues related to intellectual property, minimising risks tied to external dependencies, and strengthening operational resilience against potential cyber threats. In simpler terms, intermediaries help businesses stay on the right side of the law and operate securely when using AI in their advertising practices.

Kinds of Intermediaries and Third Parties

Ad Agencies and Marketing Firms: Ad agencies and marketing firms act as intermediaries, assisting organisations in navigating the complexities of AI-driven advertising. They offer expertise, resources, and specialised tools to optimise campaigns and enhance ROI.

Data Analytics Providers: Third-party data analytics providers play a pivotal role in interpreting vast amounts of consumer data. They offer insights that inform advertising strategies, helping organisations refine their targeting and messaging approaches.

Ethical Considerations in Third-Party Involvement & Mitigation Measures

While intermediaries and third parties offer valuable services, ethical considerations arise. Issues such as data privacy, transparency, and potential conflicts of interest require careful examination to ensure responsible and ethical advertising practices. The following measures can help.

Comprehensive Budget Planning: Organisations should plan their budgets thoroughly before diving into AI-driven advertising. This means accounting for the initial investment in AI technologies and ongoing maintenance costs, and being prepared for potential swings in advertising performance. A proactive approach to budget planning helps ensure financial stability and success in AI-powered campaigns.

Continuous Monitoring and Adaptation: Keeping a close eye on how advertising is performing and adapting strategies when needed is crucial. Regularly monitoring campaigns and adjusting strategies in response to algorithm changes is a proactive way for organisations to optimise their advertising efforts. This adaptability minimises the impact of uncertainties and keeps campaigns on track.

Collaboration with Reputable Intermediaries: Choosing trustworthy partners, such as ad agencies, marketing firms, and data analytics providers, is a must. Collaborating with reputable intermediaries ensures that organisations receive expert guidance and ethical practices, increasing the likelihood of achieving advertising goals and maintaining a positive reputation in the industry.
Enhancing Data Intermediation for Trusted Digital Agency

Data intermediaries play a crucial role in making data sharing smooth and trustworthy between individuals and technology platforms. They act like digital agents, allowing users to make decisions autonomously. To build trust, intermediaries establish reputation mechanisms, obtain third-party verification, and create assurance structures that minimise risks for both intermediaries and rights holders. This approach boosts confidence in interactions between humans and technology in the expanding data ecosystem, ensuring that information can be shared reliably and securely.

Conclusion

To conclude, intermediaries and third-party players are crucial to unlocking the full potential of AI technology for advertising on social media platforms. Their expertise spans audience segmentation, predictive modelling, risk management across extended enterprises, adherence to regulatory standards, operational resilience, and building trust through data intermediation. Through these vital contributions, they play a substantial role in the success of AI-driven advertising campaigns.
- The New York Times vs OpenAI, Explained
The author of this insight is a Research Intern at the Indian Society of Artificial Intelligence and Law.

The New York Times Company has filed a lawsuit against Microsoft Corporation, OpenAI Inc., and various other entities associated with OpenAI, alleging copyright infringement. The lawsuit, filed in the United States District Court for the Southern District of New York, claims that OpenAI's Generative Pre-trained Transformer models (GPT), including GPT-3 and GPT-4, were trained using vast amounts of copyrighted content from The New York Times without authorisation. This explainer addresses certain facts and aspects of the lawsuit filed by The New York Times against OpenAI and Microsoft.

Facts about The New York Times Company v. Microsoft Corporation, OpenAI Inc. et al.

Plaintiff: The New York Times Company
Defendants: Microsoft Corporation, OpenAI Inc., OpenAI LP, OpenAI GP, LLC, OpenAI LLC, OpenAI OpCo LLC, OpenAI Global LLC, OAI Corporation, OpenAI Holdings, LLC
Jurisdiction: United States District Court, Southern District of New York

The United States District Court, Southern District of New York has subject matter jurisdiction as provided under 28 U.S.C. § 1338(a). It has territorial jurisdiction because the defendants, Microsoft Corporation and OpenAI Inc., either themselves or through their subsidiaries and agents, may be found in the district, as provided under 28 U.S.C. § 1400(a). It is also the proper venue, as 28 U.S.C. § 1391(b)(2) entitles the plaintiff (The New York Times Company, in this case) to file suit where a substantial portion of the property at issue (here, the copyrighted material of The New York Times Company) is situated.

Allegations made by The New York Times Company against the defendants, summarised

The New York Times Company alleges that Microsoft Corporation, OpenAI Inc. et al. used and copied its content without authorisation in the following manner.

#1 - Defendants reproduced the plaintiff's work without authorisation to train generative AI

17 U.S.C. § 106(1) entitles the owner of a copyright to reproduce the copyrighted work in copies or phonorecords. The plaintiff alleges that the defendants violated its right recognised under 17 U.S.C. § 106(1), as the defendants' GPT models are based on large language models (hereinafter, LLMs). The plaintiff alleges that the pre-training stage of an LLM requires "collecting and storing text content to create training datasets and processing the content through the GPT models"; the defendants therefore used Common Crawl, a copy of the internet that contains 16 million records of content from The New York Times Company. The plaintiff alleges that the defendants copied this content without a licence and without providing compensation.

#2 - The GPT models reproduced derivatives of the copyrighted content of The New York Times Company

The plaintiff alleges that the defendants' GPT models have memorised the copyrighted content of The New York Times Company and thereafter reproduce the memorised content verbatim. The plaintiff attached outputs from GPT-4 highlighting the reproduction of the following articles: "As Thousands of Taxi Drivers Were Trapped in Loans, Top Officials Counted the Money" by Brian M. Rosenthal, and "How the U.S. Lost Out on iPhone Work" by Charles Duhigg and Keith Bradsher.
#3 - The defendants' GPT models displayed copyrighted content of The New York Times Company that was behind a paywall

The plaintiff alleges that the defendants' GPT models displayed the copyrighted content in the following ways: (1) by showing copies of content from The New York Times Company that have been memorised by the GPT models, and (2) by showing search results of content similar to the copyrighted material. The plaint highlights a user's prompt requiring ChatGPT to type out, verbatim, the content of the article "Snow Fall: The Avalanche at Tunnel Creek". The plaint also highlights ChatGPT reproducing Pete Wells' review of Guy Fieri's American Kitchen & Bar when prompted by a user.

#4 - The defendants disseminated current news by retrieving copyrighted material from The New York Times Company

The plaintiff alleges that the defendants' GPT models use "grounding" techniques, which involve receiving a prompt from the user, using the internet to obtain copyrighted content from The New York Times Company, and then having the LLM stitch together the additional words required to respond to the prompt. As evidence, the plaint highlights the reproduction of Catherine Porter's article "To Experience Paris Up Close and Personal, Plunge Into a Public Pool". After reproducing the content, the defendants' GPT model does not provide a link to the website of The New York Times Company. The plaint further highlights ChatGPT reproducing Hurubie Meko's article "The Precarious, Terrifying Hours After a Woman Was Shoved Into a Train".

Based on the allegations pertaining to unauthorised reproduction of copyrighted content, reproduction of derivatives of copyrighted content, reproduction of copyrighted content that was behind a paywall, and dissemination of current news by retrieving copyrighted material from The New York Times Company, the plaintiff alleges that the defendants have inflicted the following injuries upon it.

Count 1: Copyright Infringement against all defendants

17 U.S.C. § 501(a) holds that anyone who violates the exclusive rights of the copyright owner as provided by sections 106 through 122 "... is an infringer of the copyright". The New York Times Company alleges that all defendants, through their GPT models, distributed copyrighted material belonging to The New York Times Company and therefore violated its right to reproduce the copyrighted work as recognised by 17 U.S.C. § 106(1). It also alleges that all the defendants violated 17 U.S.C. § 106(1) by storing, processing, and reproducing its copyrighted content to train their LLMs. It further alleges that the GPT models have memorised the copyrighted content and therefore reproduce, in response to a user's prompt, content over which The New York Times Company has a copyright, an act which violates 17 U.S.C. § 106(1).

Count 2: Vicarious Copyright Infringement against Microsoft Corporation, OpenAI Inc., OpenAI GP, LLC, OpenAI LP, OAI Corporation, OpenAI Holdings LLC, OpenAI Global LLC

The New York Times Company alleges that the defendant Microsoft Corporation directed, controlled, and profited from the infringement of The New York Times Company's rights.
The New York Times Company further alleges that OpenAI Inc., OpenAI GP, LLC, OpenAI LP, OAI Corporation, OpenAI Holdings LLC, and OpenAI Global LLC directed, controlled, and profited from the copyright infringement of the GPT models.

Count 3: The New York Times Company alleges that Microsoft Corporation assisted the other defendants in infringing the copyright of The New York Times Company by:
helping the other defendants build a dataset collecting copyrighted material of The New York Times Company;
processing and reproducing the content over which The New York Times Company holds a copyright; and
providing the computational resources necessary to operate the GPT models.

Count 4: All other defendants are allegedly liable, as the actions taken by each of them contributed to the infringement of the copyright of The New York Times Company. The defendants have allegedly:
developed the LLM which has memorised and reproduces the content over which The New York Times Company has a copyright; and
built a training model for the development of the LLM.
The New York Times Company also alleges that the defendants were fully aware that the GPT models can memorise, reproduce, and distribute copyrighted content.

Count 5: Removal/Alteration of Copyright Management Information against All Defendants

The plaintiff alleges that the defendants violated 17 U.S.C. § 1202(b)(1) by removing or altering copyright management information: copyright notices, titles, identifying information, and terms of use were removed, and the copyrighted material was then used to train the LLMs. The plaintiff further alleges that these acts of removing copyright notices, titles, identifying information, and terms of use were done intentionally and knowingly to facilitate infringement of the copyrighted material.

Count 6: Unfair Competition by Misappropriation of the Copyrighted Material under Common Law against all defendants

The plaintiff alleges that the defendants copied content over which the plaintiff holds a copyright and trained their LLMs on it without the plaintiff's consent; that the defendants removed tags which would indicate the plaintiff's copyright over the content; and that these acts of the defendants have caused monetary loss to The New York Times Company.

Relief Sought by The New York Times Company

In light of the allegations made against Microsoft Corporation, OpenAI Inc. et al., the plaintiff seeks the following:
Compensation in the form of statutory damages, compensatory damages, disgorgement, and other relief permitted in equity.
An injunction enjoining the defendants from infringing the copyrighted content of The New York Times Company.
A court order demanding the destruction of GPT models that were built on content over which The New York Times Company holds a copyright.
Attorney's fees.

Additional Allegations and Context

Fair Use and Training AI Models: OpenAI has argued that the utilisation of copyrighted material for AI training can be viewed as transformative use, potentially qualifying for protection under the fair use doctrine. This argument is central to the ongoing debate about the extent to which AI can utilise existing copyrighted works to create new, generative content.

OpenAI's Response to the Lawsuit: OpenAI has publicly responded to the lawsuit, asserting that the case lacks merit and suggesting that The New York Times may have manipulated prompts to generate the replicated content.
OpenAI has also mentioned its efforts to reduce content replication from its models and highlighted The New York Times' refusal to share examples of this reproduction before filing the lawsuit.

Impact on AI Research and Development

The lawsuit raises significant questions about the future of AI research and development, particularly regarding the balance between copyright protection and the necessity for AI models to access a wide range of data to learn and tackle new challenges. OpenAI has stressed the importance of accessing "the enormous aggregate of human knowledge" for effective AI functioning. The case is being closely monitored, as it could establish precedents for how AI companies utilise copyrighted content.

Potential Implications of the Lawsuit

Precedent-Setting Case: This lawsuit is one of the first instances of a major media organisation taking legal action against AI companies for copyright infringement. The outcome of this case could establish a legal precedent for how copyrighted content is employed to train AI models.

Innovation vs. Copyright Protection: The case underscores the tension between fostering innovation in AI and safeguarding the rights of copyright holders. The court's decision could have far-reaching implications for both AI advancement and the protection of intellectual property.

Conclusion and Next Steps

The case is currently pending in the United States District Court for the Southern District of New York. The court's rulings on the various counts of copyright infringement, vicarious and contributory copyright infringement, and unfair competition will be pivotal in determining the lawsuit's outcome. The lawsuit might prompt other copyright holders to evaluate how their content is utilised by AI companies, and could result in additional legal actions or calls for legislative amendments to address the use of copyrighted material in AI training datasets. Both parties may continue to explore potential solutions, which could include licensing agreements, the development of AI models that do not rely on copyrighted content, or the establishment of industry standards for the ethical utilisation of data in AI.
- Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024
This is a feedback report developed to offer inputs on a notable paper published by the Economic Advisory Council to the Prime Minister (EAC-PM) of India, entitled "A Complex Adaptive System Framework to Regulate Artificial Intelligence", authored by Sanjeev Sanyal, Pranav Sharma, and Chirag Dudani. You can access the complete feedback report here.

This report provides a detailed examination of the EAC-PM paper "A Complex Adaptive System Framework to Regulate Artificial Intelligence". It delves into the core principles proposed by the authors, including instituting guardrails and partitions, ensuring human control, promoting transparency and explainability, establishing distinct accountability, and creating a specialised, agile regulatory body. Through a series of infographics and concise explanations, the report breaks down the intricate concepts of complex adaptivity and its application to AI governance. It offers a fresh perspective on viewing AI systems as complex adaptive systems, highlighting the challenges of traditional regulatory approaches and the need for adaptive, responsive frameworks.

Key Highlights:
In-depth analysis of the EAC-PM paper's recommendations for AI regulation.
Practical feedback and policy suggestions for each proposed regulatory principle.
Insights into the unique characteristics of AI systems as complex adaptive systems.
Exploration of financial markets as a real-world example of complex adaptive systems.
Recommendations for a balanced approach fostering innovation and responsible AI development.

Whether you are a policymaker, researcher, industry professional, or simply interested in the future of AI governance, this report provides a valuable resource for understanding the complexities involved and the potential solutions offered by a complex adaptive systems approach. Download the "Artificial Intelligence Governance using Complex Adaptivity: Feedback Report" today and gain a comprehensive understanding of this critical topic. Engage with the thought-provoking insights and contribute to the ongoing dialogue on responsible AI development. Stay informed, stay ahead in the era of AI governance.

About the EAC-PM Paper

The paper proposes a novel framework to regulate Artificial Intelligence (AI) by viewing it through the lens of a Complex Adaptive System (CAS). Traditional regulatory approaches based on ex-ante impact analysis are inadequate for governing the complex, non-linear, and unpredictable nature of AI systems. The paper conducts a comparative analysis of existing AI regulatory approaches across the United States, United Kingdom, European Union, China, and the United Nations, highlighting the gaps and limitations in these frameworks when dealing with AI's CAS characteristics. To effectively regulate AI, the paper recommends a CAS-inspired framework based on five guiding principles:

Instituting Guardrails and Partitions: Implement clear boundary conditions to restrict undesirable AI behaviours. Create "partitions" or barriers between distinct AI systems to prevent cascading systemic failures, akin to firebreaks in forests.

Ensuring Human Control via Overrides and Authorizations: Mandate manual override mechanisms for human intervention when AI systems behave erratically. Implement multi-factor authentication protocols requiring consensus from multiple credentialed humans before executing high-risk AI actions (a toy illustration of this principle appears at the end of this section).

Transparency and Explainability: Promote open licensing of core AI algorithms for external audits.
Mandate standardized "AI factsheets" detailing system development, training data, and known limitations. Conduct periodic mandatory audits for transparency and explainability.

Distinct Accountability: Establish predefined liability protocols and standardized incident reporting to ensure accountability for AI-related malfunctions or unintended outcomes. Implement traceability mechanisms throughout the AI technology stack.

Specialized, Agile Regulatory Body: Create a dedicated regulatory authority with a broad mandate, expertise, and agility to respond swiftly to emerging AI challenges. Maintain a national registry of AI algorithms for compliance and a repository of unforeseen events.

The paper draws insights from the regulation of financial markets, which exhibit CAS characteristics with emergent behaviours arising from diverse interacting agents. It highlights regulatory mechanisms like dedicated oversight bodies, transparency requirements, control chokepoints, and personal accountability measures that can inform AI governance.
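To make the human-control principle concrete, here is a minimal sketch, not drawn from the paper itself, of an override-and-authorization gate: a high-risk action proposed by an AI system executes only after a quorum of distinct, credentialed human approvers consents. All names and thresholds are hypothetical.

```python
# A toy illustration of the override-and-authorization principle:
# a high-risk AI action runs only if at least `quorum` distinct,
# credentialed humans approve it. Names and thresholds are invented.

CREDENTIALED_APPROVERS = {"safety_officer", "domain_expert", "legal_counsel"}

def authorize_action(action: str, approvals: set[str], quorum: int = 2) -> bool:
    """Return True only when enough distinct credentialed humans consent."""
    valid = approvals & CREDENTIALED_APPROVERS   # ignore uncredentialed names
    if len(valid) >= quorum:
        print(f"EXECUTE: '{action}' approved by {sorted(valid)}")
        return True
    print(f"BLOCKED: '{action}' has {len(valid)}/{quorum} required approvals")
    return False

# The AI system proposes an action; humans remain the final gate.
authorize_action("update credit-scoring model", {"safety_officer"})
authorize_action("update credit-scoring model",
                 {"safety_officer", "legal_counsel", "unknown_user"})
```

A real deployment would add authentication of each approver and an audit log, but even this sketch captures the paper's core idea: no single actor, human or machine, can trigger a high-risk action alone.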
- [Draft] Artificial Intelligence Act for India, Version 2
The Artificial Intelligence (Development & Regulation) Bill, 2023 (AIACT.In) Version 2, released on March 14, 2024, builds upon the framework established in Version 1 while introducing several new provisions and amendments. This draft legislation, proposed by our Founder, Mr Abhivardhan, aims to promote responsible AI development and deployment in India through a comprehensive regulatory framework. Please note that draft AIACT.IN (Version 2) is an Open Proposal developed by Mr Abhivardhan and Indic Pacific Legal Research, and is not a draft legislation proposed by any Ministry of the Government of India. You can access and download Version 2 of AIACT.IN by clicking below.

Key Features of the Artificial Intelligence Act for India [AIACT.In] Version 2

Categorization of AI Systems: Version 2 introduces a detailed categorization of AI systems based on conceptual, technical, commercial, and risk-centric methods of classification. This stratification helps in identifying and regulating AI technologies according to their inherent purpose, technical features, and potential risks.

Prohibition of Unintended Risk AI Systems: The development, deployment, and use of unintended-risk AI systems, as classified under Section 3, is prohibited in Version 2. This provision aims to mitigate the potential harm caused by AI systems that may emerge from complex interactions and pose unforeseen risks.

Sector-Specific Standards for High-Risk AI: Version 2 mandates the development of sector-specific standards for high-risk AI systems in strategic sectors. These standards will address issues such as safety, security, reliability, transparency, accountability, and ethical considerations.

Certification and Ethics Code: The IDRC (IndiaAI Development & Regulation Council) is tasked with establishing a voluntary certification scheme for AI systems based on their industry use cases and risk levels. Additionally, an Ethics Code for narrow and medium-risk AI systems is introduced to promote responsible AI development and utilization.

Knowledge Management and Decision-Making: Version 2 emphasizes the importance of knowledge management and decision-making processes for high-risk AI systems. The IDRC is required to develop comprehensive model standards in these areas, and entities engaged in the development or deployment of high-risk AI systems must comply with these standards.

Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to address the spatial aspects of AI systems. The IDRC is tasked with establishing consultative mechanisms for the identification, protection, and enforcement of intellectual property rights related to AI.

Comparison with AIACT.In Version 1

Expanded Scope: Version 2 expands upon the regulatory framework established in Version 1, introducing new provisions and amendments to address the evolving landscape of AI development and deployment.

Detailed Categorization: While Version 1 provided a basic categorization of AI systems, Version 2 introduces a more comprehensive and nuanced approach to classification based on conceptual, technical, commercial, and risk-centric methods.

Sector-Specific Standards: Version 2 places a greater emphasis on the development of sector-specific standards for high-risk AI systems in strategic sectors, compared to the more general approach taken in Version 1.
Knowledge Management and Decision-Making: The importance of knowledge management and decision-making processes for high-risk AI systems is highlighted in Version 2, with the IDRC tasked with developing comprehensive model standards in these areas. This aspect was not as prominently featured in Version 1.

Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to AI systems, whereas Version 1 did not delve into the specifics of intellectual property protections for AI.

Detailed Description of the Features of AIACT.IN Version 2

Significance of Key Section 2 Definitions

Section 2 of AIACT.IN provides essential definitions that signify the legislative intent of the Act. Some of the key definitions are:

Artificial Intelligence: The Act defines AI as an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. This broad definition encompasses various subcategories of technical, commercial, and sectoral nature, as set forth in Section 3.

AI-Generated Content: This refers to content, physical or digital, that has been created or significantly modified by an artificial intelligence technology. This includes text, images, audio, and video created through various techniques, subject to the test case or use case of the AI application.

Algorithmic Bias: The Act defines algorithmic bias as inherent technical limitations within an AI product, service, or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results. This includes technical limitations that emerge from the design, development, and operational stages of AI.

Combinations of Intellectual Property Protections: This refers to the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of AI systems.

Content Provenance: The Act defines content provenance as the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history. This includes the source data, models, and algorithms used to generate the content, as well as the individuals or entities involved in its creation, modification, and distribution. (A toy sketch of such a provenance record appears after this list of definitions.)

Data: The Act defines data as a representation of information, facts, concepts, opinions, or instructions in a manner suitable for communication, interpretation, or processing by human beings or by automated or augmented means.

Data Fiduciary: A data fiduciary is any person who, alone or in conjunction with other persons, determines the purpose and means of processing personal data.

Data Portability: Data portability refers to the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary.

Data Principal: The data principal is the individual to whom the personal data relates. In the case of a child or a person with a disability, this includes the parents or lawful guardian acting on their behalf.

Data Protection Officer: A data protection officer is an individual appointed by a Significant Data Fiduciary under the Digital Personal Data Protection Act, 2023.
Digital Office: A digital office is an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode.

Digital Personal Data: Digital personal data refers to personal data in digital form.

Digital Public Infrastructure (DPI): DPI refers to the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including digital identity systems, digital payment systems, data exchange platforms, digital registries and databases, and open application programming interfaces (APIs) and standards.

Knowledge Asset: A knowledge asset includes intellectual property rights, documented knowledge, tacit knowledge and expertise, organizational processes, customer-related knowledge, knowledge derived from data analysis, and collaborative knowledge.

Knowledge Management: Knowledge management refers to the systematic processes and methods employed by organizations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of AI systems.

IDRC: IDRC stands for the IndiaAI Development & Regulation Council, a statutory and regulatory body established to oversee the development and regulation of AI systems across government bodies, ministries, and departments.

Inherent Purpose: The inherent purpose refers to the underlying technical objective for which an AI technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the AI technology is intended to perform or achieve.

Insurance Policy: Insurance policy refers to measures and requirements concerning insurance for research and development, production, and implementation of AI technologies.

Interoperability Considerations: Interoperability considerations are the technical, legal, and operational factors that enable AI systems to work together seamlessly, exchange information, and operate across different platforms and environments.

Open Source Software: Open source software is computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.

National Registry of Artificial Intelligence Use Cases: The National Registry of Artificial Intelligence Use Cases is a national-level digitized registry of use cases of AI technologies based on their technical & commercial features and inherent purpose, maintained by the Central Government for the purposes of standardization and certification of use cases of AI technologies.

These definitions provide a clear understanding of the scope and intent of AIACT.IN, ensuring that the Act effectively addresses the complexities and challenges associated with the development and regulation of AI systems in India.
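As an illustration of the "Content Provenance" definition above, here is a minimal sketch of a hash-based provenance record for AI-generated content. The field names and workflow are purely illustrative assumptions, not prescribed by the draft Act.

```python
# An illustrative content-provenance record: it binds a content hash to
# its origin (model, creators) and history. Field names are hypothetical,
# not taken from the draft Act.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model: str, creators: list[str],
                      history: list[str]) -> dict:
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # identifies exact content
        "generated_by": model,
        "creators": creators,
        "history": history,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    b"An AI-written news summary...",
    model="example-llm-v1",                     # hypothetical model name
    creators=["Example Media Desk"],
    history=["generated", "human-edited", "published"],
)
print(json.dumps(record, indent=2))
# Any later edit to the content changes the hash, so tampering with
# labelled AI-generated content becomes detectable.
```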
A set of FAQs (frequently asked questions) is also addressed in detail. Here is how you can participate in the AIACT.IN discourse:

Read and understand the document: The first step to participating in the discourse is to read and understand the AIACT.IN Version 2 document. This will give you a clear idea of the proposed regulations and standards for AI development and regulation in India. To submit your suggestions to us, write to us at vligta@indicpacific.com.

Identify key areas of interest: Once you have read the document, identify the key areas that are of interest to you or your organization. This could include sections on intellectual property protections, shared sector-neutral standards, content provenance, employment and insurance, or alternative dispute resolution.

Provide constructive feedback: Share your feedback on the proposed regulations and standards, highlighting any areas of concern or suggestions for improvement. Be sure to provide constructive feedback that is backed by evidence and data, where possible.

Engage in discussions: Participate in discussions with other stakeholders in the AI ecosystem, including industry experts, policymakers, and researchers. This will help you gain a broader perspective on the proposed regulations and standards, and identify areas of consensus and disagreement.

Stay informed: Keep up to date with the latest developments in the AI ecosystem, including new regulations, standards, and best practices. This will help you stay informed and engaged in the discourse, and ensure that your feedback is relevant and timely.

Collaborate with others: Consider collaborating with other stakeholders in the AI ecosystem to develop joint submissions or position papers on the proposed regulations and standards. This will help amplify your voice and increase your impact in the discourse.

Participate in consultations: Look out for opportunities to participate in consultations on the proposed regulations and standards. This will give you the opportunity to share your feedback directly with policymakers and regulators, and help shape the final regulations and standards. You can even participate in the committee sessions & meetings held by the Indian Society of Artificial Intelligence and Law. To participate, you may contact the Secretariat at executive@isail.co.in.
- The French, Italian and German Compromise on Foundation Models of GenAI
The author is pursuing law studies at National Law University, Odisha, and is a former Research Intern at the Indian Society of Artificial Intelligence and Law.

Almost every economy around the world is trying to curate a model that would put guardrails on AI technology, which is developing rapidly and being used by ever more people. Like other economies, the European Union (EU) is trying to lead the world both in developing AI technology and in finding an efficient and effective way to regulate it. In mid-2023, the European Parliament adopted its position on one of the first major draft laws to regulate AI, the EU AI Act, which would impose restrictions on the technology's riskiest uses. Unlike the United States, which has taken up the challenge of creating such a framework quite recently, the EU has been working on one for more than two years, and took it up with greater urgency after the release of ChatGPT in 2022.

On 18 November 2023, Germany, France, and Italy reached an important pact on AI regulation and released a joint non-paper that counters some basic approaches taken by the EU AI Act, suggesting alternate approaches they claim would be more feasible and efficient. The joint non-paper underlines that the AI Act must aim to regulate the application of AI and not the technology itself, because the innate risks lie in the former and not in the latter. The non-paper highlights some key areas in which the three countries differ from the Parliament's draft of the AI Act:

Fostering innovation while balancing responsible AI adoption within the EU.
The joint paper pushes for mandatory self-regulation for foundation models and advocates for stringent control over AI's foundational models, aiming to enhance accountability and transparency in the AI development process.
While the EU AI Act targets only major AI producers, the joint paper advocates universal adherence, to avoid compromising trust in the security of smaller EU companies.
Immediate sanctions for defaulters of codes of conduct are excluded, but a future sanction system is proposed.
The focus is on regulating the application of AI and not the AI technology itself; therefore, the development process of AI models should not be subject to regulation.

What are Foundation Models of AI?

Foundation models, also called general-purpose AI, are AI systems that can be used to conduct a wide range of tasks across various fields, such as understanding language, generating text and images, and conversing in natural language, often without major modification or fine-tuning, and they can be applied through several innovative methods. They are deep-learning neural networks that have changed the way machine learning is approached: data scientists use foundation models as starting points for developing AI instead of starting from scratch, which makes the process faster and more cost-effective. A minimal sketch of this "starting point" workflow appears below.
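The snippet below is a minimal sketch of that workflow: it freezes a pretrained torchvision ResNet-18 (standing in for a foundation model; the same pattern applies to large language models) and trains only a small new head for a specific task. It assumes PyTorch and torchvision are installed, and the dummy batch is invented for illustration; pretrained weights download on first run.

```python
# A minimal sketch of using a pretrained model as a "starting point":
# freeze the general-purpose layers, train only a small task-specific head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False          # keep the pretrained knowledge fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # new task: 3 classes

# Only the new head's parameters are trained, which is far faster and
# cheaper than training the whole network from scratch.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)          # a dummy batch of images
y = torch.randint(0, 3, (8,))            # dummy labels
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```

This cost asymmetry, expensive pre-training done once versus cheap downstream adaptation done by many, is precisely why the regulatory question of whether to regulate the model or its applications matters.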
The European Union AI Act

The EU AI Act is a comprehensive framework of law that governs the sale and use of AI in the EU. It sets consistent standards for AI systems across the EU and seeks to address the risks of AI through obligations and standards intended to safeguard the safety and fundamental rights of citizens in the EU and globally. It forms part of a wider legal and policy framework regulating different aspects of the digital economy, which includes the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act. It moves away from a "one law fixes all" approach to this emerging AI regime.

Risk-based Approach of the AI Act versus the Joint Pact of FR, IT and DE

One of the key aspects of the AI Act is to regulate foundation models that support a wide range of AI applications. The most prominent AI companies, like OpenAI, Google DeepMind, and Meta, develop such foundation models. The AI Act aims to regulate these models by requiring safety tests and apprising governments of their results, in order to ensure accountability and transparency and to mitigate risks. The recent U.K. AI Safety Summit likewise focused on the risks associated with the most advanced foundation models. In essence, it is the technology that is regulated in order to manage risks. The joint non-paper aims to change this narrative, from regulating the technology to regulating the application of the technology. Developers of foundation models would have to publish the kinds of testing done to ensure their model is safe. No sanctions would initially apply to companies that did not publish this information as per the code of conduct, but the non-paper suggests a sanction system could be set up in the future.

The three countries oppose the "two-tier" approach to foundation model regulation originally proposed by the EU, which would impose stricter regulations on the most capable models expected to have the largest impact. They argue it would hobble innovation and hamper the growth of AI technologies. Germany's Aleph Alpha and France's Mistral AI, which are the most prominent AI companies of Europe, also oppose this approach to risk management.

The discourse on how regulation should happen, and on adopting an optimal strategy, requires a close look at the pros and cons of each approach, i.e. regulation of the technology (the EU's two-tier approach) and regulation of the applications of the technology (the Franco-German-Italian view).

Regulating the technology itself

Pros: Directly regulating the technology establishes uniform and consistent standards which all applications can adopt, giving clarity in compliance. Technological regulation can also prevent the creation and use of harmful and malicious AI applications.

Cons: Strict regulations on the technology can hamper and stifle innovation by limiting the exploration of potentially beneficial applications. Technological advancements also happen much faster than prescriptive regulations; the very purpose of regulation is defeated if it fails to cover advanced versions of the technology.

Regulating the application of technology

Pros: When regulation is application-specific, it allows for a flexible approach where rules of risk mitigation can be tailored to different applications and their uses, instead of an impractical one-size-fits-all approach. Application-based regulation can focus on the responsible use of the technology and on accountability measures in case potential risks manifest, while still permitting the smooth flow of innovation.

Cons: If regulation is focused solely on applications, there remains a risk of ignoring certain aspects of the AI technology itself, which may leave room for misuse or unintended effects.
Desirably, a balanced approach that combines elements of both technology-focused and application-focused regulation can be most effective. This can be done by curating rules and standards for the technology itself while also framing regulations specific to the potential risks associated with its applications. Still, a case can be made that regulating the application of technology is the better path: in a world where the everyday use of technology has become the norm for many, stifling innovation, which is a direct consequence of regulating the technology itself, can be a bad idea. Technological advancement is in demand because people want to get their work done easily and quickly, especially as society evolves to increase employment and the involvement of people in skilled work. The focus must be on ensuring that the large mass of people benefits from the boons of AI technology while minimising the incentives for bad actors through regulation of the use of AI tools. This can be done by increasing accountability for its use, and by giving proper guidance on how to use it efficiently and ethically and how to prevent the potential harms that can arise from uninformed or irresponsible use.

The Model Cards Requirement under Mandatory Self-Regulation

The non-paper proposes regulating specific applications rather than foundation models, aligning more closely with the risk-based approach, and makes the definition of model cards a mandatory element of self-regulation. Foundation model developers would have to define model cards, including technical documentation that presents information about trained models in an accessible way, following best practices within the developer community. Defining model cards promotes the principle of 'transparency of AI'. Model cards must include the limits on intended uses, potential limitations, biases, and security assessments; a toy model-card structure is sketched below.

Model cards, however, only inform users' decisions about whether or not to adopt a system. When it comes to assessing transparency, accountability, and responsible-AI criteria, many users may find model cards highly complex to comprehend due to the technical nature of AI applications, and they will not be able to adequately interpret the information on them. Model cards are more accessible to developers and researchers with a high level of education in AI, so an imbalance of power arises between developers and users in the understanding of AI. Standardisation of the information also remains a challenge, and providing a high volume of information in model cards may confuse users; maintaining a balance between transparency and simplicity is crucial. Users may not be aware that model cards exist, or may not take the time to review them, especially where AI systems are very complex. The model-card requirement may also lack feasibility because there is no scope for external monitoring of its elements. It is inflexible to the pace at which technology develops, so its information can become outdated, stifling innovation by binding new technologies to outdated compliances and information.
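A toy model-card structure, reflecting the elements discussed above (intended uses, limits, biases, and security assessments), might look like the following. The field names are illustrative assumptions rather than text from the non-paper.

```python
# A toy model card as a structured, machine-readable record.
# Field names are illustrative, not taken from the non-paper.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]          # limits on intended uses
    known_limitations: list[str]
    bias_notes: list[str]
    security_assessments: list[str]

card = ModelCard(
    model_name="example-foundation-model",
    version="1.0",
    intended_uses=["text summarisation", "drafting assistance"],
    out_of_scope_uses=["medical or legal advice"],
    known_limitations=["may produce factually wrong output"],
    bias_notes=["training data skews toward English-language sources"],
    security_assessments=["red-teamed for prompt injection, January 2024"],
)
print(json.dumps(asdict(card), indent=2))  # a publishable card
```

Even this small example illustrates the critique above: the record is easy for developers to produce and parse, but a lay user would still need guidance to interpret entries such as "red-teamed for prompt injection".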
- OpenAI's Qualia-Type 'AGI' and Cybersecurity Dilemmas
The author of this insight is pursuing law at National Law University, Odisha, and is a former research intern at the Indian Society of Artificial Intelligence and Law.

OpenAI CEO Sam Altman was removed by its Board of Directors for a short spell in November 2023. Along with him, another member of the Board, Greg Brockman, was also removed. Both OpenAI's spokespersons and the two men declined to provide any reasons when they were reached for comment. However, it came to light that several researchers and staff of OpenAI had written a letter to the Board before the firing, warning of a powerful artificial intelligence discovery that they said could threaten humanity.

OpenAI was initially created as a non-profit organisation whose mission was "to ensure that artificial general intelligence benefits all of humanity."[1] Later, in 2019, it opened a for-profit branch. This was a cause of concern because it was anticipated that the for-profit wing would dilute the original mission of OpenAI to develop AI for the benefit of humanity and would rather act for profit, which can often lead to non-adherence to the ethical growth of the technology. Sam Altman and Greg Brockman were in favour of strengthening this wing, while the other four Board members were against giving too much power to it and instead wanted to stick to developing AI for human benefit rather than to achieve business goals. OpenAI cited that Sam Altman was not consistent in his communication with the rest of the Board regarding the development of a long-anticipated breakthrough: the Q* AI model, reported to be an Artificial General Intelligence (AGI) model that could surpass all existing AI developments and achieve tasks and goals well beyond what we can currently imagine AI doing. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. It was presumed that CEO Altman had knowledge of advanced developments regarding AGI within OpenAI, and it was reported in the media that he had concealed facts from the Board of Directors, leading them to fire him. After several debates within the AI community, including protests by OpenAI employees against his firing, Altman's position was restored, and structural changes within OpenAI were suggested, including the involvement of people like Satya Nadella.

The concern, though, is that with its advanced abilities, a Q* AGI could be problematic for being opaque. Any presumed form of AGI would naturally work on reinforcement learning, at least based on the scientific knowledge we currently have. In machine learning (ML), a drawback is the vast amount of data that models require for training: the more complex the model, the more data it requires. Even then, the data may not be reliable; it may have false or missing values or may be collected from untrustworthy sources. Reinforcement learning mitigates this problem of data acquisition by largely removing the need for a curated dataset: it is a branch of ML that trains a model to reach an optimal solution to a problem by taking decisions by itself, learning from the experience it generates. This feature is what would give an AGI its strong capabilities. A minimal sketch of reinforcement learning follows.
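As a toy illustration of that idea, the Q-learning example below learns to walk down a five-cell corridor without any training dataset, purely from the experience it generates by acting in the environment. The environment and hyperparameters are invented for illustration.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a toy
# corridor. The agent starts at cell 0 and is rewarded for reaching cell 4.
# No dataset is supplied; the agent learns entirely from its own experience.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                     # 500 episodes of trial and error
    state = 0
    while state != GOAL:
        a = random.randrange(2) if random.random() < epsilon \
            else max(range(2), key=lambda i: q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action in the next state.
        q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
        state = nxt

print([round(max(v), 2) for v in q])     # learned values rise toward the goal
```

Systems like the rumoured Q* would of course be vastly more sophisticated, but the core loop is the same: act, observe, and improve without a fixed dataset, which is exactly what makes such systems hard to audit in advance.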
Such a system would have vast computing capabilities, which could help it predict many future events and phenomena, including fluctuations in stock and investment markets, advanced weather conditions, election outcomes, and much more. It could use its mathematical algorithms and other complex technical elements to foresee how the human mind will think and traverse, thus gaining outcome knowledge and taking decisions that can alter and influence many things. One of the biggest apprehensions behind the development of the Q* technology was that it could pose a cybersecurity risk to national governments and their classified data. Can QUALIA crack the Advanced Encryption Standard (AES), the widely used symmetric encryption standard that protects classified information and documents? How can major cybersecurity breaches affect us? How do we safeguard against such breaches? These are some questions one must consider.

Some Considerations

We depend heavily on encryption for securing our data, and it is tempting to assume that the encryption safeguards we rely on keep our data secure. That is not entirely true. As discussed before, technology like Q* may have the ability to break AES as well; it may have accomplished a feat once considered impossible: breaking modern encryption. The recent LLaMA leak on 4chan suggests that Q* can solve AES-192 and AES-256 encryption using a ciphertext attack. With AES compromised, the entire digital economy could fall apart: government secrets, healthcare data, banking data, and more could be exposed. The NSA has previously shown interest in breaking encryption through its Project Tundra, which is similar to the alleged capabilities of Q*. This raises questions about the ethical implications of such AI advancements by state and non-state actors.

Recommendations

Standards and Certifications

There needs to be mandatory legislation requiring nations and specific organizations to have minimum cybersecurity standards in place, alongside a self-regulatory set of standards to help organizations develop their cybersecurity measures. States must establish a Computer Incident Response Team, a national Network and Information Systems (NIS) authority, and a national NIS strategy. Companies must adopt state-of-the-art security approaches appropriate to the risks posed to their systems. Another element of standards and certification can be a regulation setting standards for electronic identification and trust services for electronic transactions.

Regulation of Encryption Standards

Sensitive data can be protected by ensuring efficient encryption measures. Data must be classified based on its sensitivity and significance; investing in encrypting all types of data equally is unnecessary. More sensitive data requires a stronger level of encryption with added layers of security, and when and how encryption should be applied is a major consideration here. Multi-factor authentication is recommended to add an extra layer of security, even if an attacker gains access to encrypted data or its keys. It is recommended that the best industry practices are adopted while doing so. End-to-end encryption is a best practice to protect data throughout its entire lifecycle, from creation to storage to transmission. Strong and widely accepted encryption algorithms that are fully updated must be used, with periodic checks for upgradation requirements (a minimal sketch appears below). Conducting regular audits and assessments is necessary, and there must be a supervisory body that ensures these regular checks.
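As a minimal sketch of the "strong, widely accepted algorithms" advice, the snippet below uses AES-256-GCM via the open-source `cryptography` library (`pip install cryptography`), one common best practice for authenticated encryption. The sample data is invented, and real deployments would manage keys in a key management service or hardware security module rather than in application code.

```python
# A minimal sketch of authenticated encryption with AES-256-GCM.
# GCM both encrypts the data and detects any tampering with it.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # store in a KMS/HSM, never in code
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per message; never reuse
secret = b"classified: example payload"
label = b"record-id-42"                     # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, secret, label)
assert aesgcm.decrypt(nonce, ciphertext, label) == secret

# Flipping a single bit of the ciphertext makes decryption fail outright.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
try:
    aesgcm.decrypt(nonce, tampered, label)
except InvalidTag:
    print("tampering detected, decryption refused")
```

Note that authenticated modes like GCM address tampering and disclosure by conventional attackers; if a system could genuinely cryptanalyse AES itself, as speculated above, no mode of operation would save it, which is why the recommendations here also stress layered security and periodic algorithm reviews.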
Regulating Reinforcement Learning

Reinforcement learning must be regulated by establishing guidelines and frameworks that ensure its responsible and ethical use. Transparency in the development and deployment of RL algorithms is crucial: RL developers should create documentation that provides information about the algorithms' goals, training data, and decision-making processes, especially where RL is used in critical applications that can affect society. Liability mechanisms must be in place to hold developers and organizations accountable for the actions of RL algorithms, and frameworks must be developed to comprehensively define rights and liabilities in case losses occur and harm is caused by RL-based AI systems. Where the personal data of individuals is involved, privacy concerns emerge; measures must be implemented to ensure compliance with data protection regulations and safeguard user privacy. Since billions of people currently use AI tools in various aspects of their lives, it is necessary for them to have a basic knowledge of RL technology. Policymakers, developers, and the general public must understand the benefits and potential risks associated with RL so that they can make informed choices about its use and create effective policies. It is also a good idea to collaborate with international organizations and regulatory bodies to establish consistent global standards for RL.

Cybersecurity Insurance

Cyber insurance grows in importance with the development of AI, because there will be a new set of risks that traditional insurance policies may not cover: data breaches, property damage, business interruption, or even physical harm to humans. It is quite unpredictable what kinds of risks AGI models may pose. Malicious actors may also misuse these models, trying to steal, corrupt, or manipulate them for their own purposes, and insurance must cover the resulting losses. AI systems may also fail unintentionally due to faulty assumptions, design flaws, or unexpected situations that produce unsafe or undesirable outcomes. This is another area where cyber insurance can help cover the costs and liabilities associated with potential failures, and provide guidance and support for preventing them.

Legal Regulation

Existing legal instruments may not be enough to cover and address the risks that will accompany security breaches by AGI. Integrating AI security requirements into existing data protection laws is necessary, and every national parliament should constitute a committee of AI and legal experts to draft stringent laws to prevent cybersecurity breaches by AGI models once they come into use.

References
[1] https://openai.com/about
- The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023
This is Infographic Report IPLR-IG-001, in which we have addressed the concept and phenomenon of multipolarity in the context of India's geopolitical and policy realities. You can also purchase the report on the VLiGTA App. Here is a sneak peek of the report, along with its table of contents:

1 | Understanding Multipolarity
What multipolarity is, and how this concept has evolved for India in political, economic, and social aspects.

2 | The Transformative Role of Multipolarity
How multipolarity as a policy phenomenon (and a geopolitical phenomenon) has changed the way we look at global issues and problems.
Global Dynamics and Politics: Diversifying Power & Competence Dynamics; Old Multilateralism vs New Multilateralism; Multi-Alignment as the New Normal; Plurilateralism and 'Minilateralism'; The Rise of Specialised Non-State Actors; The World is Post-Ideological.
Crisis and Realism: The Language and Outlook of 'Polycrisis'; Principled Realism; The Weaponization of Everything.
Technology and Modernity: The Penetrable and Irreversible Role of Digital Technologies; The Big Tech and the Red Tech; New Technological & Economic Modernity.
Economic Strategies: Economic and Technological Regionalism; Enabling Strategic Hedging.

3 | Emerging Use Cases in a Multi-polar World
A set of proposed use cases in law, policy, and economics which could steer the volatility of the multipolar world.
Policy Development and Implementation: The Role of Civilisation Science to Solve Complex Policy Problems.
Economic and Environmental Strategies: A New Industrial Policy and the Circular Economy; The Bundling & Unbundling of Soft Power.
International Law and Diplomacy: The Rise of Multi-aligned Thinking in International Law; The Standardisation of Modern Diplomacy by Middle Powers at a Micro-level.
Regulatory Trends and Sovereignty: The Rise and Rise of Regulatory Sovereignty & Subterfuge; The Rise of Soft Law & Self-Regulation.

You can download the complete infographic report here. Both a compelling phenomenon and a nuanced concept, the multipolar world often eludes easy understanding. In this meticulously crafted infographic report, we delve deep into the multifaceted nature of a multipolar world, shining a light on prevailing trends and tackling complex problems from a structural perspective. Should you find yourself curious, or in need of further discussion or help in any project or initiative as a professional or a business, we warmly invite you to reach out to us at vligta@indicpacific.com. Your inquiries are not just welcome, they're anticipated!