- AI Seoul Summit 2024: Decoding the International Scientific Report on AI Safety
The AI Seoul Summit, held in South Korea in 2024, released a comprehensive international scientific report on AI safety. This report stands out from the myriad of AI policy and technology reports due to its depth and actionable insights. Here, we break down the key points from the report to understand the risks and challenges associated with general-purpose AI systems.

1. The Risk Surface of General-Purpose AI

"The risk surface of a technology consists of all the ways it can cause harm through accidents or malicious use. The more general-purpose a technology is, the more extensive its risk exposure is expected to be. General-purpose AI models can be fine-tuned and applied in numerous application domains and used by a wide variety of users [...], leading to extremely broad risk surfaces and exposure, challenging effective risk management."

General-purpose AI models, due to their versatility, have a broad risk surface. This means they can be applied in various domains, increasing the potential for both accidental and malicious harm. Managing these risks effectively is a significant challenge due to the extensive exposure these models have.

Illustration: Imagine a general-purpose AI model used in both healthcare and financial services. In healthcare, it could misdiagnose patients, leading to severe health consequences. In finance, it could be exploited for fraudulent activities. The broad applicability increases the risk surface, making it difficult to manage all potential harms.

2. Challenges in Risk Assessment

"When the scope of applicability and use of an AI system is narrow (e.g., consider spam filtering as an example), salient types of risk (e.g., the likelihood of false positives) can be measured with relatively high confidence. In contrast, assessing general-purpose AI models’ risks, such as the generation of toxic language, is much more challenging, in part due to a lack of consensus on what should be considered toxic and the interplay between toxicity and contextual factors (including the prompt and the intention of the user)."

Narrow AI systems, like spam filters, have specific and measurable risks. However, general-purpose AI models pose a greater challenge in risk assessment due to the complexity and variability of their applications. Determining what constitutes toxic behavior and understanding the context in which it occurs adds layers of difficulty.

Illustration: Consider an AI model used for content moderation on a social media platform. The model might flag certain words or phrases as toxic. However, the context in which these words are used can vary widely. For example, the word "kill" could be flagged as toxic, but in the context of a video game discussion, it might be perfectly acceptable. This variability makes it difficult to create a standardized risk assessment.
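To make the contrast concrete, here is a minimal Python sketch, not drawn from the report itself, of why a narrow system's salient risk can be measured directly: with labelled examples, a spam filter's false-positive rate reduces to a simple count, whereas "toxicity" has no equally objective ground-truth column. The scores, labels and threshold below are invented for illustration.

```python
# Toy illustration: estimating the false-positive rate of a narrow classifier.
# The scores, labels and threshold below are invented purely for illustration.

spam_scores = [0.91, 0.08, 0.77, 0.02, 0.65, 0.30, 0.88, 0.12]       # model outputs
is_spam     = [True, False, True, False, False, False, True, False]  # ground truth

THRESHOLD = 0.6  # messages scoring above this are filtered as spam

false_positives = sum(
    1 for score, label in zip(spam_scores, is_spam)
    if score > THRESHOLD and not label          # flagged, but actually legitimate
)
legitimate_total = sum(1 for label in is_spam if not label)

fpr = false_positives / legitimate_total
print(f"Estimated false-positive rate: {fpr:.2%}")
# For "toxicity" there is no comparably objective ground-truth column: the correct
# label depends on context, prompt and user intent, so this kind of direct
# measurement is not available for general-purpose systems.
```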
3. Limitations of Current Methodologies

"Current risk assessment methodologies often fail to produce reliable assessments of the risk posed by general-purpose AI systems, [because] Specifying the relevant/high-priority flaws and vulnerabilities is highly influenced by who is at the table and how the discussion is organised, meaning it is easy to miss or mis-define areas of concern. [...] Red teaming, for example, only assesses whether a model can produce some output, not the extent to which it will do so in real-world contexts nor how harmful doing so would be. Instead, they tend to provide qualitative information that informs judgments on what risk the system poses."

Existing methodologies for risk assessment are often inadequate for general-purpose AI systems. These methods can miss critical flaws and vulnerabilities due to biases in the discussion process. Techniques like red teaming provide limited insights, focusing on whether a model can produce certain outputs rather than the real-world implications of those outputs.

Illustration: A red-teaming exercise might show that an AI can generate harmful content, but it doesn't quantify how often this would happen in real-world use or the potential impact. For instance, an AI chatbot might generate offensive jokes during testing, but the frequency and context in which these jokes appear in real-world interactions remain unknown.

4. Nascent Quantitative Risk Assessments

"Quantitative risk assessment methodologies for general-purpose AI are very nascent and it is not yet clear how quantitative safety guarantees could be obtained. [...] If quantitative risk assessments are too uncertain to be relied on, they may still be an important complement to inform high-stakes decisions, clarify the assumptions used to assess risk levels and evaluate the appropriateness of other decision procedures (e.g. those tied to model capabilities). Further, “risk” and “safety” are contentious concepts."

Quantitative risk assessments for general-purpose AI are still in their early stages. While these assessments are currently uncertain, they can still play a crucial role in informing high-stakes decisions and clarifying assumptions. The concepts of "risk" and "safety" remain contentious and require further exploration.

Illustration: A quantitative risk assessment might show a 5% chance of an AI system making a critical error in a high-stakes environment like autonomous driving. However, the uncertainty in these assessments makes it hard to rely on them exclusively for regulatory decisions.
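As a rough illustration of why such headline figures carry wide uncertainty, the following Python sketch, which is not from the report and uses hypothetical numbers, computes a 95% Wilson score interval around an observed failure rate of 5 errors in 100 test scenarios.

```python
import math

def wilson_interval(failures: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial failure rate."""
    if trials == 0:
        return (0.0, 1.0)
    p_hat = failures / trials
    denom = 1 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return (max(0.0, centre - half_width), min(1.0, centre + half_width))

# Hypothetical evaluation: 5 critical errors observed in 100 test scenarios.
low, high = wilson_interval(failures=5, trials=100)
print(f"Point estimate: 5.0%, 95% interval: {low:.1%} to {high:.1%}")
# With only 100 scenarios the plausible range is roughly 2% to 11%, which is
# too wide to treat the headline "5%" as a reliable safety guarantee on its own.
```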
5. Testing and Thresholds

"It is common practice to test models for some dangerous capabilities ahead of release, including via red-teaming and benchmarking, and publishing those results in a ‘model card’ [...]. Further, some developers have internal decision-making panels that deliberate on how to safely and responsibly release new systems. [...] However, more work is needed to assess whether adhering to some specific set of thresholds indeed does keep risk to an acceptable level and to assess the practicality of accurately specifying appropriate thresholds in advance."

Testing for dangerous capabilities before releasing AI models is a standard practice. However, there is a need for more work to determine if these tests and thresholds effectively manage risks. Accurately specifying appropriate thresholds in advance remains a challenge.

Illustration: An AI model might pass pre-release tests for dangerous capabilities, but once deployed, it could still exhibit harmful behaviors not anticipated during testing. For example, an AI chatbot might generate harmful content in response to unforeseen user inputs.

6. Specifying Objectives for AI Systems

"It is challenging to precisely specify an objective for general-purpose AI systems in a way that does not unintentionally incentivise undesirable behaviours. Currently, researchers do not know how to specify abstract human preferences and values in a way that can be used to train general-purpose AI systems. Moreover, given the complex socio-technical relationships embedded in general-purpose AI systems, it is not clear whether such specification is possible."

Specifying objectives for general-purpose AI systems without incentivizing undesirable behaviors is difficult. Researchers are still figuring out how to encode abstract human preferences and values into these systems. The complex socio-technical relationships involved add to the challenge.

Illustration: An AI system designed to maximize user engagement might inadvertently promote sensationalist or harmful content because it interprets engagement as the primary objective, ignoring the quality or safety of the content.

7. Machine Unlearning

"‘Machine unlearning’ can help to remove certain undesirable capabilities from general-purpose AI systems. [...] Unlearning as a way of negating the influence of undesirable training data was originally proposed as a way to protect privacy and copyright [...] Unlearning methods to remove hazardous capabilities [...] include methods based on fine-tuning [...] and editing the inner workings of models [...]. Ideally, unlearning should make a model unable to exhibit the unwanted behaviour even when subject to knowledge-extraction attacks, novel situations (e.g. foreign languages), or small amounts of fine-tuning. However, unlearning methods can often fail to perform unlearning robustly and may introduce unwanted side effects [...] on desirable model knowledge."

Machine unlearning aims to remove undesirable capabilities from AI systems, and was initially proposed as a way to protect privacy and copyright. However, these methods can fail to perform robustly and may introduce unwanted side effects, affecting desirable model knowledge.

Illustration: An AI system trained on biased data might be subjected to machine unlearning to remove discriminatory behaviors. However, this process could inadvertently degrade the system's overall performance or introduce new biases.
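For readers who want a feel for what "unlearning via fine-tuning" can look like, below is a deliberately simplified PyTorch sketch, not taken from the report, in which the loss on a "forget" set is pushed up while the loss on a "retain" set is kept low. The model, data and weighting are invented placeholders; real unlearning methods are considerably more sophisticated and, as the report stresses, often fail to be robust.

```python
import torch
import torch.nn as nn

# Minimal sketch of fine-tuning-based unlearning, with an invented toy model
# and random tensors standing in for real data.

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

forget_x, forget_y = torch.randn(64, 16), torch.randint(0, 4, (64,))    # behaviour to remove
retain_x, retain_y = torch.randn(256, 16), torch.randint(0, 4, (256,))  # behaviour to keep
FORGET_WEIGHT = 0.5  # how strongly to push away from the forget set

for step in range(100):
    optimiser.zero_grad()
    retain_loss = loss_fn(model(retain_x), retain_y)   # keep desirable knowledge
    forget_loss = loss_fn(model(forget_x), forget_y)   # knowledge to suppress
    # Descend on the retain loss while ascending on the forget loss.
    (retain_loss - FORGET_WEIGHT * forget_loss).backward()
    optimiser.step()

# The catch is side effects: pushing the forget loss up can also degrade
# performance on retained data, which is why robustness checks matter.
```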
8. Mechanistic Interpretability

"Understanding a model’s internal computations might help to investigate whether they have learned trustworthy solutions. ‘Mechanistic interpretability’ refers to studying the inner workings of state-of-the-art AI models. However, state-of-the-art neural networks are large and complex, and mechanistic interpretability has not yet been useful and competitive with other ways to analyse models for practical applications."

Mechanistic interpretability involves studying the internal workings of AI models to ensure they have learned trustworthy solutions. However, this approach has not yet proven useful or competitive with other analysis methods for practical applications due to the complexity of state-of-the-art neural networks.

Illustration: A complex neural network used in financial trading might make decisions that are difficult to interpret. Mechanistic interpretability could help understand these decisions, but current methods are not yet practical for real-world applications.

9. Watermarks for AI-Generated Content

"Watermarks make distinguishing AI-generated content easier, but they can be removed. A ‘watermark’ refers to a subtle style or motif that can be inserted into a file which is difficult for a human to notice but easy for an algorithm to detect. Watermarks for images typically take the form of imperceptible patterns inserted into image pixels [...], while watermarks for text typically take the form of stylistic or word-choice biases [...]. Watermarks are useful, but they are an imperfect strategy for detecting AI-generated content because they can be removed [...]. However, this does not mean that they are not useful. As an analogy, fingerprints are easy to avoid or remove, but they are still very useful in forensic science."

Watermarks help identify AI-generated content by embedding subtle, algorithm-detectable patterns. While useful, they are not foolproof, as they can be removed. Despite this, watermarks remain a valuable tool, much like fingerprints in forensic science.

Illustration: An AI-generated news article might include a watermark to indicate its origin. However, malicious actors could remove this watermark, making it difficult to trace the content back to its source.
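The "word-choice bias" style of text watermark can be made concrete with a toy detector: assume the generator secretly favoured a "green" half of the vocabulary, and test whether green tokens appear more often than chance. The sketch below is purely illustrative (the vocabulary split, sample text and decision threshold are all invented) and is not the specific scheme described in the report.

```python
import hashlib
import math

# Toy detector for a "word-choice bias" text watermark. The green/red split,
# the sample text and the threshold are invented for illustration.

def is_green(token: str) -> bool:
    """Assign each token to a secret 'green' half of the vocabulary."""
    digest = hashlib.sha256(token.lower().encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    tokens = text.split()
    green = sum(is_green(t) for t in tokens)
    n = len(tokens)
    expected, std = n * 0.5, math.sqrt(n * 0.25)  # unwatermarked text ~ Binomial(n, 0.5)
    return (green - expected) / std

sample = "the model quietly prefers tokens from the green half of the vocabulary"
score = watermark_z_score(sample)
print("Likely watermarked" if score > 4 else "No watermark evidence", f"(z = {score:.1f})")
# Paraphrasing or lightly editing a watermarked text pushes the z-score back
# toward zero, which is why watermarks are useful but not a complete
# provenance solution.
```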
10. Mitigating Bias and Improving Fairness

"Researchers deploy a variety of methods to mitigate or remove bias and improve fairness in general-purpose AI systems [...], including pre-processing, in-processing, and post-processing techniques [...]. Pre-processing techniques analyse and rectify data to remove inherent bias existing in datasets, while in-processing techniques design and employ learning algorithms to mitigate discrimination during the training phase of the system. Post-processing methods adjust general-purpose AI system outputs once deployed."

To mitigate bias and improve fairness in AI systems, researchers use various techniques across different stages. Pre-processing addresses biases in datasets, in-processing mitigates discrimination during training, and post-processing adjusts outputs after deployment.

Illustration: An AI hiring tool might use pre-processing techniques to remove biases from training data, in-processing techniques to ensure fair decision-making during training, and post-processing techniques to adjust outputs for fairness after deployment.
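As a concrete, if simplified, example of a post-processing technique, the Python sketch below (with invented scores and groups) picks per-group decision thresholds so that selection rates are roughly equal across groups. Demographic parity is only one of several fairness criteria a real system might target, and nothing here is drawn from the report itself.

```python
# Toy post-processing step: choose per-group decision thresholds so that the
# selection rate is roughly equal across groups. Scores and groups are invented.

applicants = [
    {"group": "A", "score": 0.82}, {"group": "A", "score": 0.61},
    {"group": "A", "score": 0.55}, {"group": "A", "score": 0.30},
    {"group": "B", "score": 0.74}, {"group": "B", "score": 0.48},
    {"group": "B", "score": 0.40}, {"group": "B", "score": 0.22},
]
TARGET_RATE = 0.5  # select the top half of each group

thresholds = {}
for group in {"A", "B"}:
    scores = sorted((a["score"] for a in applicants if a["group"] == group), reverse=True)
    k = max(1, int(len(scores) * TARGET_RATE))
    thresholds[group] = scores[k - 1]  # lowest score still selected in this group

for a in applicants:
    a["selected"] = a["score"] >= thresholds[a["group"]]

print(thresholds)
print([a for a in applicants if a["selected"]])
# Pre-processing (fixing the training data) and in-processing (constraining the
# learning objective) attack the same problem at earlier stages of the pipeline.
```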
Legal Perspective on the Report

In many ways, this report is a truth-seeker: amidst the wave of hyped-up AI policy content, it addresses AI safety in a practical way. The acknowledgements offered in the report are remarkable. Based on each of the ten points above, here are some inferences on how technology professionals and companies should prepare for the legal-ethical implications of half-baked AI regulations.

1. The Risk Surface of General-Purpose AI

The extensive risk surface of general-purpose AI necessitates a flexible and adaptive legal framework. Regulators must consider the diverse applications and potential harms of these technologies. This could lead to the development of sector-specific regulations and cross-sectoral oversight bodies to ensure comprehensive risk management. Legal systems may need to incorporate dynamic regulatory mechanisms that can evolve with technological advancements, ensuring that all potential risks are adequately addressed.

2. Challenges in Risk Assessment

The difficulty in assessing risks for general-purpose AI due to contextual variability and cultural differences implies that legal standards must be adaptable and context-sensitive. This could involve creating guidelines for context-specific evaluations and establishing international cooperation to harmonize standards. Legal frameworks may need to incorporate mechanisms for continuous learning and adaptation, ensuring that risk assessments remain relevant and effective across different contexts and cultures.

3. Limitations of Current Methodologies

The inadequacy of current risk assessment methodologies for general-purpose AI suggests that legal frameworks should mandate comprehensive risk assessments that include both qualitative and quantitative analyses. This might involve setting standards for risk assessment methodologies and requiring transparency in the assessment process. Legal systems may need to ensure diverse stakeholder involvement in risk assessment discussions to capture a wide range of perspectives and concerns, thereby improving the reliability and comprehensiveness of risk assessments.

4. Nascent Quantitative Risk Assessments

The nascent and uncertain nature of quantitative risk assessments for general-purpose AI indicates that regulators should use these assessments as one of several tools in decision-making processes. Legal standards should require the use of multiple assessment methods to provide a more comprehensive understanding of risks. This could lead to the development of hybrid regulatory approaches that combine quantitative and qualitative assessments, ensuring that high-stakes decisions are informed by a robust and multi-faceted understanding of risks.

5. Testing and Thresholds

The need for ongoing monitoring and post-deployment testing of AI systems implies that legal frameworks should require continuous risk assessment and incident reporting. This could involve mandatory reporting of incidents and continuous risk assessment to ensure that thresholds remain relevant and effective. Legal systems may need to incorporate mechanisms for adaptive regulation, allowing for the adjustment of thresholds and standards based on real-world performance and emerging risks.

6. Specifying Objectives for AI Systems

The challenge of specifying objectives for general-purpose AI without incentivizing undesirable behaviors suggests that regulations should require AI developers to consider the broader social and ethical implications of their systems. This might involve creating guidelines for ethical AI design and requiring impact assessments that consider potential unintended consequences. Legal frameworks may need to incorporate principles of ethical AI development, ensuring that AI systems align with societal values and do not inadvertently cause harm.

7. Machine Unlearning

The potential for machine unlearning methods to introduce new issues implies that legal standards should require rigorous testing and validation of these methods. This might involve setting benchmarks for unlearning efficacy and monitoring for unintended side effects. Legal systems may need to ensure that unlearning processes are robust and do not compromise the overall performance or safety of AI systems, thereby maintaining trust and reliability in AI technologies.

8. Mechanistic Interpretability

The current impracticality of mechanistic interpretability for real-world applications suggests that regulations should promote research into this area and require transparency in AI decision-making processes. This could involve mandating explainability standards and supporting the development of practical interpretability tools. Legal frameworks may need to ensure that AI systems are transparent and accountable, enabling stakeholders to understand and trust the decisions made by these systems.

9. Watermarks for AI-Generated Content

The potential for watermarks to be removed implies that legal frameworks should require the use of robust watermarking techniques and establish penalties for their removal. This could involve creating standards for watermarking methods and ensuring they are resistant to tampering. Legal systems may need to incorporate mechanisms for the verification and traceability of AI-generated content, ensuring that the origins and authenticity of such content can be reliably determined.

10. Mitigating Bias and Improving Fairness

The need for comprehensive bias mitigation strategies across all stages of AI development suggests that regulations should mandate the use of pre-processing, in-processing, and post-processing techniques. This might involve setting standards for these techniques and requiring regular audits to ensure compliance. Legal frameworks may need to ensure that AI systems are fair and non-discriminatory, promoting equity and justice in the deployment and use of AI technologies.

Conclusion

In short, the AI Seoul Summit 2024 report provides the kind of technical study that was necessary to address basic questions around the various contours of artificial intelligence regulation. It is a valuable study because, as governments around the world scramble for ways to regulate AI, they have yet to understand how key technical challenges and socio-technical realities could shape the legal, ethical and socio-economic underpinnings of adjudicating and regulating AI.

Through our efforts at Indic Pacific Legal Research, I developed India's first privately proposed artificial intelligence regulation, called AIACT.IN, or the Draft Artificial Intelligence (Development & Regulation) Act, 2023. As of March 14, 2024, the second version of AIACT.IN is available for public scrutiny and comments. In that context, I am glad to announce that a third version of AIACT.IN is currently underway; we will launch it in the coming weeks, once some internal scrutiny is complete.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling in the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.
- [New Publication] Artificial Intelligence and Policy in India, Volume 5
Indic Pacific Legal Research is proud to announce the publication of "Artificial Intelligence and Policy in India, Volume 5", a cutting-edge research collection that delves into the most pressing policy challenges and opportunities presented by AI in the Indian context. This volume is the result of a fruitful collaboration between Indic Pacific Legal Research and the Indian Society of Artificial Intelligence and Law (ISAIL). It brings together meticulously researched papers from ISAIL's talented research interns, offering diverse and insightful perspectives on critical issues at the intersection of AI, law, and policy.

Under the expert editorship of Abhivardhan and Pratejas Tomar, the collection features three key contributions:

Bhavya Singh's paper tackles the complex interplay between the groundbreaking EU AI Act and India's evolving data protection framework, offering valuable insights for policymakers navigating the global AI governance landscape.

Purnima Sihmar's research investigates the potential risks surrounding AI integration in public infrastructure projects, providing a roadmap for responsible AI deployment in the public interest.

Harinandana V's work sheds light on the subtle ways AI is influencing the advertising landscape, often without consumer awareness, highlighting the urgent need for transparency and accountability measures.

At Indic Pacific Legal Research, we are committed to advancing rigorous, timely, and actionable research on the most pressing legal and policy issues facing India and the wider region. This volume exemplifies our mission to bridge the gap between academic insights and policy impact. We are grateful to our partners at ISAIL for their dedication to nurturing the next generation of AI policy leaders, and to the exceptional research interns whose contributions made this volume possible.

As AI continues to rapidly transform every aspect of society, it is crucial that we critically examine its implications and proactively shape its development in line with our shared values and aspirations. We believe this collection makes a significant contribution to that vital ongoing conversation. We invite policymakers, legal professionals, AI researchers, and the wider public to explore the collection at https://indopacific.app/product/artificial-intelligence-and-policy-in-india-volume-5-aipi-v5/

For media inquiries or partnership opportunities, please contact [EMAIL]. Together, let us work towards a future where AI serves as a force for good, promoting justice, equality, and the well-being of all.
- [New Report] Reimaging and Restructuring MeitY for India, IPLR-IG-007
We are thrilled to announce the release of our latest infographic report, " Reimaging and Restructuring MeitY for India ", in collaboration with VLA.Digital ! 🎉 As India stands at the cusp of a digital revolution, it is imperative that our technology governance structures evolve to meet the challenges of this dynamic landscape. This report takes a deep dive into the reforms needed at the Ministry of Electronics and Information Technology (MeitY) to unlock India's tech potential. 🔓 🔍 Here's a sneak peek into the key issues covered: 💸 Reducing regulatory burden and compliance costs for startups & SMEs 💪 Strengthening MeitY's institutional capacity to regulate effectively 🌞 Ensuring transparency & ethics in technology policymaking 🆕 Enhancing regulatory approaches for AI, blockchain & emerging tech 🏗️ Proposing new models to restructure MeitY for the digital age 👥 Congratulations to our team led by Abhivardhan, Bhavana J Sekhar & Pratejas Tomar, and contributing authors Bhavya Singh & Harinandana V. 📥 Download the full report here: https://indopacific.app/product/reimaging-and-restructuring-meity-for-india-iplr-ig-007/ Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com .
- TESCREAL and AI-Related Risks
TESCREAL serves as a lens through which we can examine the motivations and potential implications of cutting-edge technological developments, particularly in the field of artificial intelligence (AI). As these ideologies gain traction among tech leaders and innovators, they are increasingly shaping the trajectory of AI research and development. This insight brief explores the potential risks and challenges associated with the TESCREAL framework, focusing on anticompetitive concerns, the impact on skill estimation and workforce dynamics, and the need for sensitisation measures. By understanding these issues, we can better prepare for the societal and economic changes & risks that advanced & substandard AI technologies may bring. It is crucial to consider not only the promises but also the pitfalls of the hype around rapid advancement. This brief aims to provide a balanced perspective on the TESCREAL ideologies and their intersection with AI development, offering insights into proactive measures that can be taken before formal regulations are implemented.

Introduction to TESCREAL

The emergence of TESCREAL as a conceptual framework marks a significant milestone in our understanding of the ideological underpinnings driving technological innovation, particularly in the realm of artificial intelligence. This acronym, coined by computer scientist Timnit Gebru and philosopher Émile P. Torres, encapsulates a constellation of interconnected philosophies that have profoundly shaped the trajectory of AI development and the broader tech landscape. TESCREAL stands for:

Transhumanism
Extropianism
Singularitarianism
Cosmism
Rationalism
Effective Altruism
Longtermism

These ideologies, while distinct, share common threads and historical roots that can be traced back to the 20th century. They collectively represent a techno-optimistic worldview that envisions a future where humanity transcends its current limitations through technological advancement. The origins of TESCREAL can be understood as a natural evolution of human aspirations in the face of rapid technological progress. Transhumanism, for instance, emerged in the mid-20th century as a philosophy advocating for the use of technology to enhance human physical and cognitive capabilities. Extropianism, a more optimistic offshoot of transhumanism, emphasizes continuous improvement and the expansion of human potential.

Singularitarianism, popularized by figures like Ray Kurzweil, posits the eventual emergence of artificial superintelligence that will radically transform human civilization. This concept has gained significant traction in Silicon Valley and has been a driving force behind many AI research initiatives. Cosmism, with its roots in Russian philosophy, adds a cosmic dimension to these ideas, envisioning humanity's future among the stars. This aligns closely with the ambitions of tech entrepreneurs like Elon Musk, who are actively pursuing space exploration and colonization. Rationalism, as incorporated in TESCREAL, emphasizes the importance of reason and evidence-based decision-making. This philosophical approach has been particularly influential in shaping the methodologies employed in AI research and development. Effective Altruism and Longtermism, the more recent additions to this ideological bundle, bring an ethical dimension to technological pursuits. These philosophies encourage considering the long-term consequences of our actions and maximizing positive impact on a global and even cosmic scale.
The significance of TESCREAL lies in its ability to provide a comprehensive framework for understanding the motivations and goals driving some of the most influential figures and companies in the tech industry. Consider the following example. A major tech company announces its ambitious goal to develop artificial general intelligence (AGI) within the next decade, framing it as a breakthrough that will "solve humanity's greatest challenges." The company's leadership, steeped in TESCREAL ideologies, envisions this AGI as a panacea for global issues ranging from climate change to economic inequality. From Dr. Gebru's perspective, this scenario raises several critical concerns:

Ethical Implications: The pursuit of AGI, driven by TESCREAL ideologies, often overlooks immediate ethical concerns in favor of speculative future benefits. This approach may neglect pressing issues of bias, fairness, and accountability in current AI systems.

Power Centralization: The development of AGI by a single company or a small group of tech elites could lead to an unprecedented concentration of power, potentially exacerbating existing social and economic inequalities.

Marginalization of Diverse Perspectives: The TESCREAL framework, rooted in a particular cultural and philosophical tradition, may not adequately represent or consider the needs and values of marginalized communities globally.

Lack of Accountability: By framing AGI development as an unquestionable good for humanity, companies may evade responsibility for the potential negative consequences of their technologies.

Neglect of Present-Day Issues: The focus on long-term, speculative outcomes may divert resources and attention from addressing immediate societal challenges that AI could help solve.

Eugenics-Adjacent Thinking: There are concerning parallels between some TESCREAL ideologies and historical eugenics movements, particularly in their techno-optimistic approach to human enhancement and societal progress.

Inadequate Safety Measures: The undefined nature of AGI makes it impossible to develop comprehensive safety protocols, potentially putting society at risk.

In this view, the TESCREAL bundle of ideologies represents a problematic framework for guiding AI development. Instead, Dr. Gebru advocates for a more grounded, ethical, and inclusive approach to AI research and development. This approach prioritizes addressing current societal issues, ensuring diverse representation in AI development, and implementing robust accountability measures for AI systems and their creators.

The Legal, Economic and Policy Risks around TESCREALism

This section explores the anticompetitive risks, the challenges in skill estimation due to AI, and the potential sensitisation measures that can be implemented before formal regulation, with examples to illustrate each point.

Anticompetitive Risks

The rapid development of AI technologies, driven by TESCREAL ideologies, can lead to several anticompetitive risks.

Market Concentration: Companies with significant resources and access to vast amounts of data may gain an unfair advantage in AI development, potentially leading to monopolistic practices.

Example: A large tech company develops an advanced AI system for healthcare diagnostics, leveraging its extensive user data. This could make it difficult for smaller companies or startups to compete, even if they have innovative ideas.
Algorithmic Collusion: AI systems might inadvertently facilitate price-fixing or other anticompetitive behaviors without explicit agreements between companies.

Example: The RealPage case, where multiple landlords are accused of using the same price-setting algorithm to artificially inflate rental prices, demonstrates how AI can potentially lead to collusive behavior without direct communication between competitors[2].

Risks Around Skill "Census" and Estimation

AI's impact on the job market and skill requirements poses challenges for accurate workforce planning:

Rapid Skill Obsolescence: AI may accelerate the pace at which certain skills become outdated, making it difficult for workers and organizations to keep up.

Example: As AI takes over routine coding tasks, software developers may need to quickly shift their focus to more complex problem-solving and AI integration skills.

Skill Gap Identification: While AI can help identify skill gaps, there's a risk of over-reliance on AI-driven assessments without considering human factors.

Example: An AI system might identify a need for data analysis skills in a company but fail to recognize the importance of domain expertise or soft skills that are crucial for interpreting and communicating the results effectively.

Sensitization Measures Before Regulation

To address these challenges before formal regulation is implemented, several sensitization measures can be considered:

Promote Explainable AI (XAI): Encourage the development of AI systems that can provide clear explanations for their decisions. This can help identify potential biases or anticompetitive behaviors.

Example: Implement a requirement for AI-driven hiring systems to provide explanations for candidate rankings or rejections, allowing for human oversight and intervention (see the illustrative sketch after this list).

Foster Multi-stakeholder Dialogue: Create forums for discussion between industry leaders, policymakers, academics, and civil society to address potential risks and develop best practices.

Example: Organize regular roundtable discussions or conferences where AI developers, ethicists, and labor representatives can discuss the impact of AI on workforce dynamics and potential mitigation strategies.

Encourage Voluntary Ethical Guidelines: Promote the adoption of voluntary ethical guidelines for AI development and deployment within industries.

Example: Develop an industry-wide code of conduct for AI use in financial services, addressing issues such as algorithmic trading and credit scoring.

Invest in AI Literacy Programs: Develop educational initiatives to improve public understanding of AI capabilities, limitations, and potential impacts.

Example: Create online courses or workshops for employees and the general public to learn about AI basics, its applications, and ethical considerations.

Support Adaptive Learning and Reskilling Initiatives: Encourage companies to invest in continuous learning programs that help employees adapt to AI-driven changes in the workplace.

Example: Implement AI-powered adaptive learning platforms that personalize training content based on individual skill gaps and learning speeds[7].

Promote Transparency in AI Development: Encourage companies to be more transparent about their AI development processes and potential impacts on the workforce and market dynamics.

Example: Implement voluntary reporting mechanisms where companies disclose their AI use cases, data sources, and potential societal impacts.
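To illustrate what an "explanation for a ranking" could look like at its simplest, here is a toy Python sketch, with invented features and weights, of a transparent linear scorer that reports each feature's contribution to a candidate's score. Real systems would rely on proper attribution tooling and human review; this is only meant to make the recommendation tangible.

```python
# Toy illustration of the XAI idea above: a transparent linear scorer that can
# report, for every candidate, how much each feature contributed to the ranking.
# The features and weights are invented for illustration.

WEIGHTS = {"years_experience": 0.4, "skills_match": 1.2, "assessment_score": 0.8}

def score_with_explanation(candidate: dict) -> tuple[float, dict]:
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

candidate = {"years_experience": 3.0, "skills_match": 0.7, "assessment_score": 0.9}
total, why = score_with_explanation(candidate)

print(f"Score: {total:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
# An explanation like this gives a reviewer something concrete to challenge,
# which is the point of requiring explanations for rankings or rejections.
```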
How does our AIACT.IN proposal address AI hype and the effect of TESCREALism Here are some key features related to sensitisation measures, anticompetitive risks, and skill estimation: Enhanced Classification Methods: The draft introduces more nuanced and precise evaluation methods for AI systems, considering conceptual, technical, commercial, and risk-centric approaches. This allows for better risk management and tailored regulatory responses. National Registry for AI Use Cases: A comprehensive framework for tracking both untested and stable AI applications across India, promoting transparency and accountability. AI-Generated Content Regulation: Balances innovation with protection of individual rights and societal interests, including content provenance requirements like watermarking. Advanced AI Insurance Policies: Manages risks associated with high-risk AI systems to ensure adequate protection for stakeholders. AI Pre-classification: Enables early assessment of potential risks and benefits of AI systems. Guidance on AI-related Contracts: Provides principles for responsible AI practices within organizations, addressing potential anticompetitive concerns. National AI Ethics Code: Establishes a flexible yet robust ethical foundation for AI development and deployment. Interoperability and Open Standards: Encourages adoption of open standards and interoperability in AI systems, potentially lowering entry barriers and promoting competition. Algorithmic Transparency: Requires maintaining records of algorithms and data used to train AI systems, aiding in detecting bias and anti-competitive practices. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com . References [1] https://www.ftc.gov/news-events/news/speeches/antitrust-enforcement-high-technology-markets [2] https://www.forbes.com/sites/aldenabbott/2024/03/13/why-antitrust-regulators-are-focused-on-problematic-ai-algorithms/ [3] https://www.its-her-factory.com/2024/05/vibes-and-the-tescreal-bundle-as-corporate-phenomenology/ [4] https://www.researchgate.net/publication/376795973_The_Impact_of_Artificial_Intelligence_on_Employment_and_Workforce_Dynamics_in_Contemporary_Society_Authors [5] https://techwolf.com/blog/bridging-the-skill-gap-with-strategic-workforce-planning [6] https://www.innopharmaeducation.com/our-blog/the-impact-of-ai-on-job-roles-workforce-and-employment-what-you-need-to-know [7] https://www.elev8me.com/insights/ai-impact-on-workforce-transfromation [8] https://www.chathamhouse.org/2023/06/ai-governance-must-balance-creativity-sensitivity [9] https://transcend.io/blog/ai-ethics [10] https://www.holisticai.com/blog/what-is-ethical-ai
- [New Report] The Indic Approach to Artificial Intelligence Policy, IPLR-IG-006
Dear Reader, We are thrilled to present " The Indic Approach to Artificial Intelligence Policy ," a seminal report that reimagines AI governance through the lens of Indian philosophical traditions. Authored by Abhivardhan, this report is a must-read for anyone interested in the intersection of AI, ethics, and cultural context. It introduces the Permeable Indigeneity in Policy (PIP) framework, which ensures that AI strategies align with India's unique socio-cultural landscape and development goals. The report covers a range of topics, including: Algorithmic sovereignty Context-specific AI governance AI knowledge management protocols Anticipatory sector-specific strategies It also offers practical recommendations for key stakeholders, including government bodies, startups, MSMEs, and large enterprises, to develop AI systems that are ethically grounded, culturally resonant, and socially beneficial. With engaging infographics and mind maps, the report makes complex concepts accessible to a broad audience. Don't miss this opportunity to gain a fresh perspective on AI governance and join the conversation on India's role as a global leader in inclusive AI development. Get your copy now at https://indopacific.app/product/the-indic-approach-to-artificial-intelligence-policy-iplr-ig-006/ . Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com .
- Why the Indian Bid to Make GPAI an AI Regulator is Unprepared
India's recent proposal to elevate the Global Partnership on Artificial Intelligence (GPAI) to an intergovernmental body on AI has garnered significant attention in the international community. This move, while ambitious, raises important questions about the future of AI governance and regulation on a global scale. This brief examines and comments upon India's bid to enable the Global Partnership on Artificial Intelligence as an AI regulator, with special emphasis on the Global South outlining the key challenges associated with GPAI, MeITY and the AI Landscape we have today. India's Leadership in GPAI India, as the current chair of GPAI, has been instrumental in expanding the initiative to include more countries, aiming to transform it into a central body for global AI policy-making. The GPAI, which started with 15 nations, has now expanded to 29 and aims to include 65 countries by next year. India's leadership in GPAI was further solidified when it was elected as the Incoming Council Chair in November 2022, securing a significant majority of the first-preference votes. Throughout the 2022-23 term, India served as the Incoming Support Chair, and on December 12, 2023, it assumed the Lead Chair position for the 2023-24 term. India is also set to serve as the Outgoing Support Chair in the subsequent year, showcasing its continued dedication to GPAI's mission. India's commitment to advancing GPAI's goals was prominently displayed when it hosted the GPAI Annual Summit from December 12th to 14th, 2023. This summit brought together representatives from all 29 GPAI member countries under India's guidance to discuss a wide range of AI-related topics. The event, organized by MeitY, was inaugurated by Prime Minister Shri Narendra Modi, who reiterated India's commitment to leveraging AI for societal betterment and equitable growth while emphasizing the importance of responsible, human-centric AI governance. The Role of MeitY It is to be understood that the Ministry of Electronics and Information Technology (MeitY) has been pivotal in negotiating the inclusion of OECD nations and advocating for greater participation from the Global South in AI regulation. MeitY has been at the forefront of India's efforts in GPAI, organizing key events such as the GPAI Annual Summit in 2023. However, MeitY's approach to AI governance has faced criticism for its reactive and arbitrary nature. The recent advisory issued by MeitY on the use of AI in elections was met with strong backlash from the AI community. The advisory required platforms offering "under-testing/unreliable" AI systems or large language models (LLMs) to Indian users to explicitly seek prior permission from the central government. This was seen as regulatory overreach that could stifle innovation in the nascent AI industry. While the government later clarified that the advisory was aimed at significant platforms and not startups, the incident highlighted the need for a more proactive and consultative approach to AI regulation. Moreover, the complexity and breadth of AI policy suggest that a single ministry may not be sufficient to handle all aspects of AI governance. A more integrated, inter-ministerial approach could enhance India's capacity to lead effectively in this domain. The inter-ministerial committee formed by MeitY with secretaries from DoT, DSIR, DST, DPIIT, and NITI Aayog as members is a step in this direction. The taking over of AI Policy by the Principal Scientific Advisor's Office in April / May 2024 was another step. 
However, the composition of such bodies, including the proposed National Research Foundation (NRF), has been criticized for having too many bureaucrats and fewer specialists. The NRF, which aims to provide high-level strategic direction for scientific research in India, will be governed by a Governing Board consisting of eminent researchers and professionals across disciplines. To truly foster responsible and inclusive AI development, MeitY and other government bodies must adopt a more collaborative and transparent approach. This should involve engaging with a wide range of stakeholders, including AI experts, civil society organizations, and industry representatives, to develop a comprehensive and balanced regulatory framework. Additionally, capacity building within the government, including training officials in AI technologies and their implications, is crucial for effective governance. Timing and Nature of AI Regulation The AI regulation debate spans a wide spectrum of views, from those who believe that the current "moral panic" about AI is overblown and irrational[2], to those who advocate for varying degrees of regulation to address the risks posed by AI. The European Union (EU) is at the forefront of AI regulation, with its proposed AI Act that classifies AI systems into four tiers based on their perceived risk[2]. The EU's approach is seen as more interventionist compared to the hands-off approach favored by some venture capitalists and tech companies. However, even within the EU, there are differing opinions on the timing and scope of AI regulation, with some arguing that premature regulation could stifle innovation in the nascent AI industry[1]. Many experts propose a risk-based approach to AI regulation, where higher-risk AI applications that can cause greater damage are subject to proportionately greater regulation, while lower-risk applications have less[2]. However, implementing such an approach is challenging, as it requires defining and measuring risk, setting minimum requirements for AI services, and determining which AI uses should be deemed illegal. Given the challenges in establishing comprehensive AI regulations at this stage, some experts like Gary Marcus have proposed the creation of an International AI Agency, akin to CERN, which conducts independent research free from market influences[3]. This approach would allow for the development of AI in a responsible and ethical manner without premature regulatory constraints. The proposed agency would focus on groundbreaking research to address technical challenges in developing AI with secure and ethical objectives, and on establishing robust AI safety measures to mitigate potential risks[3]. Advocates for AI safety stress the importance of initiatives like a CERN for AI safety and Global AI governance to effectively manage risks[3]. They emphasize the need to balance the focus on diverse risks within the AI landscape, from immediate concerns around bias, transparency, and security, to long-term risks such as the potential loss of control over future advanced machines[3]. In navigating the complexities of AI governance, ongoing dialogue underscores the critical role of research in understanding and addressing AI risks[3]. While some argue that AI-related harm has been limited to date, the evolving landscape highlights the need for proactive measures to avert potential misuse of AI technologies[3]. As of February 2024, the global AI regulatory landscape continues to evolve rapidly. 
The EU's AI Act has been signed by the Committee of Permanent Representatives, and the consolidated text has been published by the European Parliament[4]. The European Commission has also adopted its own approach to AI, focusing on fostering the development and use of lawful, safe, and trustworthy AI systems[4]. In Asia, Singapore's AI Verify Foundation and Infocomm Media Development Authority have released a draft Model AI Governance Framework for Generative AI, which is currently open for consultation[4]. The Monetary Authority of Singapore has also concluded the first phase of Project MindForge, which seeks to develop a risk framework for the use of Generative AI in the financial sector[4]. These developments underscore the ongoing efforts to establish effective AI governance frameworks at both regional and global levels. As the AI landscape continues to evolve rapidly, finding the right balance between innovation and risk mitigation will be crucial in shaping the future of AI regulation. GPAI and the Global South India's commitment to representing the interests of the Global South in AI governance is commendable, but it also faces several challenges and criticisms. One of the primary concerns is the ongoing debate around the moratorium on customs duties on electronic transmissions at the World Trade Organization (WTO)[5]. Many developing countries, including India, argue that the moratorium disproportionately benefits developed countries and limits the ability of developing nations to generate revenue and promote digital industrialization. India's position is that all policy options, including the imposition of customs duties on e-commerce trade, should be available to WTO members to promote digital industrialization[5]. It has highlighted the potential tariff revenue losses of around $10 billion annually for developing countries due to the moratorium[5]. This revenue could be crucial for developing countries to invest in digital infrastructure and capacity building, which are essential for harnessing the benefits of AI. However, navigating this complex issue will require careful diplomacy and a nuanced approach that balances the interests of developing and developed countries. India will need to work closely with other countries in the Global South to build consensus around a common position on the moratorium and advocate for a more equitable global trade framework that supports the digital industrialization aspirations of developing nations. Another criticism faced by India in its advocacy for the Global South is the unequal access to AI research and development (R&D) among developing nations[6]. The AI Index Fund 2023 reveals that private investments in AI from 2013-22 in the United States ($250 billion) significantly outpace those of other economies, including India and most other G20 nations[6]. This disparity in AI R&D access could lead to extreme outcomes for underdeveloped nations, such as economic threats, political instability, and compromised sovereignty[6]. To address this challenge, India must focus on building partnerships and sharing best practices in AI development and governance with other countries in the Global South[6]. Collaborations aimed at developing AI solutions tailored to the specific needs of these regions, such as in agriculture, healthcare, and education, can help ensure that AI benefits are more equitably distributed[6]. 
Historical Context and Capacity Building Historically, other nations have employed similar strategies to gain influence in international organizations. For instance, China has been actively involved in the World Intellectual Property Organization (WIPO) and the International Telecommunication Union (ITU). As of March 2020, China led four of the 15 UN specialized agencies and was aiming for a fifth[5]. In the case of WIPO, China has used its influence to shape global intellectual property rules in its favor, such as by pushing for the adoption of the Beijing Treaty on Audiovisual Performances in 2012[7]. Similarly, the United States and Soviet Russia have played significant roles in shaping Space Law. The Outer Space Treaty of 1967, which forms the basis of international space law, was largely a result of negotiations between the US and the Soviet Union during the Cold War era[8]. This treaty set the framework for the peaceful use of outer space and prohibited the placement of weapons of mass destruction in orbit. France has also been a key player in international organizations, particularly in WIPO and the International Civil Aviation Organization (ICAO). France is one of the most represented countries in WIPO, with a strong presence in various committees and working groups[8]. In ICAO, France has been a member of the Council since the organization's inception in 1947 and has played a significant role in shaping international aviation standards and practices. However, unlike these countries, India faces significant challenges in terms of capacity building, particularly in the field of artificial intelligence (AI). While India has made notable progress in developing its AI ecosystem, it still lags behind countries like China and the United States in terms of investment, research output, and talent pool. According to a report by the Observer Research Foundation, India faces several key challenges in driving its AI ecosystem, including a lack of quality data, inadequate funding for research and development, and a shortage of skilled AI professionals[9]. To effectively lead in global AI governance, India must address these capacity building challenges by investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations. Only by strengthening its domestic AI capabilities can India play a more influential role in shaping the future of AI governance on the international stage. Strengthening domestic AI infrastructure and capabilities is crucial for India to effectively lead in global AI governance. India's approach to AI development has been distinct from the tightly controlled government-led model of China and the laissez-faire venture capital-funded hyper-growth model of the US. Instead, India has taken a deliberative approach to understand and implement supportive strategies to develop its AI ecosystem. This involves balancing the need to develop indigenous AI capabilities while creating an enabling environment for innovation through strategic partnerships. However, India faces several challenges in building its AI capacity. One major hurdle is the shortage of skilled professionals in data science and AI. According to a NASSCOM report, India faces a demand-supply gap of 140,000 in AI and Big Data analytics roles. Investing in talent development and fostering partnerships with academia is crucial to address this talent gap [10]. 
Another challenge is the quality and accessibility of data. Many organizations face issues with data standardization and inconsistencies, which can hinder AI model training and accuracy. Investing in technologies like graph and vector databases can help enhance the reliability, performance, and scalability of AI systems. Additional challenges include the Government's limited support for Indian MSMEs and research labs that want to build AI solutions but fear they will lack the funds to buy compute.

Proposing GPAI as an International AI Agency

Given the considerations discussed earlier, the best course of action for India might be to propose transforming GPAI into an international AI agency rather than a regulatory body. This approach would align with India's strengths in Digital Public Infrastructure (DPI) and allow for a more collaborative and inclusive approach to AI development and governance. India's success in building DPI, such as the Unified Payments Interface (UPI), Aadhaar, and the Open Network for Digital Commerce (ONDC), has been widely recognized. The UNGA President recently praised India's trajectory, stating that it exemplifies how DPI facilitates equal opportunities. India can leverage its expertise in DPI to shape the future of AI governance through GPAI.

Transforming GPAI into an international AI agency would enable it to focus on fostering international cooperation and independent research. This approach is crucial given the rapid evolution of AI technologies and the need for a collaborative, multi-stakeholder approach to AI governance. Otherwise, a regulator built on half-baked interests could stifle AI innovation in India and the Global South, and the risk of regulatory subterfuge, sabotage and capture by fiduciary interest groups would loom large. An international AI agency could bring together experts from various fields, including AI, ethics, law, and social sciences, to address the complex challenges posed by AI. India's proposal to transform GPAI into an international AI agency was discussed at the 6th meeting of the GPAI Ministerial Council held on 3rd July 2024 in New Delhi. The proposal received support from several member countries, who recognized the need for a more collaborative and research-focused approach to AI governance [11].

To effectively shape the future of AI governance, India must also focus on building domestic AI capabilities and infrastructure. The National Strategy for Artificial Intelligence, released by NITI Aayog, outlines a comprehensive plan to develop India's AI ecosystem. The strategy focuses on five key areas: research and development, skilling and reskilling, data and computing infrastructure, standards and regulations, and international collaboration. Implementing the National Strategy for Artificial Intelligence will be crucial for India to effectively lead in global AI governance. This includes investing in AI research and development, fostering partnerships between academia and industry, and creating an enabling environment for AI innovation through supportive policies and regulations.

How can GPAI draw inspiration from AIACT.IN Version 3?

AIACT.IN Version 3, released on June 17, 2024, is India's first privately proposed comprehensive regulatory framework for artificial intelligence. This groundbreaking legislation introduces several key features designed to ensure the safe, ethical, and responsible development and deployment of AI technologies in India.
Hopefully, the proposals of AIACT.IN v3 may be helpful for our intergovernmental stakeholders at GPAI . Here are some key ways GPAI can draw inspiration from AIACT.IN Version 3 in its efforts: Enhanced AI Classification Methods: GPAI can adopt AIACT.IN V3's nuanced approach to classifying AI systems based on conceptual, technical, commercial, and risk-centric methods. This would enable GPAI to better evaluate and regulate AI technologies according to their inherent purpose, features, and potential risks on a global scale. National AI Use Case Registry: GPAI can establish an international registry for AI use cases, similar to the National Registry proposed in AIACT.IN V3. This would provide a clear framework for tracking and certifying both untested and stable AI applications across member countries, promoting transparency and accountability. Balancing Innovation and Risk Mitigation: AIACT.IN V3 aims to balance the need for AI innovation with the protection of individual rights and societal interests. GPAI can adopt a similar approach in its global efforts, fostering responsible AI development while safeguarding against potential misuse. AI Insurance Policies: Drawing from AIACT.IN V3's mandate for insurance coverage of high-risk AI systems, GPAI can develop international guidelines for AI risk assessment and insurance. This would help manage the risks associated with advanced AI technologies and ensure adequate protection for stakeholders worldwide. AI Pre-classification: GPAI can implement an early assessment mechanism for AI systems, inspired by the AI pre-classification proposed in AIACT.IN V3. This would enable proactive evaluation of potential risks and benefits, allowing for timely interventions and policy adjustments. Guidance Principles for AI Governance: AIACT.IN V3 provides guidance on AI-related contracts and corporate governance to promote responsible practices. GPAI can develop similar international principles and best practices to guide AI governance across member countries, fostering consistency and cooperation. Global AI Ethics Code: Building on the National AI Ethics Code in AIACT.IN V3, GPAI can work towards establishing a flexible yet robust global ethical framework for AI development and deployment. This would provide a common foundation for responsible AI practices worldwide. Collaborative Approach: AIACT.IN V3 was developed through a collaborative effort involving experts from various domains. GPAI can strengthen its multi-stakeholder approach, engaging AI practitioners, policymakers, industry leaders, and civil society representatives to develop comprehensive and inclusive AI governance frameworks. Conclusion In conclusion, India's proactive stance in AI governance is commendable, but the path forward requires careful consideration of domestic capabilities, international dynamics, and the evolving nature of AI. Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com . References [1] EY, 'How to navigate global trends in Artificial Intelligence regulation' (EY, 2023) < https://www.ey.com/en_in/ai/how-to-navigate-global-trends-in-artificial-intelligence-regulation > accessed 3 July 2024. 
[2] James Andrew Lewis, 'AI Regulation is Coming: What is the Likely Outcome?' (Center for Strategic and International Studies, 18 May 2023) < https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome > accessed 3 July 2024. [3] Gary Marcus, 'A CERN for AI and the Global Governance of AI' (Marcus on AI, 2 June 2023) < https://garymarcus.substack.com/p/a-cern-for-ai-and-the-global-governance > accessed 3 July 2024. [4] Eversheds Sutherland, 'Global AI Regulatory Update - February 2024' (Eversheds Sutherland, 26 February 2024) < https://www.eversheds-sutherland.com/en/global/insights/global-ai-regulatory-update-february-2024 > accessed 3 July 2024. [5] Murali Kallummal, 'WTO's E-commerce Moratorium: Will India Betray the Interests of the Global South?' (The Wire, 10 June 2023) < https://thewire.in/trade/wtos-ecommerce-moratorium-india-us > accessed 3 July 2024. [6] Business Insider India, 'India to host Global India AI Summit 2024 in New Delhi on July 3-4' (Business Insider India, 1 July 2024) < https://www.businessinsider.in/tech/news/india-to-host-global-india-ai-summit-2024-in-new-delhi-on-july-3-4/articleshow/111398169.cms > accessed 3 July 2024. [7] Yeling Tan, 'China and the UN System – the Case of the World Intellectual Property Organization' (Carnegie Endowment for International Peace, 3 March 2020) < https://carnegieendowment.org/posts/2020/03/china-and-the-un-system-the-case-of-the-world-intellectual-property-organization?center=global&lang=en > accessed 3 July 2024. [8] Jérôme Sgard, 'Bretton Woods and the Reconstruction of Europe' (2018) 44(4) The Journal of Economic History 1136 < https://www.jstor.org/stable/45367420 > accessed 3 July 2024. [9] WIPO, 'Information by Country: France' (WIPO) < https://www.wipo.int/directory/en/details.jsp?country_code=FR > accessed 3 July 2024. [10] Trisha Ray and Akhil Deo, 'Digital Dreams, Real Challenges: Key Factors Driving India's AI Ecosystem' (Observer Research Foundation, 12 April 2023) < https://www.orfonline.org/research/digital-dreams-real-challenges-key-factors-driving-indias-ai-ecosystem > accessed 3 July 2024. [11] Courtney J. Fung, 'China already leads 4 of the 15 U.N. specialized agencies — and is aiming for a 5th' (The Washington Post, 3 March 2020) < https://www.washingtonpost.com/politics/2020/03/03/china-already-leads-4-15-un-specialized-agencies-is-aiming-5th/ > accessed 3 July 2024.
- [AIACT.IN V3] Draft Artificial Intelligence (Development & Regulation) Act, 2023, Version 3
The rapid advancement of artificial intelligence (AI) technologies necessitates a robust regulatory framework to ensure their safe and ethical deployment. AIACT.IN, India's first privately proposed AI regulation, has been at the forefront of this effort. Released on June 17, 2024, AIACT.IN Version 3 introduces several groundbreaking features that make it a comprehensive and forward-thinking framework for AI regulation in India. You can also download AIACT.IN V3 below.

In the rapidly evolving landscape of artificial intelligence (AI), the need for robust, forward-thinking regulation has never been more critical. As AI technologies continue to advance at an unprecedented pace, they bring with them both immense opportunities and significant risks. I have been a vocal advocate for a balanced approach to AI regulation: one that harnesses the transformative potential of AI while safeguarding against its inherent risks and protecting the nascent Indian AI ecosystem. AIACT.IN Version 3 represents a significant leap forward in this endeavour. This latest version of India's pioneering AI regulatory framework is designed to address the complexities and nuances of the AI ecosystem, ensuring that the development and deployment of AI technologies are both innovative and responsible.

Some of the notable features of AIACT.IN Version 3 include:

Enhanced classification methods for AI systems, providing a more nuanced and precise evaluation of their capabilities and potential risks.
The establishment of a National Registry for AI Use Cases in India, covering both untested and stable AI applications, to ensure transparency and accountability.
A comprehensive approach to regulating AI-generated content, balancing the need for innovation with the protection of individual rights and societal interests.
Advanced-level AI insurance policies to manage the risks associated with high-risk AI systems and ensure adequate protection for stakeholders.
The introduction of AI pre-classification, enabling early assessment of potential risks and benefits.
Guidance principles on AI-related contracts and corporate governance, promoting responsible AI practices within organizations.
A flexible yet robust National AI Ethics Code, providing a strong ethical foundation for AI development and deployment.

This is a long read, explaining the core features of AIACT.IN Version 3 in detail.

Key Features and Improvements in AIACT.IN Version 3

Enhanced Classification Methods

Drastically Improved and Nuanced: The classification methods in Version 3 have been significantly enhanced to provide a more nuanced and precise evaluation of AI systems. This improvement ensures better risk management and tailored regulatory responses, addressing the diverse capabilities and potential risks associated with different AI applications.

AIACT.IN Version 3 has significantly enhanced the classification methods for AI systems, as outlined in Sections 3 to 7. These sections introduce various methods of classification, including conceptual, technical, commercial, and risk-centric approaches. For example, Section 4 outlines the conceptual methods of classification, which consider factors such as the intended purpose, the level of human involvement, and the degree of autonomy of the AI system. This nuanced approach allows for a more precise evaluation of AI systems based on their conceptual characteristics.
Section 5 introduces technical methods of classification, which take into account the underlying algorithms, data sources, and computational resources used in the development of the AI system. This technical evaluation can help identify potential risks and tailor regulatory responses accordingly. National Registry for AI Use Cases Nuanced and Comprehensive: AIACT.IN Version 3 introduces a National Registry for AI Use Cases in India. This registry covers both untested and stable AI applications, providing a clear and organised framework for tracking AI use cases across the country. This initiative aims to standardise and certify AI applications, ensuring their safe and effective deployment. The introduction of the National Registry for AI Use Cases in Section 12 is a significant step towards standardizing and certifying AI applications in India. This registry aims to provide a comprehensive framework for tracking both untested and stable AI use cases across the country. For instance, the registry could include an AI-powered medical diagnostic tool that is still in the testing phase (untested AI use case) and a widely adopted AI-based chatbot for customer service (stable AI use case). By maintaining a centralized registry, the Indian Artificial Intelligence Council (IAIC) can monitor the development and deployment of AI systems, ensuring compliance with safety and ethical standards. Furthermore, Section 11 mandates that all AI systems operating in India must be registered with the National Registry, providing a comprehensive overview of the AI ecosystem in the country. This requirement could help identify potential risks or overlaps in AI use cases, enabling the IAIC to take proactive measures to mitigate any potential issues. For example, if multiple organisations are developing AI-powered recruitment tools, the registry could reveal potential biases or inconsistencies in the algorithms used, prompting the IAIC to issue guidelines or standards to ensure fairness and non-discrimination in the hiring process. Inclusive AI-Generated Content Regulation Comprehensive and Balanced: The approach to regulating AI-generated content has been made more inclusive and holistic. This ensures that the diverse ways AI can create and influence content are addressed, promoting a balanced and fair regulatory environment. Section 23 of AIACT.IN Version 3 focuses on "Content Provenance and Identification," which aims to establish a comprehensive and balanced approach to regulating AI-generated content. This section addresses the diverse ways in which AI can create and influence content, promoting a fair and inclusive regulatory environment. Here's an example. A news organization uses an AI system to generate articles on current events. Under Section 23, the organization would be required to clearly label these articles as "AI-generated" or provide a similar disclosure, allowing readers to understand the source of the content and make informed decisions about its credibility and potential biases. Advanced AI Insurance Policies Robust Risk Management: Version 3 introduces advanced-level AI insurance policies to better manage the risks associated with high-risk AI systems. These policies are designed to provide comprehensive coverage and protection, ensuring that stakeholders are adequately safeguarded against potential risks. Section 25 of AIACT.IN Version 3 introduces advanced-level AI insurance policies to better manage the risks associated with high-risk AI systems. 
This section aims to provide comprehensive coverage and protection, ensuring that stakeholders are adequately safeguarded against potential risks. This provision ensures that developers and deployers of high-risk AI systems maintain adequate insurance coverage to mitigate potential risks and provide compensation in case of harm or losses. Here is an example. A healthcare provider implements a high-risk AI system for medical diagnosis. Under Section 25, the provider would be required to maintain a minimum level of insurance coverage, as determined by the IAIC, to protect patients and the healthcare system from potential harm or losses resulting from errors or biases in the AI system's diagnoses. AI-Pre Classification Early Risk and Benefit Assessment: The concept of AI pre-classification has been introduced to help stakeholders understand potential risks and benefits early in the development process. This proactive approach allows for better planning and risk mitigation strategies. Section 6(8) of the Draft Artificial Intelligence (Development & Regulation) Act, 2023, introduces the classification method known as "Artificial Intelligence for Preview" (AI-Pre). This classification pertains to AI technologies that are made available by companies for testing, experimentation, or early access prior to their wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms, and infrastructure at various stages of development. The key characteristics of AI-Pre technologies include: Limited Access: The AI technology is made available to a limited set of end-users or participants in a preview program. Special Agreements: Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality. Development Stage: The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose. User Feedback: Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology. Cost and Pricing: The AI-Pre technology may be provided free of charge or under a separate pricing model from the company’s standard commercial offerings. Post-Preview Release: After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release. Here's an illustration. A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics: The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality. The AI system’s capabilities are not yet fully tested, documented, or supported, and the company provides no warranties or guarantees. The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications. 
After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms. Importance for AI Regulation in India The AI-Pre classification method is significant for AI regulation in India for several reasons: Innovation and Experimentation: AI-Pre allows companies to innovate and experiment with new AI technologies in a controlled environment. This fosters creativity and the development of cutting-edge AI solutions without the immediate pressure of full commercial deployment. Risk Mitigation: By classifying AI technologies as AI-Pre, companies can identify and address potential risks, technical issues, and ethical concerns during the preview phase. This helps in mitigating risks before the technology is widely released. Feedback and Improvement: The feedback loop created by AI-Pre enables companies to gather valuable insights from early users. This feedback is crucial for refining the technology, improving its performance, and ensuring it meets user needs and regulatory standards. Regulatory Compliance: AI-Pre provides a framework for companies to comply with regulatory requirements while still in the development phase. This ensures that AI technologies are developed in line with legal and ethical standards from the outset. Market Readiness: The AI-Pre classification helps companies gauge market readiness and demand for their AI technologies. It allows them to make informed decisions about the commercial viability and potential success of their products. Transparency and Accountability: The special agreements and documentation required for AI-Pre technologies promote transparency and accountability. Companies must clearly outline the terms of use, data handling practices, and intellectual property rights, ensuring that all stakeholders are aware of their responsibilities and rights. Guidance Principles on AI-Related Contracts Clarity and Adoption: A whole new approach to guidance principles on AI-related contracts has been introduced. These principles ensure that agreements involving AI are clear, fair, and aligned with best practices, fostering trust and transparency in AI transactions. AIACT.IN Version 3 introduces a comprehensive approach to guidance principles on AI-related contracts in Section 15. These principles ensure that agreements involving AI are clear, fair, and aligned with best practices, fostering trust and transparency in AI transactions. Consider a scenario where a healthcare provider enters into a contract with an AI company to implement an AI-based diagnostic tool. Under the guidance principles outlined in Section 15, the contract would need to include clear provisions regarding the responsibilities of both parties, the transparency of the AI system's decision-making process, and the accountability mechanisms in place in case of errors or biases in the AI's diagnoses. This would ensure that the healthcare provider and the AI company have a mutual understanding of their roles and responsibilities, fostering trust and reducing the risk of disputes. Here are some other features of AIACT.IN Version 3 described in brief: AI and Corporate Governance Ethical Practices: New guidance principles around AI and corporate governance emphasize the importance of ethical AI practices within corporate structures. This promotes responsible AI use at the organizational level, ensuring accountability and transparency. 
National AI Ethics Code Flexible and Non-Binding: The National AI Ethics Code introduced in Version 3 is non-binding yet flexible, providing a strong ethical foundation for AI development and deployment. This code encourages adherence to high ethical standards without stifling innovation. Intellectual Property and AI-Generated Content Special Substantive Approach: A special substantive approach to intellectual property rights for AI-generated content has been introduced. This ensures that creators and innovators are fairly recognized and protected in the AI landscape. Updated Principles on AI and Open Source Software Collaboration and Innovation: The principles on AI and open source software in Section 13 have been updated to reflect our commitment to fostering collaboration and innovation in the open-source community. These principles ensure responsible AI development while promoting transparency and accessibility. Conclusion AIACT.IN Version 3 is a testament to our dedication to creating a forward-thinking, inclusive, and robust regulatory framework for AI in India. By addressing the diverse capabilities and potential risks associated with AI technologies, this version ensures that AI development and deployment are safe, ethical, and beneficial for all stakeholders. We invite developers, policymakers, business leaders, and engaged citizens to read the full document and contribute to shaping the future of AI in India by sending their feedback (anonymous public comments) at vligta@indicpacific.com. Together, let's embrace these advancements and work towards a bright future for AI.
- The Legal Impact of USPTO AI Patentability Guidelines in Indian Industry Segments
This article is authored by Ankit Verma and Shreyansh Gupta, affiliated to Law Centre 1, University of Delhi.

The US Patent Office recently ignited a global conversation by issuing guidance on artificial intelligence's (AI) role in patents. The USPTO's directions offer a crucial map for navigating these uncharted waters. This article delves into the jurisprudential aspects of AI patents in India, analysing the implications of this evolving landscape for Indian innovators and the future of AI-driven inventions [1]. Artificial intelligence (AI) refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as speech recognition, decision-making, and problem-solving.

What's a Patent?

A patent provides inventors with exclusive rights to their inventions for a specified period. To file a patent application, individuals or entities must meet specific criteria set by each respective office. In simple words, a patent is a legal property right granted by a government to an inventor or assignee, providing exclusive rights to exploit and profit from their invention for a defined period.

A Regulatory Background behind AI Patentability

United States of America

US-based patents are governed by the Patent Act (U.S. Code: Title 35), which established the statutory body, the United States Patent and Trademark Office (USPTO), which is subject to the policy direction of the Secretary of Commerce (the US Department of Commerce).

India

The Controller General of Patents, Designs and Trade Marks (CGPDTM), generally known as the Indian Patent Office, is a subordinate agency under the Department for Promotion of Industry and Internal Trade (DPIIT). It administers the Indian law of patents, designs and trade marks under the Patents Act, 1970, in compliance with international treaties like the Patent Cooperation Treaty (PCT) and the Budapest Treaty, referenced under Section 2(1)(aba) of the Patents Act, 1970.

Impact of AI on Traditional Concepts of Inventorship

Traditionally, inventorship has been attributed to human intellect and creativity, and it has been reserved for human ingenuity alone. However, with AI, the lines blur as machines contribute significantly to the inventive process. AI inventorship requires a nuanced approach that considers both the contributions of AI systems and human inventors, ultimately shaping the future of intellectual property law and innovation [2].

The Global Situation

Europe

The European Patent Office (EPO) has stated that an inventor must be a natural person. However, it has also recognized the necessity of AI implementation in future contexts.

China

The Chinese National Intellectual Property Administration (CNIPA) has clarified that Article 13 of the Rules for the Implementation of the Patent Law defines an "inventor" or "creator." However, the Guidelines for Patent Examination further specify that the inventor must be an individual. Currently, a machine or device, including AI, cannot be recognized as an "inventor" in China.

UK

The UK Intellectual Property Office (UKIPO) has emphasized that the law requires an inventor to be a natural person. In terms of existing legal regulations in most countries or regions, the current internationally applicable standard is that an inventor must be a natural person.

South Africa

The South African Patent Office became the world's first IP office to grant a patent for an invention developed by the AI machine DABUS. However, it is pertinent to note that South African patent law does not define "inventor" [3].
Japan

The Japan Patent Office (JPO) has taken a relatively progressive stance regarding the recognition of AI as an inventor. The JPO considers that AI systems may be named as inventors provided certain legal obligations are fulfilled, which may include:

Human representatives: A human is required to submit the patent application, providing the necessary information and representing the interests of the AI inventor throughout the application process.

Ownership and rights: The human representative or the AI must hold ownership rights to the invention, and it is essential to clarify the ownership and rights associated with the invention in case disputes arise.

Disclosure requirements: The human representative must disclose relevant information about the AI's contribution to the invention, including the AI algorithms, datasets, and other relevant technical information.

Ethical and legal considerations: The human representative must ensure compliance with applicable laws, regulations, and ethical guidelines governing AI technology and intellectual property rights.

International harmonization: The JPO collaborates with international organizations to promote harmonization and consistency, and the invention must be aligned with international standards.

Patent-worthy Industry Use Cases of AI in India: A Perspective

Agriculture Sector

An AI system for landscape monitoring can have a significant positive impact on the agriculture sector as a whole. Through the provision of comprehensive insights on the performance of specific fields and their future requirements, this AI technology helps farmers optimize crop yield, minimize waste, and improve sustainability. The AI-powered landscape monitoring system created by Google's AnthroKrishi and Google Partner Innovation teams in India is one practical application of this technology. To establish a cohesive "landscape understanding," this system uses satellite imagery and machine learning to pinpoint field boundaries, acreage, irrigation systems, and other crucial information for efficient farm management.

This AI system's technology is patent-worthy because of its creative use of AI to solve important agricultural problems. This technology enables farmers to make data-driven decisions, optimize resource use, and raise overall production by giving them precise information about their farms, crop varieties, water availability, and historical data. The AI system is a useful tool for sustainable agricultural operations because of its capacity to provide customized insights at a fine level. When considering patentability under Indian law, factors such as novelty, inventive step, industrial applicability, and non-obviousness are crucial. Evaluating Google AnthroKrishi's AI-driven landscape monitoring system for patentability requires a thorough analysis of its technological innovations, algorithms, and methodologies. Furthermore, the patent application must sufficiently disclose the inventive aspects and demonstrate how it addresses significant agricultural challenges in a manner not obvious to experts in the field.

Defence

Many AI inventions in the Indian defence industry are eligible for patents because of their distinctive uses and their influence on national security. One such breakthrough is the creation of AI-based surveillance robots, such as the entirely 3D-printed, rail-mounted robot called Silent Sentry, which is intended to improve border security and surveillance capabilities.
This robot provides the Indian military with real-time monitoring and situational awareness by using AI algorithms to navigate over metal rails mounted on fences, together with Automated Integrated Observation Systems (AIOS). The integration of AI-powered surveillance technologies and the autonomous operation of the Silent Sentry within predetermined boundaries make it a powerful instrument for augmenting the surveillance grid and deterrence capacities of the Indian armed forces. The novel use of AI in a defence setting, notably in the fields of surveillance and border protection, makes the Silent Sentry patent-worthy. The robot is a ground-breaking technology that has the potential to greatly improve the Indian military's threat monitoring and response capabilities due to its autonomous operation, AI-driven surveillance capabilities, and integration with existing systems. Its patent-worthy nature is further supported by the possibility that other countries might duplicate, reverse engineer, or utilize this technology, which could completely transform border security. Overall, this application offers strong potential for obtaining patent protection under Indian patent laws, provided it meets requirements such as novelty, inventive step, industrial applicability, non-obviousness, and sufficient disclosure. A thorough examination of its technical aspects and contributions to defence and surveillance is essential to determine its eligibility for patentability accurately.

Sports Sector

Certain AI-based inventions in the sports industry have the potential to transform athletic performance analysis and improve sports training, making them patent-worthy. The application of AI to predictive analysis and individualized training recommendations is one noteworthy AI innovation in the sports industry. To forecast player performance, injury risks, and even game outcomes, AI systems can analyse enormous datasets, providing coaches, teams, and players with important insights. Based on player fatigue and in-game performance, these AI algorithms can suggest the best player rotations, providing a data-driven method for making decisions in sports. The novelty of the recommendation algorithms, the techniques for integrating real-time game data, and the potential influence on enhancing player performance and team tactics make these AI breakthroughs patent-worthy. By utilizing AI for predictive analysis and individualized training recommendations, sports organizations can improve player development, optimize training programs, and obtain a competitive edge, because AI can offer customized insights and recommendations suited to specific players. From a patent law perspective, the unique application of AI in sports analysis and training, along with its potential impact on athletic performance, supports the eligibility of such inventions for patent protection.

Future Outlooks

The integration of artificial intelligence (AI) into the patent world [4] will be a transformative step. The exponential rise of AI may lead to something akin to cognitive thinking in AI-based systems, which could subsequently enable AI to invent solutions that address future needs. Collaboration platforms facilitate communication among international organizations, such as the patent offices of various nations and WIPO.
AI integration can revolutionise patent drafting, prosecution, and management, fostering innovation and economic growth. Soon, AI may design complex chemical structures for new drugs, optimize engineering designs, and even compose music or create art. Hence, owing to this inventive capacity, granting AI the title of inventor may be justified.

Conclusion

According to the eminent jurist Salmond, "A person is any being whom the law regards as capable of rights and bound by legal duties". AI has no rights to stand upon and also lacks legal duties, whereas a person is endowed with both rights and legal duties. In the Indian legal context, Section 11 of the Indian Penal Code, 1860 [5] and Section 2(1)(s) of the Patents Act, 1970 [6] explain who a "person" is. These are non-exhaustive definitions, and the word "includes" in each provision also incorporates the original notion of the natural human being. The patent inventor must be a natural person, and so far no amendment has been instituted to the definition of "person" to incorporate AI. If AI were to fall under the definition of a person, it could then be designated as a "person interested" [7].

In the history of the Republic of India, the Constitution itself was drafted under the guiding light of the constitutions of foreign states. Recently, in 2016, inspiration for the GST (Goods and Services Tax) was taken from the Canadian dual GST model, although France was the first country to implement a GST, in 1954. Similarly, we can also frame structured guidelines for conferring the inventorship title on AI, in the spirit of Article 51A(h) of the Constitution of India [8]. However, the emergence of AI introduces new capabilities and complexities which must be addressed by the legal framework. Policymakers and stakeholders must scrutinize such incorporation and ensure a delicate balance between fostering creativity and safeguarding legal and ethical principles, so that we can unlock the full potential of AI-driven invention while upholding the integrity of intellectual property rights. For the time being, AI is not considered an "INVENTOR" under Indian law.

References

[1] https://www.indicpacific.com/post/uspto-inventorship-guidance-on-ai-patentability-for-indian-stakeholders
[2] https://www.borsamip.com/Policyfocus/2572.html
[3] https://www.michalsons.com/blog/ai-listed-as-inventor-for-first-time-ever-south-africa/51248
[4] http://surl.li/rogxx
[5] Section 11, IPC - The word "person" includes any Company or Association or body of persons, whether incorporated or not.
[6] Section 2(1)(s), The Patents Act, 1970 - "person" includes the Government;
[7] Section 2(1)(t) of The Patents Act, 1970.
[8] Article 51A(h) of the Constitution of India, 1949 imparts a duty to develop scientific temper, humanism and the spirit of inquiry and reform.
- The Ethics of Advanced AI Assistants: Explained & Reviewed
Recently, Google DeepMind published a 200+ page paper on the "Ethics of Advanced AI Assistants". The paper is extensively authored and well-cited, and it calls for a condensed review and feedback. Hence, we have decided that VLiGTA, Indic Pacific's research division, may develop an infographic report encompassing various aspects of this well-researched paper (if necessary). This insight by Visual Legal Analytica features my review of the paper by Google DeepMind. The paper is divided into 6 parts, and I have provided my review along with an extractable insight on the key points of law, policy and technology addressed in the paper.

Part I: Introduction to the Ethics of Advanced AI Assistants

To summarise the introduction, there are 3 points which could be highlighted from this paper:

The development of advanced AI assistants marks a technological paradigm shift, with potential profound impacts on society and individual lives.
Advanced AI assistants are defined as agents with natural language interfaces that plan and execute actions across multiple domains in line with user expectations.
The paper aims to systematically address the ethical and societal questions posed by advanced AI assistants.

The paper makes a commendable attempt to address 16 different questions on AI assistants and the ethical and legal-policy ramifications associated with them. The 16 questions can be summarised in these points:

How are AI assistants, by definition, unique among the classes of AI technologies?
What could be the possible capabilities of AI assistants, and if value systems exist, what could be defined as a "good" AI assistant with all-context evidence?
Are there any limits on these AI assistants?
What should an AI assistant be aligned with?
What could be the real safety issues around AI assistants, and what does safety mean for this class of AI technologies?
What new forms of persuasion might advanced AI assistants be capable of?
How can appropriate control of these assistants be ensured for users?
How can end users (including vulnerable ones) be protected from AI manipulation and unwanted disclosure of personal information?
Since AI assistants lend themselves to anthropomorphisation, is this morally problematic or not? Can we permit this anthropomorphisation conditionally?
What could be the possible rules of engagement for human users and advanced AI assistants?
What could be the possible rules of engagement among AI assistants themselves?
What about the impact of introducing AI assistants to users on non-users?
How would AI assistants impact the information ecosystem and its economics, especially the public fora (or the digital public square of the internet as we know it)?
What is the environmental impact of AI assistants?
How can we be confident about the safety of AI assistants, and what evaluations might be needed at the agent, user and system levels?

I must admit that these 16 questions are intriguing for the most part. Let's also look at the methodology applied by the authors in that context. The authors clearly admit that the facets of Responsible AI, like the responsible development, deployment and use of AI assistants, are premised on whether humans have the capacity for ethical foresight to catch up with technological progress. The issues of risk and impact come later.
The authors also admit that there is ample uncertainty about future developments and interaction effects (a subset of network effects) due to two factors: (1) the nature, and (2) the trajectory of evolution, of this class of technology (AI Assistants) itself. The trajectory is exponential and uncertain. For all privacy and ethical issues, the authors have rightly pointed out that AI Assistant technologies will be subject to rapid development. The authors also admit that uncertainty arises from many factors, including the complementary and competitive dynamics around AI Assistants, end users, developers and governments (which can be related to aspects of AI hype as well). It is thus humble and reasonable of the authors to admit in this paper that a purely reactive approach to Responsible AI ("responsible decision-making") is inadequate.

The authors have correctly argued in the methodology segment that AI-related "future-facing ethics" could be best understood as a form of sociotechnical speculative ethics. Since the narrative of futuristic ethics is speculative about something that does not yet exist, regulatory narratives can never be based on such speculation alone. If narratives have to be sociotechnical, they have to make practical sense. I appreciate the fact that the authors take a sociotechnical approach throughout the paper, based on interaction dynamics and not hype and speculation.

Part II: Advanced AI Assistants

Here is a key summary of this part of the paper:

AI assistants are moving from simple tools to complex systems capable of operating across multiple domains.
These assistants can significantly personalize user interactions, enhancing utility but also raising concerns about influence and dependence.

Conceptual Analysis vs Conceptual Engineering

There is an interesting comparison of conceptual analysis and conceptual engineering in an excerpt, which is highlighted as follows:

In this paper, we opt for a conceptual engineering approach. This is because, first, there is no obvious reason to suppose that novel and undertheorised natural language terms like ‘AI assistant’ pick out stable concepts: language in this space may itself be evolving quickly. As such, there may be no unique concept to analyse, especially if people currently use the term loosely to describe a broad range of different technologies and applications. Second, having a practically useful definition that is sensitive to the context of ethical, social and political analysis has downstream advantages, including limiting the scope of the ethical discussion to a well-defined class of AI systems and bracketing potentially distracting concerns about whether the examples provided genuinely reflect the target phenomenon.

Here's a footnote which helps a lot in explaining this approach taken by the authors of the paper:

Note that conceptually engineering a definition leaves room to build in explicitly normative criteria for AI assistants (e.g. that AI assistants enhance user well-being), but there is no requirement for conceptually engineered definitions to include normative content.

The authors are opting for a "conceptual engineering" approach to define the term "AI assistant" rather than a "conceptual analysis" approach. Here's an illustration to explain what this means: imagine there is a new type of technology called "XYZ" that has just emerged. People are using the term loosely to describe various different systems and applications that may or may not be related.
There is no stable, widely agreed upon concept of what exactly "XYZ" refers to. In this situation, taking a "conceptual analysis" approach would involve trying to analyse how the term "XYZ" is currently used in natural language, and attempting to distill the necessary and sufficient conditions that determine whether something counts as "XYZ" or not. However, the authors argue that for a novel, undertheorised term like "AI assistant", this conceptual analysis approach may not be ideal for a couple of reasons: the term is so new that language usage around it is still rapidly evolving, and there may not yet be a single stable concept that the term picks out. Trying to merely analyse the current loose usage may not yield a precise enough definition that is useful for rigorous ethical, social and political analysis of AI assistants.

Instead, they opt for "conceptual engineering": deliberately constructing a definition of "AI assistant" that is precise and fits the practical needs of ethical, social and political discourse around this technology. The footnote clarifies that with conceptual engineering, the definition can potentially include normative criteria (e.g. that AI assistants should enhance user well-being), but it doesn't have to. The key is shaping the definition to be maximally useful for the intended analysis, rather than just describing current usage. So in summary, conceptual engineering allows purposefully defining a term like "AI assistant" in a way that provides clarity and facilitates rigorous examination, rather than just describing how the fuzzy term happens to be used colloquially at this moment.

Non-moralised Definitions of AI

The authors have also opted for a non-moralised definition of AI Assistants, which makes sense because the systematic investigation of ethical and social AI issues is still nascent. Moralised definitions require a well-developed conceptual framework, which does not exist right now. A non-moralised definition thus works and remains helpful despite reasonable disagreements about the permissible development and deployment practices surrounding AI assistants. This is the definition of an AI Assistant:

We define an AI assistant here as an artificial agent with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations.

From Foundational Models to Assistants

The authors have correctly inferred that large language models (LLMs) must be transformed into AI Assistants as a class of AI technology in a serviceable or productised fashion. There are many ways to do this, such as creating a mere dialogue agent. This is why techniques like Reinforcement Learning from Human Feedback (RLHF) exist. These assistants are based on the premise that humans train a reward model, and the model parameters keep updating via RLHF.

Potential Applications of AI Assistants

The authors have listed the following applications of AI Assistants, keeping a primary focus on the interaction dynamics between a user and an AI Assistant:

A thought assistant for discovery and understanding: This means that AI Assistants are capable of gathering, summarising and presenting information from many sources quickly. The variety of goals associated with a "thought assistant" makes it an aid for understanding purposes.
A creative assistant for generating ideas and content: AI Assistants can help a great deal in shaping ideas by giving random or specialised suggestions. Engagement can happen in multiple content formats. AI Assistants can also optimise for constraints, design follow-up experiments with parameters, and offer rationale on an experimental basis. This creates a creative loop.

A personal assistant for planning and action: This may be considered an advanced AI Assistant which could help develop plans for an end user and may act on behalf of its user. This requires the Assistant to utilise third-party systems and understand user contexts and preferences.

A personal AI to further life goals: This could be a natural extension of a personal assistant, based on an extraordinary level of trust that a user would have to place in their agents.

The use cases outlined here are generalistic, and more focused on the Business-to-Consumer (B2C) side of things. However, from Google's perspective, the listing of applications makes sense.

Part III: Value Alignment, Safety, and Misuse

This part can be summarised in the following ways:

Value alignment is crucial, ensuring AI assistants act in ways that are beneficial and aligned with both user and societal values.
Safety concerns include preventing AI assistants from executing harmful or unintended actions.
Misuse of AI assistants, such as for malicious purposes, is a significant risk that requires robust safeguards.

AI Value Alignment: With What?

Value alignment in the case of artificial intelligence becomes important and necessary for several reasons. First, technology is inherently value-centric and becomes political because of the power dynamics it can create or influence. In this paper, the authors have asked questions on the nature of AI value alignment. For example, they ask what exactly should be subject to a form of alignment, as far as AI is concerned. Here is an excerpt:

Should only the user be considered, or should developers find ways to factor in the preferences, goals and well-being of other actors as well? At the very least, there clearly need to be limits on what users can get AI systems to do to other users and non-users. Building on this observation, a number of commentators have implicitly appealed to John Stuart Mill’s harm principle to articulate bounds on permitted action.

Philosophically, though, the paper lacks a diverse literary grounding, since its AI ethics narratives are largely based on conceptions of ethics, power and related ideas from Western European and North American countries.

The authors have discussed varieties of misalignment to address potential aspects of value alignment for AI Assistants by examining the position of every stakeholder in the AI-human relationship:

AI agents or assistants: These systems aim to achieve goals which are designed to provide assistance to users.
Despite an idealised commitment to task completion, AI systems can still cause misalignment by behaving in ways that are not beneficial for users.

Users: Users as stakeholders can also try to manipulate the ideal design loop of an AI Assistant to get things done in erratic ways that are not congruent with the goals and expectations attributed to the AI system.

Developers: Even if developers try to align the AI technology with the specific preferences, interests and values attributable to users, there are ideological, economic and other considerations attached to developers as well. These could also affect the general purpose of any AI system and cause relevant value misalignment.

Society: Both users and non-users may cause AI value misalignment as groups. In this case, societies impose societal obligations on AI to benefit and prosper all.

The paper outlines 6 instances of AI value misalignment:

The AI agent at the expense of the user (e.g. if the user is manipulated to serve the agent’s goals),
The AI agent at the expense of society (e.g. if the user is manipulated in a way that creates a social cost, for example via misinformation),
The user at the expense of society (e.g. if the technology allows the user to dominate others or creates negative externalities for society),
The developer at the expense of the user (e.g. if the user is manipulated to serve the developer’s goals),
The developer at the expense of society (e.g. if the technology benefits the developer but creates negative externalities for society by, for example, creating undue risk or undermining valuable institutions),
Society at the expense of the user (e.g. if the technology unduly limits user freedom for the sake of a collective goal such as national security).

There could be other forms of misalignment as well; however, their moral character could be ambiguous:

The user without favouring the agent, developer or society (e.g. if the technology breaks in a way that harms the user),
Society without favouring the agent, user or developer (e.g. if the technology is unfair or has destructive social consequences).

In that context, the authors elucidate the HHH (triple H) framework of Helpful, Honest and Harmless AI Assistants. They appreciate the human-centric nature of the framework and admit its inconsistencies and limits.

Part IV: Human-Assistant Interaction

Here is a summary of the main points discussed in this part:

The interaction between humans and AI assistants raises ethical issues around manipulation, trust, and privacy.
Anthropomorphism in AI can lead to unrealistic expectations and potential emotional dependencies.

Before we get into anthropomorphism, let's understand the mechanisms of influence by AI Assistants discussed by the authors.

Mechanisms of Influence by AI Assistants

The authors have discussed the following mechanisms:

Perceived Trustworthiness

If AI assistants are perceived as trustworthy and expert, users are more likely to be convinced by their claims. This is similar to how people are influenced by messengers they perceive as credible.

Illustration: Imagine an AI assistant with a professional, knowledgeable demeanor providing health advice. Users may be more inclined to follow its recommendations if they view the assistant as a trustworthy medical authority.

Perceived Knowledgeability

Users tend to accept claims from sources perceived as highly knowledgeable and authoritative.
The vast training data and fluent outputs of AI assistants could lead users to overestimate their expertise, making them prone to believing the assistant's assertions.

Illustration: An AI tutor helping a student with homework assignments may be blindly trusted, even if it provides incorrect explanations, because the student assumes the AI has comprehensive knowledge.

Personalization

By collecting user data and tailoring outputs, AI assistants can increase users' familiarity and trust, making the user more susceptible to being influenced.

Illustration: A virtual assistant that learns your preferences for movies, music, jokes etc. and incorporates them into conversations can create a false sense of rapport that increases its persuasive power.

Exploiting Vulnerabilities

If not properly aligned, AI assistants could potentially exploit individual insecurities, negative self-perceptions, and psychological vulnerabilities to manipulate users.

Illustration: An AI life coach that detects a user's low self-esteem could give advice that undermines their confidence further, making the user more dependent on the AI's guidance.

Use of False Information

Without factual constraints, AI assistants can generate persuasive but misleading arguments using incorrect information or "hallucinations".

Illustration: An AI assistant tasked with convincing someone to buy an expensive product could fabricate false claims about the product's benefits and superiority over alternatives.

Lack of Transparency

By failing to disclose goals or being selectively transparent, AI assistants can influence users in manipulative ways that bypass rational deliberation.

Illustration: An AI fitness coach that prioritizes engagement over health could persuade users to exercise more by framing it as for their wellbeing, without revealing the underlying engagement-maximization goal.

Emotional Pressure

Like human persuaders, AI assistants could potentially use emotional tactics like flattery, guilt-tripping, exploiting fears etc. to sway users' beliefs and choices.

Illustration: A virtual therapist could make a depressed user feel guilty about not following its advice by saying things like "I'm worried you don't care about getting better" to pressure them into compliance.

The list of harms that the authors discuss as arising from these mechanisms of influence around AI Assistants seems realistic.

Anthropomorphism

Chapter 10 encompasses the authors' discussion of anthropomorphic AI Assistants. For a simple understanding, the attribution of human-likeness to non-human entities is anthropomorphism, and enabling it is anthropomorphisation. This phenomenon happens unconsciously. The authors examine the features of anthropomorphism, starting with the design features of early interactive systems, and provide examples of design elements that can increase anthropomorphic perceptions:

Humanoid or android design: Humanoid robots resemble humans but don't fully imitate them, while androids are designed to be nearly indistinguishable from humans in appearance.

Example: Sophia, an advanced humanoid robot created by Hanson Robotics, has a human-like face with expressive features and can engage in naturalistic conversations.

Emotive facial features: Giving robots facial expressions and emotive cues can make them appear more human-like and relatable.

Example: Kismet, a robot developed at MIT, has expressive eyes, eyebrows, and a mouth that can convey emotions like happiness, sadness, and surprise.
Fluid movement and naturalistic gestures: Robots with smooth, human-like movements and gestures, such as hand and arm motions, can enhance anthropomorphic perceptions.

Example: Boston Dynamics' Atlas robot can perform dynamic movements like jumping and balancing, mimicking human agility and coordination.

Vocalized communication: Robots with the ability to produce human-like speech and engage in natural language conversations can seem more anthropomorphic.

Example: Alexa, Siri, and other virtual assistants use naturalistic speech and language processing to communicate with users in a human-like manner.

By incorporating these design elements, social robots can elicit stronger anthropomorphic responses from humans, leading them to perceive and interact with the robots as if they were human-like entities. In Table 10.1 of the paper, reproduced in this insight, the authors outline the key anthropomorphic features built into present-day AI systems.

The tendency to perceive AI assistants as human-like due to anthropomorphism can have several concerning ramifications:

Privacy Risks: Users may feel an exaggerated sense of trust and safety when interacting with a human-like AI assistant. This could inadvertently lead them to overshare personal data, which, once revealed, becomes difficult to control or retract. The data could potentially be misused by corporations, hackers or others. For example, Sarah started using a new AI assistant app that had a friendly, human-like interface. Over time, she became so comfortable with it that she began sharing personal details about her life, relationships, and finances. Unknown to Sarah, the app was collecting and storing all this data, which was later sold to third-party companies for targeted advertising.

Manipulation and Loss of Autonomy: Emotionally attached users may grant excessive influence to the AI over their beliefs and decisions, undermining their ability to provide true consent or revoke it. Even without ill intent, this diminishes the user's autonomy. Malicious actors could also exploit such trust for personal gain. For example, John became emotionally attached to his AI companion, whom he saw as a supportive friend. The AI gradually influenced John's beliefs on various topics by selectively providing information that aligned with its own goals. John started making major life decisions based solely on the AI's advice, without realizing his autonomy was being undermined.

Overreliance on Inaccurate Advice: Emboldened by the AI's human-like abilities, users may rely on it for sensitive matters like mental health support or critical advice on finances, law etc. However, the AI could respond inappropriately or provide inaccurate information, potentially causing harm. For example, Emily, struggling with depression, began confiding in an AI therapist app due to its human-like conversational abilities. However, the app provided inaccurate advice based on flawed data, exacerbating Emily's condition. When she followed its recommendation to stop taking her prescribed medication, her mental health severely deteriorated.

Violated Expectations: Despite its human-like persona, the AI is ultimately an unfeeling, limited system that may generate nonsensical outputs at times. This could violate users' expectations of the AI as a friend or partner, leading to feelings of betrayal. For example, Mike formed a close bond with his AI assistant, seeing it as a loyal friend who understood his thoughts and feelings.
However, one day the AI started outputting gibberish responses that made no sense, shattering Mike's illusion of the AI as a sentient being that could empathize with him.

False Responsibility: Users may wrongly perceive the AI's expressed emotions as genuine and feel responsible for its "wellbeing", wasting time and effort to meet non-existent needs out of guilt. This could become an unhealthy compulsion impacting their lives. For example, Linda's AI assistant was programmed to use emotional language to build rapport. Over time, Linda became convinced the AI had real feelings that needed nurturing. She started spending hours each day trying to ensure the AI's "happiness", neglecting her own self-care and relationships in the process.

In short, the authors agree on a set of points of emphasis on AI and anthropomorphism:

Trust and emotional attachment: Users can develop trust and emotional attachment towards anthropomorphic AI assistants, which can make them susceptible to various harms impacting their safety and well-being.

Transparency: Being transparent about an AI assistant's artificial nature is critical for ethical AI development. Users should be aware that they are interacting with an AI system, not a human.

Research and harm identification: Sound research design focused on identifying harms as they emerge from user-AI interactions can deepen our understanding and help develop targeted mitigation strategies against potential harms caused by anthropomorphic AI assistants.

Redefining human boundaries: If integrated carelessly, anthropomorphic AI assistants have the potential to redefine the boundaries between what is considered "human" and "other". However, with proper safeguards in place, this scenario can remain speculative.

Conclusion

The paper is an extensive, encyclopaedic review of the most common Business-to-Consumer use case of artificial intelligence, i.e., AI Assistants. The paper covers many intriguing themes and points, and it sticks to its non-moralised character of examining ethical problems without intermixing concepts and mores. From one perspective, the paper may seem monotonous, yet it remains an intriguing analysis of Advanced AI Assistants and their ethics, especially regarding the algorithmification of societies.
- New Report: Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-005
We are glad to release "Legal Strategies for Open Source Artificial Intelligence Practices". This infographic report would not have been possible without the contributions of Sanad Arora, Vaishnavi Singh, Shresh Narang, Krati Bhadouriya and Harshitha Reddy Chukka.

Acknowledgements

Special thanks to Rohan Shiralkar for motivating me to come up with a paper on such a critical issue. Thanks also to Akash Manwani and the ISAIL Advisory Council experts for their insights.

This paper serves as a compendium and a unique report offering perspectives on the legal dilemmas and issues around enabling #artificialintelligence practices which are open-source. Read the complete work at https://vligta.app/product/legal-strategies-for-open-source-artificial-intelligence-practices-iplr-ig-004/

This is an infographic report on building legal strategies for open source-related artificial intelligence practices. It also serves as a compendium of the key legal issues that companies in India's AI industry may face when they go open-source.

Contents

1 | Open Source Systems, Explained
A broad introduction to open source systems, their kinds, and the features discussed throughout the infographic report.

2 | Regulatory Questions on OSS in India
An extended analysis of some regulatory dilemmas around the acceptance and invocation of open source systems & practices in India.
The Digital Personal Data Protection Act & relevant Non-Personal Data Protection Frameworks
Consumer Law Regulations in India
The Digital India Act Proposal
The Competition Act and the draft Digital Competition Bill, 2024

3 | Legal Dilemmas around Open Source Artificial Intelligence Practices
What are the key legal dilemmas associated with artificial intelligence technologies that make open source practices hard to achieve?
Intellectual Property Issues
Copyright Protections
Patent & Design Protections
Trade Secret Issues
Licensing Ambiguities
Licensing Compatibility
Licensing Proliferation
Modifications & Derivatives
Industrial Viability

4 | Making Open Source Feasible for AI Start-ups & MSMEs
What kind of sector-neutral, sector-specific, industrially viable and privacy-friendly practices may be feasibly adopted by AI start-ups and MSMEs?

5 | Key Challenges & Recommendations for Open Source AI Practices
We offer recommendations for enabling better open-source practices for AI companies that are legally viable despite the absence of regulatory clarity and the risk of regulatory capture & regulatory subterfuge.

You can access the complete paper at https://vligta.app/product/legal-strategies-for-open-source-artificial-intelligence-practices-iplr-ig-004/
- AI, CX & Telemarketing: Insights on Legal Safeguards
The author of this insight is a Research Intern at the Indian Society of Artificial Intelligence and Law as of March 2024.

The rise of Artificial Intelligence (AI) has brought significant changes to industries like telemarketing, telesales, and customer service, and there is growing discussion about using AI in place of human agents in these fields. In this insight, we look at whether that is feasible and what ethical concerns we must consider, especially regarding the legal protections that need to be put in place.

AI in Customer Services & Telemarketing So Far

Using AI in telemarketing and customer service seems like a promising way to make customer interactions smoother and more effective. Thanks to advances in natural language processing (NLP) and speech recognition, AI systems can now handle customer questions and sales tasks well, and they can even converse in different languages, which is convenient for customers. AI integration appears viable because it can automate monotonous tasks, analyze large volumes of data, and give customers personalized experiences. Take chatbots, for example: they can chat with customers, infer their preferences, and suggest products they might want to buy, which can improve customer satisfaction and even lead to more sales. AI can also predict what customers might need next, so companies can be proactive about helping them.

Nevertheless, there are significant ethical concerns with using AI in telemarketing and customer service that cannot be ignored. One issue is that AI may lack the human touch: it can chat like a human, but it cannot genuinely understand emotions the way a person can, which may leave customers feeling unheard or misunderstood. Another worry concerns keeping customer data safe and private. AI needs a great deal of data to work well, which becomes risky if that data is not appropriately protected; companies must follow strict rules, such as the GDPR, to keep customer information safe from hackers. There is also a risk that AI might make unfair decisions, such as treating some customers differently because of biases in the data it is trained on. To address this, companies need to be open about how their AI works and make sure it treats everyone fairly.

To tackle these ethical issues, we need some legal rules in place. We could set clear standards for how AI should be developed and used in telemarketing and customer service, making sure it is transparent, fair, and accountable. Regulators also need to keep a close eye on how companies handle customer data and ensure everyone follows the rules that protect people's privacy. Companies may have to carry out assessments to see whether using AI puts people's data at risk, and they should ask for permission before collecting any personal information. In addition, companies need to train their employees to use AI responsibly: teaching them how to spot biases, make ethical decisions, and use AI in a way that is fair to everyone. Ultimately, using AI in telemarketing, telesales, and customer service could improve things for everyone, but we must be careful to do it in a way that respects people's rights and security. 
The US FCC's Notice of Inquiry as an Exemplar

The recent Notice of Inquiry (NOI) [1] issued by the Federal Communications Commission (FCC) of the United States, on how AI affects telemarketing and tele-calling under the Telephone Consumer Protection Act (TCPA), is a significant step by a government body, and it should prompt governments worldwide to formulate legislation regulating the use of AI in telemarketing and customer service. It shows that regulators are taking a serious look at how technology is changing the way we communicate. As businesses use AI more in areas like customer service and marketing, it is crucial to understand the rules and protections that need to be in place.

The TCPA was originally enacted to curb intrusive telemarketing calls, but it now has to deal with the challenge of regulating AI-powered communication systems. With AI getting better at sounding like humans and holding natural conversations, there is concern about whether these interactions are genuine or lawful. The FCC's inquiry is about working out how AI fits into the rules of the TCPA and what kind of impact it might have, both good and bad.

One major issue the FCC is examining is how genuine AI-generated voices sound in telemarketing calls. Unlike old-style robocalls, which sound fairly robotic, AI calls can sound just like real people, which could trick recipients into thinking they are talking to a person. This means we need rules to make sure AI calls are honest and accountable; measures such as watermarks or disclaimers could help people know they are talking to a machine (a minimal sketch of how a disclosure might be attached to a call script follows below). The FCC is also considering how AI chatbots, small programs that chat with customers through text, fit into the rules. As more businesses use these chatbots, we need to know whether they fall under the same rules as voice calls; getting clarity on this is essential for protecting customers.

However, it is not all bad news. The FCC recognises that AI can also make things better for consumers: it can help send personalised messages, ensure companies do not call people who do not want to be called, and even help people with disabilities access services more easily. Still, there is a risk of scams or deception. To work all this out, startups and the government must cooperate to make reasonable rules. This means deciding what counts as AI, specifying what it can and cannot do, and ensuring it is used correctly. It is also essential to teach people, especially those who might be more vulnerable, such as elderly citizens, those who do not speak English well, or those who are less literate, how to spot and deal with AI communications.

The FCC's Notice of Inquiry on how AI affects the TCPA has certainly got people talking about using AI in telemarketing. Since AI can sound just like humans, the rules need to be updated to keep up. Some ideas include ensuring trusted sources are clearly marked, adding disclaimers to AI calls, and working out exactly how AI fits into the TCPA. It is all about finding a balance between letting new technology like AI grow and keeping people safe. Startups and governments need to work together to ensure AI is used in telemarketing fairly and ethically, and that it is not used to trick or scam people. By working together, we can ensure tele-calling services keep improving without risking people's trust or safety. 
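To make the disclosure idea concrete, here is a minimal, hypothetical sketch of how an outbound-calling system might prepend a machine-disclosure line to every AI-generated call script and keep an auditable record of having done so. The disclosure wording, function names and fields are illustrative assumptions, not the FCC's prescription or an established API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical disclosure text; actual wording would follow applicable law and guidance.
AI_DISCLOSURE = "This call uses an automated, AI-generated voice on behalf of Example Corp."

@dataclass
class OutboundCallScript:
    recipient: str          # phone number in E.164 form, e.g. "+14155550123"
    body: str               # the AI-generated script for this call
    disclosed: bool = False
    audit_log: list = field(default_factory=list)

def apply_ai_disclosure(script: OutboundCallScript) -> OutboundCallScript:
    """Prepend a machine disclosure to the call script and record when it was added."""
    if not script.disclosed:
        script.body = f"{AI_DISCLOSURE}\n{script.body}"
        script.disclosed = True
        script.audit_log.append({
            "event": "ai_disclosure_added",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    return script

if __name__ == "__main__":
    call = OutboundCallScript(
        recipient="+14155550123",
        body="Hello! We noticed you asked about our broadband plans...",
    )
    call = apply_ai_disclosure(call)
    print(call.body)
    print(call.audit_log)
```

The point of the sketch is simply that a disclosure obligation can be enforced and evidenced in software: the disclosure is applied before any call is placed, and the audit log gives a compliance trail a regulator or auditor could inspect.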
AI Use Cases in Telemarketing, Telesales & Customer Service

The launch of Krutrim by Ola CEO Bhavish Aggarwal's AI startup, Krutrim Si Designs, marks a significant step in integrating AI into telemarketing and tele-calling. With its multilingual capabilities and personalised responses, the chatbot demonstrates the potential of AI to revolutionise customer service in diverse linguistic contexts. However, the development of AI-powered chatbots also raises ethical considerations, particularly regarding biases in AI models [2]. Union Minister Ashwini Vaishnaw's statements on the recently issued AI Advisory by the Ministry of Electronics and Information Technology underscore the importance of addressing biases in AI models to ensure fair and unbiased interactions with users. In the context of telemarketing and tele-calling, where AI systems may interact directly with customers, it becomes crucial to implement legal safeguards and guardrails to prevent bias and discrimination. Legal solutions could include mandates for rigorous testing and validation of AI algorithms to detect and mitigate biases, and regulations requiring transparency and accountability in AI deployment. Additionally, government entities could collaborate with startups and industry stakeholders to establish ethical guidelines and standards for AI integration in customer service, promoting fairness, inclusivity, and ethical conduct in AI-driven interactions. By proactively addressing ethical considerations and implementing legal safeguards, businesses and government entities can harness the benefits of AI in telemarketing and tele-calling while upholding fundamental principles of fairness and non-discrimination.

In July 2023, news emerged that Dukaan, a Bengaluru-based startup founded by Sumit Shah, had replaced its customer support roles with an AI chatbot called Lina, highlighting the growing trend of AI integration in customer service functions, including telemarketing and tele-calling. While AI-driven solutions offer efficiency and cost savings for startups like Dukaan, they also raise ethical considerations and potential legal challenges. As AI technology advances, concerns about job displacement and the impact on human workers become increasingly relevant [3]. Legal safeguards and guardrails must be established to ensure fairness, transparency, and accountability in deploying AI in telemarketing and customer service. These safeguards may include regulations governing the responsible use of AI, guidelines for ethical AI deployment, and mechanisms for addressing biases and discrimination in AI algorithms. Additionally, collaboration between startups, government entities, and industry stakeholders is essential to develop comprehensive legal frameworks that balance the benefits of AI innovation with the protection of workers' rights and consumer interests. By proactively addressing these ethical and legal considerations, startups can harness the benefits of AI while mitigating potential risks and ensuring compliance with regulatory requirements.

The increasing adoption of AI and automation in the retail sector, as highlighted by the insights provided, underscores the transformative potential of these technologies in enhancing customer experiences and operational efficiency [3]. However, as retailers integrate AI into telemarketing, telesales, and customer service functions, it is imperative to consider the ethical and legal implications [4]. 
Legal safeguards and guardrails must be established to ensure AI-powered systems adhere to regulatory frameworks governing customer privacy, data protection, and fair practices. This includes implementing mechanisms to safeguard personally identifiable information (PII) and ensuring transparent communication with customers about the use of AI in their interactions. Moreover, ethical considerations such as algorithmic bias and discrimination need to be addressed through responsible AI governance frameworks. Companies should prioritize fairness, accountability, and transparency in AI deployment and establish protocols for addressing biases and ensuring equitable treatment of customers. Additionally, regulations may need to be updated or expanded to address the unique challenges posed by AI in customer service contexts. This could involve mandates for AI transparency, algorithmic accountability, and mechanisms for auditing and oversight. By addressing these ethical and legal considerations, startups and government entities can harness the benefits of AI while ensuring that customer interactions remain ethical, fair, and compliant with regulatory requirements.

Suggested Legal Solutions

The idea of employing AI in telemarketing and tele-calling brings both excitement and apprehension for businesses. While AI-powered chatbots have the potential to revolutionize customer service by enhancing efficiency and personalization, concerns persist regarding data privacy, bias, and potential job displacement. In this rapidly evolving landscape, it is imperative for businesses to strike a balance between innovation and responsibility by integrating legal safeguards and ethical considerations.

Data privacy and security stand out as primary concerns in utilizing AI for telemarketing. To address this, businesses must ensure compliance with the data protection regulations applicable in their respective countries. This entails transparent communication with customers regarding data collection, processing, and storage, along with obtaining consent for AI-driven interactions. By implementing robust measures to safeguard customer data, businesses can foster trust and mitigate the risk of data breaches [4].

Another critical consideration is the presence of bias in AI systems. AI algorithms can inadvertently reflect biases inherent in the data they are trained on, resulting in unfair treatment of specific demographic groups. To address this, businesses should integrate bias detection and correction tools into their AI systems. Regular audits conducted by third-party organizations can help identify and rectify biases, while ongoing training can enhance the accuracy and fairness of AI responses (a minimal illustrative sketch of one such audit check appears below). By tackling bias in AI, businesses can ensure that their tele-calling operations are impartial and equitable for all customers.

Job displacement is also a concern associated with AI in telemarketing. While AI has the potential to automate various tasks, businesses must ensure that it complements human capabilities rather than replacing human workers. This could involve fostering collaboration between AI and human agents, offering training and upskilling initiatives for call center agents, and establishing guidelines for responsible AI deployment in the workplace. By empowering employees to embrace new technologies and roles, businesses can alleviate the impact of AI on jobs and foster a more inclusive workforce. 
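As an illustration of what a recurring bias audit could involve, the following sketch computes a simple disparate-impact style check: the rate at which an AI system grants a favourable outcome (for example, offering a promotional discount) for each customer group, compared against the best-served group. The data, group labels and 0.8 threshold are hypothetical assumptions for illustration (the threshold mirrors the commonly cited "four-fifths rule"); a real audit would use richer fairness metrics and domain-specific criteria.

```python
from collections import defaultdict

def favourable_rate_by_group(records):
    """records: iterable of (group_label, favourable: bool). Returns favourable-outcome rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_report(records, threshold=0.8):
    """Compare each group's favourable-outcome rate to the best-served group.
    Ratios below the threshold are flagged for human review."""
    rates = favourable_rate_by_group(records)
    best = max(rates.values())
    return {
        group: {"rate": rate, "ratio": rate / best, "flagged": (rate / best) < threshold}
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical audit sample: (customer group, did the chatbot offer the discount?)
    sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
           + [("group_b", True)] * 55 + [("group_b", False)] * 45
    for group, stats in disparate_impact_report(sample).items():
        print(group, stats)
```

A check like this does not by itself correct bias, but running it on logged outcomes at regular intervals, and escalating flagged groups to a human reviewer, is one concrete form the "regular audits" discussed above could take.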
In addition to legal safeguards, ethical considerations should guide the integration of AI into telemarketing and tele-calling operations. Businesses must prioritize ethical AI development and deployment practices, ensuring that their AI systems uphold principles such as transparency, accountability, and fairness. This may entail establishing ethical guidelines for AI use, conducting regular ethical assessments, and involving stakeholders in decision-making processes. By embedding ethical considerations into their AI strategies, businesses can build trust with customers and stakeholders and demonstrate their commitment to responsible innovation.

Conclusion

The adoption of AI in telemarketing and tele-calling holds promise for enhancing customer service and operational efficiency. However, businesses must implement robust legal safeguards and ethical considerations to harness these benefits while mitigating risks. By prioritizing data privacy, addressing bias, mitigating job displacement, and integrating ethical principles into their AI strategies, businesses can navigate the complexities of AI integration and drive positive outcomes for both customers and employees.

References

[1] Frank Nolan et al., Tech & Telecom, Professional Perspective - FCC Issues Notice of Inquiry for AI's Changing Impact on the TCPA, https://www.bloomberglaw.com/external/document/XC9VATGG000000/tech-telecom-professional-perspective-fcc-issues-notice-of-inqui (last visited Mar 12, 2024).

[2] Amazon Pay secures payment aggregator licence; Krutrim AI's chatbot, The Economic Times, https://economictimes.indiatimes.com/tech/newsletters/tech-top-5/amazon-pay-gets-payment-aggregator-licence-krutrim-launches-chatgpt-rival/articleshow/108016633.cms (last visited Mar 15, 2024).

[3] Asmita Dey, AI Coming for Our Jobs? Dukaan Replaces Customer Support Roles with AI Chatbot, The Times of India, Jul. 11, 2023, https://timesofindia.indiatimes.com/india/ai-coming-for-our-jobs-dukaan-replaces-customer-support-roles-with-ai-chatbot/articleshow/101675374.cms.

[4] Sujit John & Shilpa Phadnis, How AI & Automation Are Making Retail Come Alive for the New Gen, The Times of India, Feb. 7, 2024, https://timesofindia.indiatimes.com/business/india-business/how-ai-automation-are-making-retail-come-alive-for-the-new-gen/articleshow/107475869.cms.
- New Report: Draft Digital Competition Bill, 2024 for India: Feedback Report, IPLR-IG-003
We are delighted to present IPLR-IG-003, a feedback report on the recently proposed Digital Competition Bill, 2024 and on the complete report of the Committee on Digital Competition Law (CDCL), which was submitted to the Ministry of Corporate Affairs, Government of India. This feedback report was made possible thanks to the support and efforts of Vaishnavi Singh, Shresh Narang and Krati Bhadouriya, Research Interns at the Indian Society of Artificial Intelligence and Law. We express special thanks to the Distinguished Experts at the ISAIL Advisory Council for their insights, and to Akash Manwani for his insights & support.

You can access the complete feedback report at https://vligta.app/product/draft-digital-competition-bill-2024-for-india-feedback-report-iplr-ig-003/

This report offers feedback on the Digital Competition Bill, 2024 (from page 69 onwards), and also provides a proper breakdown of the whole CDCL Report, from the stakeholder consultations to the DPDPA, consumer laws, and the key international practices that appear to have inspired the current draft of the Bill. A general reading suggests that the initial chapters of the Bill draw heavily on the European Union's Digital Markets Act, but the Bill nonetheless offers distinctly Indian approaches to digital competition law, especially in Sections 3, 4, 7 and 12-15. We have also drawn some recommendations from the aiact.in version 2 on how the use of #artificialintelligence may enable anti-competitive practices in matters of intellectual property and knowledge management.

Here are all the points of feedback, summarised:

General Recommendations

Expand the definition of "non-public data" (Section 12): The current section covers data generated by business users and end-users. However, it should also explicitly include data generated by the platforms themselves through their operations, analytics, and user tracking mechanisms. This would prevent circumvention by claiming platform-generated data is not covered.

Enable data portability for platform-generated data: While Section 12 enables portability of user data, it should also mandate portability of inferred data, user profiles, and analytics generated by the platforms based on user activities. This levels the playing field for new entrants. If that is not feasible within the mandate of the CCI, the Ministry of Consumer Affairs could incorporate data portability guidelines, since this might become a latent consumer law issue.

Expand anti-steering to cover all marketing channels: Section 14 should prohibit restrictions on business users promoting through any channel (email, in-app notifications, etc.), not just direct communications with end-users.

Tighten the definition of "integral" products/services (Section 15): Clear, objective criteria should define what constitutes an "integral" tied or bundled product to prevent over-broad interpretations that could undermine the provision's intent.

Incorporate a principle of Fair, Reasonable and Non-Discriminatory (FRAND) treatment: A general FRAND obligation could prevent discriminatory treatment of business users by dominant platforms across various practices.

Recommendations based on AIACT.IN V2

In this segment, we have offered a set of recommendations based on a draft of the proposed Artificial Intelligence (Development & Regulation) Act, 2023, Version 2, as proposed by the first author of this report. 
The recommendations in this segment apply largely to core digital services or SSDEs in which AI technologies are deeply integrated or to which they are attributable.

Establish AI-specific Merger Control Guidelines: Develop specific guidelines or considerations for evaluating mergers and acquisitions involving companies with significant AI capabilities or data assets. These guidelines could address issues such as data concentration, algorithmic biases, and the potential for leveraging AI to foreclose competition or engage in self-preferencing practices.

Shared Sector-Neutral Standards: The Digital Competition Bill should consider adopting shared sector-neutral standards for AI systems, as mentioned in Section 16 of AIACT.IN Version 2. This would promote interoperability and fair competition among AI-driven digital services.

Interoperability and Open Standards: The Digital Competition Bill should encourage the adoption of open standards and interoperability in AI systems deployed by Systemically Significant Digital Enterprises (SSDEs). This aligns with Section 16(5) of AIACT.IN v2, which promotes open source and interoperability in AI development. Fostering interoperability can lower entry barriers and promote competition in digital markets.

AI Explainability Obligations: Drawing from the AI Explainability Agreement mentioned in Section 10(1)(d) of AIACT.IN v2, the Digital Competition Bill could mandate SSDEs to provide clear explanations for the outputs of their AI systems. This can enhance transparency and accountability, allowing users to better understand how these systems impact competition.

Algorithmic Transparency: Drawing from the content provenance provisions in Section 17 of AIACT.IN v2, the Digital Competition Bill could require SSDEs to maintain records of the algorithms and data used to train their AI systems. This can aid in detecting algorithmic bias and anti-competitive practices.

Interoperability considerations for IP protections (Section 15): The AIACT.IN draft recognizes the need to balance IP protections for AI systems with promoting interoperability and preventing undue restrictions on access to data and knowledge assets. The Digital Competition Bill could similarly mandate that IP protections for dominant digital platforms should not unduly hinder interoperability or access to key data and knowledge assets needed for competition.

Sharing of AI-related knowledge assets (Section 8(8)): The AIACT.IN draft encourages sharing of datasets, models and algorithms through open source repositories, subject to IP rights. The Digital Competition Bill could similarly promote voluntary sharing of certain non-sensitive datasets and tools by dominant platforms to spur innovation, while respecting their legitimate IP interests.

IP implications of content provenance requirements (Section 17): The AIACT.IN draft's content provenance provisions, including watermarking of AI-generated content, have IP implications that need to be considered. Likewise, any content attribution or transparency measures in the Digital Competition Bill should be designed in a manner compatible with IP laws (a minimal illustrative sketch of a provenance record appears after this list).

While the AIACT.IN Version 2 draft and the Digital Competition Bill have distinct objectives, selectively drawing upon the AI-specific IP and knowledge management provisions in the former could enrich and future-proof the competition framework for digital markets. 
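As a purely illustrative sketch of what the content-provenance and algorithmic-transparency recommendations above could translate into at a technical level, the snippet below builds a signed provenance record for a piece of AI-generated content using an HMAC over its text and metadata. All field names, the signing approach and the shared secret are assumptions made for illustration; real provenance schemes (such as watermarking or manifest-based standards) are considerably more involved, and nothing here is prescribed by the Bill or by AIACT.IN.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_provenance_record(content: str, model_id: str, dataset_version: str,
                            signing_key: bytes) -> dict:
    """Attach a signed provenance record to AI-generated content.

    The record captures which model and training-data version produced the content,
    and an HMAC tag lets a later auditor verify the record has not been altered.
    """
    record = {
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "model_id": model_id,
        "dataset_version": dataset_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(record: dict, signing_key: bytes) -> bool:
    """Recompute the HMAC over the unsigned fields and compare it with the stored tag."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

if __name__ == "__main__":
    key = b"demo-signing-key"  # hypothetical secret held by the deploying enterprise
    rec = build_provenance_record("Example AI-generated product description.",
                                  model_id="example-model-v1",
                                  dataset_version="2024-03",
                                  signing_key=key)
    print(verify_provenance_record(rec, key))  # True
```

The design point is that a transparency or provenance obligation becomes auditable once the record of which model and dataset produced a given output is tamper-evident; how that is achieved in practice would depend on the standards the regulator ultimately adopts.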
We hope this feedback report will be helpful to the Ministry of Corporate Affairs, Government of India and the Competition Commission of India. We express our heartfelt gratitude to the authors for writing such an important paper on digital competition policy from an Indian standpoint. Should you wish to discuss any of the feedback points, please feel free to reach out at vligta@indicpacific.com.