

90 items found

  • AI & AdTech: Examining the Role of Intermediaries

    The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of March 2024. In today's digital age, advertising has undergone a significant transformation with the integration of Artificial Intelligence (AI). AI has changed the game for businesses, transforming how they connect with their target audiences, manage their advertising budgets, and sharpen their marketing strategies on social media platforms. This insight delves into the impact of AI on advertising budgets on social media platforms, and scrutinises the pivotal role that intermediaries and third parties play in shaping and optimising AI-driven advertising campaigns.

    The Evolution of Advertising in the Digital Era

    Advertising has evolved from traditional methods to digital platforms, with social media becoming a prominent channel for businesses to connect with their customers. The vast user base and engagement levels on platforms like Facebook, Instagram, Twitter, and LinkedIn have made them ideal spaces for targeted advertising. However, managing advertising budgets effectively on these platforms can be challenging without the right tools and strategies. AI has emerged as a game-changer in the advertising landscape, offering advanced capabilities for data analysis, audience targeting, ad personalization, and performance optimization. By harnessing AI algorithms and machine learning models, businesses can make data-driven decisions that maximize the impact of their advertising campaigns while minimizing costs. As advertisers flock to social media platforms, with their diverse audience demographics and sophisticated targeting options, efficient budget management becomes paramount.

    Dynamic Budget Allocation

    AI gives advertisers the ability to adjust budgets on the fly based on how well their ads are performing. If an ad is hitting the bullseye, more money flows toward it; if something isn't quite clicking, spending is scaled back. This dynamic approach ensures that every penny spent on advertising is a wise investment, maximizing returns. Predictive analytics, powered by AI, takes the guesswork out of budget planning: by crunching numbers from past campaigns and spotting market trends, AI algorithms predict how ads will perform in the future, helping businesses plan their budgets with precision. (A minimal code sketch of these ideas appears later in this item.) While AI brings substantial benefits to budget management in social media advertising, businesses may still face challenges that can shake up their budget strategies. These challenges include:

    Ad Fraud and Click Fraud

    Ad fraud remains a significant concern in digital advertising, with malicious actors engaging in click fraud to artificially inflate ad engagement metrics. Businesses need to implement robust, AI-powered fraud detection mechanisms to identify and mitigate fraudulent activities that can drain advertising budgets without delivering genuine results.

    Budget Overruns

    Advertisers risk exceeding their budgets if they lack effective monitoring and optimization strategies. AI tools offer real-time insights into how ads are performing and make automatic adjustments to keep spending within planned limits, avoiding unexpected costs and ensuring efficient campaign management.

    Competitive Bidding

    In highly competitive social media ad spaces, bidding wars can escalate costs and strain advertisers' budgets. AI-powered bidding strategies are invaluable in such scenarios: they optimize bid prices by weighing factors like target audience, ad relevance, and the likelihood of conversion, so that businesses achieve cost-effective results even in fiercely competitive environments.

    The Role of Intermediaries and Third Parties in AI-Driven Advertising

    Intermediaries and third parties are pivotal players in AI-driven advertising, helping businesses make the most of AI technologies and improve their advertising strategies on social media platforms. These entities offer specialized knowledge, tools, and resources that empower businesses to use AI effectively in targeted advertising campaigns. In simpler terms, they act as valuable partners, helping companies navigate the complex landscape of AI-powered advertising and reach their advertising goals through smart, targeted campaigns.

    A major advantage of combining AI technologies with third-party data is the ability to improve customer targeting through precise segmentation and personalized experiences. Intermediaries play a crucial role in helping businesses enrich their audience segments with third-party data, allowing highly personalized customer interactions across different channels. Through the strategic use of AI algorithms and third-party data, advertisers can pinpoint specific characteristics of their audience and enhance personalization at scale, producing more tailored and effective marketing that resonates with individual customers and boosts engagement and satisfaction.

    AI's predictive modeling is a powerful tool for businesses to understand audience intent and focus on those with a higher likelihood of converting. By examining patterns in data, demographics, past behaviors, and characteristics, AI helps identify valuable customers and build lookalike audiences. Intermediaries play a key role in implementing predictive analytics strategies, aiding businesses in refining their marketing methods, boosting return on investment (ROI), and making well-informed decisions about brand partnerships and customer experiences.

    Establishing a well-thought-out strategy for managing relationships with third-party middlemen is essential for enhancing performance, creating value, and minimizing risks within the broader business network. Businesses frequently collaborate with these intermediaries for functions such as logistics, sales, distribution, marketing, and human resources. These middlemen help companies handle the risks associated with such partnerships, including adhering to regulations, managing financial risks, sustaining operations in tough times, safeguarding reputation, addressing operational disruptions, countering cyber threats, and ensuring alignment with strategic objectives. With a structured plan in place, companies can navigate challenges more effectively, capitalize on opportunities, and foster successful collaborations with their third-party partners.

    Intermediaries also support businesses in meeting regulatory requirements concerning AI use in advertising, ensuring compliance with data privacy regulations and operational resilience standards. They help companies navigate the complexities of third-party dependencies in AI models, offering guidance on protecting data privacy, understanding how AI models function, addressing intellectual property issues, minimizing risks tied to external dependencies, and strengthening operational resilience against potential cyber threats. In simpler terms, intermediaries help businesses stay on the right side of the law and operate securely when using AI in their advertising practices.

    Kinds of Intermediaries and Third Parties

    Ad Agencies and Marketing Firms: Ad agencies and marketing firms act as intermediaries, assisting organizations in navigating the complexities of AI-driven advertising. They offer expertise, resources, and specialized tools to optimize campaigns and enhance ROI.

    Data Analytics Providers: Third-party data analytics providers play a pivotal role in interpreting vast amounts of consumer data. They offer insights that inform advertising strategies, helping organizations refine their targeting and messaging.

    Ethical Considerations in Third-Party Involvement & Mitigation Measures

    While intermediaries and third parties offer valuable services, ethical considerations arise. Issues such as data privacy, transparency, and potential conflicts of interest require careful examination to ensure responsible and ethical advertising practices.

    Comprehensive Budget Planning: Organizations should plan their budgets thoroughly when adopting AI-driven advertising, accounting for the initial investment in AI technologies, ongoing maintenance costs, and potential swings in advertising performance. Proactive budget planning supports financial stability and success in AI-powered campaigns.

    Continuous Monitoring and Adaptation: Regularly monitoring campaigns and adjusting strategies in response to algorithm changes is a proactive way for organizations to optimize their advertising efforts. This adaptability minimizes the impact of uncertainties and keeps campaigns on track.

    Collaboration with Reputable Intermediaries: Choosing trustworthy partners, such as ad agencies, marketing firms, and data analytics providers, ensures organizations receive expert guidance and ethical practices, increasing the likelihood of achieving advertising goals and maintaining a positive reputation in the industry.
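    To make the dynamic budget allocation idea described above concrete, here is a minimal sketch in Python. The allocation rule (spend proportional to conversions per unit of spend) and all names are illustrative assumptions for this insight, not a description of any particular platform's system.

```python
# Illustrative sketch: shift budget toward better-performing ads each
# cycle. The proportional rule below is a simplifying assumption.

def reallocate_budget(total_budget, ads):
    """Split total_budget across ads in proportion to a simple
    performance score: conversions per unit of spend."""
    scores = {}
    for name, stats in ads.items():
        spend = max(stats["spend"], 1e-9)   # guard against division by zero
        scores[name] = stats["conversions"] / spend
    total_score = sum(scores.values())
    if total_score == 0:                    # no signal yet: split evenly
        return {name: total_budget / len(ads) for name in ads}
    return {name: total_budget * s / total_score for name, s in scores.items()}

# Example: ad_a converts twice as efficiently as ad_b, so it receives
# two-thirds of the next cycle's budget.
ads = {
    "ad_a": {"spend": 100.0, "conversions": 10},
    "ad_b": {"spend": 100.0, "conversions": 5},
}
print(reallocate_budget(300.0, ads))  # {'ad_a': 200.0, 'ad_b': 100.0}
```

    In practice, a predictive model of the kind discussed above would replace the raw conversion rate with a forecast of future performance, but the reallocation logic stays the same.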
    Enhancing Data Intermediation for Trusted Digital Agency

    Data intermediaries play a crucial role in making data sharing smooth and trustworthy between individuals and technology platforms. Acting as digital agents, they allow users to make decisions autonomously. To build trust, intermediaries establish reputation mechanisms, obtain third-party verification, and create assurance structures that minimize risks for both intermediaries and rights holders. This approach boosts confidence in interactions between humans and technology in the expanding data ecosystem, ensuring that information can be shared reliably and securely.

    Conclusion

    To conclude, intermediaries and third-party players are crucial to unlocking the full potential of AI for advertising on social media platforms. Their expertise spans audience segmentation, predictive modeling, risk management across extended enterprises, adherence to regulatory standards, bolstering operational resilience, and building trust through data intermediation. Through these contributions, they play a substantial role in the success of AI-driven advertising campaigns.

  • The New York Times vs OpenAI, Explained

    The author of this insight is a Research Intern at the Indian Society of Artificial Intelligence and Law. The New York Times Company has filed a lawsuit against Microsoft Corporation, OpenAI Inc., and various other entities associated with OpenAI, alleging copyright infringement. The lawsuit, filed in the United States District Court for the Southern District of New York, claims that OpenAI's Generative Pre-trained Transformer (GPT) models, including GPT-3 and GPT-4, were trained on vast amounts of copyrighted content from The New York Times without authorisation. This explainer addresses certain facts and aspects of the lawsuit filed by The New York Times against OpenAI and Microsoft.

    Facts about The New York Times Company v. Microsoft Corporation, OpenAI Inc. et al.

    Plaintiff: The New York Times Company
    Defendants: Microsoft Corporation, OpenAI Inc., OpenAI LP, OpenAI GP, LLC, OpenAI LLC, OpenAI OpCo LLC, OpenAI Global LLC, OAI Corporation, OpenAI Holdings, LLC
    Jurisdiction: United States District Court, Southern District of New York

    The court has subject matter jurisdiction as provided under 28 U.S.C. § 1338(a). It has territorial jurisdiction because the defendants, Microsoft Corporation and OpenAI Inc., either themselves or through their subsidiaries and agents, may be found in the district, as provided under 28 U.S.C. § 1400(a). It is also the proper venue under 28 U.S.C. § 1391(b)(2), as a substantial portion of the property at issue (the copyrighted material of The New York Times Company) is situated there.

    Allegations made by The New York Times Company against the defendants, summarised

    The New York Times Company alleges that Microsoft Corporation, OpenAI Inc. et al. used and copied its content without authorisation in the following manner –

    #1 - Defendants reproduced the plaintiff's work without authorisation to train generative AI

    17 U.S.C. § 106(1) entitles the owner of a copyright to reproduce the copyrighted work in copies or phonorecords. The plaintiff alleges that the defendants violated this right, as the defendants' GPT models are based on large language models (hereinafter, LLMs). The pre-training stage of an LLM requires "collecting and storing text content to create training datasets and processing the content through the GPT models"; to this end, the defendants allegedly used Common Crawl, a scraped copy of the internet which contains 16 million records of content from The New York Times Company. The plaintiff alleges that the defendants copied this content without a licence and without providing compensation.

    #2 - The GPT models reproduced derivatives of the copyrighted content of The New York Times Company

    The plaintiff alleges that the defendants' GPT models have memorised the copyrighted content of The New York Times Company and reproduce the memorised content verbatim. The plaintiff attached outputs from GPT-4 highlighting the reproduction of the following articles:

    - "As Thousands of Taxi Drivers Were Trapped in Loans, Top Officials Counted the Money" by Brian M. Rosenthal
    - "How the U.S. Lost Out on iPhone Work" by Charles Duhigg & Keith Bradsher

    #3 - Defendants' GPT models displayed the copyrighted content of The New York Times Company which was behind a paywall

    The plaintiff alleges that the defendants' GPT models displayed the copyrighted content in two ways: (1) by showing copies of content from The New York Times Company which the GPT models have memorised, and (2) by showing search results of content similar to the copyrighted material. The plaint highlights a user's prompt requiring ChatGPT to type out the content of the article "Snow Fall: The Avalanche at Tunnel Creek" verbatim. The plaint also highlights ChatGPT reproducing Pete Wells' review of Guy Fieri's American Kitchen & Bar when prompted by a user.

    #4 - Defendants disseminated current news by retrieving copyrighted material from The New York Times Company

    The plaintiff alleges that the defendants' GPT models use "grounding" techniques: receiving a prompt from the user, retrieving copyrighted content from The New York Times Company over the internet, and then having the LLM stitch together the additional words required to respond to the prompt. (A schematic sketch of this retrieval-based approach appears at the end of this item.) As evidence, the plaint highlights the reproduction of Catherine Porter's article "To Experience Paris Up Close and Personal, Plunge Into a Public Pool"; after reproducing the content, the defendants' GPT model does not provide a link to the website of The New York Times Company. The plaint further highlights ChatGPT reproducing Hurubie Meko's article "The Precarious, Terrifying Hours After a Woman Was Shoved Into a Train".

    Based on these allegations pertaining to unauthorised reproduction of copyrighted content, reproduction of derivatives of copyrighted content, display of copyrighted content that was behind a paywall, and dissemination of current news by retrieving copyrighted material, the plaintiff alleges that the defendants have inflicted the following injuries upon it:

    Count 1: Copyright Infringement against all defendants

    17 U.S.C. § 501(a) holds that anyone who violates the exclusive rights of the copyright owner as provided by sections 106 through 122 is an infringer of the copyright. The New York Times Company alleges that all defendants, through their GPT models, distributed copyrighted material belonging to it and thereby violated its right to reproduce the copyrighted work as recognised by 17 U.S.C. § 106(1). It also alleges that all the defendants violated 17 U.S.C. § 106(1) by storing, processing and reproducing its copyrighted content to train their LLMs, and that the GPT models have memorised the copyrighted content and therefore reproduce it in response to a user's prompt, in further violation of 17 U.S.C. § 106(1).

    Count 2: Vicarious Copyright Infringement against Microsoft Corporation, OpenAI Inc., OpenAI GP, LLC, OpenAI LP, OAI Corporation, OpenAI Holdings LLC, OpenAI Global LLC

    The New York Times Company alleges that the defendant Microsoft Corporation directed, controlled and profited from the violation of its rights.
    The New York Times further alleges that OpenAI Inc., OpenAI GP, LLC, OpenAI LP, OAI Corporation, OpenAI Holdings LLC and OpenAI Global LLC directed, controlled and profited from the copyright infringement of the GPT models.

    Count 3: The New York Times Company alleges that Microsoft Corporation assisted the other defendants in infringing its copyright by:

    - Helping the other defendants build datasets that collect the copyrighted material of The New York Times Company
    - Processing and reproducing the content over which The New York Times Company holds a copyright
    - Providing the computational resources necessary to operate the GPT models

    Count 4: All other defendants are allegedly liable because the actions taken by each of them contribute to the infringement of the copyright of The New York Times Company. The defendants have allegedly:

    - Developed the LLMs which have memorised and reproduce the content over which The New York Times Company holds a copyright
    - Built a training model for the development of the LLMs

    The New York Times Company also alleges that the defendants were fully aware that the GPT models can memorise, reproduce and distribute copyrighted content.

    Count 5: Removal/Alteration of Copyright Management Information against All Defendants

    The plaintiff alleges that the defendants violated 17 U.S.C. § 1202(b)(1) by removing or altering copyright management information such as copyright notices, titles, identifying information and terms of use; the copyrighted material was then used to train the LLMs. The plaintiff further alleges that these acts of removal were done intentionally and knowingly to facilitate infringement of the copyrighted material.

    Count 6: Competition deemed unfair at common law owing to misappropriation of the copyrighted material, against all defendants

    The plaintiff alleges that the defendants copied the content over which the plaintiff holds a copyright and trained their LLMs without the plaintiff's consent; that the defendants removed tags which would indicate the plaintiff's copyright over the content; and that these acts of the defendants have caused monetary loss to The New York Times Company.

    Relief Sought by The New York Times Company

    In light of the allegations made against Microsoft Corporation, OpenAI Inc. et al., the plaintiff seeks the following:

    - Compensation in the form of statutory damages, compensatory damages, disgorgement and other relief permitted by the law of equity
    - An injunction enjoining the defendants from infringing the copyrighted content of The New York Times Company
    - A court order demanding the destruction of GPT models that were built on content over which The New York Times Company holds a copyright
    - Attorney's fees

    Additional Allegations and Context

    Fair Use and Training AI Models: OpenAI has argued that the utilisation of copyrighted material for AI training can be viewed as transformative use, potentially qualifying for protection under the fair use doctrine. This argument is central to the ongoing debate about the extent to which AI can utilise existing copyrighted works to create new, generative content.

    OpenAI's Response to the Lawsuit: OpenAI has publicly responded to the lawsuit, asserting that the case lacks merit and suggesting that The New York Times may have manipulated prompts to generate the replicated content.
    OpenAI has also mentioned its efforts to reduce content replication from its models and highlighted The New York Times' refusal to share examples of this reproduction before filing the lawsuit.

    Impact on AI Research and Development

    The lawsuit raises significant questions about the future of AI research and development, particularly regarding the balance between copyright protection and the necessity for AI models to access a wide range of data to learn and tackle new challenges. OpenAI has stressed the importance of accessing "the enormous aggregate of human knowledge" for effective AI functioning. The case is being closely monitored as it could establish precedents for how AI companies utilise copyrighted content.

    Potential Implications of the Lawsuit

    Precedent-Setting Case: This lawsuit is one of the first instances where a major media organisation is taking legal action against AI companies for copyright infringement. The outcome of this case could establish a legal precedent for how copyrighted content is employed to train AI models.

    Innovation vs. Copyright Protection: The case underscores the tension between fostering innovation in AI and safeguarding the rights of copyright holders. The court's decision could have far-reaching implications for both AI advancement and the protection of intellectual property.

    Conclusion and Next Steps

    The case is currently pending in the United States District Court for the Southern District of New York. The court's rulings on the various counts of copyright infringement, vicarious and contributory copyright infringement, and unfair competition will be pivotal in determining the lawsuit's outcome. The lawsuit might prompt other copyright holders to evaluate how their content is utilised by AI companies and could result in additional legal actions or calls for legislative amendments to address the use of copyrighted material in AI training datasets. Both parties may continue to explore potential solutions, which could include licensing agreements, the development of AI models that do not rely on copyrighted content, or the establishment of industry standards for the ethical utilisation of data in AI.
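    For readers unfamiliar with the "grounding" technique described in allegation #4, here is a schematic Python sketch of retrieval-based generation in general. It is illustrative only: the function names, the placeholder retrieval step, and the prompt format are assumptions, not a description of the defendants' actual systems.

```python
# Schematic sketch of "grounding": retrieve external text for a prompt,
# stitch it into the model input, then generate. All names hypothetical.

def retrieve_web_content(query: str) -> str:
    """Stand-in for a live web-retrieval step that would fetch
    relevant text (e.g. a news article) for the user's query."""
    return f"[text fetched from the web for: {query}]"

def grounded_response(user_prompt: str, generate) -> str:
    context = retrieve_web_content(user_prompt)  # 1. retrieve source text
    stitched = (                                 # 2. stitch it into the input
        "Use the following source text to answer.\n"
        f"Source: {context}\n"
        f"Question: {user_prompt}"
    )
    return generate(stitched)                    # 3. generate the reply

# Toy "model" that just echoes its input, to show the data flow.
print(grounded_response("Summarise today's article", generate=lambda p: p))
```

    The legal significance, as alleged, is that step 1 can pull in copyrighted text and step 3 can reproduce it closely in the output.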

  • Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024

    This is a feedback report developed to offer inputs on a noted paper published by the Economic Advisory Council to the Prime Minister (EAC-PM) of India, entitled "A Complex Adaptive System Framework to Regulate Artificial Intelligence", authored by Sanjeev Sanyal, Pranav Sharma, and Chirag Dudani. You can access the complete feedback report here. This report provides a detailed examination of the EAC-PM paper, delving into the core principles proposed by the authors: instituting guardrails and partitions, ensuring human control, promoting transparency and explainability, establishing distinct accountability, and creating a specialized, agile regulatory body. Through a series of infographics and concise explanations, the report breaks down the intricate concepts of complex adaptivity and its application to AI governance. It offers a fresh perspective on viewing AI systems as complex adaptive systems, highlighting the challenges of traditional regulatory approaches and the need for adaptive, responsive frameworks.

    Key Highlights:

    - In-depth analysis of the EAC-PM paper's recommendations for AI regulation
    - Practical feedback and policy suggestions for each proposed regulatory principle
    - Insights into the unique characteristics of AI systems as complex adaptive systems
    - Exploration of financial markets as a real-world example of complex adaptive systems
    - Recommendations for a balanced approach fostering innovation and responsible AI development

    Whether you are a policymaker, researcher, industry professional, or simply interested in the future of AI governance, this report provides a valuable resource for understanding the complexities involved and the potential solutions offered by a complex adaptive systems approach. Download the "Artificial Intelligence Governance using Complex Adaptivity: Feedback Report" today and gain a comprehensive understanding of this critical topic. Engage with the thought-provoking insights and contribute to the ongoing dialogue on responsible AI development. Stay informed, stay ahead in the era of AI governance.

    About the EAC-PM Paper

    The paper proposes a novel framework to regulate Artificial Intelligence (AI) by viewing it through the lens of a Complex Adaptive System (CAS). Traditional regulatory approaches based on ex-ante impact analysis are inadequate for governing the complex, non-linear and unpredictable nature of AI systems. The paper conducts a comparative analysis of existing AI regulatory approaches across the United States, United Kingdom, European Union, China, and the United Nations, highlighting the gaps and limitations in these frameworks when dealing with AI's CAS characteristics. To effectively regulate AI, the paper recommends a CAS-inspired framework based on five guiding principles:

    - Instituting Guardrails and Partitions: Implement clear boundary conditions to restrict undesirable AI behaviours. Create "partitions" or barriers between distinct AI systems to prevent cascading systemic failures, akin to firebreaks in forests.
    - Ensuring Human Control via Overrides and Authorizations: Mandate manual override mechanisms for human intervention when AI systems behave erratically. Implement multi-factor authentication protocols requiring consensus from multiple credentialed humans before executing high-risk AI actions (a minimal sketch of this idea appears at the end of this item).
    - Transparency and Explainability: Promote open licensing of core AI algorithms for external audits. Mandate standardized "AI factsheets" detailing system development, training data, and known limitations. Conduct periodic mandatory audits for transparency and explainability.
    - Distinct Accountability: Establish predefined liability protocols and standardized incident reporting to ensure accountability for AI-related malfunctions or unintended outcomes. Implement traceability mechanisms throughout the AI technology stack.
    - Specialized, Agile Regulatory Body: Create a dedicated regulatory authority with a broad mandate, expertise, and agility to respond swiftly to emerging AI challenges. Maintain a national registry of AI algorithms for compliance and a repository of unforeseen events.

    The paper draws insights from the regulation of financial markets, which exhibit CAS characteristics with emergent behaviours arising from diverse interacting agents. It highlights regulatory mechanisms like dedicated oversight bodies, transparency requirements, control chokepoints, and personal accountability measures that can inform AI governance.
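    To illustrate the multi-factor authorization principle above, here is a minimal, hypothetical sketch of a quorum gate that blocks a high-risk action until enough credentialed humans approve it. The class and method names are illustrative assumptions, not part of the EAC-PM paper.

```python
# Minimal sketch: a high-risk AI action runs only after a quorum of
# credentialed humans approves it. All names are hypothetical.

class QuorumGate:
    def __init__(self, credentialed_users, required_approvals):
        self.credentialed = set(credentialed_users)
        self.required = required_approvals
        self.approvals = set()

    def approve(self, user):
        if user not in self.credentialed:
            raise PermissionError(f"{user} is not credentialed")
        self.approvals.add(user)

    def execute(self, action):
        # The action stays blocked until consensus is reached.
        if len(self.approvals) < self.required:
            raise RuntimeError("quorum not reached; action blocked")
        return action()

gate = QuorumGate({"alice", "bob", "carol"}, required_approvals=2)
gate.approve("alice")
gate.approve("bob")
gate.execute(lambda: print("high-risk AI action executed"))
```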

  • [Draft] Artificial Intelligence Act for India, Version 2

    The Artificial Intelligence (Development & Regulation) Bill, 2023 (AIACT.IN) Version 2, released on March 14, 2024, builds upon the framework established in Version 1 while introducing several new provisions and amendments. This draft legislation proposed by our Founder, Mr Abhivardhan, aims to promote responsible AI development and deployment in India through a comprehensive regulatory framework. Please note that the draft AIACT.IN (Version 2) is an Open Proposal developed by Mr Abhivardhan and Indic Pacific Legal Research, and is not a draft legislation proposed by any Ministry of the Government of India. You can access and download Version 2 of AIACT.IN by clicking below.

    Key Features of the Artificial Intelligence Act for India [AIACT.IN] Version 2

    - Categorization of AI Systems: Version 2 introduces a detailed categorization of AI systems based on conceptual, technical, commercial, and risk-centric methods of classification. This stratification helps in identifying and regulating AI technologies according to their inherent purpose, technical features, and potential risks.
    - Prohibition of Unintended Risk AI Systems: The development, deployment, and use of unintended risk AI systems, as classified under Section 3, is prohibited in Version 2. This provision aims to mitigate the potential harm caused by AI systems that may emerge from complex interactions and pose unforeseen risks.
    - Sector-Specific Standards for High-Risk AI: Version 2 mandates the development of sector-specific standards for high-risk AI systems in strategic sectors. These standards will address issues such as safety, security, reliability, transparency, accountability, and ethical considerations.
    - Certification and Ethics Code: The IDRC (IndiaAI Development & Regulation Council) is tasked with establishing a voluntary certification scheme for AI systems based on their industry use cases and risk levels. Additionally, an Ethics Code for narrow and medium-risk AI systems is introduced to promote responsible AI development and utilization.
    - Knowledge Management and Decision-Making: Version 2 emphasizes the importance of knowledge management and decision-making processes for high-risk AI systems. The IDRC is required to develop comprehensive model standards in these areas, and entities engaged in the development or deployment of high-risk AI systems must comply with these standards.
    - Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to address the spatial aspects of AI systems. The IDRC is tasked with establishing consultative mechanisms for the identification, protection, and enforcement of intellectual property rights related to AI.

    Comparison with AIACT.IN Version 1

    - Expanded Scope: Version 2 expands upon the regulatory framework established in Version 1, introducing new provisions and amendments to address the evolving landscape of AI development and deployment.
    - Detailed Categorization: While Version 1 provided a basic categorization of AI systems, Version 2 introduces a more comprehensive and nuanced approach to classification based on conceptual, technical, commercial, and risk-centric methods.
    - Sector-Specific Standards: Version 2 places a greater emphasis on the development of sector-specific standards for high-risk AI systems in strategic sectors, compared to the more general approach taken in Version 1.
    - Knowledge Management and Decision-Making: The importance of knowledge management and decision-making processes for high-risk AI systems is highlighted in Version 2, with the IDRC tasked with developing comprehensive model standards in these areas. This aspect was not as prominently featured in Version 1.
    - Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to AI systems, whereas Version 1 did not delve into the specifics of intellectual property protections for AI.

    Detailed Description of the Features of AIACT.IN Version 2

    Significance of Key Section 2 Definitions

    Section 2 of AIACT.IN provides essential definitions that signify the legislative intent of the Act. Some of the key definitions are:

    - Artificial Intelligence: The Act defines AI as an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. This broad definition encompasses various subcategories of technical, commercial, and sectoral nature, as set forth in Section 3.
    - AI-Generated Content: Content, physical or digital, that has been created or significantly modified by an artificial intelligence technology. This includes text, images, audio, and video created through various techniques, subject to the test case or use case of the AI application.
    - Algorithmic Bias: Inherent technical limitations within an AI product, service, or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results. This includes technical limitations that emerge from the design, development, and operational stages of AI.
    - Combinations of Intellectual Property Protections: The integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of AI systems.
    - Content Provenance: The identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history. This includes the source data, models, and algorithms used to generate the content, as well as the individuals or entities involved in its creation, modification, and distribution. (An illustrative sketch of this idea appears at the end of this item.)
    - Data: A representation of information, facts, concepts, opinions, or instructions in a manner suitable for communication, interpretation, or processing by human beings or by automated or augmented means.
    - Data Fiduciary: Any person who alone or in conjunction with other persons determines the purpose and means of processing personal data.
    - Data Portability: The ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary.
    - Data Principal: The individual to whom the personal data relates. In the case of a child or a person with a disability, this includes the parents or lawful guardian acting on their behalf.
    - Data Protection Officer: An individual appointed by a Significant Data Fiduciary under the Digital Personal Data Protection Act, 2023.
    - Digital Office: An office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode.
    - Digital Personal Data: Personal data in digital form.
    - Digital Public Infrastructure (DPI): The underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including digital identity systems, digital payment systems, data exchange platforms, digital registries and databases, and open application programming interfaces (APIs) and standards.
    - Knowledge Asset: Includes intellectual property rights, documented knowledge, tacit knowledge and expertise, organizational processes, customer-related knowledge, knowledge derived from data analysis, and collaborative knowledge.
    - Knowledge Management: The systematic processes and methods employed by organizations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of AI systems.
    - IDRC: The IndiaAI Development & Regulation Council, a statutory and regulatory body established to oversee the development and regulation of AI systems across government bodies, ministries, and departments.
    - Inherent Purpose: The underlying technical objective for which an AI technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the AI technology is intended to perform or achieve.
    - Insurance Policy: Measures and requirements concerning insurance for research and development, production, and implementation of AI technologies.
    - Interoperability Considerations: The technical, legal, and operational factors that enable AI systems to work together seamlessly, exchange information, and operate across different platforms and environments.
    - Open Source Software: Computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.
    - National Registry of Artificial Intelligence Use Cases: A national-level digitized registry of use cases of AI technologies based on their technical & commercial features and inherent purpose, maintained by the Central Government for the purposes of standardization and certification of use cases of AI technologies.

    These definitions provide a clear understanding of the scope and intent of AIACT.IN, ensuring that the Act effectively addresses the complexities and challenges associated with the development and regulation of AI systems in India. A set of frequently asked questions (FAQs) is also addressed in detail.

    Here is how you can participate in the AIACT.IN discourse:

    - Read and understand the document: The first step to participating in the discourse is to read and understand the AIACT.IN Version 2 document. This will give you a clear idea of the proposed regulations and standards for AI development and regulation in India.
    - Identify key areas of interest: Once you have read the document, identify the key areas that are of interest to you or your organization. This could include sections on intellectual property protections, shared sector-neutral standards, content provenance, employment and insurance, or alternative dispute resolution.
    - Provide constructive feedback: Share your feedback on the proposed regulations and standards, highlighting any areas of concern or suggestions for improvement. Be sure to provide constructive feedback that is backed by evidence and data, where possible.
    - Engage in discussions: Participate in discussions with other stakeholders in the AI ecosystem, including industry experts, policymakers, and researchers. This will help you gain a broader perspective on the proposed regulations and standards, and identify areas of consensus and disagreement.
    - Stay informed: Keep up to date with the latest developments in the AI ecosystem, including new regulations, standards, and best practices. This will help you stay engaged in the discourse and ensure that your feedback is relevant and timely.
    - Collaborate with others: Consider collaborating with other stakeholders in the AI ecosystem to develop joint submissions or position papers on the proposed regulations and standards. This will help amplify your voice and increase your impact in the discourse.
    - Participate in consultations: Look out for opportunities to participate in consultations on the proposed regulations and standards. This will give you the opportunity to share your feedback directly with policymakers and regulators, and help shape the final regulations and standards.

    To submit your suggestions to us, write to us at vligta@indicpacific.com. You can also participate in the committee sessions & meetings held by the Indian Society of Artificial Intelligence and Law; to do so, you may contact the Secretariat at executive@isail.co.in.
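    As a purely illustrative aside on the content provenance definition flagged above, the sketch below shows one simple way an origin record could be attached to AI-generated content and later verified. The record fields and hashing scheme are assumptions for illustration; they are not prescribed by AIACT.IN.

```python
# Illustrative sketch of a content provenance record: hash the content,
# note its origin, and verify later. Field names are hypothetical.
import hashlib
import json
import time

def provenance_record(content: str, model: str, creator: str) -> dict:
    return {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "model": model,             # model/algorithm used to generate it
        "creator": creator,         # entity involved in its creation
        "created_at": time.time(),  # when the record was made
    }

def verify(content: str, record: dict) -> bool:
    # Content is unaltered if its hash still matches the stored record.
    return hashlib.sha256(content.encode("utf-8")).hexdigest() == record["sha256"]

text = "An AI-generated paragraph."
record = provenance_record(text, model="example-model-v1", creator="Example Labs")
print(json.dumps(record, indent=2))
print(verify(text, record))              # True
print(verify(text + " edited", record))  # False: the content was modified
```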

  • USPTO Inventorship Guidance on AI Patentability for Indian Stakeholders

    The United States Patent and Trademark Office (USPTO) has recently issued guidance that seeks to clarify the murky waters of AI contributions in the realm of patents, a move that holds significant implications not just for American innovators but also for Indian stakeholders who are deeply entrenched in the global innovation ecosystem. As AI continues to challenge the traditional notions of creativity and inventorship, the USPTO's directions may serve as a beacon for navigating these uncharted territories. Let's see. For Indian researchers, startups, and multinational corporations, understanding and adapting to these guidelines is not just a matter of legal compliance but a strategic imperative that could define their competitive edge in the international market. In this insight, we delve into the nuances of the USPTO's guidance on AI patentability, exploring its potential impact on the Indian landscape of innovation. We examine how these directions might shape the future of AI development in India and what it means for Indian entities to align with global standards while fostering an environment that encourages human ingenuity and protects intellectual property rights. Through this lens, we aim to offer a comprehensive analysis that resonates with the ethos of Indian constitutionalism and the broader aspirations of India's technological advancement.

    The Inventorship Guidance for AI-Assisted Inventions

    This guidance, which went into effect on February 13, 2024, aims to strike a balance between promoting human ingenuity and investment in AI-assisted inventions while not stifling future innovation. We must remember that the guidance referred to the DABUS cases, in which Stephen Thaler's petitions to have an AI declared an inventor were denied. The USPTO's guidance emphasises that AI-assisted inventions are not categorically unpatentable; rather, when AI has contributed to an innovation, the human contribution must be significant enough to qualify for a patent. The guidance provides instructions to examiners and stakeholders on determining the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems.

    The issue of inventorship in patent law for AI-created inventions remains of particular importance to companies that develop and use AI technology. While AI has unquestionably created novel and nonobvious results, the question of whether AI can be an "inventor" under U.S. patent law remains unanswered. The USPTO's guidance reiterates that only a natural person can be an inventor, so AI cannot be listed as one. However, the guidance does not provide a bright-line test for determining whether a person's contribution to an AI-assisted invention is significant enough to qualify as an inventor. The ability to obtain a patent on an invention is a critical means for businesses to protect their intellectual property and maintain a competitive edge. Also, the requirement that an "inventor" be a natural person might not be at odds with the reality of AI-generated inventions. As the conversation around AI inventorship unfolds, companies should be aware of alternative ways to protect their AI-generated inventions, such as trade secrets. The USPTO's guidance on AI patentability is a significant step towards providing clarity to the public and USPTO employees on the patentability of AI-assisted inventions.
    The USPTO has provided examples in the guidance to illustrate its application. Let's understand the examples provided:

    - AI-generated drug discovery: A researcher uses an AI system to analyze a large dataset of chemical compounds and identify potential drug candidates. The AI system suggests a novel compound that the researcher synthesizes and tests, confirming its efficacy. The guidance indicates that the researcher would be considered the inventor, as they made a significant contribution to the conception of the invention by selecting the dataset, designing the AI system, and interpreting the results.
    - AI-generated materials design: A materials scientist uses an AI system to design a new material with specific properties. The AI system suggests a novel material composition, which the scientist then fabricates and tests, confirming its properties. The guidance indicates that the scientist would be considered the inventor, as they made a significant contribution to the conception of the invention by defining the problem, selecting the AI system, and interpreting the results.
    - AI-generated image recognition: A software engineer uses an AI system to develop an image recognition algorithm. The AI system suggests a novel neural network architecture, which the engineer then implements and tests, confirming its performance. The guidance indicates that the engineer would be considered the inventor, as they made a significant contribution to the conception of the invention by defining the problem, selecting the AI system, and implementing the suggested architecture.

    The guidance is open to comments until May 13, 2024, and may change, but in the meantime, inventors seeking patent protection for their AI-assisted inventions should carefully document the human contribution on a claim-by-claim basis, including the technology used, the nature and details of the AI system's design, build, and training, and the steps taken to refine the AI system's outputs.

    Implications for Indian Research Institutions

    The USPTO's guidance on AI patentability could have significant implications for Indian research institutions, which are at the forefront of AI innovation. The recent memorandum of understanding between the USPTO and the Indian Patent Office at Kolkata to cooperate on IP examination and protection could facilitate collaboration and intellectual property sharing between Indian researchers and global partners. This agreement could pave the way for joint research projects, knowledge exchange, and capacity building in the field of AI. Moreover, the growing partnership between the US and India in scientific research could further strengthen collaboration in AI. The US National Science Foundation and Indian science agencies have agreed to launch 35 jointly funded projects in space, defense, and new technologies, including AI. This initiative could encourage higher-education institutions in both countries to collaborate on AI research and development, leading to new discoveries and innovations. However, regulatory bureaucracy and visa processing delays could pose challenges to scientific collaboration between India and the US. To overcome these obstacles, Indian research institutions could assign a designated individual to manage joint programs and projects with US partners, as suggested by Heidi Arola, assistant vice-president for global partnerships and programmes at Purdue University.
    Choosing the right institutional partner with compatible goals is also crucial for successful collaboration.

    Impact on Indian Startups and Entrepreneurs

    The USPTO's guidance on AI patentability presents both challenges and opportunities for Indian startups and entrepreneurs seeking international patents. The guidance emphasises the need for a significant human contribution to the conception or reduction to practice of the invention, which could make it more difficult for AI-focused startups to secure patents. However, it also provides clarity on the patentability of AI-assisted inventions, which could help startups navigate the patent application process more effectively. Clarity in AI patentability could also affect investment and growth in the Indian startup ecosystem: investors may be more likely to fund startups with a clear path to patent protection, leading to increased innovation and economic growth. Moreover, the USPTO's initiatives to increase participation in invention, entrepreneurship, and creativity, such as the Patent Pro Bono Program and the Law School Clinic Certification Program, could provide valuable resources and support to Indian startups and entrepreneurs.

    Relevance for Indian Industry and Multinational Corporations

    Indian industries and multinational corporations operating in India must navigate patent filings in light of the USPTO's guidance on AI patentability. The guidance emphasizes that AI cannot be an inventor, coinventor, or joint inventor, and that only natural persons can be named as inventors in a patent application. This could have significant implications for companies developing AI-based inventions, as they must ensure that human contributors are properly identified and credited. Moreover, the potential need for harmonization of patent laws to facilitate cross-border innovation and protect intellectual property could affect Indian industries and multinational corporations. The USPTO's Intellectual Property Attaché Program, which has offices and IP experts located full-time in New Delhi, could provide valuable assistance to U.S. inventors, businesses, and rights holders in resolving IP issues in the region. However, Indian companies may also need to engage with local IP offices and legal counsel to develop an overall IPR protection strategy and to secure and register patents, trademarks, and copyrights in key foreign markets.

    Understanding Readiness on AI Patentability for India

    As the world continues to focus on AI's potential, Indian regulators may not need to respond to the USPTO's guidance and the broader global discourse on AI inventorship by clarifying the patent eligibility framework for AI-related inventions in India, for now. The reason is obvious: in a recent response in the Rajya Sabha, a Minister of State (MoS) of the Ministry of Commerce and Industry reiterated that AI-generated works, including patents and copyrights, can be protected under the current IPR regime. This statement, while seemingly obvious, holds significance for India's position in the global AI landscape. Under international copyright law, only individuals, groups of individuals, and companies can own the intellectual properties associated with AI. The MoS's statement aligns with this principle, indicating that India is open to nurturing AI innovations within the existing legal framework.
    This position could be interpreted as an invitation for investment and economic opportunities in the AI sector, potentially positioning India as a safe and reasonable hub for AI development. However, it is crucial for governments to carefully observe and address attempts by big companies to promote anti-competitive AI regulations. Creating a separate category of rights for AI-generated works could lead to challenges in compensating for and justifying contributions to the intellectual property, as well as the associated economic ramifications.

    Andrew Ng, a prominent figure in the AI community, has expressed concerns about big companies pushing for anti-competitive AI regulations. He notes that while the conversation around AI has become more sensible, with fears of AI extinction risk fading, some large corporations are still advocating for regulations that could stifle innovation and competition in the AI sector. One of the specific points made by Ng is the ongoing fight to protect open-source AI. Open-source AI refers to the practice of making AI software, algorithms, and models freely available for anyone to use, modify, and distribute. This approach fosters collaboration, accelerates innovation, and democratises access to AI technology. However, some big companies may seek to impose restrictions on open-source AI through regulations, potentially limiting its growth and impact. An example of the importance of open-source AI can be seen in the development of popular AI frameworks like TensorFlow and PyTorch, which have become essential tools for AI researchers and developers worldwide. These open-source projects have enabled rapid progress in AI by allowing researchers to build upon each other's work and share new ideas more easily.

    Furthermore, recent research from the University of Copenhagen suggests that achieving Artificial General Intelligence (AGI) may not be as imminent as some believe. The study argues that current AI advancements are not directly leading to the development of AGI, which is the hypothetical ability of an AI system to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to a human being. This research underscores the importance of maintaining a competitive and innovative AI landscape, as the path to AGI remains uncertain and may require ongoing collaboration and breakthroughs.

    There is another perspective for the Indian ecosystem to consider: R&D and innovation appetite. The insight shared by Amit Sethi, a Professor at the Indian Institute of Technology, Bombay, highlights a significant issue in India's AI landscape. Despite the ongoing AI funding summer in India, the best AI talent in the country is still primarily focused on fine-tuning existing AI models rather than developing cutting-edge AI technologies. This situation poses several challenges for India's AI aspirations. India's AI funding summer, which has seen significant investments in AI startups like Krutrim AI and RagaAI, is yet to produce credible AI use cases. The demand for generative AI services is rising, but the Indian AI ecosystem needs to mature to deliver on this potential. Nandan Nilekani, the visionary behind India's Aadhaar, emphasizes the importance of developing micro-level or smaller AI use cases instead of attempting to create large models like OpenAI. However, the challenge lies in identifying and standardizing AI model weights for smaller, limited-application use cases that can work effectively over the long term.
    India's tech policy on AI, including the IndiaAI initiative, cannot succeed without prioritizing local capabilities. The Indian tech ecosystem must focus on nurturing homegrown companies to create wealth and intellectual property. An American semiconductor company CEO has likewise emphasized that India needs to capitalize on the AI revolution through homegrown companies rather than relying on multinational corporations. Some major Indian companies are developing AI use cases that are becoming knock-offs of, or heavily reliant on, models built by OpenAI, Anthropic, and others. This dependence on external AI models should be avoided to foster genuine innovation in the Indian AI landscape.

    Suggestions for Indian Stakeholders

    Indian stakeholders, including research institutions, startups, and industries, should prepare for possible changes in patent law and international intellectual property norms by:

    - Staying informed about the latest developments in AI patentability, both in India and globally
    - Ensuring that AI-related inventions meet the fundamental legal requirements of novelty, inventive step, and industrial application
    - Focusing on integrating AI features into practical applications to demonstrate a technical contribution or technical effect
    - Providing clear and definitive empirical determinations of technical contributions and technical effects in patent applications
    - Engaging with policymakers and patent offices to advocate for a balanced approach to AI patentability that protects the rights of inventors while fostering innovation

    Conclusion

    Understanding the USPTO's AI patentability guidance is crucial for Indian stakeholders, as it could significantly impact the growth of AI-related inventions in the country. By proactively engaging with global patentability standards and adapting to changes in patent law, Indian stakeholders can support innovation in India's research, startup, and industry sectors. As the world continues to grapple with the challenges and opportunities presented by AI inventorship, India has the potential to emerge as a leader in AI-related patent filings and contribute to the global discourse on AI patentability.

  • The French, Italian and German Compromise on Foundation Models of GenAI

The author is pursuing law studies at National Law University, Odisha and is a former Research Intern at the Indian Society of Artificial Intelligence and Law.

Almost every economy in the world is trying to put guardrails around AI technology, which is developing rapidly and being used by ever more people. Like other economies, the European Union (EU) is trying to lead the world both in developing AI technology and in finding an efficient and effective way to regulate it. In mid-2023, the European Parliament adopted its position on the EU AI Act, a draft law that would impose restrictions on the technology's riskiest uses and that stands as one of the first major attempts to regulate AI. Unlike the United States, which has taken up the challenge only recently, the EU has been working on such a framework for more than two years, and it did so with greater urgency after the release of ChatGPT in 2022.

On 18 November 2023, Germany, France, and Italy reached an agreement on AI regulation and released a joint non-paper countering some of the basic approaches taken by the EU AI Act, suggesting alternatives that they claim would be more feasible and efficient. The joint non-paper underlines that the AI Act must aim to regulate the application of AI and not the technology itself, because the innate risks lie in the former and not in the latter. It highlights some key areas in which the three countries differ from the Parliament's version of the AI Act:

- Fostering innovation while balancing responsible AI adoption within the EU.
- The joint paper pushes for mandatory self-regulation of foundation models and advocates stringent oversight of them, aiming to enhance accountability and transparency in the AI development process.
- While the EU AI Act targets only major AI producers, the joint paper advocates universal adherence, to avoid compromising trust in the security of smaller EU companies.
- Immediate sanctions for defaulters of codes of conduct are excluded, but a future sanction system is proposed.
- The focus is on regulating the application of AI and not the AI technology itself; therefore, the development process of AI models should not be subject to regulation.

What are Foundation Models of AI?

Foundation models, also called general-purpose AI, are AI systems that can be used to conduct a wide range of tasks across various fields, such as understanding language, generating text and images, and conversing in natural language, without major modification or fine-tuning, and they can be put to use through several innovative methods. They are deep learning neural networks that change the approach machine learning takes: data scientists use foundation models as starting points for developing AI instead of starting from scratch, which makes the process faster and more cost-effective, as the sketch below illustrates.
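To make the "starting point" idea concrete, here is a minimal sketch assuming the Hugging Face transformers library; the model name and the two-label task are illustrative examples, not drawn from the article:

```python
# Illustrative sketch: reusing a pretrained foundation model as a starting
# point for a narrow task, instead of training a network from scratch.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base = "distilbert-base-uncased"  # example pretrained model, not prescriptive
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=2  # pretrained weights are reused; only the head is new
)

inputs = tokenizer("This contract clause looks risky.", return_tensors="pt")
outputs = model(**inputs)  # fine-tuning would further update these weights
print(outputs.logits.shape)  # (1, 2): one score per label
```

Because the heavy lifting of language understanding was already learned during pretraining, only a comparatively small amount of task-specific data and compute is needed afterwards, which is precisely why the approach is described as fast and cost-effective.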
The European Union AI Act

The EU AI Act is a comprehensive legal framework governing the sale and use of AI in the EU. It sets consistent standards for AI systems across the Union and tries to address the risks of AI through obligations and standards intended to safeguard the safety and fundamental rights of citizens in the EU and globally. It works as part of a wider legal and policy framework regulating different aspects of the digital economy, alongside the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act, and it moves away from a "one law fixes all" approach to this emerging AI regime.

Risk-based Approach of the AI Act versus the Joint Pact of France, Italy and Germany

One of the key aspects of the AI Act is to regulate foundation models that support a wide range of AI applications. The most prominent AI companies, such as OpenAI, Google DeepMind, and Meta, develop such foundation models. The AI Act aims to regulate these models by running safety tests and apprising governments of their results, in order to ensure accountability and transparency and to mitigate risks. The recent U.K. AI Safety Summit likewise focused on the risks associated with the most advanced foundation models. In essence, it is the technology itself that is regulated in order to manage risk.

The joint non-paper aims to shift this narrative from regulating the technology to regulating the application of the technology. Developers of foundation models would have to publish the kinds of testing done to ensure their models are safe. No sanctions would initially apply to companies that failed to publish this information under the code of conduct, but the non-paper suggests a sanction system could be set up in future. The three countries oppose the "two-tier" approach to foundation model regulation originally proposed by the EU, under which stricter regulations would be imposed on the most capable models expected to have the largest impact; they argue it would kneecap innovation and hamper the growth of AI technologies. German company Aleph Alpha and French company Mistral AI, Europe's most prominent AI companies, oppose this approach to risk management.

The discourse on how regulation should happen, and on adopting an optimal strategy, requires a close look at the pros and cons of each approach, i.e., regulation of the technology (the EU's two-tier approach) and regulation of the applications of the technology (the Franco-German-Italian view).

Regulating the technology itself

Pros:
- Directly regulating the technology establishes uniform and consistent standards which all applications can adopt, giving clarity in compliance.
- Technological regulation can prevent the creation and use of harmful and malicious AI applications.

Cons:
- Strict regulations on the technology can hamper and stifle innovation by limiting the exploration of potentially beneficial applications.
- Technological advancements happen much faster than prescriptive regulations; the very purpose of regulation is defeated if it fails to cover the advanced versions of the technology.

Regulating the application of the technology

Pros:
- Application-specific regulation allows a flexible approach where risk-mitigation rules can be tailored to different applications and their uses, instead of an impractical one-size-fits-all approach.
- Application-based regulation can focus on responsible use of the technology and on accountability measures should potential risks manifest, while still permitting a smooth flow of innovation.

Cons:
- If regulation is focused solely on applications, there remains a risk of ignoring certain aspects of the underlying AI technology, which may leave room for misuse or unintended effects.
Desirably, a balanced approach combining elements of both technology-focused and application-focused regulation can be most effective: curating rules and standards for the technology itself while also framing regulations specific to the potential risks associated with its applications. Still, a case can be made that regulating the application of the technology is the better way forward because, in a world where everyday use of technology has become the norm for many, stifling innovation, which is a direct result of regulating the technology, can be a bad idea. Technological advancement matters because people naturally want their work done easily and quickly, especially as society evolves to draw more people into skilled work. The focus must be on ensuring that the large mass of people benefit from the boons of AI technology while minimising incentives for bad actors through regulation of the use of AI tools: by increasing accountability for its use, and by giving proper guidance on how to use it efficiently and ethically and how to prevent the harms that can arise from uninformed or irresponsible use.

The Model Cards requirement under mandatory self-regulation

The non-paper proposes regulating specific applications rather than foundation models, aligning more closely with a risk-based approach, and it makes the definition of model cards a mandatory element of self-regulation. Foundation model developers would have to define model cards: technical documentation that presents information about trained models in an accessible way, following best practices within the developer community. Defining model cards promotes the principle of 'transparency of AI'. Model cards must include limits on intended uses, potential limitations, biases, and security assessments.

A model card, however, merely informs a user's decision to adopt a system or not. When it comes to assessing transparency, accountability, and responsible AI criteria, many users may find the content highly complex to comprehend given the technical nature of AI applications, and they will not be able to adequately interpret the information on model cards. Model cards are more accessible to developers and researchers with a high level of education in AI, so an imbalance of power arises between developers and users in the understanding of AI. Standardization of the information remains an open problem, and providing a high volume of information in model cards may confuse users; maintaining a balance between transparency and simplicity is crucial. Users may not be aware that model cards exist or may not take the time to review them, especially where AI systems are very complex. The model card requirement may also lack feasibility because there is no scope for external monitoring of its elements, and it is inflexible to the pace at which technology develops: its information can become outdated, stifling innovation by binding new technologies to outdated compliance information. A minimal sketch of what a model card might contain follows.
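Since the non-paper does not prescribe a format, the following is only an illustrative sketch of the kind of fields a model card might carry; every name and value here is invented for the example:

```python
# Illustrative sketch of a model card as structured data. All field names,
# metrics, and values are hypothetical examples, not a prescribed schema.
model_card = {
    "model_name": "example-foundation-model-v1",
    "intended_uses": ["customer-support text summarisation"],
    "out_of_scope_uses": ["medical, legal, or financial advice"],
    "known_limitations": ["quality degrades on non-English input"],
    "bias_evaluations": {"gender_term_disparity": 0.04},  # example metric
    "security_assessments": ["prompt-injection red-teaming, Nov 2023"],
    "training_data_summary": "publicly available web text, described at a high level",
}

def use_is_declared(use_case: str) -> bool:
    """A user-side check: is a proposed use within the card's declared scope?"""
    return use_case in model_card["intended_uses"]

print(use_is_declared("customer-support text summarisation"))  # True
```

The sketch also hints at the critique in the passage above: the card is only as useful as a reader's ability to interpret fields like the bias metric, and nothing in the structure itself enforces external monitoring or keeps the values up to date.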

  • OpenAI's Qualia-Type 'AGI' and Cybersecurity Dilemmas

The author of this insight is pursuing law at National Law University, Odisha and is a former research intern at the Indian Society of Artificial Intelligence and Law.

OpenAI CEO Sam Altman was removed by its Board of Directors for a short spell in November 2023. Along with him, another member of the Board, Greg Brockman, was also removed. Neither OpenAI's spokespersons nor the two men offered any reasons when approached for comment. However, it later came to light that several OpenAI researchers and staff had written a letter to the Board, before the firing, warning of a powerful artificial intelligence discovery that they said could threaten humanity.

OpenAI was initially created as a non-profit organisation whose mission was "to ensure that artificial general intelligence benefits all of humanity."[1] Later, in 2019, it opened a for-profit arm. This was a cause of concern because it was anticipated that the for-profit wing would dilute OpenAI's original mission of developing AI for the benefit of humanity and would instead act for profit, which can often lead to non-adherence to the ethical growth of the technology. Sam Altman and Greg Brockman were in favour of strengthening this wing, while the other four Board members were against giving it too much power and instead wanted to stick to developing AI for human benefit rather than for business goals.

OpenAI stated that Altman had not been consistently candid in his communications with the rest of the Board regarding the development of a long-anticipated breakthrough: the Q* AI model, an Artificial General Intelligence (AGI) model said to surpass all existing AI developments and to achieve tasks and goals far beyond what we currently imagine AI can do. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. It was presumed that Altman had knowledge of advanced developments regarding AGI within OpenAI, and it was reported in the media that he had concealed facts from the Board of Directors, leading to his firing. After considerable debate in the AI community, including protests against his removal by OpenAI employees, Altman's position was restored, and structural changes within OpenAI were suggested, including the involvement of people like Satya Nadella.

The concern, however, is that with its advanced abilities, a Q*-style AGI could be problematic for its opacity. Any presumed form of AGI would naturally work on reinforcement learning, at least on current scientific knowledge. In machine learning (ML), one drawback is the vast amount of data that models require for training: the more complex the model, the more data it requires, and that data may still not be reliable, containing false or missing values or coming from untrustworthy sources. Reinforcement learning largely overcomes the problem of data acquisition: rather than ingesting a curated dataset, the model generates its own experience. Reinforcement learning is a branch of ML that trains a model to reach an optimal solution to a problem by taking decisions on its own (a toy sketch of the idea follows this passage). This feature is what gives AGI its strong capabilities. Combined with vast computing power, it could help such a system predict many future events and phenomena, including fluctuations in stock and investment markets, advanced weather conditions, election outcomes and much more.
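For readers unfamiliar with the paradigm, here is a deliberately tiny sketch of tabular Q-learning on a five-state corridor; every number in it is arbitrary and chosen only to show that the agent learns from rewards it collects by acting, not from a labelled dataset:

```python
# Toy Q-learning: an agent learns to walk right along a 5-state corridor.
# It receives no dataset; it learns from rewards observed while acting.
import random

n_states, actions = 5, [-1, +1]            # actions: step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9                    # learning rate, discount factor

for episode in range(300):
    s = 0
    while s != n_states - 1:               # rightmost state is the goal
        a = random.choice(actions)         # explore randomly; learn greedily
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + best future value
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy: in every non-goal state, step right (+1).
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
```

The same loop structure, scaled up with neural networks in place of the table, underlies the reinforcement learning systems discussed here; the point of the toy is only that no external training data is required.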
Building on such capabilities, it could use its mathematical algorithms and other complex technical elements to foresee how the human mind will think and traverse, thus gaining knowledge of outcomes and taking decisions that can alter and influence many things. One of the biggest apprehensions behind the development of the Q* technology was that it could pose a cybersecurity risk to national governments and their classified data. Can QUALIA crack the Advanced Encryption Standard (AES), the symmetric encryption standard used to protect classified information and documents? How can major cybersecurity breaches affect us? How can we safeguard against them? These are some questions one must consider.

Some Considerations

We depend heavily on encryption for securing our data, and it is easy to assume that the encryption safeguards we rely on keep our data secure. That is not entirely true. As discussed above, technology like Q* may have the ability to break AES as well, accomplishing a feat once considered impossible: breaking modern encryption. The recent LLaMA leak on 4chan suggests that Q* can solve AES-192 and AES-256 encryption using a ciphertext attack. With AES compromised, the entire digital economy could fall apart: government secrets, healthcare data, banking data, and more could be exposed. The NSA has previously shown interest in breaking encryption through its Project Tundra, which is similar to the alleged capabilities of Q*. This raises questions about the ethical implications of such AI advancements in the hands of state and non-state actors.

Recommendations

Standards and Certifications

There needs to be mandatory legislation requiring nations and specific organizations to have minimum cybersecurity standards in place, alongside a self-regulatory set of standards to help organizations develop their cybersecurity measures. States must establish a Computer Incident Response Team, a national Network and Information Systems (NIS) authority and a national NIS strategy. Companies must adopt state-of-the-art security approaches appropriate to the risks posed to their systems. Another element of standards and certification can be a regulation setting standards for electronic identification and trust services for electronic transactions.

Regulation of Encryption Standards

Sensitive data can be protected by ensuring efficient encryption measures. Data must be classified based on its sensitivity and significance: investing in encrypting all types of data equally is unnecessary, while more sensitive data requires a stronger level of encryption with added layers of security. When and how encryption should be applied is a major consideration here. Multi-factor authentication is recommended to add an extra layer of security even if an attacker gains access to encrypted data keys, and best industry practices should be adopted throughout. End-to-end encryption is a best practice for protecting data throughout its entire lifecycle, from creation to storage to transmission. Strong, widely accepted and fully updated encryption algorithms must be used, with periodic checks for upgrade requirements. Regular audits and assessments are necessary, and a supervisory body should ensure these regular checks. A minimal illustration of applying a strong, authenticated encryption standard appears below.
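As a concrete reference point for the "strong and widely accepted algorithms" recommendation, here is a minimal sketch using the Python cryptography library's authenticated AES-256-GCM mode; the sample plaintext and associated data are, of course, placeholders:

```python
# Minimal authenticated encryption sketch with AES-256-GCM.
# AES-GCM both encrypts the data and detects any tampering with it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key; store it securely
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

plaintext = b"example classified record"    # placeholder data
associated = b"record-id:42"                # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated)
recovered = aesgcm.decrypt(nonce, ciphertext, associated)
assert recovered == plaintext
```

The algorithm itself is rarely the weak point; as the passage stresses, the key management, layered access controls, and regular audits around a snippet like this are what actually determine whether the data stays protected.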
Regulating Reinforcement Learning

Regulation of reinforcement learning (RL) by AI must involve establishing guidelines and frameworks to ensure responsible and ethical use. Transparency in the development and deployment of RL algorithms is crucial: RL developers should produce documentation that sets out the algorithms' goals, training data, and decision-making processes, especially where RL is used in critical applications that can affect society. Liability mechanisms must be in place to hold developers and organizations accountable for the actions of RL algorithms, and frameworks must comprehensively define rights and liabilities where RL-based AI systems cause loss or harm. Where personal data is involved, privacy concerns emerge, and measures must be implemented to ensure compliance with data protection regulations and to safeguard user privacy. Since billions of people now use AI tools in various aspects of their lives, basic knowledge of RL technology is necessary: policymakers, developers, and the general public alike must understand the benefits and potential risks associated with RL so that they can make informed choices about its use and create effective policies. Collaborating with international organizations and regulatory bodies to establish consistent global standards for RL is also advisable.

Cybersecurity Insurance

Cyber insurance becomes important with the development of AI because a new set of risks will emerge that traditional insurance policies may not cover: data breaches, property damage, business interruption, or even physical harm to humans. It is quite unpredictable what kinds of risks AGI models may pose. Malicious actors may also misuse these models, trying to steal, corrupt, or manipulate them for their own purposes, and insurance must cover the resulting losses. AI systems may also fail unintentionally due to faulty assumptions, design flaws, or unexpected situations, producing unsafe or undesirable outcomes. This is another area where cyber insurance can help cover the costs and liabilities associated with potential failures and provide guidance and support for preventing them.

Legal Regulation

Existing legal instruments may not be enough to cover and address the risks that will accompany security breaches by AGI. Integrating AI security requirements into existing data protection laws is necessary, and every national parliament should constitute a committee of AI and legal experts to draft stringent laws to prevent cybersecurity breaches by AGI models once they come into use.

References

[1] https://openai.com/about

  • The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023

This is Infographic Report IPLR-IG-001, in which we address the concept and phenomenon of multipolarity in the context of India's geopolitical and policy realities. You can also purchase the report at the VLiGTA App. Here is a sneak peek of the report, along with its table of contents:

1 | Understanding Multipolarity
What multipolarity is, and how this concept has evolved for India in political, economic and social aspects.

2 | The Transformative Role of Multipolarity
How multipolarity as a policy phenomenon (and a geopolitical phenomenon) has changed the way we look at global issues and problems.
- Global Dynamics and Politics: Diversifying Power & Competence Dynamics; Old Multilateralism vs New Multilateralism; Multi-Alignment as the New Normal; Plurilateralism and 'Minilateralism'; The Rise of Specialised Non-State Actors; The World is Post-Ideological
- Crisis and Realism: The Language and Outlook of 'Polycrisis'; Principled Realism; The Weaponization of Everything
- Technology and Modernity: The Penetrable and Irreversible Role of Digital Technologies; The Big Tech and the Red Tech; New Technological & Economic Modernity
- Economic Strategies: Economic and Technological Regionalism; Enabling Strategic Hedging

3 | Emerging Use Cases in a Multi-polar World
A set of proposed use cases in law, policy and economics which could steer the volatility of the multipolar world.
- Policy Development and Implementation: The Role of Civilisation Science to Solve Complex Policy Problems
- Economic and Environmental Strategies: A New Industrial Policy and the Circular Economy; The Bundling & Unbundling of Soft Power
- International Law and Diplomacy: The Rise of Multi-aligned Thinking in International Law; The Standardisation of Modern Diplomacy by Middle Powers at a Micro-level
- Regulatory Trends and Sovereignty: The Rise and Rise of Regulatory Sovereignty & Subterfuge; The Rise of Soft Law & Self-Regulation

You can download the complete infographic report here. Both a compelling phenomenon and a nuanced concept, the multipolar world often eludes easy understanding. In this infographic report, we delve into the multifaceted nature of a multipolar world, shining a light on prevailing trends and tackling complex problems from a structural perspective. Should you find yourself curious or in need of further discussion or help in any project or initiative as a professional or a business, we warmly invite you to reach out to us at vligta@indicpacific.com. Your inquiries are not just welcome, they're anticipated!

  • New AI Strategy and an Artificial Intelligence (Development & Regulation) Bill for India: Proposal

India's first AI policy was presented to the world in 2018. The policy, developed by NITI Aayog, the Government of India's key policy think tank, envisioned India as the next "garage" for AI start-ups and their innovations. The focus on responsible AI has also been a priority of the G20 India Presidency, and India's Council Chairpersonship of the Global Partnership on Artificial Intelligence (GPAI) in 2023 reflects the Government of India's commitment to the field of AI as an industry.

However, nearly 4-5 years have elapsed since the release of the 2018 AI policy, and the technology landscape has undergone significant changes during this period. In my opinion, the current policy is no longer adequate or appropriate for the post-COVID technology market. The rise of generative AI, and of artificial intelligence hype, has also been a challenge, creating uncertainty for investors and entrepreneurs and hindering innovation. Many use cases and test cases of generative AI and other AI applications remain scattered and uncoordinated, and there is no clear consensus on how to regulate different classes of AI technologies. While there have been international declarations and recommendation statements through multilateral bodies and groups such as UNESCO, the ITU, the OECD, the G20, the G7, and the European Union, even the UN Secretary-General, in his 2023 UN General Assembly address, has stressed the need for UN member states to develop clear guidelines and approaches on how to regulate artificial intelligence.

This proposal, submitted by Indic Pacific Legal Research, addresses those key technology, industry and legal-regulatory problems and trends, and presents a point-to-point proposal to reinvent and develop a revised National Strategy on Artificial Intelligence. The proposal consists of a set of law & policy recommendations with a two-fold approach:

- The Proposal for a Revised National Strategy for Artificial Intelligence
- The Proposal for the Artificial Intelligence (Development & Regulation) Act, 2023

In the Annex to this Proposal, we have provided additional Recommendations on Artificial Intelligence Policy based on the body of research developed by Indic Pacific Legal Research and its member organizations, including the Indian Society of Artificial Intelligence and Law.

Reminder: the AI Act and the New Strategy can now be read at aiact.in | artificialintelligenceact.in

Background

To provide a concise overview of the state of 'AI Ethics' both globally and in India, it is crucial to focus on three key domains: (1) technology development and entrepreneurship, (2) industry standardization, and (3) legal and regulatory matters. Our organization has actively contributed to this field by producing reports and publications that highlight critical issues related to AI regulation and address the prevailing hype around AI. These contributions are detailed below for further review and consideration.
2020
- Handbook on AI and International Law [RHB 2020 ISAIL]
- Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001
- Regularizing Artificial Intelligence Ethics in the Indo-Pacific, GLA-TR-002

2021
- Handbook on AI and International Law [RHB 2021 ISAIL]
- Regulatory Sandboxes for Artificial Intelligence: Techno-Legal Approaches for India, ISAIL-TR-002
- Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001
- Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002
- Promoting Economy of Innovation through Explainable AI, VLiGTA-TR-003

Technology Development and Entrepreneurship

Investors express apprehension regarding the widespread adoption of AI applications and the absence of the technological neutrality required to ensure their long-term sustainability across various products and services. To foster an environment conducive for MSMEs and emerging start-ups to embark on AI research and the development of AI solutions, it is imperative to provide them with subsidies. Currently, India faces a deficiency in the ecosystem required for AI endeavours: even prominent semiconductor firms like NVIDIA and major technology entities such as Reliance and TCS have advocated for government support in semiconductor investments and the establishment of robust computing infrastructure to benefit local start-ups.

Industry Standardisation

As prominent companies actively establish their own Responsible AI guidelines and self-regulatory protocols, it becomes imperative for India to prioritize the adoption of industry standards for the classification and categorization of specific use cases and test cases. We had previously proposed this approach in the context of Generative AI applications in a prior document. The application of AI technology in Indian urban and rural areas, spanning various sectors, naturally involves elements of reference and inference unique to each region. However, the predominant discourse on 'AI ethics' has been primarily confined to major cities such as New Delhi and a select few metropolitan centres. To facilitate the development of AI policies, AI diplomacy, AI entrepreneurship, and AI regulations, the four essential facets of India's AI landscape, it is imperative to ensure the active participation and equitable recognition of stakeholders from across the country. Distinguished industry and policy organizations, although representing the concerns of larger players including prominent names, are fulfilling their expected role. Nonetheless, relying solely on these entities to devise, propose, and advocate solutions tailored to the requirements of our MSMEs and emerging start-ups could hinder the establishment of industry-wide standards. Therefore, the Ministry of Electronics and Information Technology (MeitY) should engage in thoughtful collaboration with the Ministry of Commerce & Industry to address the issue of gatekeeping within the AI sector across the four domains of AI policy, AI diplomacy, AI entrepreneurship, and AI regulation.

Legal and Regulatory Issues

Many use cases and test cases of AI applications as products and services across industry sectors lack transparency in terms of their commercial viability and safety, even on basic issues like data processing, privacy, consent and the right of erasure (dark patterns).
At the level of algorithmic activities and operations, there is a lack of sector-specific standardisation, which could otherwise be advantageous for Indian regulatory authorities and market players in driving policy interventions and innovations at a global level. For now, the best countries can do is have their regulators enforce existing sector-specific regulations to test and enable better AI regulation standards, from data protection & processing to algorithmic activities & operations.

In a global context, it is worth noting that think tanks, as well as prominent AI ethics advocates and thought leaders in Western Europe and North America, exhibit comparatively little interest in the G20's efforts to advance Responsible AI ethics standards. Their attention appears to be drawn primarily to the Responsible AI principles and solutions emerging from the G7 Hiroshima process, a perspective that is duly acknowledged. However, a significant number of AI ethicists and industry figures in Western Europe and North America seem to be overlooking the valuable contributions and viewpoints that India offers in the realm of AI ethics. Moreover, vital stakeholders responsible for advancing discussions on AI ethics and policy within South East Asia (comprising the ASEAN nations) and Japan have similarly overlooked the ongoing AI policy discourse in India. Given India's dedication to the Indo-Pacific Quad, a partnership encompassing India, Australia, the United States, and Japan aimed at fostering collaboration on pivotal technologies and regulatory matters, it is imperative for the Government of India to take significant steps to facilitate cooperation with dedicated and relevant AI ethics industry leaders and thought leaders in South East Asia. This collaborative effort can play a crucial role in advancing the shared objectives of the Quad.

The discourse surrounding AI and Law in India has largely remained unchanged, without notable developments or transformative shifts. The predominant topics of discussion have revolved around data protection rights, notably exemplified by the introduction of the Digital Personal Data Protection Act, 2023. Considerations have also extended to concerns around information warfare and sovereignty, and to developing a civil & criminal liability regime for digital intermediaries, a notable instance being the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Nevertheless, it is laudable that, at the level of the Council of Ministers, there exists a discernible and unwavering commitment to driving these discussions forward, reflecting a dedicated approach towards the intricate convergence of AI and legal aspects in the Indian context. Indeed, legislative advancements in areas like digital sovereignty, digital connectivity, drones, dark patterns and data protection & consent have been both responsive and aligned with the needs of the Indian legal landscape. On many intricate facets of law and policy, there is no pressing urgency for regulatory interventions in India. However, a notable observation is the absence of original thinking and innovative insights focused on technology law and policy within the country.
The discourse surrounding AI and Law within India tends to be confined to three primary issues:

- Digital sovereignty
- Data protection law
- Responsible AI

With the exception of the first two concerns, it becomes apparent that documents published by various entities involved in AI policy have been somewhat inadequate in fostering an informed, industry-specific approach towards regulating and nurturing a thriving AI sector in India. Despite the Government's expressed commitment to encouraging policy inclusivity, a significant hurdle has been the prevalence of gatekeeping practices across the landscape of law and policy influencers and thought leaders. Regrettably, many of these discussions tend to gain recognition and significance only when conducted in a handful of major metropolitan areas, limiting the diversity and inclusivity of perspectives.

Numerous AI companies in India have yet to establish standardized self-regulatory frameworks aimed at fostering market integrity. This situation can be attributed to a confluence of factors:

- First, the proliferation of use cases is essential to stimulate the adoption of self-regulatory practices and measures.
- Second, even where the commercial need for self-regulation is acknowledged, the absence of significant advancements in the AI and Law discourse in India for nearly 4-5 years has left the country's stance unclear on four critical dimensions: AI policy, AI diplomacy, AI entrepreneurship, and AI regulation. This lack of clarity creates an environment of regulatory uncertainty, akin to the challenges faced by the Web3 and gaming industries in India.
- Third, gatekeeping practices further compound the complexity of the discourse and hinder the engagement of diverse voices. This sentiment is echoed by key commercial players across strategic, non-strategic and emerging sectors in India, highlighting the need for a more inclusive and open dialogue.

The Proposal to Reinvent Indian AI Strategy

Proposal for a New Artificial Intelligence Strategy for India

We suggest that, in a reinvented AI strategy for India, the four pillars of India's position on Artificial Intelligence must be AI policy, AI diplomacy, AI entrepreneurship and AI regulation. These are the most specific commitments in the four key areas that could be achieved in 5-10 years. The rationale and benefits of adopting each point of the proposal are explained on a point-to-point basis.

AI Policy

#1 Strengthen and empower India's Digital Public Infrastructure to transform its potential to integrate governmental and business use cases of artificial intelligence at a whole-of-government level.

#2 Transform and rejuvenate forums of judicial governance and dispute resolution to keep them effectively prepared to address and resolve disputes related to artificial intelligence, on issues ranging from data protection & consent to algorithmic activities & operations and corporate ethics.
AI Diplomacy

#3 Focus on socio-technical empowerment and skill mobility for businesses, professionals, and academic researchers in India and the Global South, to mobilize and prepare for the proliferation of artificial intelligence and its versatile impact across sectors.

#4 Enable safer and commercially productive AI & data ecosystems for startups, professionals and MSMEs in Global South countries.

#5 Bridge economic and digital cooperation with countries in the Global South to promote the implementation of sustainable regulatory and enforcement standards where regulation of digital technologies, especially artificial intelligence, is lacking.

AI Entrepreneurship

#6 Develop and promote India-centric, locally viable commercial solutions in the form of AI products & services.

#7 Enable the industry standardization of sector-specific technical & commercial AI use cases.

#8 Subsidize & incentivize the availability of compute infrastructure and technology ecosystems to develop AI solutions for local MSMEs and emerging start-ups.

#9 Establish a decentralized, localized & open-source data repository for AI test cases & use cases and their training models, with services to annotate & evaluate models, and develop a system of incentives to encourage users to contribute data and to annotate and evaluate models.

#10 Educate better and more informed perspectives on AI-related investments in areas such as: (1) research & development, (2) supply chains, (3) digital goods & services and (4) public-private partnership & digital public infrastructure.

#11 Address and mitigate the risks of artificial intelligence hype by promoting net neutrality to discourage anti-competitive practices involving the use of AI at various levels and stages of: (1) research & development, (2) maintenance, (3) production, (4) marketing & advertising, (5) regulation, (6) self-regulation, and (7) proliferation.

AI Regulation

#12 Foster flexible and gradually compliant data privacy and human-centric explainable AI ecosystems for consumers and businesses.

#13 Develop regulatory sandboxes for sector-specific use cases of AI to standardize AI test cases & use cases subject to their technical and commercial viability.

#14 Promote sensitization to the first-order, second-order and third-order effects of using AI products and services among B2C consumers (or citizens), B2B entities and inter- and intra-government stakeholders, including courts, ministries, departments, sectoral regulators and statutory bodies, at both standalone and whole-of-government levels.

#15 Enable self-regulatory practices to strengthen the sector-neutral applicability of the Digital Personal Data Protection Act, 2023 and its regulations, circulars and guidelines.

#16 Promote and maneuver intellectual property protections for AI entrepreneurs & research ecosystems in India.

You can download the complete proposal here. Any suggestions and feedback on the points of the proposal can be communicated at vligta@indicpacific.com.

  • The Practicability of Explainable Artificial Intelligence

The author is a former Research Intern at the Indian Society of Artificial Intelligence and Law.

Introduction to XAI

Explainable Artificial Intelligence (XAI) stands at the forefront of the dynamic landscape of artificial intelligence, emphasizing the fundamental principles of transparency and understandability within AI systems. The term covers a spectrum of methods and techniques in artificial intelligence technology designed to make the outcomes generated by AI solutions interpretable to human experts, in direct contrast to "black box" AI systems, whose internal mechanisms are opaque and inscrutable. XAI's core purpose is to provide a window into the intricate inner workings of AI, focusing on interpretability and predictability. It achieves this by offering various forms of explanation, such as decision rules, white-box models, decision trees, graphs, prototypes, textual explanations, and numerous other methods.

XAI operates across several tiers of interpretability, each contributing to a deeper understanding of AI systems. At the core is Global Interpretability, the comprehensive comprehension of an entire model: uncovering the fundamental logic that governs it and shedding light on how input variables intertwine to shape predictions and decisions. In contrast, Local Interpretability zeroes in on individual predictions, seeking to elucidate the rationale behind specific decisions made for distinct instances. Model-Specific Interpretability explores how particular types of models function; it helps explain, for instance, why decision trees are generally more interpretable than neural networks, given their straightforward structure and decision-making process. Lastly, Model-Agnostic Interpretability broadens the scope, offering techniques that explain predictions across diverse machine learning models, irrespective of their complexity or type.

Through these diverse approaches, XAI enables the justification of algorithmic decision-making processes, empowering users to identify, rectify, and exert control over system errors. One of XAI's pivotal strengths lies in its capacity to uncover the patterns an AI system has learned. These revelations not only help justify decisions but also contribute significantly to knowledge discovery, offering a pathway to comprehending and leveraging the insights gleaned from AI systems and fostering a more informed and empowered approach to utilizing artificial intelligence. A minimal sketch of a model-agnostic technique follows.
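To ground the model-agnostic tier, here is a small sketch using scikit-learn's permutation importance, which treats any fitted model as a black box and measures how much performance drops when each input feature is shuffled; the synthetic data and the choice of a random forest are illustrative only:

```python
# Model-agnostic explanation sketch: permutation importance treats the model
# as a black box and asks how much each feature matters to its predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # higher = more influential
```

Because the technique never inspects the model's internals, the same few lines would work unchanged on a gradient-boosted ensemble or a neural network, which is precisely what "model-agnostic" means.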
Setting the Context

As AI continues its pervasive integration across diverse societal domains, the legal landscape governing AI regulation is pivoting towards the advocacy of Responsible and Ethical AI, an approach that champions principles centred on fairness and transparency within AI systems. However, as the era of autonomous systems unfolds and comprehensive legal frameworks take shape, a conspicuous gap within the Responsible AI approach becomes apparent, stemming from the challenge of imposing a universal ethical standard across all sectors. Diverse functions and varying levels of automation in different AI applications make it impractical to expect, for instance, a large language model engaged in content generation to adhere to the same ethical standards as a medical device performing intricate procedures on humans. The inherent risks, autonomy levels, and degrees of automation vastly differ between these scenarios. It therefore becomes imperative to comprehend the decision-making processes of autonomous systems and to formulate regulations that are not only effective but also tailored to the distinct needs of each domain. As the proposed Digital India Act of 2023 sets the stage for an impending AI regulatory framework, it becomes crucial to integrate existing Responsible and Ethical AI principles with Explainable AI, which is pivotal to crafting a robust and comprehensive regulatory framework that accounts for transparency, accountability, and domain-specific considerations.

Application of XAI to Different Products

Drug Discovery

The integration of artificial intelligence (AI) and machine learning (ML) technologies has significantly transformed the field of drug discovery. As these AI and ML models grow increasingly complex, however, the demand for transparency and interpretability becomes more pronounced. This necessity has driven the application of XAI to drug discovery, where it provides clearer, more understandable insight into the predictions generated by machine learning models and where it has attracted growing interest in recent years. One of the primary advantages of employing XAI in drug discovery is interpretability: understanding why a particular compound is predicted to be effective or not is crucial in drug development and significantly enhances the efficiency of designing new drugs. XAI also meets the growing need for transparency by rendering the decision-making processes of increasingly intricate models visible. Its application extends across various crucial stages of the drug development process, including target identification, compound design, and toxicity prediction, highlighting its relevance and effectiveness throughout.

Fraud Detection

Within the domain of fraud detection, one facet of XAI involves employing transparent techniques such as decision trees and Bayesian models. These methods are inherently interpretable because they outline the clear rules that govern their decisions, making the decision-making process understandable for human investigators. Another critical dimension is making complex models, such as neural networks and deep learning algorithms, more "explainable": developing methods tailored to interpret the decisions of these intricate models and shed light on the reasoning behind their predictions. The explanations provided by XAI in fraud detection play a pivotal role in helping investigators discern how and why an AI system arrived at specific conclusions; a toy example of the transparent-model approach follows.
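As a concrete illustration of the transparent-technique facet described above, here is a minimal scikit-learn sketch in which a shallow decision tree is trained on invented claim features and its full rule set is printed for an investigator to audit; all features, values, and labels are fabricated for the example:

```python
# Transparent fraud-detection sketch: a shallow decision tree whose complete
# decision rules can be printed and audited. All data here is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["claim_amount", "claims_last_year", "days_since_policy_start"]
X = [[200, 0, 400], [9000, 5, 20], [450, 1, 300],
     [12000, 7, 10], [300, 0, 500], [8000, 4, 15]]
y = [0, 1, 0, 1, 0, 1]  # 1 = flagged as potentially fraudulent

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))  # human-readable rule list
```

The printed output is a plain if/else rule list (splitting, for example, on claim_amount), which is exactly the property that makes such models attractive where investigators must justify why a claim was flagged; a deep neural network offers no comparable artefact without additional explanation methods.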
In insurance, for instance, these insights might uncover cases where healthcare providers bill for services not rendered or overbill beyond appropriate reimbursement rates. Furthermore, integrating fraud detection models into the operational workflows of insurance companies optimizes the identification and verification steps, reducing operational costs and increasing processing efficiency: the models efficiently sift through legitimate claims, streamlining the workload for fraud investigators and allowing them to focus on more suspicious cases.

Self-Driving Cars

Explainable AI stands as a critical linchpin in the advancement and wider acceptance of autonomous vehicles (AVs) and self-driving cars. Its role is fundamental in rendering AVs more comprehensible, reliable, and socially embraced. First, it fosters trust and transparency by providing clear insights into the decision-making processes of AI-driven autonomous vehicles, which is crucial for users to understand and trust this advanced technology. Additionally, XAI supports regulatory compliance, helping AVs explain their decisions in line with the diverse legal requirements across jurisdictions. XAI further enhances both the safety and transparency of autonomous driving technology, garnering support from regulatory bodies and actively engaged stakeholders; this support is pivotal in bolstering public confidence in systems that constantly make intricate real-time decisions. The application of XAI within AVs spans several key areas. It is instrumental, for instance, in explaining the semantic segmentation predictions derived from the frames an AV observes, aiding understanding of how the vehicle's perception system identifies and categorizes objects, which is crucial for safe navigation. XAI likewise plays a vital role across the perception, planning, and control dimensions of AV systems, ensuring an understanding of how the vehicle perceives and manages objects within its environment for safe navigation and operation.

Recommendations

The development and implementation of AI frameworks in India is of critical importance as artificial intelligence becomes increasingly integrated into society. The following suggestions aim to fortify the incorporation of eXplainable AI (XAI) and its ethical application.

Emphasis on Explainability

With AI playing an expanding role in our daily lives, prioritizing the integration of explainability within AI systems becomes paramount. Policymakers should consider mandating explainability in AI regulations, thereby encouraging the development of transparent and easily understandable AI systems and paving the way for greater trust and comprehension among users.

Collaborative Frameworks

Policymakers need to foster collaboration between AI developers, subject matter experts, and policymakers to formulate guidelines specifically tailored for XAI implementation in various sectors. This collaborative endeavour will ensure that XAI is effectively applied while meeting the specific requirements and standards of different domains.

Validation and Training Enhancement

When integrating explainable AI, validating methods and explanations in a user-friendly format is crucial. The focus should shift from merely evaluating explanations to incorporating explanation quality metrics into the training process itself.
This approach ensures that future XAI models are not only accurate but also proficient in providing understandable explanations, enhancing their usability and transparency.

Legal Framework and Policy Integration

The Indian government should consider establishing a comprehensive legal framework governing the deployment of XAI technologies. This framework should encompass regulations overseeing AI applications and effectively address potential hurdles that may arise. Additionally, it is crucial to consciously prioritize the development of 'XAI' and related concepts such as 'Differential Privacy' by implementing methodologies like 'Federated Learning' within policy documentation.

Implementing these recommendations will not only ensure the ethical and responsible deployment of AI technologies but also encourage the integration of transparent and accountable AI systems within India's regulatory framework.

Conclusion

The evolution of Explainable AI (XAI) has marked a pivotal shift in the landscape of artificial intelligence, emphasizing transparency and understandability within AI systems. Its diverse spectrum of interpretability levels, from the global to the specific, provides crucial insights into the decision-making processes of AI, enabling profound understanding and trust. As AI increasingly integrates into society, prioritizing explainability within these systems becomes imperative. Policymakers must mandate the integration of explainability into AI regulations, fostering transparent and easily comprehensible AI systems. Collaborative efforts among developers, experts, and policymakers are essential to tailor guidelines for sector-specific XAI implementation. Validating methods and explanations in a user-friendly format is crucial when integrating XAI, necessitating a shift from evaluating explanations to including explanation quality metrics in the training process. Moreover, a comprehensive legal framework governing the deployment of XAI technologies should be established, encompassing regulations that oversee AI applications and address potential hurdles.

  • Europe's Dilemma in the Virtual Battlefield: Navigating Cyberspace and AI Shifts

The author, Dr Cristina Vanberghen, is a Senior Expert at the European Commission and a Distinguished Expert at the Advisory Council of the Indian Society of Artificial Intelligence and Law.

Artificial Intelligence is defining a new international order. Cyberspace is reshaping the geopolitical map and the global balance of power, and Europe, coming late to the game, is struggling to achieve strategic sovereignty in an interconnected world characterized by growing competition and conflicts between States.

Do not think that cyberspace is an abstract concept. It has a very solid architecture composed of physical infrastructure (submarine and terrestrial cables, satellites, data centres, etc.), a software infrastructure (information systems and programs, and the languages and protocols, such as TCP/IP, that allow data transfer and communication), and a cognitive infrastructure encompassing the massive exchange of data, content and information beyond classic "humint". Cyberspace is the fifth dimension: an emerging geopolitical space which complements land, sea, air and space, a dimension undergoing rapid militarization and, in consequence, deepening the divide between distinct ideological blocs at the international level. In this conundrum, the use and misuse of data (transparency, invisibility, manipulation, deletion) has become a new form of geopolitical power, and increasingly a weapon of war. The use of data is shifting the gravitational centre of geopolitical power.

This geopolitical reordering is taking place not only between states but also between technological giants and states. The Westphalian confidence in the nation state is being eroded by the dominance of these giants, which are oblivious to national borders and which develop technology too quickly for states to understand, let alone regulate. What we are starting to experience is practically an invisible war characterized by data theft, manipulation or suppression, where the chaotic nature of cyberspace leads to a mobilization of nationalism, and where cyberweapons, now part of the military arsenal of countries such as China, Israel, Iran, South Korea, the United States and Russia, increase the unpredictability of political decision-making power. The absence of common standards means undefined risks, leading to a level of international disorder with new borders across which the free flow of information cannot be guaranteed. There is a risk of fragmentation into networks based on the same protocols as the Internet but where the information that circulates is confined to what governments or the big tech companies allow you to see.

Whither Europe in this international landscape?

The new instruments for geopolitical dominance in today's world are AI, 5G and 6G, quantum, semiconductors, biotechnology, and green energy. Technology investment is increasingly driven by the need to counter Chinese investment. In August 2022, President Joe Biden signed the Chips and Science Act granting US$280 billion to the American tech industry, with US$52.7 billion devoted to semiconductors. Europe is hardly following suit: European technological trends do not reflect a very optimistic view of its technological influence and power in the future. With regard to R&D invested specifically in tech, the share of European countries' investments, relative to total global R&D in tech, has been declining rapidly for 15 years: Germany went from 8% to 2%, France from 6% to 2%.
The European Union invests five times less in private tech R&D than the United States. Starting from zero 20 years ago, China has now greatly overtaken Europe and may catch up with the US. The question we face is whether, given this virtual arms race, each country will continue to develop its own AI ecosystem with its own (barely visible) borders, or whether mankind can create a globally shared AI space anchored in common rules and assumptions. The jury is out.

In the beginning, the World Wide Web was supposed to be an open Internet, but the recent trend has been centrifugal. There are many illustrations of this point: from Russian efforts to build its own Internet network to OpenAI threatening to withdraw from Europe; from Meta threatening to withdraw its social networks from Europe over controversies about user data, to Google building an independent technical infrastructure. This fragmentation advances through a diversity of methods, ranging from content blocking to official corporate declarations.

But could the tide be turning? With the war in Ukraine we have seen a rapid acceleration in the use of AI, along with growing competition from the private sector, and this is now triggering more calls for international regulation of AI. And of course, any adherence to a globally accepted regulatory and technological model entails adherence to a specific set of values and interests. Faced with this anarchic cyberspace, instead of increasing non-interoperability, it would be better to set up a basis for Internationalized Domain Names (IDNs), encompassing also the Arabic, Cyrillic, Hindi, and Chinese languages and avoiding linguistic silos. Otherwise, we run the clear risk of undermining the globality of the Internet with a sum of closed national networks.

And how can we ensure a fair technological revolution? If in the beginning military research was at the origin of technological revolutions, we now see that emerging and disruptive technologies (EDTs), not to mention dual-use technologies including artificial intelligence, quantum technology and biotechnology, are mainly being developed by Big Tech, and sometimes by start-ups. It is the private sector that is generating military innovation, to the point that private companies are becoming both the instruments and the targets of war. The provision by Elon Musk of Starlink to the Ukrainian army is the most recent illustration of this situation. This makes it almost compulsory for governments to work in lockstep with the private sector, at the risk of missing the next technological revolution.

The AI war

At the centre of the AI war is the fight for standardization, which allows a technological ecosystem to operate according to common, interoperable standards. The government or economic operator that writes the rules of the game will automatically influence the balance of power and gain a competitive economic advantage. In a globalized world, however, what we need is not continued fragmentation or an AI arms race but a new international pact. Not a gentlemen's pact based on goodwill, because goodwill simply does not exist in our eclectic, multipolar international (dis)order, but a regulatory AI pact that, instead of increasing polarization in a difficult context characterized by a race for strategic autonomy, war, pandemics, climate change and other economic crises, reflects a common humanity and equal partnerships. Such an approach would lead to joint investment in green technology and biotechnologies with no need for national cyberspace borders.
EU AI Act

The emergence of ChatGPT has posed a challenge for EU policymakers in defining how such advanced artificial intelligence should be addressed within the framework of the EU's AI regulation. A prominent example of a foundation model is the GPT series developed by OpenAI, which underpins ChatGPT and has been widely used as a foundation for a variety of natural language processing tasks, including text completion, translation, summarization, and more. A foundation model serves as a starting point for building more specialized models tailored to specific applications. According to the EU AI Act, foundation models must adhere to transparency obligations, providing technical documentation and respecting copyright laws related to data mining activities.

We should bear in mind, however, that the regulatory choices surrounding advanced artificial intelligence, exemplified by the treatment of models like ChatGPT under the EU's AI regulation, carry significant geopolitical implications. The EU's regulatory stance will shape its position in the global race for technological leadership. A balance must be struck between fostering innovation and ensuring ethical, transparent, and accountable use of AI. It is this regulatory framework that will influence how attractive the EU becomes for AI research, development, and investment. Stricter regulations on high-impact foundation models may affect the competitiveness of EU-based companies in the global AI market: they could spur innovation by pushing companies to develop more responsible and secure AI technologies, or hinder competitiveness if the regulatory burden is perceived as too restrictive.

At the international level, the EU's regulatory choices will influence the development of international standards for AI. If the EU adopts a robust and widely accepted regulatory framework, it may encourage other regions and countries to follow suit, fostering global cooperation in addressing the challenges associated with advanced AI technologies. The treatment of AI models under the regulation also has implications for data governance and privacy standards: regulations addressing data usage, transparency, and protection are critical not only for AI development but also for safeguarding individuals' privacy and rights. The EU's AI regulations will likewise affect its relationships with other countries, particularly those with differing regulatory approaches; alignment or divergence in AI regulation could become a factor in trade negotiations and geopolitical alliances.

Last but not least, these regulatory decisions reflect the EU's pursuit of strategic technological autonomy. By establishing control over the development and deployment of advanced AI, the EU intends to reinforce its strategic autonomy and reduce dependence on non-European technologies, ensuring that its values and standards are embedded in AI systems used within its borders. The EU AI Act can also contribute to the ongoing global dialogue on AI governance, shaping discussions in international forums where countries are working to develop shared principles for the responsible use of AI. In short, the EU's regulatory choices regarding advanced AI models like ChatGPT are intertwined with broader geopolitical dynamics, influencing technological leadership, international standards, data governance, and global cooperation in the AI domain.
We have noticed that, a few days before the discussion on the final format of the EU AI Act, the OECD adjusted its definition of AI in anticipation of the European Union's AI regulation. The revised definition by the Organisation for Economic Co-operation and Development (OECD) is a significant step in aligning global perspectives on artificial intelligence: designed to embrace technological progress and eliminate human-centric limitations, it demonstrates a commitment to keeping pace with the rapidly evolving landscape of AI technologies.

The G7

At the international level, the G7 has also reached agreement on an AI Code of Conduct. In a significant development, the G7 member countries unanimously approved this groundbreaking code, a critical milestone: the principles laid out by the G7 pertain to advanced AI systems, encompassing foundation models and generative AI, with a central focus on enhancing the safety and trustworthiness of this transformative technology.

In my view, it is imperative to closely monitor the implementation of these principles and explore the specific measures that will be essential to their realization. The success of this Code of Conduct depends greatly on its effective implementation. The principles are established to guide behavior, ensure compliance, and safeguard against potential risks. Specifically, we require institutions with the authority and resources to enforce the rules and hold violators accountable. This may involve inspections, audits, fines, and other enforcement mechanisms. Educating stakeholders about these principles, their implications, and how to comply with them is also essential. Regular monitoring of compliance, and reporting mechanisms that provide insight into the effectiveness of the regulations, will be needed; data collection and analysis are crucial for making informed decisions and adjustments, and periodic reviews and updates are necessary to keep pace with developments. Effective implementation often necessitates collaboration among governments, regulatory bodies, industry stakeholders, and the public, and transparent communication about these principles is crucial to build trust and ensure that citizens understand the rules.

As the AI landscape evolves, it becomes increasingly vital for regulators and policymakers to remain attuned to the latest developments in this dynamic field. Active engagement with AI experts and a readiness to adapt regulatory frameworks are prerequisites for ensuring that AI technologies are harnessed to their full potential while potential risks are effectively mitigated. An adaptable, ongoing regulatory approach is paramount to maximizing the benefits of AI and addressing the challenges it presents.

Conclusions

First, the ideological differences between countries on whether and how to regulate AI will have broader geopolitical consequences for managing AI and information technology in the years to come. Control over strategic resources such as data, software, and hardware has become important for all nations, as demonstrated by discussions over international data transfers, resources linked to cloud computing, the use of open-source software, and so on.
Secondly, the strategic competition for control of cyberspace and AI seems, at least for now, to increase fragmentation, mistrust, and geopolitical competition, and as such poses enormous challenges to the goal of establishing an agreed approach to artificial intelligence based on respect for human rights.

Thirdly, despite this, there is a glimmer of light. To some extent, values are coalescing into an ideological approach that aims to ensure a human rights-centered approach to the role and use of AI. Put differently, an alliance is gingerly forming around a human rights-oriented view of socio-technical governance, embraced and encouraged by like-minded democratic nations: Europe, the USA, Japan, India. These regions have an opportunity to set the direction through greater coordination in developing the evaluation and measurement tools that contribute to credible AI regulation, risk management, and privacy-enhancing technologies. Both the EU AI Act and the proposed US Algorithmic Accountability Act of 2022, for example, require organizations to perform impact assessments of their AI systems before and after deployment, including providing more detailed descriptions of data, algorithmic behavior, and forms of oversight. India is taking its first steps in the same direction. The three regions are starting to understand the need to avoid the fragmentation of technological ecosystems, and that securing AI alignment at the international level is likely to be the major challenge of our century.

Fourthly, AI will undoubtedly continue to revolutionize society in the coming decades. However, it remains uncertain whether the world's countries can agree on how the technology should be implemented for the greatest possible societal benefit, or on what the relationship between governments and Big Tech should be.

Finally, no matter how AI governance is ultimately designed, it must be understandable to average citizens, to businesses, and to the practising policymakers and regulators who are today confronted with a plethora of initiatives at all levels. AI regulations and standards need to be in line with our reality. Taking AI to the next level means increasing the digital prowess of global citizens, fixing the rules for the market power of tech giants, and understanding that transparency is part of the responsible governance of AI. The governance of AI of tomorrow will be defined by the art of finding bridges today!

If AI research and development remain unregulated, ensuring adherence to ethical standards becomes a challenging task. Relying solely on guidelines may not be sufficient, as guidelines lack enforceability. To prevent AI research from posing significant risks to safety and security, more robust measures beyond general guidance need to be considered. One potential solution is to establish a framework that combines guidelines with certain prescriptive rules. These rules could set clear boundaries and standards for the development and deployment of AI systems, addressing specific ethical considerations, safety protocols, and security measures, and providing a more structured approach to responsible AI practices. A major obstacle, however, lies in the potential chaos resulting from uncoordinated regulation across different countries. This lack of harmonization can create challenges for developers, impede international collaboration, and limit the overall benefits of AI research and development.
To address this issue, a global entity like the United Nations could play a significant role in coordinating efforts and establishing a cohesive international framework. A unified approach to AI regulation under the auspices of the UN could help mitigate the regulatory competition, and competing self-regulation, among different nations. Such collaboration would enable the development of common standards that respect cultural differences while providing a foundational framework for ethical and responsible AI. This approach would not only foster global cooperation but also streamline processes for developers, allowing them to navigate regulations more seamlessly across borders. In conclusion, a combination of guidelines, prescriptive rules, and international collaboration, potentially spearheaded by a global entity like the United Nations, could contribute to a more cohesive and effective regulatory framework for AI research and development, one that addresses ethical concerns and safety risks while fostering cooperation across borders.

  • Impact of Artificial Intelligence on the US Entertainment Industry

The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of November 2023.

Artificial intelligence, commonly referred to as AI, is a burgeoning technological force poised to reshape various industries. The entertainment sector stands at the forefront of this transformation, harnessing AI's capabilities to enhance customer experiences, streamline daily operations, and deliver personalized content to its audience. As of 2021, the market valuation of AI in the media and entertainment industry had already surged to an impressive $10.87 billion, reflecting the increasing integration of AI in this dynamic field. The integration of AI into the US entertainment industry presents a transformative landscape, combining innovation with a multitude of legal and ethical challenges. This insight explores the extensive range of AI's effects on the industry, from its potential to revolutionize creative processes to the complexities of labour displacement, intellectual property rights, bias, and privacy. These multifaceted legal concerns, though intricate, are essential considerations for industry stakeholders, policymakers, and entertainment professionals as they navigate the AI-driven future.

Setting the Context

As previously discussed, the pervasive influence of AI in the media and entertainment industry demands a closer examination of recent developments that have given rise to serious legal concerns. At the forefront of this technological surge is Generative AI (GenAI), a formidable innovation with the potential to significantly reshape the entertainment landscape. Its applications span from Hollywood, where it generates textual content in the form of stories, scripts, advertisements, and reviews, to the creation of dynamic and static images for marketing campaigns. While GenAI promises to enhance creativity, personalization, and operational efficiency, it also raises a multitude of concerns regarding its impact on human professionals within the arts and entertainment industry.

One noteworthy incident occurred in May 2023, when the Writers Guild of America (WGA) initiated a strike, expressing apprehension about the utilization of content produced through artificial intelligence. This action underscores the growing unease among creative professionals in response to the encroaching influence of AI. The controversies are not limited to labour disputes alone. Tom Hanks, for instance, had his AI-generated likeness featured in a dental advertisement without his consent, which further underscores the apprehension prevalent among a diverse array of creative professionals. The controversial portrayal of Harrison Ford, an 81-year-old actor, as a youthful Indiana Jones in 2023 serves as a striking illustration of AI's capabilities. And this is just the tip of the iceberg: AI's capacity to replicate songs by renowned artists such as Drake and The Weeknd has triggered significant concern, which only deepened when a TikToker, using an AI model trained on the artists' work, released a song closely resembling their output. Additionally, the ongoing dispute over the definition of 'fair use' in AI model training, and its potential repercussions for compensation, adds further complexity to the landscape.
The situation has prompted numerous artists to take legal action against technology companies and producers, with the Screen Actors Guild taking up the battle to secure control over digital replicas of performers, which studios could potentially use for an indefinite duration, replacing the actors themselves. This underscores the widespread anxieties among a diverse range of stakeholders. A palpable sense of unease looms large as individuals grapple with the spectre of potential job displacement by AI or, perhaps more disconcerting, the prospect of insufficient compensation for the duplication of their creative and intellectual endeavours.

Various Use Cases of AI in the Media and Entertainment Industry

AI's integration into the media and entertainment industry brings benefits such as personalization, enhanced production efficiency, advanced audience analysis, improved decision-making, cost reduction, and refined content classification and categorization. This transformative force is paving the way for more efficient and effective operations, ultimately enhancing the overall user experience.

AI's impact on the music industry is profound. AI-generated music employs algorithms to craft unique compositions by analyzing existing pieces, while music recommendation systems tailor playlists to users' preferences. AI also plays a crucial role in audio mastering, enhancing accessibility and efficiency, and elevates music production by scrutinizing and improving sound quality.

In the film industry, AI's transformative power extends to scriptwriting, through the generation of fresh scripts and the evaluation of existing ones. The technology also automates pre-production tasks, offering location suggestions and logistical organization, ventures into predicting a film's potential success, and streamlines the editing process, enhancing trailer creation and contributing to the art of film production.

Gaming enthusiasts witness AI's impact through enhanced game design and gameplay, where realistic non-player characters and procedural content are generated. This enables personalized game recommendations and real-time adjustments in difficulty level, heightening the engagement and dynamism of the gaming experience.

The advertising sector leverages AI for precise audience targeting, segmentation, and predictive analytics, improving ad placements and personalized content recommendations. AI's content generation capabilities save time and costs, while its analysis of social media data drives effective campaign strategies and facilitates cross-channel marketing by integrating data from multiple advertising platforms.

Book publishing sees AI automate manuscript submission and evaluation, predict a manuscript's market potential, assist in editing and proofreading, provide design recommendations, optimize printing and distribution processes, and refine marketing and promotion strategies. In storytelling, AI aids content creation by analyzing datasets to enhance character development and plot structures. It also personalizes content recommendations for music and video streaming services and elevates online advertising by targeting specific audiences.

Existing and Emerging Legal Issues with AI in the Entertainment Field

The entertainment industry is subject to a multitude of laws and regulations that govern various aspects of its operations.
These encompass contract laws, which oversee relationships between parties and cover critical elements such as production rights, distribution agreements, talent agreements, and non-compete agreements. Intellectual property laws likewise play a pivotal role in safeguarding the rights of both employers and employees within the industry. In the United States, the legal framework includes legislation directly applicable to the entertainment industry. The Fair Labor Standards Act, for instance, ensures that workers in the private sector receive at least the minimum wage, enforces recordkeeping practices, and mandates overtime pay when applicable. The Occupational Safety and Health Act focuses on providing a safe working environment, while the Americans with Disabilities Act (ADA) promotes equal opportunities for individuals with disabilities. The National Labor Relations Act (NLRA), among others, also holds significance. With the emerging integration of AI in the industry, here are some of the legal issues that require deliberation within the existing framework:

Labour Displacement and Disputes

The surge of AI technology has triggered a series of labour disputes that have sent ripples through the entertainment industry. A case in point is the Writers Guild of America (WGA), an organization that finds itself at the epicentre of this evolving debate. While the WGA recognizes the potential benefits of AI tools, it is simultaneously advocating for critical safeguards, pivotal to ensuring that the utilization of AI tools does not undermine the hard-earned credits and compensation of its human members. The concerns raised by the WGA reverberate throughout the entertainment industry, transcending the realm of scriptwriters. This sentiment is conspicuously mirrored by prominent figures such as the Screen Actors Guild and esteemed actors like Tom Hanks. The collective unease stems from the growing belief that AI could supplant human involvement in the creative process. As AI technologies become increasingly proficient at generating content, the artistic and creative domains of the entertainment industry are grappling with an existential quandary: while AI promises enhanced efficiency and novel creative possibilities, it simultaneously raises formidable challenges related to labour rights, job security, and the equitable allocation of creative recognition.

Intellectual Property of Artists

In the midst of AI's creative renaissance, a profound intellectual property quagmire has emerged, casting its shadow primarily on artists and musicians. Generative AI has birthed a novel conundrum: the generation of songs sung in the voices of renowned artists without their knowledge or consent. For these artists, such surreptitious AI-driven creation poses an unanticipated and formidable threat, raising complex issues of ownership and rights that have yet to be definitively resolved. The very act of AI-generated artistry blurs the line between creativity and automation, and artists who unwittingly become part of AI-generated works find themselves at an ethical and legal crossroads.
The intricate web of intellectual property concerns, encompassing questions of copyright infringement, rightful attribution, and the uncharted territory of unlicensed data used in AI training, forms a tapestry of legal intricacies that requires diligent exploration and resolution. This scenario sheds light on the pressing need to evolve intellectual property laws to accommodate this new dimension of AI-generated artistry.

Employment Rights of AI Professionals

The rising prominence of AI technology in the entertainment industry does not merely reshape the creative landscape; it also poses significant concerns regarding the rights of professionals working in both traditional and AI-related roles. The Writers Guild of America's initiative to embrace AI tools while securing assurances against adverse impacts on credit and compensation represents a pivotal development in this regard. These evolving employment rights concerns go beyond the traditional employee-employer relationship. As AI integrates into the creative and production processes, it introduces a paradigm in which AI professionals work alongside their human counterparts. This intricate coexistence calls for the delineation of roles, contributions, and credit, striking a delicate balance between technological innovation and human creativity. The assurance that the utilization of AI will not compromise the status and remuneration of traditional creative professionals lays the foundation for a harmonious fusion of human and AI contributions within the industry. The evolving dynamics of the entertainment workforce underscore the importance of continually reassessing and adapting employment rights to accommodate this shifting landscape.

Bias and Privacy Concerns Associated with AI

The integration of AI into the entertainment industry brings forth a multitude of ethical considerations spanning data privacy, algorithmic bias, content homogenization, and the preservation of human creativity. Data privacy is a central issue: the responsible handling of user data is a vital ethical and legal obligation, especially in an industry where consumer data is a valuable asset. Algorithmic bias poses another challenge, as AI systems can perpetuate biases present in their training data, leading to discriminatory content recommendations and unequal representation. The potential for AI-generated content raises concerns about the homogenization of creative output and its impact on the diversity and originality of the industry. Beyond this, there is an overarching concern about the loss of human creativity and the need to establish ethical frameworks for decision-making in content curation, recommendation, and production. These ethical considerations underscore the profound transformation occurring in the entertainment industry. They necessitate not only compliance with data protection regulations but also a commitment to upholding the values and integrity of the industry. The challenge is to strike a balance between technological advancement and the preservation of human creativity and ethical principles. In an era where AI plays an increasingly significant role in content creation and personalization, addressing these ethical concerns becomes essential to ensuring the enduring quality and uniqueness of the entertainment industry.
Recommendations to Enhance AI Integration in the Entertainment Industry

The seamless integration of AI into the entertainment sector requires a multifaceted approach involving policymakers, industry stakeholders, and ethical considerations. Here are comprehensive recommendations for policymakers and stakeholders to facilitate this integration while ensuring fairness, accountability, and ethical integrity:

Comprehensive Legal Framework with Effective Dispute Resolution Mechanism

In the context of AI's increasing role in the entertainment industry, a comprehensive legal framework is imperative to address the unique challenges it presents. This framework should provide clear guidelines while also incorporating mechanisms for efficient dispute resolution, and it must encompass the following key components to ensure the highest standards of legality and ethics in AI-driven entertainment:

Intellectual Property Rights and Authorship Attribution: One crucial aspect of this legal framework is the clear definition of ownership and rights associated with content generated by AI. It should also establish mechanisms for attributing authorship when AI plays a role in content creation, ensuring that both human creators and AI systems receive the recognition they deserve and fostering a fair and equitable environment.

Data Privacy and Strict Regulations: The framework must lay out explicit processes for collecting, processing, and safeguarding user data within AI-driven applications. Enforcing stringent regulations is essential to protect the privacy of personal information. These regulations should prioritize transparency in data usage and user consent, bolstering individuals' trust in AI applications.

Ethical Guidelines for AI Usage: To promote responsible and ethical AI deployment in entertainment, ethical guidelines should be an integral part of the legal framework. These should address matters such as deepfake mitigation, the preservation of content diversity, and safeguards against deceptive or harmful applications of AI. By adhering to these considerations, the entertainment industry can uphold ethical standards and align with societal values.

Conflict Resolution Mechanisms: An effective legal framework requires well-structured procedures for resolving AI-related legal disputes, including clear guidelines for determining liability in cases involving disputes over AI-generated content. To expedite fair and efficient resolution, a specialized AI dispute resolution panel could be established. Comprising experts in AI and entertainment law, this panel could assess issues related to authorship attribution, data privacy breaches, and ethical compliance, further enhancing transparency, accountability, and legal clarity in the rapidly evolving AI-driven entertainment landscape.

This comprehensive legal framework, together with efficient dispute-resolution mechanisms, will empower the entertainment industry to navigate the intricate and ever-changing AI landscape while safeguarding the rights and interests of all stakeholders. It forms the cornerstone of a fair, legally sound, and ethically responsible environment for AI-driven entertainment.

Fair Compensation

Guarantee equitable compensation for creators whose work contributes to AI models.
This may entail the creation of new licensing models or the adaptation of existing ones to account for the unique dynamics of AI-generated content. Ensuring that artists and content creators receive their fair share is crucial to sustaining a vibrant and creative entertainment industry. Consider adopting:

Royalty-Based Compensation: One effective approach to fair compensation in the AI-driven entertainment industry is a royalty-based compensation model, under which creators receive a percentage of the revenue generated by AI models that utilize their work. This approach aligns incentives, fostering a collaborative environment in which creators have a vested interest in the success of AI applications. By sharing in the financial benefits derived from AI, creators are not only rewarded for their initial contributions but also motivated to continue engaging with AI systems, encouraging a cycle of innovation and creativity that benefits both creators and the AI-driven entertainment industry as a whole. A royalty-based system also ensures that creators receive an ongoing and fair share of the economic value generated by their contributions, promoting a sustainable and equitable ecosystem.

Transparent Compensation Standards: To facilitate fair compensation and minimize disputes, it is essential to establish transparent and universally accepted compensation standards that take into account the value and impact of creators' contributions to AI models. By quantifying and documenting the significance of each contribution, a transparent compensation system ensures that creators are fairly rewarded for their work, provides a clear and equitable basis for determining compensation, and reduces ambiguity and potential disagreement. Transparent standards also build trust and foster positive relationships between creators and AI-driven entertainment platforms, encouraging creators to participate actively in AI initiatives in the knowledge that their contributions will be fairly recognized and rewarded. Overall, they set the stage for creative collaboration and innovation by offering creators the confidence that their work is valued and adequately compensated.

Job Transition Support

Recognize the potential for job displacement due to AI integration and offer robust support mechanisms for affected workers. This might involve retraining programs, job transition support, and initiatives to help workers adapt to new roles within the evolving industry. Ensuring that workers are not left behind is crucial for a just and seamless AI integration.

By proactively implementing these recommendations, policymakers and industry stakeholders can steer AI integration in the entertainment sector towards a future that is both innovative and ethically sound. These measures would not only protect the rights and interests of creators but also ensure that the industry remains a vibrant and creative space for all involved.

Conclusion

The integration of Artificial Intelligence (AI) in the US entertainment industry marks a transformative shift. While AI offers innovative possibilities, it brings forth a range of legal and ethical challenges. Labour displacement and job disputes, intellectual property rights, bias, and privacy are central concerns.
These multifaceted issues require careful consideration by industry stakeholders, policymakers, and entertainment professionals as they navigate the evolving AI-driven landscape. The recommendations include establishing a clear legal framework, ensuring fair compensation for creators, fostering transparency, mitigating bias, offering job transition support, enhancing privacy protection, engaging stakeholders, and establishing ethical guidelines. These measures pave the way for an innovative and ethically sound future for the entertainment industry, maintaining its uniqueness and vibrancy in an era where AI plays an increasingly significant role. Vigilance and adaptability remain essential as AI continues to shape the entertainment landscape.
