- India's Draft Digital Personal Data Protection Rules, 2025, Explained
Sanad Arora, Principal Researcher, is the co-author of this Insight.

The Draft Digital Personal Data Protection (DPDP) Rules, released on January 3, 2025, represent an essential step towards simplifying data protection in the digital age. These rules aim to enhance the protection of personal data while addressing the challenges posed by emerging technologies, particularly artificial intelligence (AI). As AI continues to evolve and integrate into various sectors, ensuring that its deployment aligns with ethical standards and legal requirements is paramount. The DPDP Rules seek to create a balanced environment that fosters innovation while safeguarding individual privacy rights.

Figure 1: Draft DPDP Rules (January 3, 2025 version), explained and visualised. This chart was created by Abhivardhan and Sanad Arora as a part of the explainer. Download the chart below for free.

Overview of Key Rules

Notice Requirements (Rule 3)
Data Fiduciaries must provide clear, comprehensible notices to Data Principals about the processing of their personal data. Notices must include:
- An itemized description of the personal data being processed.
- The specific purposes for processing.
- Information about the services or goods related to the processing.

Consent Manager Registration (Rule 4)
Consent Managers must apply to the Data Protection Board for registration and comply with specified obligations. They serve as intermediaries between Data Principals and Data Fiduciaries, facilitating consent management.

Data Processing by Government (Rule 5)
This rule governs how government authorities process personal data when issuing benefits or licenses, for example driving licenses and subsidies.
Security Safeguards (Rule 6)
Data Fiduciaries are required to implement reasonable security measures to protect personal data, such as:
- Encryption
- Access controls
- Monitoring for unauthorized access
Specific challenges for Micro, Small, and Medium Enterprises (MSMEs) regarding cybersecurity are highlighted.

Breach Notification (Rule 7)
In case of a data breach, Data Fiduciaries must inform affected Data Principals and the Data Protection Board within specified timeframes. Notifications must detail the nature, extent, timing, and potential impacts of the breach.

Data Retention and Erasure (Rule 8)
Personal data must be erased if the Data Principal has not engaged with the Data Fiduciary within a specified timeframe. Notification of impending erasure must be provided at least 48 hours in advance.

Additional Provisions

Rights of Data Principals (Rule 13)
Data Principals can request access to their personal data and its erasure. Consent Managers must publish details on how these rights can be exercised.

Cross-Border Data Transfer (Rule 14)
Transfers of personal data outside India are subject to compliance with Central Government orders and specific security provisions.

Exemptions for Certain Processing Activities (Fourth Schedule)
Certain classes of Data Fiduciaries, such as healthcare institutions, may be exempt from specific consent requirements when processing children's data if necessary for health services or educational benefits.

Some Introspective Questions

Below is a detailed legal analysis of the critical areas that require further examination and policy throughput. Please note that this is not official feedback published by Indic Pacific Legal Research for the Ministry of Electronics & Information Technology, Government of India.

Notice Requirements (Rule 3)

Clarity and Comprehensibility
The rules mandate that notices provided by Data Fiduciaries must be clear, comprehensible, and understandable independently of other information.
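To illustrate the timing logic that Rule 8's retention-and-erasure scheme implies, the minimal sketch below computes a hypothetical notification deadline and erasure date from a Data Principal's last engagement. The three-year retention window and the field names are assumptions made purely for illustration; only the 48-hour advance notice comes from the draft rules themselves, and the exact retention periods are left to be specified for different classes of Data Fiduciaries.

```python
from datetime import datetime, timedelta

# Hypothetical retention window; the draft rules leave the exact
# period to be specified, so three years is an illustrative assumption.
RETENTION_PERIOD = timedelta(days=3 * 365)
# Rule 8 requires notice of impending erasure at least 48 hours ahead.
NOTICE_WINDOW = timedelta(hours=48)

def erasure_schedule(last_engagement: datetime) -> dict:
    """Return the latest date to notify the Data Principal and the
    date personal data falls due for erasure, counted from the last
    time the Data Principal engaged with the service."""
    erasure_due = last_engagement + RETENTION_PERIOD
    notify_by = erasure_due - NOTICE_WINDOW
    return {"notify_by": notify_by, "erasure_due": erasure_due}

# Example: a user last active on the day the Draft Rules were released.
schedule = erasure_schedule(datetime(2025, 1, 3))
print(schedule["notify_by"], schedule["erasure_due"])
```

In practice a Data Fiduciary would also need a defensible definition of "engagement" (log-ins, transactions, exercise of rights), which is exactly the ambiguity the analysis below flags under Rule 8.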
This raises several legal considerations:
- Definition of Comprehensibility: What specific standards will be used to determine whether a notice is comprehensible? Will there be guidelines or metrics established by the Data Protection Board?
- Consequences of Non-Compliance: What penalties or corrective measures will be enforced if a notice fails to meet these standards?

Itemized Descriptions
The requirement for itemized descriptions of personal data and processing purposes necessitates:
- Standardization of Notices: The need for uniformity in how notices are presented could lead to the development of templates or guidelines that Data Fiduciaries must adhere to.
- Impact on Consent Withdrawal: How will the ease of withdrawing consent be operationalized? Will there be specific processes that must be followed to ensure compliance?

Registration and Obligations of Consent Managers (Rule 4)

Conditions for Registration
Consent Managers must meet specific conditions to register, including technical, operational, and financial capacity. Legal analysis should focus on:
- Assessment Criteria: What specific criteria will the Data Protection Board use to evaluate an applicant's capacity?
- Ongoing Compliance: How will ongoing compliance with these conditions be monitored and enforced?

Procedural Safeguards
The opportunity for Consent Managers to be heard by the Board is a procedural safeguard that requires scrutiny:
- Nature of Hearings: What will the process look like for these hearings? Will there be formal procedures in place?

Data Processing by Government (Rule 5)

Legal Basis for Processing
This rule governs government data processing when issuing benefits or services. Key considerations include:
- Alignment with Privacy Principles: How will government data processing align with individual privacy rights under the DPDP Act?
- Transparency in Public Spending: What mechanisms will be in place to ensure transparency regarding how public funds are used in data processing activities?
Security Safeguards (Rule 6)

Practicality for MSMEs
The security measures required from Data Fiduciaries pose significant challenges, particularly for Micro, Small, and Medium Enterprises (MSMEs):
- Cost-Benefit Analysis: A thorough examination of the costs of implementing these safeguards versus the potential costs of data breaches is essential.
- Support Mechanisms: What support or resources can be provided to MSMEs to help them comply with these security requirements?

Breach Notification (Rule 7)

Timeliness and Content
The obligations surrounding breach notifications necessitate a detailed examination:
- Best Practices for Breach Management: What best practices should organizations adopt to ensure timely and accurate breach notifications?
- Liability Implications: What are the potential liabilities for organizations that fail to comply with breach notification requirements?

Erasure of Personal Data (Rule 8)

Engagement Metrics
The criteria defining when personal data must be erased raise questions about user engagement metrics:
- Tracking Engagement: How will organizations track user engagement effectively? What tools or systems will be necessary?
- Notification Processes: The requirement to notify Data Principals before erasure raises questions about communication strategies and compliance timelines.

Rights of Data Principals (Rule 13)

Implementation Mechanisms
A thorough examination of how Data Principals can exercise their rights is needed:
- Technical and Organizational Measures: What specific measures must Data Fiduciaries implement to ensure timely responses to access and erasure requests?
- Response Times: What constitutes a reasonable response time, and how does this align with international best practices?
Cross-Border Data Transfer (Rule 14)

Compliance with Government Orders
The provisions governing cross-border data transfers require careful consideration:
- Legal Basis for Transfers: Understanding the legal bases required for transferring personal data outside India, including consent mechanisms, will provide clarity on operational challenges.
- Impact on International Business Operations: How will these rules affect businesses operating internationally, particularly regarding compliance burdens?

Data Localization in the Draft DPDP Rules

The Draft Digital Personal Data Protection (DPDP) Rules underscore the importance of data localization, which mandates that certain categories of personal data pertaining to Indian citizens must be stored and processed within India. While this requirement is pivotal for enhancing data security and privacy, it also presents challenges and implications for businesses operating in the digital space.

Current Framework and Implications

Definition and Scope of Data Localization: Data localization aims to ensure that personal data related to Indian citizens is stored within the country, thereby enhancing governmental control over data privacy and security. The rules specify that Significant Data Fiduciaries (SDFs) must adhere to conditions regarding the transfer of personal data outside India, which may include obtaining explicit consent from Data Principals or complying with directives from the Central Government.

Challenges in Implementation:
- Ambiguity in Guidelines: The current draft lacks comprehensive guidelines detailing how organizations can effectively achieve compliance with localization requirements. This ambiguity could lead to varied interpretations and inconsistent practices across different sectors.
- Operational Burden: For multinational companies, the requirement to localize data may result in increased operational complexity and costs.
Organizations may need to invest significantly in local infrastructure or face penalties for non-compliance, potentially impacting their business models.
- Impact on Innovation: Critics argue that stringent localization mandates could hinder innovation by restricting access to global data resources and collaboration opportunities. Companies may struggle to leverage cloud computing and other technologies that depend on cross-border data flows.

The Case for Data Localization

Despite the challenges associated with data localization, the concept remains a critical consideration for several reasons:
- Enhanced Data Sovereignty: By mandating that personal data be stored within national borders, countries can exert greater control over their citizens' information. This can lead to improved accountability and facilitate legal recourse in cases of data breaches or misuse.
- Improved Security Measures: Localizing data can mitigate risks associated with international data transfers, such as exposure to foreign surveillance or differing legal standards for data protection. It allows governments to enforce local laws more effectively.
- Public Trust: Implementing robust localization policies can foster public trust in digital services by assuring citizens that their personal information is protected under local laws and regulations.

Conclusion to the Overall Analysis

The Draft Digital Personal Data Protection (DPDP) Rules represent a significant advancement in the establishment of a comprehensive data protection framework in India. The focus on data localization, consent management, and the rights of Data Principals reflects an increasing awareness of the necessity for robust privacy protections in a digital age. While these rules pose challenges, particularly regarding compliance and operational implications for businesses, they also create opportunities to enhance data security and foster public trust.
As stakeholders engage in the consultation process initiated by the Ministry of Electronics and Information Technology (MeitY), it is vital to consider the implications of confidentiality in feedback submissions. The commitment to holding submissions in a fiduciary capacity ensures that individuals and organizations can provide their insights without fear of disclosure or repercussion. This confidentiality is crucial for promoting open dialogue and collecting the diverse perspectives that can inform the finalization of the rules. However, it is essential to acknowledge that undisclosed versions of the draft DPDP Rules have been leaked in bad faith, potentially manipulating the tech policy discourse in India. Such actions undermine the integrity of the consultation process and could skew stakeholder perceptions and discussions surrounding these critical regulations.

Submissions Held in Fiduciary Capacity

The assurance that submissions will be held in a fiduciary capacity by MeitY is a reasonable aspect of this consultation process. By ensuring that feedback remains confidential, stakeholders can express their views freely without hesitation. This approach encourages a more honest and constructive discourse around the challenges and implications of the DPDP Rules.
- Anonymity Encourages Participation: The ability to submit comments without attribution allows a broader range of voices to be heard, including those from smaller organizations or individuals who might otherwise feel intimidated by potential backlash.
- Consolidated Feedback Summary: The promise to publish a consolidated summary of feedback received, without attributing it to specific stakeholders, further enhances transparency while protecting individual contributions. This summary can serve as a valuable resource for understanding common concerns and suggestions, ultimately aiding in refining the rules.

Feedback can be submitted through an online portal set up by MeitY specifically for this purpose.
The link for submitting feedback is available at the MyGov DPDP Rules 2025 Portal. After submission, keep an eye on updates from MeitY regarding any further consultations or changes made based on stakeholder feedback.
- Character.AI, Disruption, Anger and Intellectual Property Dilemmas Ahead
The author is currently a Research Intern at the Indian Society of Artificial Intelligence and Law. Made with Luma AI.

What is Character.AI and What is the Mass Deletion Event?

Imagine having your personal Batman, Superman, Iron Man, or even Atticus Finch, someone you can interact with at any moment. Character.AI has turned this dream into reality for many, especially within fandom communities. Character.AI is an artificial intelligence (AI) platform through which users interact with and create AI-powered chatbots based on either fictional or real people. Since its launch in 2021, the platform has gained significant traction among fandom communities and has become a go-to platform for exploring interactions with favorite, often fictional, characters. However, the platform's user base isn't limited to fandom communities; it also extends to people interested in history, philosophy, and literature, and to people with other niche interests. Character.AI also enjoys an advantage available to very few platforms: a diverse user base, ranging from users with serious interests to casual explorers. Users from fandom communities saw the platform as a new way to engage with their favorite characters. Character.AI also enjoys a demographic advantage, with the majority of its users located in the United States, Brazil, India, Indonesia, and the United Kingdom. However, Character.AI has also seen its fair share of controversies, the latest being a mass deletion drive involving copyrighted characters, which raised concerns over copyright infringement, platform liability, and platform ethics in the context of AI-generated content.

Overview of Character.AI's platform and user base

Character.AI's core value proposition lies in enabling users to interact with AI-powered chatbots designed to simulate lifelike conversation.
These chatbots reflect diverse personalities, conversational styles, and traits unique to the character on which each chatbot was trained, making the platform particularly popular for role-playing with favorite characters and for storytelling. At its heart, Character.AI is a conversational AI platform that hosts a wide range of chatbots and gives users the ability to interact with existing characters or create their own, customizing the characters' personalities and responses. Character.AI boasts a diverse user base, with a large share falling within the 18-24 age group. The composition of its user demographics is visually represented in the following figure:

Figure 1: Age distribution of Character.AI visitors

The platform hosts a wide range of characters, including historical figures, celebrities, fictional characters, and even dungeon masters. This makes it accessible to people of different age groups. The majority of its user base stems from the 18-24 age group, and users under the age of 44 together make up 89.84 percent of its user base.

Summary of the mass deletion of copyrighted characters

In November 2024, Character.AI carried out a mass deletion drive of AI chatbots based on copyrighted characters from various franchises, including "Harry Potter," "Game of Thrones," and "Looney Tunes." The company announced that the deletions were carried out under the Digital Millennium Copyright Act (DMCA) and copyright law. However, the company did not explain why it acted when it did, or whether it had proactively engaged in a dialogue with the copyright holders, such as Warner Bros. Discovery. Interestingly, users were not officially notified about these deletions; they only came to know about the situation through a screenshot circulating online.
The removals were met with strong backlash from the user community, in particular from those within fandom cultures who had invested time and emotional energy in interacting with these AI characters. The removal of familiar figures such as Severus Snape, who had clocked 47.3 million user chats, threw the fandom community into turmoil and made people doubt the future of Character.AI and its relationship with copyrighted content.

Initial user reactions and impact on the fandom community

The initial reactions from users expressed frustration, disappointment, and anger. Some users considered migrating to different AI platforms, and the deletions sparked discussions about the balance between copyright protection and creative expression on AI platforms. Many users expressed disappointment over the lack of prior notice regarding the deletion drive. One user remarked: "at least a prior notice would be nice. This allows us to archive or download the chats at the very least. Also, I earnestly hope you finally listen to your community. Thank you!" Others criticized the unprofessional handling of the situation, with the platform communicating the news two days after the deletion drive had already occurred. While some users acknowledged, and in some ways already anticipated, the potential reasons behind the deletion drive, recognizing Warner Bros. Discovery's need to protect its IP from potential controversies, they were mostly concerned about the lack of transparent communication and the absence of any heads-up.

Copyright Law and AI-Generated Content

The mass deletion on Character.AI highlights the complex legal issues at the intersection of copyright law and AI-generated content.
The use of copyrighted characters in AI chatbots raises concerns around copyright infringement, fair use, and the responsibilities of AI platforms regarding intellectual property rights.

Analysis of copyright infringement claims in AI-generated chatbots

Intellectual property law, and particularly copyright law, grants exclusive rights to copyright holders, including the rights to reproduce, distribute, license, and create derivative works based on their original creative works. The emergence of AI chatbots, and conversational AI in general, presents a complex conundrum: such systems potentially infringe upon these exclusive rights when they reproduce the protected elements of characters, their personalities, appearances, storylines, conversational styles, and ideologies, or, simply put, those characters in their entirety. However, dealing with copyright infringement in the realm of AI-generated content is not an easy legal problem to overcome: matters in this realm are still pending in courts, and there are limited precedents to establish a responsible discourse. All of this is further complicated by the fact that the Large Language Models (LLMs) which power these AI systems do not simply copy and present content. Instead, they analyze vast numbers of data points to learn patterns and generate works inspired not by a single copyright holder, but by many. Courts will need to consider factors such as the extent to which an AI chatbot copies protected elements of copyrighted characters, the purpose of the use, and the potential impact on the market for the original work. The mind map below gives a comprehensive examination of the fair use arguments with respect to AI training.

Figure 2: Analysis of Fair Use in AI Training, using a mind map.
Discussion of the Digital Millennium Copyright Act (DMCA) Implications

The Digital Millennium Copyright Act (DMCA) provides a safe harbor framework that protects online service providers from liability for copyright infringement by their users, provided that certain conditions are met. These conditions are illustrated for your reference in Figure 3. The DMCA also carries significant implications for platforms like Character.AI, requiring them to establish mechanisms for addressing infringement claims. This includes responding to takedown notices from copyright holders and proactively implementing measures to prevent potential infringements. However, the application of the DMCA to AI-generated content remains underdeveloped, leaving unanswered questions about how notice-and-takedown systems can effectively address the unique challenges posed by AI-generated content.

Figure 3: DMCA Safe Harbour Compliances, depicted.

Platform Liability and Content Moderation

The mass deletion on Character.AI raises pertinent questions about the legal duties of AI platforms to moderate content and prevent harm. As AI chatbots become ever more capable of producing lifelike, immersive experiences, platforms face a tremendous challenge in ensuring the safety of users, protecting intellectual property rights, and living up to various legal and ethical standards.

Exploration of Character.AI's legal responsibilities as a platform

Character.AI, like other online platforms, bears legal responsibilities towards its users and society at large. These include protecting user privacy, preventing harm, and complying with the law of the land. Policies and guidelines in Character.AI's terms of service set out the dos and don'ts regarding user behaviour, content, and intellectual property rights.
However, the specific legal obligations, and the extent to which platforms should be held liable for content generated by their users or the actions of their chatbots, are still evolving. The recent lawsuit against Character.AI, a wrongful death case concerning a teenager's suicide after he formed a deep emotional attachment to a 'Daenerys Targaryen'-inspired chatbot, underscores the potential risks of conversational AI and, specifically, character-based conversational AI. The lawsuit alleges negligence, wrongful death, product liability, and deceptive trade practices, claiming that Character.AI had a responsibility to inform users of the dangers related to the service, particularly the danger it posed to children. Aside from legal responsibilities, Character.AI also grapples with ethical issues: bias within training data, preventing the black-boxing of its conversational AI models, and establishing accountability for the actions and impacts of AI systems. These ethical concerns are critical in their own right and must be addressed proactively as we seek to innovate. An evaluation of proactive versus reactive content moderation strategies is depicted in the figure below.

Figure 4: Comparison of Reactive and Proactive Content Moderation

Comparison with other AI platforms' approaches to copyrighted content

Different AI platforms have adopted differing approaches to managing copyrighted content. Some platforms strictly enforce policies against the use of copyrighted characters, whereas others have taken a more permissive approach, allowing users to create and interact with AI chatbots based on copyrighted characters under certain conditions. For example, Replika and Chai focus on the creation of novel AI companions rather than replicating pre-existing characters, minimising copyright issues.
NovelAI, on the other hand, has implemented features that let users generate content based on copyrighted works, but within limitations and safeguards intended to avoid copyright violations.

User Rights and Expectations in AI Fandom Spaces

In this complex scheme of things, copyrighted content is utilized to train large language models (LLMs), whose output is in some sense derivative of the original works, and users further refine these models through prompting to get a more personalized experience and to interact with figures they could not interact with in real life. A new dynamic thus emerges, one in which unreasonable expectations are set. This dynamic becomes even more critical when companies are not doing their part to make users aware of the limitations of the conversational AI models those users want to experience. Users invest significant time, creativity, and emotional energy in fine-tuning and interacting with these models. The interactions people have had with the models have helped the models improve; users have contributed to the success of those chatbots and helped create personalized experiences for others. The initial reaction to the abrupt deletion of chatbots highlighted the basic expectations of core users: some form of control or say over the deletion of those chatbots and the data generated during interactions, and prior notice, so that they could archive conversations before removal. It is crucial to understand that it is not just about the energy users spent crafting personalized conversations with the chatbots, but also about the comfort they sought, the ideas they had, and the brainstorming they did with those chatbots.
Examination of user-generated content ownership in AI environments

One question, and a major concern for users of conversational AI and for future technology law jurisprudence, is whether users are in part copyright holders of the chats between them and the characters they interact with. Platforms like Character.AI allow users to have private, personalized conversations that are often unique to their input prompts, and users can now share their chats with others, giving those chats the status of published works and complicating the issue of ownership even further. Character.AI's Terms of Service (TOS) provide that users retain ownership of their characters and, by extension, the generated content. However, the platform reserves a broad, sweeping license to use this content for any purpose, including commercial use. This convenient arrangement gives Character.AI the potential to benefit commercially from user-generated content, without compensation or recognition, not only for the user-generated derivative content but also for the original copyrighted works themselves.

Discussion of user expectations for persistence of AI characters

When it comes to the deletion of characters, the TOS of Character.AI is broad and sweeping. It states that Character.AI reserves the right to terminate accounts and delete content for a variety of reasons, including inactivity or violation of the TOS, often without prior notice. This lack of transparency in content moderation has outsized consequences, particularly when there can be severe emotional fallout for those who rely on these characters for emotional and mental support. The ethical implications of this opaque policy are amplified in the context of fandom, where fans tend to depend on the parasocial relationships they enjoy with their fictional characters.
In addition, the TOS provides: "You agree that Character.AI has no responsibility or liability for the deletion or failure to store any data or other content maintained or uploaded to the Services". This provision only exacerbates the asymmetry between the control, influence, and certainty that users expect and the powers the company wants to exercise unquestioned. These terms not only neglect user rights but also fail to address ethical concerns like transparency and fair moderation.

Analysis of potential terms of service and user agreement issues

Character.AI's terms of service contain several contentious provisions, depicted in the figure below:

Figure 5: Character.AI's contentious policies, depicted.

These provisions raise several legal and policy concerns; their broad, sweeping disregard of user expectations highlights the need for a more balanced approach that protects user rights while still allowing for innovation and the responsible use of conversational AI. This is even more pertinent in the context of conversational AI systems, where users rely on platforms for emotional validation, support, and interaction, and where the consequences of failure fall far more heavily on the user than on the platform.

Ethical Considerations in AI-Powered Fandom Interactions

Exploration of parasocial relationships with AI characters

One significant concern that has emerged since the advent of conversational AI, and especially of personalized, personality-based conversational AI, is the development of parasocial relationships: one-sided attachments in which individuals develop emotional connections to fictional and media personalities. The development of such emotional bonds and attachments is even more common in fandom spaces.
Within fandom communities, people are already emotionally invested in their favorite characters and universes; for them, such relationships come to rival the reality they live in, sometimes exceeding real-life relationships. The introduction of conversational AI further intensifies these relationships and dynamics, as the interactions become personalized, interactive, and more lifelike. Character.AI offers the option to call your personal 'Batman', 'Harvey Specter', 'Harley Quinn', or a random 'mentorship coach'. Imagine interacting with them and feeling intimately close to the figures you admire through this feature. The increasing sophistication of AI characters and their ability to mimic human-like conversation blurs the line between the real and the simulated. For many people it all becomes real, with real-world consequences. AI companies and their developers have an ethical responsibility to ensure transparency about the limitations of AI characters, and to ensure that they do not mislead users about their capabilities or simulate emotions that these systems cannot experience. Minors and the elderly then become populations vulnerable to manipulative conversational AI systems which, if unchecked, create a risk of people living in distorted realities and alienated worlds of their own making, or, simply put, worlds the AI systems manipulated them into.

Discussion of potential psychological impacts on users, especially minors

The psychological implications of excessive and early exposure to conversational AI are significant, particularly for children. Similar to social media's impact, these systems could hinder the development of social skills and the ability to build meaningful, real-world relationships. Such early incorporation could hurt children's prospects of becoming mature, reasonable adults who can navigate complex human dynamics.
Research suggests that users, and particularly children, may be vulnerable to the “empathy gap” of AI chatbots. Children are likely to treat AI characters as friends and to misinterpret their responses, owing to their limited understanding of the technologies they are interacting with. Studies have also suggested that interactions with AI systems can increase loneliness, sleep disturbances, alcohol consumption, and depression. Moreover, early introduction to AI systems, with limited awareness and in the absence of effective regulatory and support mechanisms, would promote unhealthy behaviours detrimental not only to children's human interactions but also to their mental and physical health and emotional intelligence. This could have second-order effects on their careers and real-world interactions, where they might develop the unreasonable expectation that humans will do as they say and expect (something LLMs are known to do). Ethical implications of AI characters mimicking real or fictional personas AI characters that mimic real-life or fictional personalities raise a whole range of ethical dilemmas whose consequences humans are not yet ready to grasp. Issues of identity, authenticity, consent, lifelike conversational mimicry, and manipulation need a nuanced understanding, against the backdrop of disagreement even over the definition of what AI actually is. For example, the use of AI to create personas of real people without their explicit consent can be seen as a gross violation of their privacy. Additionally, actors or creators associated with the original characters might face unintended consequences, such as users displacing attachment, love, anger, pain, and distress onto them, creating real-world consequences and second-order effects that are hard to mitigate. The potential for misrepresentation and manipulation by AI characters is equally troubling.
Technologies like deepfakes have already illustrated the potential for misinformation, reputational damage, and legal consequences for those whose AI personas committed or abetted the manipulation in question. It is also true that fictional personas may reinforce unsuitable or inappropriate narratives or behaviours present in the data on which the chatbots were trained. For example, an AI character based on a fictional antagonist could reinforce negative stereotypes or behaviours where the users interacting with it are unaware of how the technology functions and the safeguards needed to protect them are absent. To address these risks, companies developing these AI characters must themselves adopt widely accepted ethical standards. It is crucial to educate users about the limitations of AI systems and to implement transparent practices that prevent harm. Intellectual Property Strategies for Media Companies in the AI Era The rise of AI has presented media companies that seek to protect their intellectual property portfolios while embracing innovation with both challenges and opportunities. Traditional IP frameworks need to be reimagined and redesigned to address the unique set of challenges that AI-generated content and AI-powered fandom bring to the table. It is crucial to highlight that AI systems enjoy an asymmetrical advantage over the IP right holders whose creative works are often utilised to train their LLMs. While these LLMs and the companies that train them rapidly ideate, scale, and distribute their products, the analysis of the core issues central to shaping future discourse remains tied up in court for a significant while. Adding to the stagnant pace of policymaking is the hesitance of governments to rapidly adopt effective policies and legislation, for fear of completely stifling innovation.
The IP owners of those exclusive works face a slower process of defending their rights through the courts, and they are often ill-equipped with appropriate strategies to enforce their rights over their creative works. The incentive structures of AI companies encourage them to quickly develop and scale their products and to enjoy revenue from the commercialisation of these LLMs, often leaving IP holders scrambling even to claim rights over their own creative works. Meanwhile, governments hesitate, not wanting to stifle innovation or the potentially helpful use cases of these systems, yet they do not move beyond a whack-a-mole approach to shaping the policy discourse around AI and law. Analysis of Warner Bros. Discovery's approach to protecting IP Warner Bros. Discovery is a media and entertainment company facing the challenge of protecting its vast and mature IP portfolio in the age of AI. The company's approach involves a combination of legal strategies, protective measures, and proactive engagement with AI platforms. The rapid ideation, scaling, and implementation advantage of AI companies makes it necessary for copyright holders in media and creative works to adopt a variety of measures, both ex ante and ex post in nature. A key component of this approach involves monitoring AI platforms and communities for unauthorised use of intellectual property in training chatbots, taking legal measures against infringements, negotiating licensing opportunities, and exploring the future of media entertainment. In the present context, Warner Bros. Discovery appears to have devised a proactive strategy for dealing with infringements in the digital environment, thereby reducing its need to rely on litigation to enforce its IP rights.
Warner Bros. Discovery and other media and entertainment companies have a once-in-a-decade opportunity to collaborate with AI platforms to develop tools and technologies that protect their intellectual property portfolios while furthering innovation; curbing misinformation and unauthorised access; addressing ethical concerns; and enabling AI platforms to put in place appropriate compliance measures that reduce their liabilities. These collaborations could pave the way for industry standards and best practices for IP protection at a stage where these technologies are still developing. Such unprecedented collaborations could also help educate the public about misinformation, consent, and unauthorised access, and help set user expectations. Media and entertainment companies could assist AI platforms in explaining their terms of service, privacy policies, and user agreements in a story format, with the help of AI characters; this would foster a healthier and more effective approach to the ethical concerns raised time and again by the various stakeholders shaping the discourse around AI systems and content creation. Exploration of Licensing Models for AI Character Creation Recent cases, such as Dow Jones and NYP Holdings v. Perplexity AI and Bartz v. Anthropic, mark a significant turning point in the potential relationship between AI companies and the owners of the creative works on which LLMs are trained. In both cases, the owners of exclusive intellectual property have expressed their willingness to collaborate and explore licensing strategies that provide fair compensation for the use of their works in training LLMs.
This marks a change in the approach IP holders want to take: licensing offers them an additional source of revenue, and it highlights that they are not reluctant to allow the use of their copyrighted content, but are concerned only about the piracy of content of which they are the sole IP holders. There are various licensing strategies that AI companies and media entertainment companies could explore as a default. These include exclusive licenses, non-exclusive licenses, revenue-sharing models, and usage-based licenses. These licensing models could be explored and adopted depending on the context in which the AI companies use the copyrighted content. The pros and cons of these models are explained hereinafter in the form of a mind map: Figure 6: Licensing Models and their types, depicted. Conclusion and Recommendations To conclude, potential collaborations between IP holders and AI platforms will shape how users and owners of creative works view the incentive structures at play, and what other forms of entertainment are yet to be explored. The 'tabooisation' of AI systems in the creative fields will only be detrimental to media companies. If they instead choose to embrace a future that is already here and is here to stay, media companies would be able to develop interactive narratives, personalised experiences, postscript bites, and other new entertainment forms that work in collaboration with, not in isolation from, AI systems. Here are some mind maps reflecting suggestions for balancing copyright protection and innovation in the case of AI use. Figure 7: Suggestions for Balancing Copyright Protection and Innovation in AI, depicted. Figure 8: The Author's Proposed Guidelines for Ethical AI Character Creation and Interaction
- Beyond AGI Promises: Decoding Microsoft-OpenAI's Competition Policy Paradox
Explore Escher-inspired environments where AI elements navigate complex geometric spaces with policy cards, blending surreal architecture with futuristic aesthetics. Made with Luma AI. The strategic recalibration between Microsoft and OpenAI presents a compelling case study in digital competition policy, marked by two significant developments: OpenAI's potential removal of its AGI (Artificial General Intelligence) mandate and Microsoft's formal designation of OpenAI as a competitor in its fiscal reports. This analysis examines the implications of these interrelated events through three critical lenses: competition policy frameworks, market dynamics, and regulatory governance. The first dimension of this analysis explores the competitive framework assessment, delving into the complexities of vertical integration in AI markets and the unique dynamics of partnership-competition duality in technological ecosystems. This section examines how traditional antitrust frameworks struggle to address scenarios where major technology companies simultaneously act as investors, partners, and competitors. The second component focuses on regulatory implications, evaluating the adequacy of current competition policies in addressing AI-driven market transformations. It assesses existing regulatory oversight mechanisms and explores potential policy reforms needed to address the unique challenges posed by AI development partnerships and their impact on market competition. The final segment examines market structure dynamics, analysing how the evolution of AI development funding models affects corporate governance and innovation. This section particularly focuses on how the tension between public benefit missions and commercial imperatives shapes the future of AI enterprise structures and market competition. Examining the Key Events in the MSFT-OpenAI Relationship Two pivotal events have reshaped the Microsoft-OpenAI relationship, highlighting evolving dynamics in the AI industry. 
The AGI Clause Reconsideration OpenAI is discussing the removal of a significant contractual provision that currently restricts Microsoft's access to advanced AI models once Artificial General Intelligence (AGI) is achieved. This clause, originally designed to prevent commercial misuse of AGI technology, defines AGI as a "highly autonomous system that outperforms humans at most economically valuable work". Microsoft's Competitive Designation In a notable shift, Microsoft has officially listed OpenAI as a competitor in its annual report, specifically in AI, search, and news advertising sectors. This designation places OpenAI alongside traditional competitors like Amazon, Apple, Google, and Meta, despite Microsoft's substantial $13 billion investment in the company. Financial Context The timing of these developments is significant: OpenAI recently closed a $6.6 billion funding round, achieving a $157 billion valuation The company is exploring restructuring its core business into a for-profit benefit corporation Sam Altman acknowledged that the company's initial structure didn't anticipate becoming a product company requiring massive capital These events reflect a complex relationship where Microsoft serves as both OpenAI's exclusive cloud provider and now, officially, its competitor. Competitive Framework and Market Structure Analysis The Microsoft-OpenAI relationship exemplifies a new paradigm in digital market competition, characterised by complex interdependencies and strategic ambiguity. Vertical Integration Dynamics The relationship demonstrates unprecedented vertical integration patterns, where Microsoft simultaneously acts as OpenAI's largest investor ($13 billion), exclusive cloud provider, and declared competitor.
Figure 1: Competition-Partnership Matrix Map This creates a unique market structure where: Microsoft integrates OpenAI's technology across its product stack Both entities compete for direct enterprise customers Cloud services and AI capabilities overlap increasingly Search market competition intensifies with SearchGPT's introduction Market Power Distribution Figure 2: Market Power Dynamics The evolving dynamics reveal a shifting power balance in the AI ecosystem: Traditional competition frameworks struggle to categorise this relationship Both companies maintain strategic independence while leveraging shared resources Market opportunities drive expansion into overlapping territories Product differentiation becomes crucial for maintaining distinct identities Structural Evolution Figure 3: AI Industry Structure Evolution The relationship's transformation reflects broader market structure changes: The partnership model has evolved from pure collaboration to "coopetition" Both companies are developing independent capabilities while maintaining interdependence Microsoft's development of in-house AI models (MAI-1) indicates strategic hedging OpenAI's direct-to-consumer products suggest market independence aspirations Resource Allocation Dynamics The competition-collaboration balance creates unique resource allocation patterns: Computational resources flow through Microsoft's Azure platform Financial investments create mutual dependencies Talent and innovation capabilities remain distinct Market access and customer relationships overlap increasingly This complex framework challenges traditional antitrust approaches and necessitates new competition policy tools that can address the nuanced reality of modern tech partnerships. Conclusion & Recommendations The Microsoft-OpenAI case demonstrates that current competition frameworks require substantial recalibration to address emerging AI market dynamics. 
Several specific considerations emerge: Regulatory Architecture Requirements Competition authorities need specialized tools for evaluating AI partnerships where competitive boundaries are fluid Traditional market share metrics prove inadequate when assessing AI market power Vertical integration assessments must consider both immediate and potential future competitive impacts Data access and computational resource control require distinct evaluation metrics Market-Specific Considerations The definition of "essential facilities" in AI markets must extend beyond traditional infrastructure to include: Training data access mechanisms Computational resource availability Model architecture knowledge API access conditions Market power assessment should incorporate both current capabilities and future development potential Competition policy must balance innovation incentives with market access concerns Policy Implementation Framework Immediate regulatory priorities: Establishing clear guidelines for AI partnership disclosures Developing metrics for assessing AI market concentration Creating mechanisms for monitoring technological dependencies Setting standards for competitive access to essential AI resources Long-term considerations: Evolution of partnership structures in AI development Impact of AGI development on market competition Balance between open-source and proprietary AI development Global coordination of AI competition policies Recommendations Competition authorities should develop: Dynamic assessment tools for evaluating AI partnerships Frameworks for monitoring technological lock-in effects Mechanisms for ensuring competitive API access Standards for evaluating AI market concentration Policy frameworks must remain adaptable to technological evolution while maintaining competitive safeguards The Microsoft-OpenAI relationship thus serves as a crucial precedent for developing nuanced competition policies that can effectively govern the unique dynamics of AI market 
development while ensuring sustainable innovation and fair competition.
- CCI's Landmark Ruling on Meta's Privacy Practices
The Competition Commission of India's (CCI) recent press release announcing a substantial penalty of Rs. 213.14 crore on Meta marks a significant milestone in the regulation of digital platforms in India. This decision, centered on WhatsApp's 2021 Privacy Policy update, underscores the growing scrutiny of data practices and market dominance in the digital economy. The CCI's action reflects a proactive approach to addressing anti-competitive behaviours in the tech sector, particularly concerning data sharing and user privacy. This policy insight examines the implications of the CCI's decision, which goes beyond mere financial penalties to impose behavioural remedies aimed at reshaping Meta's data practices in India. The order's focus on user consent, data sharing restrictions, and transparency requirements signals a shift towards more stringent regulation of digital platforms. It also highlights the intersection of competition law with data protection concerns, setting a precedent that could influence regulatory approaches both in India and globally. As a draft Digital Competition Bill was proposed in March 2024, this CCI action provides valuable insights into the regulator's perspective on digital market dynamics and its readiness to enforce competition laws in the digital sphere. The decision raises important questions about the balance between fostering innovation in the digital economy and protecting user rights and market competition. Detailed Breakdown of the CCI Press Release Penalty and Legal Basis The Competition Commission of India (CCI) has imposed a substantial penalty of Rs. 213.14 crore on Meta for abusing its dominant market position.
This penalty is based on violations of multiple sections of the Competition Act: Section 4(2)(a)(i): Imposition of unfair conditions Section 4(2)(c): Creation of entry barriers and denial of market access Section 4(2)(e): Leveraging dominant position in one market to protect position in another Relevant Markets and Dominance The CCI identified two key markets in its investigation: OTT messaging apps through smartphones in India Online display advertising in India Meta, through WhatsApp, was found to be dominant in the OTT messaging app market and held a leading position in online display advertising. Privacy Policy Update and Its Implications The case centres on WhatsApp's 2021 Privacy Policy update, which: Mandated users to accept expanded data collection terms Required sharing of data with other Meta companies Removed the previous opt-out option for data sharing with Facebook Presented these changes on a "take-it-or-leave-it" basis Anti-Competitive Practices Identified The CCI concluded that Meta engaged in several anti-competitive practices: Imposing unfair conditions through the mandatory acceptance of expanded data collection and sharing Creating entry barriers for rivals in the display advertising market Leveraging its dominant position in OTT messaging to protect its position in online display advertising Remedial Measures The CCI has ordered several behavioral remedies to address these issues: Data Sharing Prohibition: WhatsApp is prohibited from sharing user data with other Meta companies for advertising purposes for 5 years. Transparency Requirements: WhatsApp must provide a detailed explanation of data sharing practices in its policy. User Consent: Data sharing cannot be a condition for accessing WhatsApp services in India. 
Opt-Out Options: Users must be given opt-out options for data sharing, including: An in-app notification with an opt-out option A prominent settings tab to review and modify data sharing choices All future policy updates must comply with these requirements. Significance of the Ruling This decision by the CCI is significant for several reasons: It addresses the intersection of data privacy and competition law It challenges the business model of major tech companies that rely on data sharing across platforms It sets a precedent for regulating "take-it-or-leave-it" privacy policies by dominant platforms It demonstrates the CCI's willingness to take strong action against anti-competitive practices in the digital economy Based on the CCI's order against Meta and WhatsApp, we can analyze its implementability, effectiveness, and implications for the Draft Digital Competition Bill's legislative perspective: Implementability and Enforcement Specific and Actionable Directives: The CCI's order includes clear, implementable directives such as: Prohibiting data sharing for advertising purposes for 5 years Mandating detailed explanations of data sharing practices Requiring opt-out options for users These specific measures demonstrate that the CCI can craft enforceable remedies for digital markets. Temporal Scope: The 5-year prohibition on data sharing for advertising purposes shows the CCI's willingness to impose long-term structural changes in business practices. User Interface Changes: Requiring WhatsApp to provide opt-out options through in-app notifications and settings demonstrates the CCI's ability to mandate specific changes to digital platforms' user interfaces. Where the Order Shows Teeth Substantial Financial Penalty: The ₹213.14 crore fine is significant and sends a strong message to digital platforms operating in India. 
Behavioural Remedies: Going beyond fines, the order mandates specific changes in WhatsApp's data practices and user interface, directly impacting Meta's business model. Broad Market Impact: By addressing both the OTT messaging and online display advertising markets, the CCI demonstrates its ability to tackle complex, multi-sided digital markets. Future Compliance: The order extends to future policy updates, ensuring ongoing compliance and preventing easy workarounds. Reflections on the Draft Digital Competition Bill's Legislative Perspective Ex-Ante Approach: While this action is ex-post, it signals the CCI's readiness to adopt a more proactive, ex-ante approach as proposed in the Digital Competition Bill. Focus on Data Practices: The order's emphasis on data sharing and user consent aligns with the bill's focus on regulating data practices of large digital platforms. User Choice and Transparency: The remedies ordered reflect the bill's intent to promote user choice and transparency in digital markets. Complementary Enforcement: This action under existing laws demonstrates how the proposed ex-ante framework could complement current ex-post enforcement, potentially addressing concerns more swiftly and effectively. Technical Expertise: The detailed analysis in the order suggests the CCI is developing the technical expertise needed to regulate digital markets effectively, as emphasised in the proposed bill. Conclusion In conclusion, the CCI's order against Meta and WhatsApp demonstrates that the regulator has the capability and willingness to implement and enforce significant measures against large digital platforms, even under the current legal framework. This action likely strengthens the case for the proposed ex-ante regulatory approach in the Digital Competition Bill, showing that the CCI is prepared to take on a more proactive role in shaping fair competition in India's digital markets.
- Book Review: Taming Silicon Valley by Gary Marcus
This is a review of "Taming Silicon Valley: How Can We Ensure AI Works for Us", authored by Dr Gary Marcus. By way of introduction, Dr Marcus is Emeritus Professor of Psychology and Neural Science at New York University, US. He is a leading voice in the global artificial intelligence industry, especially in the United States. One may agree or disagree with his assessments of Generative AI use cases and trends. However, his erudite points must be considered to understand how AI trends around Silicon Valley are documented and understood, beyond the book's intrinsic focus on industry and policy issues around artificial intelligence. The book, at its best, offers an opportunity to dive into the introductory problems of the global AI ecosystem, in Silicon Valley and, in some instances, even beyond. Mapping the Current State of 'GenAI' / RoughDraft AI In this section of the book, Dr Marcus provides essential examples of how Generative AI (GenAI) solutions appear appealing but have significant reliability and trust issues. The author begins by demonstrating how most Business-to-Consumer (B2C) GenAI 'solutions' look appealing, allowing readers to explore basic examples of prompts and AI-generated content to understand the 'appealing' element of any B2C GenAI tool, be it in text or visuals. The author compares the 'Henrietta Incident', where a misleading point about Dr Marcus led a GenAI tool to produce a plausible but error-riddled output, with an LLM alleging Elon Musk's 'death' by conflating his ownership of Tesla Motors with Tesla driver fatalities. These examples highlight the shaky ground of GenAI tools in terms of reliability and trust, which many technology experts, lawyers, and policy specialists have not focused on, despite the obvious references to these errors. The 'Chevy Tahoe' and 'BOMB' examples are fascinating, showing how GenAI tools consume inputs but don't understand their outputs.
Despite patching interpretive issues, ancillary problems persist. The 'BOMB' example demonstrates how masked writing can bypass guardrails, as these tools fail to understand how guardrails can be circumvented. The author responsibly avoids regarding guardrails around LLMs and GenAI as perfect. Many technology lawyers and specialists worldwide have misled people about these guardrails' potential. The UK Government's International Scientific Report at the Seoul AI Summit in May 2024 echoed the author's views, noting the ineffectiveness of existing GenAI guardrails. The book makes it easy for readers to grasp the hyped-up expectations associated with GenAI and their consequences. The author's approach of not over-explaining or oversimplifying the examples makes the content more accessible and engaging for readers. The Threats Associated with Generative AI The author provides interesting quotations from the Russian Federation Government's Defence Ministry and Kate Crawford of the AI Now Institute as he delves into a breakdown of the 12 biggest immediate threats of Generative AI. One important and underrated area of concern addressed in these sections is medical advice. Apart from deepfakes, the author's discussion of how LLM responses to medical questions are highly variable and inaccurate was a necessary one. This reminds us of a trend among influencers to convert their B2C-level content to handle increased consumer/client consulting queries, which could create a misinformed or disinformed engagement loop between the specialist/generalist and potential client. The author impressively refers to the problem of accidental misinformation, pointing out the 'Garbage-in-Garbage-Out' problem, which could drive internet traffic, especially in technical domains like STEM.
The mention of citation loops of fabricated case law alludes to how Generative AI promotes a vicious and mediocre citation loop for any topic if not dealt with correctly. In addition, the author raises an important concern around defamation risks with Generative AI. The fabrication of content used to prove defamation creates a legal dilemma, as courts may struggle to determine who should be subject to legal recourse. The book is a must-read for all major stakeholders in the Bar and Bench to understand the 'substandardism' associated with GenAI and its legal risks. The author's reference to Donald Rumsfeld's "known knowns, known unknowns, and unknown unknowns" quote frames the potential risks associated with AI, particularly those we may not yet be aware of. Interestingly, Dr Marcus debunks myths around 'literal extinction' and 'existential risk', explaining that mere malignant training imparted to ChatGPT-like tools does not give them the ability to develop 'genuine intentions'. He responsibly points out the risks of half-baked ideas like text-to-action engineering second- and third-order effects out of algorithmic activities enabled by Generative AI, making this book a fantastic explainer of the 12 threats of Generative AI. The Silicon Valley Groupthink and What it Means for India [While the sections covering Silicon Valley in this book do not explicitly mention the Indian AI ecosystem in depth, I have pointed out some general parallels, which could be relatable to a limited extent.] The author refers to the usual hypocrisies associated with the United States-based Silicon Valley. Throughout the book, Dr Marcus refers to the works of Shoshana Zuboff and the problem of surveillance capitalism, largely associated with the FAANG companies of North America, notably Google, Meta, and others. He provides a polite yet critical review of the promises held by companies like OpenAI and others in the larger AI research and B2C GenAI segments.
The Apple-Facebook differences emphasised by Dr Marcus are intriguing. The author highlights a key point made by Frances Haugen, a former Facebook employee turned whistleblower, about the stark contrast between Apple and Facebook in terms of their business practices and transparency. Haugen argues that Apple, selling tangible products like iPhones, cannot easily deceive the public about their offerings' essential characteristics. In contrast, Facebook's highly personalised social network makes it challenging for users to assess the true nature and extent of the platform's issues. Regarding OpenAI, the author points out how the 'profits, schmofits' problem around high valuations made companies like OpenAI and Anthropic give up their safety goals around AI building. Even in the name of AI Safety, the regurgitated 'guardrails' and measures have not necessarily advanced the goals of true AI Safety particularly well. This is why building AI Safety Institutes across the world (as well as something along the lines of CERN, as recommended by the author) becomes necessary. The author makes a reasonable assessment of the over-hyped and messianic narrative built by Silicon Valley players, highlighting how the loop of overpromise has largely guided the narrative so far. He mentions the "Oh no, China will get to GPT-5" myth spread across quarters in Washington DC, which relates to hyped-up conversations on AI and geopolitics in the Indo-Pacific, India, and the United States. While the author mentions several relatable points around 'slick video' marketing and the abstract notion of 'money gives them immense power', it reminds me of the discourse around the Indian Digital Competition Bill. In India, the situation gets dire because most of the FAAMG companies on the B2C side have invested their resources in such a way that even if they are not profiting enough in some sectors, they are earning well by selling Indian data and providing relevant technology infrastructure.
Dr Marcus points out the intellectual failures of science-popularising movements like effective accelerationism (e/acc). While e/acc can still be subject to interest and awe, its zero-sum mindset does not make sense in the long run. The author calls out the problems in the larger Valley-based accelerationist movements. To conclude this section, I would recommend going through a sensible response given by the CEO of Honeywell, Vimal Kapur, on how AI tools might affect less visible domains such as aerospace & energy. I believe readers might feel more excited to read this incredible book. Remembering the 19th Century and the Insistence to Regulate AI The author's reference to quotes by Tom Wheeler and Madeleine Albright reminds me of a quote from former UK Prime Minister Tony Blair, on a lighter note: “My thesis about modern politics is that the key political challenge today is the technological revolution, the 21st century equivalent of the 19th century Industrial Revolution. And politics has been slow to catch up.” While Blair's reference is largely political, the two quotes by Wheeler and Albright relate to the interesting commonalities between the 19th and 21st centuries. The author provides a solid basis for why copyright laws are important when data scraping techniques in the GenAI ecosystem do not respect the autonomy and copyrights of the authors whose content is consumed and grasped. The reference to quotes from Ed Newton-Rex and Pete Dietert on the GenAI-copyright issue highlights the ethical and legal complexities surrounding the use of creative works in training generative AI models. Dr Marcus emphasizes the urgent need for a more nuanced and ethical approach to AI development, particularly in the realm of creative industries. 
The author uses these examples to underscore a critical point: the current practices of many AI companies in harvesting and using creative works without proper permission or compensation are ethically questionable and potentially exploitative. Pete Dietert's stark warning about "digital replicants" amplifies the urgency of addressing these issues, extending the conversation beyond economic considerations to fundamental human rights, as recognised in the UNESCO Recommendation on the Ethics of AI of 2021. Dr Marcus points out how the 'Data & Trust Alliance' webpage features appealing privacy and data protection-related legal buzzwords, but the details shield companies more than they protect consumers. Such attempts at subversion are being made in Western Europe, North America, and even parts of the Indo-Pacific Region, including India. The author focuses on algorithmic transparency and source transparency among the list of demands people should make. He refers to the larger black box problem as the core basis for legally justifying why interpretability measures matter. With respect to consumer law and human rights, AI interpretability (Explainable AI) becomes necessary, including a pre-launch gestation phase to assess whether the activities regularly visible in an AI system can actually be interpreted. On source transparency, the author points out the role of content provenance (labelling) in enabling distinguishability between human-created content and synthetic content, so that the tendency to create "counterfeit people" is prevented and discouraged. The author refers to the problem of anthropomorphism, where many AI systems create a counterfeit perception among human beings and, via impersonation, could potentially degrade their cognitive abilities. 
Among the eight suggestions made by Dr Marcus on how people can make a difference in bettering AI governance avenues, the author makes a reasonable point that voluntary guidelines must be negotiated with major technology companies. In the case of India, there have been some self-regulatory attempts, like a non-binding AI Advisory in March 2024, but more consistent efforts may be implemented, starting with voluntary guidelines with sector-specific and sector-neutral priorities. Conclusion Overall, Dr Gary Marcus has written an excellent prologue to truly ‘tame’ Silicon Valley in the simplest way possible for anyone unaware of the technical and legal issues around Generative AI. The book also offers a glance at digital competition policy measures and at the effective use of consumer law frameworks where competition policy remains ineffective. The book is not necessarily a detailed documentation of the state of AI hype. However, the examples and references mentioned in the book are enough for researchers in law, economics and policy to trace out the problems associated with the American and global AI ecosystems.
- The 'Algorithmic' Sophistry of High Frequency Trading in India's Derivatives Market
The author is a Research Intern at the Indian Society of Artificial Intelligence and Law as of October 2024. A recent study by India's market regulator, the Securities and Exchange Board of India (SEBI), shed light on tectonic disparities in the equity derivatives market. Per the study, proprietary traders and foreign funds using algorithmic trading earned gross profits totalling ₹588.4 billion ($7 billion) from trading in the equity derivatives of Indian markets in the financial year that ended in March 2024. [1] In stark contrast, several individual traders faced monumental losses. The study further detailed that almost 93% of individual traders suffered losses in the Equity Futures and Options (F&O) segment over the preceding three years, that is, financial years 2022 to 2024, with aggregate losses exceeding ₹1.8 lakh crore. Notably, in FY 2023–24 alone, the net losses incurred among individual traders approximated ₹75,000 crore. SEBI's findings underscore the challenges individual traders face when competing against more technologically advanced, well-funded entities in the derivatives market. This insight contends that institutional entities deploying algo-trading strategies have a clear competitive edge over those who lack them, i.e., individual traders. Understanding the Intricacies of Algorithmic Trading High-frequency trading (HFT) is a latency-sensitive subset of algorithmic trading, conducted through automated platforms. 
It is facilitated through advanced computational systems capable of executing large orders at speeds, and towards optimal prices, that humans cannot match. The dominance of algorithms in the global landscape of financial markets has grown exponentially over the past decade. High-frequency trading algorithms aim to execute trades within fractions of a second. This high-speed computational capability places institutional investors on a more profitable pedestal than individual traders, who typically rely on manual trading strategies and consequently lack access to sophisticated analytics and real-time trading systems. Furthermore, HFT allows traders to trade large volumes of shares frequently by exploiting marginal price differences in a split second, thereby ensuring accuracy in trade execution and enhancing market liquidity. The premise is paralleled in the Indian equity derivatives market, with HFT firms reaping substantial profits. The study conducted by India's market regulator sheds light on the contrasting gains and losses of institutional and individual traders respectively. This insight expounds upon the sophistries in the competitive dynamics of the country's derivatives market, and its superficial regulation of manual versus computational trading. The Competitive Landscape of the Derivatives Market: The Odds Stacked Against Individual Traders The study revealed the disadvantageous plight of retail traders, with nine out of every ten retail traders having incurred losses over the preceding three financial years. This raises a contentious debate surrounding the viability of individual traders amid the market dynamics of the derivatives landscape. 
The lack of requisite support and resources makes sustainability difficult for individual traders, especially against the backdrop of a growing trend towards algorithmic trading. HFT has been critiqued by several professionals, who hold it responsible for unbalancing the playing field of the derivatives market. Other disadvantageous impediments brought forth by such trading mechanisms include: market noise; price volatility; the need to strengthen surveillance mechanisms; heavier imposition of costs; and market manipulation with consequent disruption in the structure of capital markets. The Need to Regulate the Technological 'Arms Race' in Trading Given the evident differences in trading mechanisms, there arises a pressing need to improve trading tools and ensure easier access to related educational resources for individual investors. SEBI, India's capital market regulator, has the prerogative and the obligation to address such disparities. In 2016, SEBI released a discussion paper attempting to address the various issues relating to HFT mechanisms, with the premise of instituting an equitable and fair marketplace for every stakeholder involved. SEBI proposed a "Co-location facility" operated on a shared basis that does not allow the installation of individual servers. This proposed move aims to reduce the latency of access to the trading system, and to provide a tick-by-tick data feed free of cost to all trading stakeholders. SEBI further proposed a review mechanism over trading requirements with respect to the usage of algo-trading software. 
This is furthered by mandating stock exchanges to strengthen the regulatory framework for algo-trading, and to institute a simulated market environment for initial testing of such software prior to its real-time application. [2] In addition, SEBI has undertaken a slew of measures to regulate algo-trading and HFT. These include [3]: a minimum resting time for stock orders; a maximum order-to-trade ratio mechanism; randomisation of stock orders and a review system for the tick-by-tick data feed; and congestion charges to reduce the load on the market. Thus, despite the rather unregulated stride of HFT in India, SEBI holds overarching authority over it vide the provisions of the SEBI Act, 1992. However, that authority remains rudimentary in practice, and the market thereby continues to usher in an age of unhealthy competitiveness among traders. References [1] Newsdesk, High speed traders reap $7bn profit from India's options market, https://www.thenews.com.pk/print/1233452-high-speed-traders-reap-7bn-profit-from-india-s-options-market (last visited 6 Oct 2024). [2] Amit K Kashyap et al., Legality and issues relating to HFT in India, Taxmann, https://www.taxmann.com/research/company-and-sebi/top-story/105010000000017103/legality-and-issues-related-to-high-frequency-trading-in-india-experts-opinion (last visited 6 Oct 2024). [3] Id.
- New Report: Legal-Economic Issues in Indian AI Compute and Infrastructure [IPLR-IG-011]
We are thrilled to announce the release of our latest report, " Legal-Economic Issues in Indian AI Compute and Infrastructure" [IPLR-IG-011] , authored by the talented duo, Abhivardhan (our Founder) and Rasleen Kaur Dua (former Research Intern, the Indian Society of Artificial Intelligence and Law). This comprehensive study delves into the intricate challenges and opportunities that shape India's AI ecosystem. Our aim is to provide valuable insights for policymakers, entrepreneurs, and researchers navigating this complex landscape. 🧭💡 Read the complete report at https://indopacific.app/product/iplr-ig-011/ Key Highlights 🖥️ Impact of Compute Costs on AI Development in India Examining the AI compute landscape and the role of compute costs in India's AI development. 🏗️ AI-associated Challenges to Fair Competition in the Startup Ecosystem Analysing how compute costs and access to public computing infrastructure influence AI development, particularly for startups and small enterprises. 🌐 Addressing Tech MNCs under Indian Competition Policy Exploring how Indian competition and industrial policy on digital technology MNCs affects their regulation and innovation. 🤝 India's Role in Global AI Trade and WTO Agreements Investigating India's stance on international trade policies related to AI, the impact of WTO agreements, and the potential for sector-specific AI trade agreements. 📈 Key Recommendations and Strategies for India's AI Development Offering actionable recommendations for enhancing India's AI development both domestically and in the context of global trade policies. As India strives to become a global AI powerhouse, it is crucial to address the legal and economic implications of this transformative technology. Our report aims to contribute to this important discourse and provide a roadmap for inclusive and sustainable AI development. 🌍💫 We invite you to download the full report from our website and join the conversation. 
Share your thoughts, experiences, and insights on the challenges and opportunities facing India's AI ecosystem. Together, we can shape a future where AI benefits all. 🙌💬 Stay tuned for more cutting-edge research and analysis from Indic Pacific Legal Research. 📣🔍 Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train . We offer special discounts for technology teams who may be interested in enrolling for the training programmes on bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com .
- Easing Regulatory Search in a Project with People+AI
I’m excited to share a major update on a positive note! For the past few months, I’ve been contributing as a Working Group Partner for Indic Pacific Legal Research LLP, collaborating with People+ai on a transformative project involving AI and Knowledge Graphs . This initiative aims to streamline the search for legal instruments, specifically subordinate legislations, from Indian regulators. Today, I’m thrilled to share the first outcomes of this ongoing project. I’ve co-authored an introductory article with Saanjh Shekhar , alongside contributions from Varsha Yogish and Praveen Kasmas . It explores how AI-powered knowledge graph technology can potentially answer legal queries by analyzing complex frameworks, such as those from the RBI. Navigating India’s legal landscape through public search platforms can be a daunting task due to fragmented documentation and inconsistent categorization. This project offers a solution to these challenges, making legal research more efficient for professionals and the public alike. Check out the article here: https://shorturl.at/B5gVW . A huge thanks to Kapil Naresh, Anshul Chaudhary, Sonali Patil Pawar, Aniket Raj , and Varsha for their valuable testimonials and support. Your feedback is always welcome!
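The article's core idea, organising fragmented subordinate legislation as a knowledge graph so that legal queries become graph traversals, can be illustrated with a minimal sketch. All node names and relations below are invented placeholders for illustration; they are not drawn from the project's actual design or from any regulator's material.

```python
from collections import defaultdict

# Hypothetical triples: (subject, relation, object). Real data would come
# from parsed regulatory documents; these names are placeholders.
triples = [
    ("Regulator-A", "issued", "Master-Direction-X"),
    ("Master-Direction-X", "amends", "Circular-Y"),
    ("Circular-Y", "applies_to", "Payment-Aggregators"),
]

# Adjacency-list representation of the knowledge graph.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def instruments_issued_by(regulator):
    """Answer a direct query: which instruments did this regulator issue?"""
    return [obj for rel, obj in graph[regulator] if rel == "issued"]

def reachable(node, depth=2):
    """Follow outgoing edges up to `depth` hops, e.g. to trace amendments."""
    seen, frontier = set(), [node]
    for _ in range(depth):
        frontier = [obj for n in frontier for _, obj in graph[n]
                    if obj not in seen]
        seen.update(frontier)
    return sorted(seen)

print(instruments_issued_by("Regulator-A"))
print(reachable("Regulator-A"))
```

A production system would add natural-language query parsing and provenance metadata on each edge, but the traversal pattern underneath stays the same.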
- The French-German Report on AI Coding Assistants, Explained
The rapid advancements in generative artificial intelligence (AI) have led to the development of AI coding assistants, which are increasingly being adopted in software development processes. In September 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly published a report titled "AI Coding Assistants" to provide recommendations for the secure use of these tools. This legal insight aims to analyse the key findings from the ANSSI and BSI report. By examining the opportunities, risks, and recommendations outlined in the document, we can understand how India should approach the regulation of AI coding assistants to ensure their safe and responsible use in the software industry. The article highlights the main points from the ANSSI and BSI report, including the potential benefits of AI coding assistants, such as increased productivity and employee satisfaction, as well as the associated risks, like lack of confidentiality, automation bias, and the generation of insecure code. The recommendations provided by the French and German agencies for management and developers are also discussed. Potential Use Cases for AI Coding Assistants While AI coding assistants are generating significant buzz, their practical use cases and impact on developer productivity are still being actively studied and debated. Some potential areas where these tools may offer benefits include: Code Generation and Autocompletion AI assistants can help developers write code faster by providing intelligent suggestions and autocompleting common patterns. This can be especially helpful for junior developers or those working in new languages or frameworks. However, the quality and correctness of the generated code can vary, so developer oversight is still required. Refactoring and Code Translation Studies suggest AI tools may help complete refactoring tasks 20-30% faster by identifying issues and suggesting improvements. 
They can also assist in translating code between languages. However, the refactoring suggestions may not always preserve the original behavior and can introduce subtle bugs, so caution is needed. Test Case Generation AI assistants have shown promise in automatically generating unit test cases based on code analysis. This could improve test coverage, especially for well-maintained codebases. However, the practical usefulness of the generated tests can be hit-or-miss, and they may be less suitable for test-driven development approaches. Documentation and Code Explanation By analysing code and providing natural language explanations, AI tools can help with generating documentation and getting up to speed on unfamiliar codebases. This may be valuable for onboarding and knowledge sharing. The quality and accuracy of the explanations still need scrutiny though. While these use cases demonstrate potential, the actual productivity gains seem to vary significantly based on factors like the complexity of the codebase, the skill level of the developer, and how the AI assistant is applied. Careful integration and a focus on augmenting rather than replacing developers is advised. Studies have shown productivity improvements ranging from 0-45% in certain scenarios, but have also highlighted challenges like the introduction of bugs, security vulnerabilities, and maintainability issues in the AI-generated code. Overly relying on AI assistants and blindly accepting their output can be counterproductive. Overall, while AI coding assistants demonstrate promising potential, their use cases and benefits are still a mixed bag in practice as of 2024. More research and refinement of the technology is needed to unlock their full value in real-world software engineering. 
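The need for developer oversight described above can be made concrete with a hypothetical sketch: a plausible "assistant-generated" function containing a subtle bug, and the kind of human-written check that exposes it. The function, its bug, and the checks are all invented for illustration; nothing here comes from the report.

```python
def median(values):
    """'Assistant-generated' sketch: return the median of a list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    # Subtle bug: for even-length input this returns the upper of the two
    # middle elements instead of their average (2.5 for [1, 2, 3, 4]).
    return ordered[mid]

# Human-written checks, in the spirit of "check and reproduce generated code".
odd_case = median([3, 1, 2])      # 2, correct
even_case = median([1, 2, 3, 4])  # 3, revealing the bug (should be 2.5)
print(odd_case, even_case)
```

The code looks idiomatic and passes the odd-length case, which is precisely why automation bias is dangerous: only a deliberate edge-case test surfaces the defect.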
Merits of the Report Thorough Coverage of Opportunities The report does a commendable job of highlighting the various ways AI coding assistants can benefit the software development process: Code Generation : The report cites studies showing AI assistants can correctly implement basic algorithms with optimal runtime performance, demonstrating their potential to automate repetitive coding tasks and enhance productivity. Debugging and Test Case Generation : It discusses how AI can reduce debugging workload by automatically detecting and fixing errors, as well as generating test cases to improve code coverage. Specific examples like JavaScript debugging and test-driven development (TDD) are provided. Code Explanation and Documentation : The report explains how AI assistants can help developers understand unfamiliar codebases by providing natural language explanations and generating automated comments/documentation. This can aid in code comprehension and maintainability. Increased Productivity and Satisfaction : While noting the difficulty of quantifying productivity, the report references survey data indicating developers feel more productive and satisfied when using AI coding assistants, mainly due to the reduction of repetitive tasks. Balanced Discussion of Risks The report provides a balanced perspective by thoroughly examining the risks and challenges associated with AI coding assistants: Confidentiality of Inputs : It highlights the risk of sensitive information like login credentials and API keys unintentionally flowing into the AI's training data, depending on the provider's contract conditions. Clear mitigation measures are suggested, such as prohibiting uncontrolled cloud access and carefully examining usage terms. Automation Bias : The report warns of the danger of developers placing excessive trust in AI-generated code, even when it contains flaws. 
It cites studies showing a cognitive bias where many developers perceive AI assistants as secure, despite the regular presence of vulnerabilities. Lack of Output Quality and Security : Concrete data is provided on the high rates of incorrect answers (50%) and security vulnerabilities (40%) in AI-generated code. The report attributes this partly to the use of outdated, insecure practices in training data. Supply Chain Attacks : Various attack vectors are explained in detail, such as package hallucinations leading to confusion attacks, indirect prompt injections to manipulate AI behavior, and data poisoning to generate insecure code. Specific examples and mitigation strategies are given for each. Recommendations in the Report One of the key strengths of the report is the actionable recommendations it provides for both management and developers: Management : Key suggestions include performing systematic risk analysis before adopting AI tools, establishing security guidelines, scaling quality assurance teams to match productivity gains, and providing employee training and clear usage policies. Developers : The report emphasises the importance of responsible AI use, checking and reproducing generated code, protecting sensitive information, and following company guidelines. It also encourages further training and knowledge sharing among colleagues. Research Agenda : The report goes a step further by outlining areas for future research, such as improving training data quality, creating datasets for code translation, advancing automated security control, and conducting independent studies on productivity impact. Limits in the Report Limited Scope and Depth While the report covers a wide range of topics related to AI coding assistants, it may not delve deeply enough into certain areas: The discussion on productivity and employee satisfaction is relatively brief and lacks concrete data or case studies to support the claims. 
More comprehensive research is needed to quantify the impact of AI coding assistants on developer productivity. The report mentions the potential for AI to assist in code translation and legacy code modernisation but does not provide a detailed analysis of the current state-of-the-art or the specific challenges involved. The research agenda proposed in the report is quite broad and could benefit from more specific recommendations and prioritisation of key areas. Lack of Practical Implementation Guidance Although the report offers high-level recommendations for management and developers, it may not provide enough practical guidance for organizations looking to implement AI coding assistants: The report suggests performing a systematic risk analysis before introducing AI tools but does not provide a framework or template for conducting such an analysis. While the report emphasizes the importance of establishing security guidelines and training employees, it does not offer specific examples or best practices for doing so. The recommendations for developers, such as checking and reproducing generated code, could be supplemented with more concrete steps and tools to facilitate this process. Limited Discussion of Ethical Considerations The report focuses primarily on the technical aspects of AI coding assistants and does not extensively address the ethical implications of this technology: The potential for AI coding assistants to perpetuate biases present in the training data is not thoroughly explored. The report does not delve into the broader societal impact of AI coding assistants, such as the potential for job displacement or the need for reskilling of developers. Ethical considerations around the use of AI-generated code, such as issues of intellectual property and attribution, are not discussed in detail. 
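One practical mitigation for the confidentiality risk the report flags, namely credentials unintentionally flowing into a provider's systems, is client-side redaction of prompts. The sketch below is an assumption about how a team might implement this, not a measure prescribed verbatim by the report; the patterns are illustrative and far from exhaustive.

```python
import re

# Illustrative patterns only; a real deployment would use a maintained
# secret-scanning ruleset rather than two hand-written regexes.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def redact(prompt: str) -> str:
    """Replace likely secrets before the prompt leaves the developer's machine."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

cleaned = redact("debug this: api_key = sk-12345 fails on login")
print(cleaned)
```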
Analysis in the Indian Context The ANSSI and BSI report on AI coding assistants provides valuable insights that can inform the development of AI regulation in India, particularly in the context of the software industry. Here are some key inferences and recommendations based on the report's findings: Establishing Guidelines for Responsible Use : The report emphasises the importance of responsible use of AI coding assistants by developers. Indian regulatory bodies may consider developing clear guidelines and best practices for using these tools, including checking and reproducing generated code, protecting sensitive information, and following company policies. These guidelines should be communicated effectively to the software development community. Mandating Risk Analysis and Security Measures : As highlighted in the report, organisations should conduct a systematic risk analysis before adopting AI coding assistants and establish appropriate security measures. Indian regulators could consider mandating such risk assessments and requiring companies to implement specific security controls, such as secure management of API keys and sensitive data, to mitigate risks associated with these tools. Scaling Quality Assurance and Security Teams : The report notes that the productivity gains from AI coding assistants must be matched by appropriate scaling of quality assurance and security teams. Indian policymakers should encourage and incentivize organisations to invest in expanding their AppSec and DevSecOps capabilities to keep pace with the increased code output enabled by AI tools. This could involve providing funding, training programs, or tax benefits for such initiatives. Promoting Awareness and Training : The ANSSI and BSI report stresses the need for employee awareness and training on the risks and proper usage of AI coding assistants. 
Indian regulatory bodies should collaborate with industry associations, academic institutions, and tech companies to develop and disseminate educational materials, conduct workshops, and offer certifications related to the secure use of these tools. This will help build a skilled workforce capable of leveraging AI responsibly. Encouraging Research and Innovation : The report outlines a research agenda to advance the quality, security, and productivity impact of AI coding assistants. Indian policymakers should allocate resources and create a supportive ecosystem for research and development in this area. This could involve funding academic research, establishing innovation hubs, and fostering collaboration between industry and academia to address challenges specific to the Indian software development landscape. Conclusion In conclusion, while the French-German report on AI coding assistants has some limitations in terms of scope, depth, practical guidance, and coverage of ethical considerations, it remains a valuable and commendable endeavor. By proactively examining the implications of this rapidly evolving technology, the French and German agencies have taken an important step towards understanding and addressing the potential impact of AI coding assistants on the software development industry. The report provides a solid foundation for further research, discussion, and policy development in this area. It highlights the need for ongoing collaboration between governments, industry leaders, and researchers to study the effects of AI coding assistants, establish best practices for their use, and tackle the ethical and societal challenges they present.
- Supreme Court of Singapore’s Circular on Using RoughDraft AI, Explained
The Supreme Court of Singapore came up with an intriguing circular on the use of Generative AI, or RoughDraft AI (a term coined by AI expert Gary Marcus), by stakeholders in courts. The guidance indicated in the circular requires an essential breakdown. To begin with: the circular itself shows that the Court does not regard GenAI tools as an ultimate means of improving court tasks, and has reduced the status of Generative AI tools to mere productivity enhancement tools, unlike what many AI companies in India and abroad have tried to claim. This insight covers the circular in detail. On 23rd September 2024 , the Supreme Court of Singapore issued Registrar’s Circular No. 1 of 2024 , providing a detailed guide on the use of Generative Artificial Intelligence (AI) tools in court proceedings. This guide takes effect from 1st October 2024 and applies across various judicial levels, including the Supreme Court, State Courts , and Family Justice Courts . The document provides nuanced principles for court users regarding the integration of Generative AI in preparing court documents, but it also places heavy emphasis on maintaining traditional legal obligations, ensuring data accuracy, and protecting intellectual property rights. Scope and Application The circular begins by defining its scope. It applies to all matters within the Supreme Court, State Courts (including tribunals such as the Small Claims Tribunals, Employment Claims Tribunals, and the Community Disputes Resolution Tribunals), and Family Justice Courts. All categories of court users, including prosecutors, legal professionals, litigants-in-person , and witnesses , fall within the ambit of the circular. It clarifies that while the use of Generative AI tools is not outrightly prohibited, users remain bound by existing legislation, professional codes, and practice directions. 
Key Definitions Several key definitions are provided to frame the guide: Artificial Intelligence : Defined broadly, it encompasses technology that can perform tasks requiring intelligence, such as problem-solving, learning, and reasoning. However, it excludes more basic tools such as grammar-checkers, which do not generate content. Court Documents : Includes all written, visual, auditory, and other materials submitted during court proceedings. This extends to written submissions, affidavits , and pleadings , placing emphasis on accurate and responsible content generation. Generative AI : Described as software that generates content based on user prompts. This can encompass text, audio, video, and images. Examples include AI-powered chatbots and tools using Large Language Models (LLMs) . General Principles on the Use of Generative AI Tools The Supreme Court maintains a neutral stance on the use of Generative AI tools. The circular is clear that Generative AI is merely a tool, and court users assume full responsibility for any content generated using such tools. Notably, the Court does not require pre-emptive declarations about the use of Generative AI unless the content is questioned. However, court users are encouraged to independently verify any AI-generated content before submitting it. Responsibility and Verification : Court users, whether they are legal professionals or self-represented litigants, are required to ensure that AI-generated content is accurate, appropriate, and verified independently . For lawyers, this falls under their professional duty of care . Similarly, self-represented individuals are reminded of their obligation to provide truthful and reliable content. Neutral Stance : The court clarifies that its stance on Generative AI remains neutral. While users may employ these tools for drafting court documents, the onus of the content lies solely on the user. 
This emphasizes that Generative AI tools are not infallible and could generate inaccurate or misleading content. Users must ensure that all submissions are factual and comply with court protocols.

Generative AI: Functional Explanation

The document goes further to explain how Generative AI tools work, outlining their reliance on LLMs to generate responses that appear contextually appropriate based on user prompts. It compares the technology to a sophisticated form of predictive text but highlights that it lacks true human intelligence. While these tools may produce outputs that appear tailored, they do not engage in genuine understanding, posing risks of inaccuracies, especially in the legal context. The circular provides a cautious explanation of the limitations of these tools:

- Accuracy issues: It warns that Generative AI chatbots may hallucinate, i.e., generate fabricated or inaccurate responses, including non-existent legal cases or incorrect citations.
- Inability to provide legal advice: Court users are reminded that Generative AI cannot serve as a substitute for legal expertise, especially in matters requiring legal interpretation or advice. The circular advises caution in using such tools for legal research, as they may not incorporate the latest developments in the law.

Use in Court Documents

Generative AI tools can assist in the preparation of court documents, but the court mandates careful oversight. The following guidelines are provided:

- Fact-checking and Proofreading: Users are instructed to fact-check and proofread AI-generated content. Importantly, users cannot rely solely on AI outputs for accuracy and must verify all references independently.
- Relevance and IP Considerations: The court stresses that all content, whether generated by AI or not, must be relevant to the case and must not infringe intellectual property rights. The guide cautions users against submitting material that lacks attribution or infringes copyright.
- Prohibited Uses: While the use of AI for drafting preliminary documents, such as a first draft of an affidavit, is allowed, the circular strictly prohibits using AI to generate evidence. It also emphasizes that AI-generated content should not be fabricated, altered, or tampered with to mislead the court.

Accuracy and Verification

A major focus of the circular is the need for court users to ensure accuracy in their submissions. The following are key responsibilities outlined for users:

- Fact-checking: AI-generated legal research or citations must be fact-checked against trusted and verified sources. Self-represented litigants are provided guidance on using resources like Singapore Statutes Online and the eLitigation GD Viewer for such verification.
- Accountability: If questioned by the court, users must be able to explain and verify the content generated by AI. They are expected to provide details on how the content was produced and how it was verified. The court retains the authority to question submissions and demand further explanations if the content raises doubts.

Intellectual Property Concerns

One of the key concerns when using Generative AI tools is ensuring that any content generated does not infringe upon the intellectual property rights of third parties. This involves adhering to copyright, trademark, and patent laws, especially when AI tools generate text, images, or other content based on user prompts.

Proper Attribution and Compliance with Copyright Laws

The circular mandates that court users must ensure proper attribution of sources when using AI-generated content. This includes accurately citing the original source of any material referenced or used in court documents. For instance, if a passage from a legal article or a textbook is included in an AI-generated draft, the user must provide the author's name, the title of the work, and the year of publication.
Failure to do so may not only lead to copyright infringement but can also affect the credibility of the court submissions. The circular further clarifies that Generative AI tools should not be relied upon to generate evidence or content meant to represent factual claims, as AI can fabricate information. If AI-generated content includes case law, statutes, or quotes, it is the responsibility of the court user to ensure the accuracy and proper citation of such references. This applies to both lawyers and self-represented litigants.

Generative AI and Copyright Infringement Risks

A key issue with Generative AI tools is that they are trained on vast datasets, which may include copyrighted material without proper licensing. While the AI itself may generate new content, the underlying data on which it is trained may pose risks of copyright violations if not properly addressed. For example, AI-generated text could inadvertently reproduce language from a copyrighted source, which may lead to legal disputes if the original source is not acknowledged.

Court users must be vigilant in verifying that the content generated by AI does not infringe existing copyright protections. This is especially important when submitting legal documents to the court, as any infringement could lead to penalties, legal action, and damage to professional reputations. The circular reminds users that the responsibility for checking these issues lies with them, not with the AI tool.

Confidentiality Concerns

The circular also highlights the importance of maintaining confidentiality and safeguarding sensitive information when using Generative AI tools. This concern is particularly pressing because AI platforms may not always guarantee that inputted data will remain confidential. In fact, many AI tools store user inputs for training purposes, which could result in unintentional disclosure of private information.
Risks of Inputting Confidential Data

The court warns that entering personal, confidential, or sensitive information into Generative AI platforms can lead to unintended consequences. Since most AI tools are cloud-based and developed by third-party providers, any data inputted could potentially be accessed or stored by the AI provider. This raises several issues, particularly with respect to legal privilege, client confidentiality, and data protection.

For example, if a lawyer inputs sensitive case details into an AI tool to draft a legal document, those details could be stored by the AI provider. This storage may inadvertently lead to the exposure of confidential information, potentially breaching data privacy laws or client confidentiality agreements. This is particularly concerning where non-disclosure agreements (NDAs) are in place, or where the data falls under privileged communication between a lawyer and their client.

Compliance with Data Protection Laws

The circular emphasises that court users must comply with the relevant personal data protection laws and any confidentiality orders issued by the court. In Singapore, this would involve adhering to the provisions of the Personal Data Protection Act (PDPA), which regulates the collection, use, and disclosure of personal data. Failure to safeguard confidential data may lead to legal consequences, including fines, civil lawsuits, and disciplinary actions.

Legal Privilege and Sensitive Information

Additionally, the court reminds users that documents obtained through court orders must not be used for any purposes beyond the proceedings for which the order was granted. This reinforces the need for discretion when handling privileged documents and ensures that such documents are not exposed to Generative AI platforms, which could compromise their confidentiality. The circular advises court users to refrain from sharing confidential case details with AI tools.
Instead, users should exercise extra caution when deciding what information to include in AI prompts. The document acknowledges the potential for unauthorised disclosure, noting that information input into Generative AI tools could be stored or misused. Therefore, users must take proactive steps to avoid breaching confidentiality obligations, particularly in cases involving sensitive personal data, trade secrets, or other proprietary information.

Intellectual Property Rights and Legal Implications

Court users are also reminded that existing laws on intellectual property rights, including provisions related to court proceedings, remain fully applicable. This means that while Generative AI tools can be used to generate drafts of legal documents, any content included in those documents must comply with IP laws.

- Court Order Documents: If a court has granted a production order for specific documents, these materials must not be shared with Generative AI tools or used outside the proceedings for which they were obtained.
- Respect for Privilege: Users must ensure that any data shared with Generative AI tools does not violate legal privilege. This includes ensuring that privileged communications between lawyers and clients remain confidential and are not disclosed to third-party AI providers.

Enforcement of IP and Confidentiality Rules

Failure to comply with the guidelines set out in the circular can result in significant penalties, including:

- Cost orders: Users may be ordered to pay costs to the opposing party, particularly if AI-generated content is found to infringe IP rights or violate confidentiality rules.
- Disciplinary actions: Lawyers who fail to comply with these rules could face disciplinary measures, including reprimands, suspensions, or fines.
- Reduction in evidentiary weight: The court may also choose to disregard AI-generated submissions or reduce their evidentiary weight if they fail to meet accuracy, attribution, or confidentiality standards.
Conclusion

The Singapore Supreme Court's Registrar's Circular No. 1 of 2024 provides a pragmatic yet cautious approach to the use of Generative AI in court proceedings. While the court acknowledges the utility of such tools, it emphasises that responsibility for accuracy, relevance, and appropriateness remains squarely with the court user. Generative AI is positioned as a useful aid, but not a replacement for human judgment, legal expertise, or verification processes. Users of Generative AI are held to the same standards of accuracy, truthfulness, and integrity as in any other court submission.

It is also clear that even the Supreme Court of Singapore does not deify Generative AI tools and remains quite cautious, which only increases trust in its judicial system. This cautious approach is further validated by recent findings from an Australian government regulator, which found that generative AI text solutions can actually increase workload rather than reduce it. In a trial conducted by the regulator, AI-generated summaries of information were often less accurate and comprehensive than those produced by human analysts, requiring additional time and effort to correct and verify. This highlights the importance of the Singapore Supreme Court's emphasis on human oversight and responsibility when using generative AI in legal proceedings. While these tools may offer some efficiency gains, they are not a panacea and can introduce new challenges and risks if not used judiciously. That said, it would be unreasonable to write off the prospect that Generative AI/RoughDraft AI tools will be used extensively, which could lead to significant displacement of work.
As the technology continues to evolve and improve, it is likely that generative AI will play an increasingly significant role in various aspects of legal practice, from research and document preparation to predictive analytics and decision support. The key, as emphasized in the Singapore Supreme Court's circular, is to strike a balance between leveraging the capabilities of these tools and maintaining the human expertise, judgment, and accountability that are essential to the integrity of the legal system. By setting clear guidelines and expectations for the responsible use of generative AI, the Singapore Supreme Court seems to have laid the groundwork for a future in which these technologies can be harnessed to enhance, rather than replace, the work of legal professionals.

Thanks for reading this insight. Since May 2024, we have launched some specialised practice-based technology law and artificial intelligence & law training programmes at indicpacific.com/train. We offer special discounts for technology teams who may be interested in enrolling for the training programmes in bulk. Feel free to choose your training programme at indicpacific.com/train and contact us at vligta@indicpacific.com.
- [New Report] Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010
We are excited to announce the publication of our 10th Infographic Report since December 2023, and our 22nd Technical Report since 2021, titled "Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010." The report is available for free for a limited time at this link.

This report holds a special significance for us as it reflects our collective commitment to deeply exploring the complex legal challenges posed by emerging technologies like Generative AI. We extend our heartfelt congratulations to Samyak Deshpande, Sanvi Zadoo, and Alisha Garg, whose dedication and meticulous effort have been instrumental in developing and curating this comprehensive report. In this report, we have included a quote from Carissa Véliz on privacy to emphasise the importance of respecting human autonomy in the creative process. This choice captures the spirit in which we approach the intricate relationship between technology and law, particularly when it comes to safeguarding the creative rights and freedoms of individuals.

Why This Report Matters

The publishing industry is no stranger to disruption, but the advent of Generative AI has introduced a new layer of complexity. While many may rush to declare that Generative AI infringes copyright and patent laws, such assertions, though valid, often oversimplify the issues at hand. The real challenge lies in addressing these concerns with the specificity and nuance they require. This report represents not just an analysis of intellectual property law issues related to Generative AI in publishing but also a broader exploration of how these technologies can create legal abnormalities that escalate to points of no return. It is the product of our collective patience, thorough research, and a deep understanding of the legal landscape.

The Broader Implications

Generative AI has been both lauded and criticised for its impact on various industries.
In publishing, the effects have been particularly pronounced, leading to a range of legal challenges that must be navigated with care. This report seeks to provide a balanced perspective, offering insights into how these technologies can be regulated and managed without stifling innovation or creativity. As Generative AI continues to evolve, so too must our approach to the legal frameworks that govern it. This report is a step in that direction, aiming to provide both clarity and guidance for those involved in the publishing industry and beyond.

You can access the report for free for a limited time at https://indopacific.app/product/impact-based-legal-problems-around-generative-ai-in-publishing-iplr-ig-010/

Final Thoughts

In a time when AI is making headlines, such as the recent mention of Anil Kapoor in TIME magazine for his connection to AI, our report offers a timely and relevant exploration of the real-world implications of these technologies. We hope it will serve as a valuable resource for those interested in understanding the complexities and challenges of the Generative AI ecosystem. We invite you to read this report and engage with the critical issues it raises. Happy reading!
- [New Report] Risk & Responsibility Issues Around AI-Driven Predictive Maintenance, IPLR-IG-009
Greetings. I hope this update finds you well. Today, I want to share with you a topic that has captured my imagination for quite some time: the intriguing confluence of space law and artificial intelligence policy. As someone who has always been fascinated by astronomy and the laws of nature, I find this area of study as captivating as it is important.

For the past few months, I have been working on a comprehensive report titled "Risk & Responsibility Issues Around AI-Driven Predictive Maintenance." This report, the 9th Infographic Report by Indic Pacific Legal Research LLP, delves into the complex legal landscape surrounding the use of AI in predictive maintenance for spacecraft. I am excited to announce that the report, IPLR-IG-009, is now available on the IndoPacific App at https://indopacific.app/product/navigating-risk-and-responsibility-in-ai-driven-predictive-maintenance-for-spacecraft-iplr-ig-009-first-edition-2024/

In this report, I have not only explored the legal implications of AI-driven predictive maintenance but also showcased some fascinating case studies that demonstrate the potential of this technology in the space sector. These case studies include:

1. SPAICE Platform
2. NASA's Prognostics and Health Management (PHM) Project
3. ESA's Φ-sat-1 (Phi-sat-1) Project

Each of these projects highlights the innovative ways in which AI is being leveraged to enhance the reliability, efficiency, and safety of spacecraft operations. By examining these real-world examples, we can gain valuable insights into the challenges and opportunities that lie ahead as we continue to push the boundaries of space exploration and AI technology.