Please note: this is a Policy Brief by Anukriti Upadhyay, former Research Intern at the Indian Society of Artificial Intelligence and Law.
In a three-page letter to Satya Nadella, X Corp., Twitter's parent company, stated that Microsoft had violated an agreement over its data and had declined to pay for that usage. In some cases, Microsoft had used more Twitter data than it was permitted to, and it had shared Twitter data with government agencies without permission, the letter said. In short, Twitter is seeking to charge Microsoft for data that has earned Microsoft substantial profits. Mr. Musk, who bought Twitter last year for $44 billion, has said that it is urgent for the company to make money and that it is near bankruptcy. Twitter has since introduced new subscription products and made other moves to raise revenue. In March, the company also said it would charge developers more for access to its stream of tweets.
Elon Musk and Microsoft have had a bumpy relationship recently. Among other things, Mr. Musk has raised concerns about Microsoft's role in OpenAI. Musk, who helped found OpenAI in 2015, has said that Microsoft, which has invested $13 billion in OpenAI, controls the start-up's business decisions; Microsoft has disputed that characterisation. Microsoft's Bing chatbot and OpenAI's ChatGPT are built from what are called large language models, or LLMs, which build their skills by analysing vast amounts of data culled from across the internet. The letter to Satya Nadella does not specify whether Twitter will take legal action against Microsoft or seek financial compensation. It demands that Microsoft abide by Twitter's developer agreement and examine the data use of eight of its apps.
Twitter has engaged legal counsel and is seeking a report by June on how much Twitter data Microsoft possesses, how that data was stored and used, and when government-related organisations gained access to it. Twitter's rules prohibit the use of its data by government agencies unless the company is informed first. The letter adds that Twitter's data was used in Xbox, Microsoft's gaming system; Bing, its search engine; and several other tools for advertising and cloud computing.
The letter also demands that "the tech giant should conduct an audit to assess its use of Twitter's content." Twitter claimed that the contract between the two parties allowed only restricted access to Twitter's data, but that Microsoft breached this condition and generated outsized profits from its use of Twitter's API.
Many tools are currently available (from Microsoft, Google and others) to check the performance of AI systems, but there is no regulatory oversight. That is why experts believe that companies, new and old, need to put more thought into self-regulation. This dispute has highlighted the need to keep a check on how companies use data to develop their AI models, and to regulate that use.
Data Law and Oversight Concerns
In this race among tech giants to lead AI development, the biggest impact always falls on society. Any new development is prone to attract illegal activities that can have drastic effects on society. Even though the Personal Data Protection Bill is yet to become law, big tech firms like Google, Meta, Amazon and various e-commerce platforms are liable to be penalised for sharing users' data with each other if consumers flag such instances.
Currently in India, under the Consumer Protection Act, 2019, the consumer affairs department can take action and issue directions to such firms. Since the data belongs to the consumer, consumers who feel their data is being shared among firms without their express consent are free to approach the consumer forums under the Act. Consider the kind of data shared between firms: a Google search by a person leads to the same feeds being shown on Facebook, which suggests that user data is being shared by big tech firms. If data is shared without the express consent of the users concerned, they can approach the Consumer Protection Forums. The same is relevant to the Twitter-Microsoft dispute, where the data Microsoft used was posted by Twitter users on their own accounts and was then used without those users' consent. WhatsApp's data-sharing policies offer another example: Meta has stated that it can share business data with Facebook, but the Competition Commission of India has objected to this as a monopolistic practice and the matter is in court. Consumers have the right to seek redressal against unfair or restrictive trade practices and unscrupulous exploitation.
Protecting personal data should be an essential imperative of any democratic republic. Once the Bill becomes law, citizens can direct all digital platforms they deal with to delete their past data. The firms concerned will then need to collect data afresh from users and clearly spell out its purpose and usage. They will be booked for a data breach if they depart from the purpose for which the data was collected.
Data minimisation, purpose limitation and storage limitation are principles that cannot be compromised. Data minimisation means firms can collect only the minimum data required. Purpose limitation allows them to use data only for the purpose for which it was acquired. Under storage limitation, once the service is delivered, firms must delete the data.
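To make these three principles concrete, here is a minimal, hypothetical sketch of how a firm might enforce purpose limitation and storage limitation in code. The class name, fields and purposes are illustrative assumptions, not part of any actual system discussed above.

```python
from datetime import datetime, timedelta

class DataRecord:
    """Hypothetical record of collected personal data, tagged with its
    declared purpose and a retention deadline."""

    def __init__(self, fields, purpose, retention_days):
        self.fields = fields      # data minimisation: only the fields actually collected
        self.purpose = purpose    # purpose limitation: declared at collection time
        # storage limitation: data must be deleted after the retention period
        self.expires = datetime.utcnow() + timedelta(days=retention_days)

    def use_for(self, purpose):
        # Purpose limitation: reject any use outside the declared purpose.
        if purpose != self.purpose:
            raise PermissionError(
                f"data collected for '{self.purpose}' cannot be used for '{purpose}'")
        # Storage limitation: reject use after the retention period has lapsed.
        if datetime.utcnow() > self.expires:
            raise PermissionError("retention period expired; data must be deleted")
        return self.fields

record = DataRecord({"email": "user@example.com"},
                    purpose="order_delivery", retention_days=30)
record.use_for("order_delivery")      # permitted: matches the declared purpose
try:
    record.use_for("ad_targeting")    # blocked: violates purpose limitation
except PermissionError as e:
    print("blocked:", e)
```

In practice these checks would sit inside a firm's data-access layer; the point of the sketch is simply that each principle maps to a concrete, auditable gate on data use.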
With the rapid development of AI, a number of ethical issues have cropped up. These include:
the potential of automation technology to give rise to job losses
the need to redeploy or retrain employees to keep them in jobs
the effect of machine interaction on human behaviour and attention
the need to address algorithmic bias originating from human bias in the data
the security of AI systems (e.g., autonomous weapons) that can potentially cause damage
While one cannot ignore these risks, it is worth keeping in mind that advances in AI can, for the most part, create better business and better lives for everyone. If implemented responsibly, artificial intelligence has immense and beneficial potential.
Investment and Commercial Licensing
AI has been called the electricity of the 21st century. While the uses and benefits of AI are increasing exponentially, there are challenges for businesses looking to harness this new technology. Chief among them are the ethical use of AI; legal compliance regarding AI and the data that fuels it; and protection of IP rights, including the appropriate allocation of ownership and use rights in the components of AI. Businesses also need to decide whether to build AI themselves or license it from others. Several unique issues affect AI license agreements; in particular, it is important to address IP ownership and use rights, IP infringement, warranties (specifically performance promises) and legal compliance. Interestingly, IP treaties simply have not caught up to AI yet. While aspects of AI components may be protectable under patents, copyrights and trade secrets, IP laws primarily protect human creativity. Because of this focus on human creation, issues may arise under IP law when the output is created by the AI solution rather than by a human creator. Since IP laws do not squarely cover AI, as between an AI provider and a user, contractual terms are the best way to attempt to secure the benefits of IP protection in AI license agreements.
How Does This Affect the Twitter-Microsoft Relationship?
Considering this issue, the parties could designate certain AI components as trade secrets.
Protect AI components by: limiting use rights; designating AI components as confidential information in the terms and conditions; and restricting use of confidential information.
Include assignment of rights in AI evolutions from one party to the other.
Determine the license and use rights the parties want to establish between the provider and the user for each AI component.
Clearly articulate the rights in the terms and conditions.
The data sharing agreement must cover which party will provide and own the training data, prepare and own the training instructions, conduct the training, and revise the algorithms during the training process and own the resulting AI evolutions. As for data ownership, the parties should identify the source of the data and ensure that data use complies with applicable laws and any third-party data provider requirements.
The terms and conditions must also set out ownership and use of the production data used to develop AI models, specifying which party provides and which party owns that data. If the AI solution is licensed to the user on-premises (the user runs the AI solution in its own systems and environment), the user will likely supply and own the production data. However, if the AI solution is cloud-based, the production data may include the data of other users. In a cloud situation, the user should specify whether the provider may use the user's data for the benefit of the entire AI user group or solely for the user's particular purposes. Note that limiting the use of production data to one user of an AI solution may have unintended results: in some AI applications, a broader set of data from multiple users may increase the AI solution's accuracy and proficiency. Counsel must therefore weigh the benefits of permitting broader use of data against the legal, compliance and business considerations a user may have for limiting use of its production data. When two or more parties each contribute to the AI evolutions, the license agreement should appoint a contractual owner. The parties must then determine who will own the AI evolutions, or whether they will be jointly owned, which presents additional practical challenges.
The use of AI presents ethical issues, and organizations must consider how they will use AI, define principles and implement policies regarding its ethical use. One part of the ethical-use consideration is legal compliance, which is more challenging for AI than for traditional software or technology licensing. AI-based decisions must satisfy the same laws and regulations that apply to human decisions. AI differs from many other technologies because it can produce legal harms against people, and some of that harm may not only violate ethical norms but also be actionable under law. It is important to address legal compliance concerns with the provider before entering into an AI license agreement, in order to determine which party is responsible for compliance.
Some best practices that could be adopted are proposed as follows:
To deal with legal compliance issues in investment and licensing, companies can conduct diligence on data sharing to determine if there are any legal or regulatory risk areas that merit further inquiry.
Develop policies around data sharing and involve the various stakeholders in the policy-making process to ensure that thoughtful consideration is given about when it is appropriate to use the data and in what contexts.
Implement a risk management framework that includes a system of ongoing monitoring and controls around the use of AI. Consider which party should obtain third-party consents for data use due to potential privacy and data security issues.
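The ongoing-monitoring control described above can be sketched in a few lines of code. This is a hypothetical illustration only: the consent registry, source names and audit-log structure are assumptions introduced for the example, not part of any system described in this brief.

```python
# Hypothetical sketch of an ongoing-monitoring control: every attempted
# data access is written to an audit log with its purpose, and access to
# third-party data without a recorded consent is blocked and flagged.
consents = {                     # assumed consent registry
    "twitter_api": False,        # no third-party consent on record
    "internal_crm": True,        # consent on record
}
audit_log = []                   # record of every access attempt, for review

def access_data(source, purpose):
    permitted = consents.get(source, False)
    # Log the attempt regardless of outcome, so monitoring covers denials too.
    audit_log.append({"source": source, "purpose": purpose, "permitted": permitted})
    if not permitted:
        raise PermissionError(f"no third-party consent on record for '{source}'")
    return f"{source} data for {purpose}"

access_data("internal_crm", "model_training")     # permitted and logged
try:
    access_data("twitter_api", "model_training")  # blocked, but still logged
except PermissionError as e:
    print("flagged:", e)
```

A real risk-management framework would persist the log and route flagged entries to compliance staff; the sketch only shows the core control of logging every use and gating third-party data on recorded consent.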
AI is transforming our world rapidly and without much oversight. Developers are free to innovate, and also to create tremendous risk. Very soon, leading nations will need to establish treaties and global standards around the use of AI, not unlike current discussions about climate change. Governments will need both to establish laws and regulations that protect ethical and productive uses of AI, and to prohibit unethical, immoral, harmful and unacceptable uses. These laws and regulations will need to address some of the IP ownership, use-rights and protection issues discussed in this article. However, these commercial considerations are secondary to the overarching issues concerning the ethical and moral use of AI. In line with the increased attention on corporate responsibility and issues like diversity, sustainability and responsibility to more than just investors, businesses that develop and use AI will need policies and guidance against which their use of AI can be assessed. These policies and guidance are worthy of board-level attention. Technology lawyers who assist clients with AI issues in these early days must monitor developments in these areas and, wherever possible, act as facilitators and leaders of thoughtful discussions regarding AI. Adopting such precautionary measures will also save companies considerable legal costs and ensure that data is not misused or overused.