
© Indic Pacific Legal Research LLP.

For articles published in VISUAL LEGAL ANALYTICA, you may refer to the editorial guidelines for more information.

ChatGPT & the Problem with Derivatives as Solutions

ChatGPT has been embroiled in several controversies in the AI-based digital products and services market. Concurrently, Google has introduced Bard, a ChatGPT competitor that draws on its LaMDA conversational technology. Multiple such use cases have now been proposed by entrepreneurs, content creators and big technology companies. The problem arises when the novelty and value behind such derivative "solutions" are not properly assessed, and this is exactly the risk of using ChatGPT as a means to deliver such business solutions.

In this article, I analyse the legal and ethical aspects of designating derivatives made out of ChatGPT as anticipated digital products and solutions. The idea is to formulate a legal approach to this practice and see whether legitimate solutions can be generated from it. Accordingly, I have divided the line of enquiry into two parts:

  • What derivatives or sub-products could be made from ChatGPT as potential solutions? How do they work in the market? What disruptions could they cause at an observational level?

  • How does this attempt to democratise ChatGPT by creating digital products/services as derivatives (or derivatives of derivatives) affect the future of work at fundamental and operative levels?

I would also add that this article is limited to how Generative AI tools can be democratised to build derivative products/services as commercial solutions, and it does not cover other types of narrow AI applications.

To know about ChatGPT and its impact on Technology Governance, read this article.

The Basics of Creating Derivatives as Solutions

When ChatGPT was made available as a Free Research Preview, which is still the case, it was easy for anyone to discern how many use cases could be identified to provide services of many kinds. In my previous article on ChatGPT, I discussed DoNotPay as a proposed "use case" to draft legal instruments and documents for everyday matters, such as civil liability and consumer law actions, especially for those who live in the United States, considering the exorbitant costs of handling such matters there. Here is a figure which explains how one can conceive a derivative product or a derivative (of a derivative) product.

Derivative of ChatGPT or Derivative of Derivatives
Figure 1: Derivative of ChatGPT or Derivative of Derivatives

Now, to be clear, making a derivative is not hard. Yes, it is possible for someone to create a derivative or product out of ChatGPT, given that the underlying technology behind ChatGPT is based on artificial intelligence and natural language processing.

One way to create a derivative or product out of ChatGPT would be to use its core technology, which is a type of machine learning called "transformer models", to train a new model with a specific focus or application. For example, a company might use ChatGPT's technology to create a chatbot or virtual assistant that can answer specific types of questions or help customers navigate a particular product or service.
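To make the first approach concrete, here is a minimal, purely illustrative sketch of such a domain-specific chatbot. The FAQ entries and the token-overlap retrieval step are assumptions for demonstration; a real derivative product would fine-tune a transformer model (or call a hosted API) on domain-specific conversations instead of matching word overlap.

```python
# Hypothetical sketch of a "derivative" customer-support chatbot.
# A real product would fine-tune a transformer model on domain data;
# here a simple token-overlap retriever stands in for the trained model
# so the overall shape of such a derivative is visible.

FAQ = {
    "How do I reset my password?":
        "Use the 'Forgot password' link on the sign-in page.",
    "What is your refund policy?":
        "Refunds are available within 30 days of purchase.",
}

def tokenize(text: str) -> set:
    # Lowercase and strip punctuation so overlap matching is robust.
    return {t.strip("?.,!").lower() for t in text.split()}

def answer(question: str) -> str:
    # Pick the FAQ entry sharing the most tokens with the question;
    # a fine-tuned model would replace this scoring step.
    best, score = None, 0
    for known, reply in FAQ.items():
        overlap = len(tokenize(question) & tokenize(known))
        if overlap > score:
            best, score = reply, overlap
    return best or "Sorry, I can't help with that yet."

print(answer("how can I reset my account password"))
```

The point of the sketch is that the "derivative" adds a narrow layer (domain data plus retrieval or fine-tuning) on top of a general-purpose model, which is why the novelty of such products is contestable.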

Another approach could be to use ChatGPT's technology as a basis for a new application that uses natural language processing. For example, a company could use the technology to develop a system that analyzes customer feedback or reviews and automatically generates summaries or sentiment analysis.
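The second approach can be sketched the same way. The word lists and scoring below are illustrative assumptions; a production derivative would run the reviews through a trained language model rather than a hand-made lexicon, but the input/output shape of such a feedback-analysis product is the same.

```python
# Minimal lexicon-based sentiment scorer for customer feedback, as a
# stand-in for the NLP system described above. The word lists are
# assumptions for illustration only.

POSITIVE = {"great", "excellent", "love", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "terrible", "awful", "bug"}

def sentiment(review: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

reviews = ["Great app, fast and helpful!",
           "Terrible update, everything is broken."]
for r in reviews:
    print(f"{r!r}: {sentiment(r):+.2f}")
```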

Technologically, creating a derivative or product out of ChatGPT would require expertise in artificial intelligence and natural language processing, as well as access to large amounts of relevant data. Additionally, it would require significant computing resources, as training machine learning models can be computationally intensive. However, with the right expertise and resources, you can use the technology behind ChatGPT to create new and innovative products and applications.

How to Create a Derivative Product/Service out of ChatGPT?
Figure 2: How to Create a Derivative Product/Service out of ChatGPT?

Creating a derivative or product from ChatGPT or any other machine learning model involves considering various parameters. Some of the key parameters that could be important include:

  • Training Data: The quality and quantity of training data used to train the model are critical parameters in the development of a derivative or product out of ChatGPT. The dataset used must be relevant and large enough to ensure that the model can capture the necessary nuances in the input data.

  • Model Architecture: The architecture of the model is also an essential parameter to consider when creating a derivative or product out of ChatGPT. The model architecture includes the number of layers, the number of neurons per layer, the activation functions, and other design choices that can affect the model's performance.

  • Hyperparameters: Hyperparameters are additional model parameters that can be adjusted to optimize model performance. These include learning rate, batch size, optimizer, and regularization parameters, among others.

  • Evaluation Metrics: The evaluation metrics used to assess the model's performance should be relevant to the specific use case of the derivative or product. Common evaluation metrics for language models include accuracy, perplexity, and F1-score.

  • Deployment Environment: The environment in which the derivative or product will be deployed is also a critical parameter to consider. Factors such as the available computing resources, scalability, and reliability of the infrastructure can impact the effectiveness of the model in real-world use.

  • Ethics and Privacy: Considerations around the ethical and privacy implications of the derivative or product must also be taken into account. The data used to train the model must be ethically sourced and representative, and the model's deployment should not violate any privacy laws.
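Two of these parameters can be made tangible in a short sketch: a hyperparameter configuration and one common evaluation metric (the F1-score). The names and values below are illustrative assumptions, not the actual settings of ChatGPT or any OpenAI model.

```python
# Illustrative sketch tying together two parameters from the list above:
# a hyperparameter configuration and an evaluation metric (F1-score).
# All names and values are assumptions for demonstration.

from dataclasses import dataclass

@dataclass
class TrainingConfig:
    learning_rate: float = 3e-5   # hyperparameter: optimizer step size
    batch_size: int = 16          # hyperparameter: examples per update
    epochs: int = 3
    weight_decay: float = 0.01    # regularization strength

def f1_score(y_true: list, y_pred: list) -> float:
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

cfg = TrainingConfig()
print(cfg)
print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))  # tp=1, fp=1, fn=1 -> 0.5
```

Choices like these are exactly what distinguishes one derivative from another built on the same base model, which is why they matter for both technical and regulatory assessment.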

Let's unpack each parameter mentioned here.

  1. First, if you have training data, it means you have designated some parameters in the product development of the Derivative Product you intend to build. This training data is not the training data of ChatGPT directly; it is attributed to the product you are making. However, there is no doubt that ChatGPT would be capable of estimating, at least at some level, how this training data is used and whether it is workable. In the lexicon of artificial intelligence ethics, training data comes under the ambit of ethical issues. The best one could propose here is to understand how a derivative product uses its training data to generate outputs, which may require either a regulatory sandbox or another potential regulatory mechanism in place.

  2. Second, a model architecture may not be a generalised concern in the lexicon of AI ethics, unless it becomes quantitatively necessary, i.e., where too many use cases are being tested which have an adverse impact on the human environment.

  3. Third, hyperparameters become important, but if their use case or market distribution, even for testing purposes, is insignificant, then the least that could be expected is to create helpful technological safeguards by default and by design.

  4. Fourth, evaluation metrics could be considered important to achieve AI explainability. While training data needs to be proper and avoid the biases which could create adverse outcomes, evaluation metrics can be treated as an additional way to understand how training data is being used.

  5. Fifth, the deployment environment is connected to how evaluation metrics reflect the effectiveness of the environment in which that derivative AI product is being tested. The better and the more clearly this is done, the safer it becomes to know whether the training data is effective.

  6. Lastly, ethical and privacy concerns are bound to arise, and there is no doubt that such basic safeguards need to be maintained. However, a better measure to track ethical concerns (since for privacy, safeguards can be built by design and default) is to foresee the risks attached. There is a quantitative element to this, and that could be really helpful.

Overall, there are many solutions and possibilities. However, data scientists and experts in Generative AI believe that the use cases of ChatGPT in the form of derivatives are over-hyped, and many a time they might not be as useful or as polished as aimed. A former Google AI Ethicist remarks on the use of proprietary information by ChatGPT and other LLM platforms, which raises concerns about data rights and anti-competitive practices:

She said that the data used to train these models (GPT-3.5, or LaMDA) is either proprietary or just scraped from the internet. “Not a lot of attention is paid to the rights of the people in those data—also referred to as Data Subjects in the EU’s Artificial Intelligence Act—and also the people who have created those data, including artists, writers, etc.,” said Hanna, explaining that these people are not getting compensated and most companies are considering it like an afterthought.

Let's now understand if creating such derivatives or derivatives of derivatives could affect market competition.

Competition Law Concerns on Derivative Products

There are multiple competition law concerns that may emerge when such derivative products and solutions are created. Although the concerns may overlap, a workable understanding is necessary to assess the hasty and overrated use of Generative AI tools.

To make it simple, I have categorised the Ethical Dilemmas with Explanations in the form of a table.

Ethical Dilemmas


Dominant Market Position

Creating a derivative or product out of ChatGPT that dominates a particular market can lead to concerns about anti-competitive behavior. Companies must be mindful of the potential impact of their products on market competition and ensure that they comply with applicable competition laws and regulations.

Exclusive Agreements

Companies may use exclusive agreements to limit competition in a particular market, which could be seen as anti-competitive behavior. Creating derivatives or products out of ChatGPT that rely on exclusive agreements could give rise to concerns about anti-competitive behavior.

Price Fixing

Companies must be careful not to engage in price-fixing or other anti-competitive practices when creating derivatives or products out of ChatGPT. This could include practices such as collusion with competitors, setting prices artificially high or low, or engaging in other practices that restrict competition.

Intellectual Property

Companies must be mindful of intellectual property issues when creating derivatives or products out of ChatGPT. This could include issues such as patent infringement or misappropriation of trade secrets. Companies must ensure that they have the appropriate licenses and permissions to use the intellectual property associated with ChatGPT.

Mergers and Acquisitions

Companies that create derivatives or products out of ChatGPT may engage in mergers or acquisitions that could give rise to concerns about anti-competitive behavior. Companies must ensure that their mergers and acquisitions do not harm competition in the relevant markets.

Interoperability

Companies must be careful not to engage in practices that limit interoperability when creating derivatives or products out of ChatGPT. Interoperability refers to the ability of different products and systems to work together seamlessly. Limiting interoperability can give rise to concerns about anti-competitive behavior.

Data Ownership

Companies must be mindful of data ownership issues when creating derivatives or products out of ChatGPT. This could include issues such as using data without permission or failing to compensate data owners appropriately. Companies must ensure that they have the appropriate permissions and licenses to use the data associated with ChatGPT.

Naturally, these concerns are inter-related, which is why it was necessary to categorise them. The dilemma that could be attached is whether such tools have a market-related impact. As discussed in VLiGTA-TR-001, our report for the Vidhitsa Law Institute, here are the working conditions which generate Artificial Intelligence Hype:

Stage 1: Influence or Generation Determination

  • An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in a real-time scenario such that a class of Artificial Intelligence technology as a product / service is used in a participatory or preparatory sense to influence or generate the hype cycle.

Stage 2: Influencing or Generating Market Perceptions & Conditions

  • The hype cycle may be continuous or erratic, but the real-time impact on market perceptions affects the market of the products / services involving Artificial Intelligence technologies, as estimated from a standardised / regulatory / judicial / statutory point of view.

  • The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices.

  • Beyond the real-time impact on market perceptions, the consecutive effects of the real-time impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern.

Stage 3: Uninformed or Disinformed Markets

  • The market is kept uninformed / disinformed about the features of the product / service subject to the hype cycle. Misinforming the market may be construed as keeping the market merely uninformed, except in cases that are not mutually exclusive.

Stage 4: Misdirected Perceptions in the Information & Digital Economy

  • The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype cycle about a product or service may not clarify certain specifics and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.

Stage 5: Estimation of the Hype Cycle through Risk Determination

  • In addition, even if preliminary clarifications or assessments are provided to the market, the lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology in any form or means as a part of the product or service involves the assessment of the hype cycle with a risk-centric approach.

Taking these working conditions into context, we may see a range of competition policy issues, which could even relate to Stages 3, 4 and 5 of an AI hype cycle as per their working conditions in real life.

Here is a table which simplifies and explains how we can map the impact of such derivative products based on the working conditions of AI Hype that we developed in VLiGTA-TR-001.

Influence or Generation Determination

Example of Digital Competition Law Violation: A company develops a derivative AI product based on ChatGPT that is marketed as superior to competing products, leading to increased demand and market share.


Influencing or Generating Market Perceptions & Conditions