
The Digital Personal Data Protection Act & Shaping AI Regulation in India



On August 11, 2023, the President of India gave assent to the Digital Personal Data Protection Act (DPDPA), and upon its notification in the Official Gazette the instrument became law. Multiple briefs, insights and infographics have since been published by several law firms across India. This article therefore focuses on the key provisions of the Act and explores how it could shape the trajectory of AI regulation in India, especially considering the recent amendments to the Competition Act, 2002 and the trajectory of the upcoming Digital India Act, which is still in the works.

You can read our analysis of the Digital India Act as proposed in March 2023 here. You can also find a complete primer on the important provisions of the Digital Personal Data Protection Act here. We urge you to download that file, as this article discusses provisions described in that document.
 

General Review of the Key Provisions of the DPDPA


Let's begin with the stakeholders under this Act. The Digital Personal Data Protection Act, 2023 (DPDP Act) defines the following stakeholders and their relationships:

  • Data Principal: The individual to whom the personal data relates.

  • Consent Manager: A person or entity registered with the Data Protection Board that acts as a single point of contact for Data Principals to give, manage, review and withdraw consent for the processing of their personal data.

  • Data Protection Board (DPB): A statutory body established under the DPDP Act to regulate the processing of personal data in India.

  • Data Processor: A person or entity who processes personal data on behalf of a Data Fiduciary.

  • Data Fiduciary: A person or entity who alone or in conjunction with other persons determines the purpose and means of processing of personal data.

  • Significant Data Fiduciary: A Data Fiduciary that meets certain thresholds, for example, a turnover of more than INR 100 crores or the processing of personal data of more than 50 million Data Principals. Note, however, that no specific threshold has been defined in the Act as of now.

The relationships among these stakeholders are as follows:

  • The Data Principal is the owner of their personal data and has the right to control how their data is processed.

  • The Consent Manager is accountable to the Data Principal and manages consents for the processing of personal data on the Data Principal's behalf.

  • The DPB is responsible for regulating the processing of personal data in India. It has the power to investigate complaints, issue directions, and impose penalties.

  • The Data Processor is responsible for processing personal data on behalf of the Data Fiduciary in accordance with the Data Fiduciary's instructions.

  • The Data Fiduciary is responsible for determining the purpose and means of processing personal data. They must comply with the DPDP Act and the directions of the DPB.

  • A Significant Data Fiduciary has additional obligations under the DPDP Act, such as appointing a Data Protection Officer and conducting data protection impact assessments.


Figure 1: Key Stakeholders in the DPDPA

Data Protection Rights

The Act clearly sets out rights for Data Principals and obligations for Data Fiduciaries, which are discussed further below. However, many of its provisions contain the clause "as may be prescribed". This means much will remain subject to delegated legislation, which makes sense: the Government could not integrate every aspect of data regulation and protection into the Act, and could only propose specific, basic provisions that work from a multi-stakeholder and citizen perspective. Now, like the General Data Protection Regulation in the European Union, the rights of a Data Principal are clearly defined, in Sections 11-14 of the Act, as follows:

  • Right to access information about personal data which includes:

    • a summary of personal data

    • the identities of Data Fiduciaries and Data Processors with whom the personal data has been shared

    • any other information related to the Data Principal and the processing itself

  • Right to:

    • correction of personal data

    • completion of personal data

    • updating of personal data and

    • erasure (deletion) of personal data

  • Right to grievance redressal which has to be readily available

  • Right, as Data Principals, to nominate another individual to exercise their data protection rights under this Act

There are no specific parameters or factors defined for the Right to be Forgotten (erasure of personal data). Hence, we can expect specific guidelines and circulars to address this issue, along with industry-specific interventions, for example by the RBI in the fintech sector.


Now, the provisions listing the duties of a Data Principal are referred to here for a reflective purpose: to gauge the policy and ethical expectations the Data Protection Board may internalise. Like the Fundamental Duties under the Constitution, these duties have no binding value, nor do they affect data-related jurisprudence in India, especially on matters related to this Act. However, the duties could be invoked by any party to a data protection-related civil dispute for the purposes of interpretation, to elaborate on the purpose of the Act. Nevertheless, invoking the duties of Data Principals has a limited impact.


Legitimate Use of Personal Data


The following are considered as "legitimate use" of personal data by a Data Fiduciary:

  • Processing personal data for the Government with respect to any subsidy, benefit, service, certificate, licence or permit prescribed by the Government.

    • For example: to let people avail the benefits of a government scheme or programme through an app, personal data would have to be processed

  • Processing personal data to:

    • Fulfil any obligation under any law in force or

    • Disclose any information to the State or any of its instrumentalities

      • This is subject to the obligation that processing of personal data is being done in accordance with the provisions regarding disclosure of such information in any other law

  • Processing personal data in compliance with:

    • Any judgment or decree or order issued in India, or

    • Any judgment or order relating to claims of a contractual or civil nature based on a law in force outside India

  • When a Data Principal voluntarily offers personal data to the Data Fiduciary (a company, for example).

    • This applies where the Data Principal has not indicated in any way that the Data Fiduciary lacks consent to process the data

    • This is therefore a negative obligation on the Data Fiduciary (a company, for example): if the Data Principal indicates that consent is not granted, the data cannot be processed

There are other broad grounds as well, such as national security, sovereignty of India, disaster management measures, medical services and others.


Major Policy Dilemmas & Challenges with DPDPA


Now, there are certain aspects of the data protection rights in this Act which must be understood.

  • Publicly available personal data, as stated in Section 3 of this Act, is not covered by its provisions. This means that if you post something on social media (for example), or give prompts to generative AI tools, that data is not covered under the provisions of this Act in India, which is not the case in Western countries and even China. Since various provisions give the Data Protection Board the powers of a civil court on specific matters under the Code of Civil Procedure, 1908, and the orders of the Appellate Tribunal under this Act are executable as civil decrees, it clearly signifies that most data protection issues will be commercial and civil law issues. In other countries, an element of public duty (emanating from public law) comes in. This also shows that, in the context of public law, India is not yet opening its approach to regulating the use of artificial intelligence technologies at macro and micro scales. I am certain this will be addressed in the context of high-risk and low-risk AI systems in the Digital India Act.

  • On the transnational flow of data and the issue of building bridges of digital connectivity between India and other countries, the Act gives the Government unilateral powers to restrict the flow of data whenever it finds a ground to do so. This is why nothing specific as to the measures has been described by the Government yet: trade negotiations on the information economy between India and stakeholders such as the UK, the European Union and others often get stuck. In fact, this is a general problem across the board for companies and governments around the world, for two simple reasons: (1) the trans-border flow of data is a trade law issue, requiring countries to undertake diplomatic negotiations that rarely reach consensus, due to their transactional nature; (2) data protection law, a subset of technology law, has historical roots in telecommunications law, which is why the contractual and commercial nature of trans-border data flows, being tied to telecom law, may not arrive at conclusions. This relates to the contentious issue of moratoriums on digital goods and services under WTO law, which is subject to discussion in future WTO Ministerial Conferences. Here is an excerpt from India and South Africa's joint submission on 'E-commerce Moratoriums':

What about the positive impacts of the digital economy for developing countries? Should these not also be taken into account in the discussion on losses and the impact of the moratorium? After all, it is often said that new digital technologies can provide developing countries with new income generation opportunities, including for their Micro and Small and Medium Sized Enterprises (MSMEs). [...] Further, ownership of platforms is the new critical factor measuring success in the digital economy. The platform has emerged as the new business model, capable of extracting and controlling immense amounts of data. However, with ‘platformisation’, we have seen the rise of large monopolistic firms. UNCTAD’s Digital Economy Report (2019) highlights that the US and East Asia accounts for 90 percent of the market capitalization value of the world’s 70 largest digital platforms. Africa and Latin America’s share together is only 1 percent. Seven ‘super platforms’ – Microsoft, Apple, Amazon, Google, Facebook, Tencent and Alibaba – account for two-thirds of total market value. In particular, Africa and Latin America are trailing far behind.
  • Also, startups have been given exemptions from certain crucial compliances under this Act. This may be justified as a move to promote Digital India and the startup ecosystem, and some may argue that it works against creating a privacy-compliant startup ecosystem; however, an aspect ignored by most critics of this Act (formerly a Bill) is the sluggishness and hawkishness of the bureaucratic mindset behind ensuring compliance. Perhaps this leaves some room for a flexible compliance environment, if the provisions are used reasonably. How this would affect fintech companies' data collection-related compliances remains to be seen. It is clear, though, that the data protection law, within its own limits, will not supersede fintech regulations and other public and private law systems. This means that fintech regulations on data collection, and restrictions on its use, will prevail over this data protection law.

  • Data Fiduciaries would have to give notice every time they request consent from a Data Principal. It is rightly argued that merely having a privacy policy would not suffice, since data collection happens at multiple points across an app or website interface. Here is an illustration from the Act, which explains the same:

X, an individual, opens a bank account using the mobile app or website of Y, a bank. To complete the Know-Your-Customer requirements under law for opening of bank account, X opts for processing of her personal data by Y in a live, video-based customer identification process. Y shall accompany or precede the request for the personal data with notice to X, describing the personal data and the purpose of its processing.
  • Interestingly, the Act defines obligations for Data Fiduciaries but not for Data Processors, which seems strange. Or it could be argued that the Government would like to leave the legal issues between a Data Fiduciary and its assigned Data Processors subject to contractual terms. We must remember that, under Section 8(1) of the Act, Data Fiduciaries are required to comply with the provisions of the Act "irrespective of any agreement to the contrary or failure of a Data Principal to carry out the duties provided under this Act", in respect of any processing undertaken by a Data Processor. Now, the issue that may arise is: what happens if the Data Processor makes a shoddy mistake? What if a data breach is caused by the actions of the Data Processor despite due diligence by the Data Fiduciary? This makes the role of Data Processors more of a commercial law issue or dilemma when contracts are agreed upon, rather than a civil or public law issue in the context of the Act.

  • Finally, the Act introduces a new concept known as the "consent manager." As argued by Sriya Sridhar, such a conceptual stakeholder can be compared to one of the most successful stakeholder systems created under the RBI's fintech regulation framework, that is, Account Aggregators (AAs). Since the DPDPA would not take precedence over the fintech regulations of the Reserve Bank of India, for example, and the role of data protection itself can be generalised and tailored to the best industry-centric regulatory practices, the fact that Consent Managers are not Data Fiduciaries would be helpful for AAs as well. Some aspects of including artificial intelligence technology in the context of Consent Managers are discussed in the next section of this article.

The next section covers the aspects of the Digital Personal Data Protection Act, 2023 that relate to the use of artificial intelligence.


Key Definitions & Provisions in the DPDPA on Artificial Intelligence


To begin with, here are some definitions in Section 2 of the Act which must be read and understood:

(b) “automated” means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data;
(f) “child” means an individual who has not completed the age of eighteen years;
(g) “Consent Manager” means a person registered with the Board, who acts as a single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform;
(h) “data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means;
(i) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;
(j) “Data Principal” means the individual to whom the personal data relates and where such individual is —
(i) a child, includes the parents or lawful guardian of such a child;
(ii) a person with disability, includes her lawful guardian, acting on her behalf;
(k) “Data Processor” means any person who processes personal data on behalf of a Data Fiduciary;
(n) “digital personal data” means personal data in digital form;
(s)(vii) every artificial juristic person, not falling within any of the preceding sub-clauses;
(t) “personal data” means any data about an individual who is identifiable by or in relation to such data;
(x) “processing” in relation to personal data, means a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as collection, recording, organisation, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction;

Figure 2: Key Definitions of the Digital Personal Data Protection Act, 2023 on Artificial Intelligence

Now, with reference to Figure 2, the most important definitions with respect to artificial intelligence appear in Section 2, especially clauses (b), (s)(vii) and (x). The definition of the term "automated" clearly states that "automated" means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data. This means that AI systems capable of making decisions without human intervention are considered "automated" for the purposes of the Act. This recognition was always implicit, as the integration of AI systems into data processing is a long-known reality; the wording, however, makes it meticulously clear. The definition is broad enough to encompass a wide range of AI systems (a minimal illustration follows the list below), including:

  • Machine learning systems: These systems are trained on large amounts of data to learn how to make predictions or decisions. Once they are trained, they can make these decisions without human intervention.

  • Natural language processing systems: These systems can understand and process human language. They can be used to generate text, translate languages, and answer questions.

  • Computer vision systems: These systems can identify and track objects in images and videos. They can be used for tasks such as facial recognition and object detection.
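To make the breadth of Section 2(b) concrete, here is a minimal sketch of a machine learning system that, once trained, makes decisions "automatically in response to instructions given", with no human in the loop. It uses scikit-learn purely as an illustration; the toy features, labels and decision are assumptions of ours, not anything the Act references.

```python
# A toy "automated" decision process in the Section 2(b) sense: after
# training, every decision is made by the model alone, without human review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical applicant features (e.g. normalised income, repayment score)
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y_train = np.array([0, 0, 1, 1])  # 0 = reject, 1 = approve

model = LogisticRegression().fit(X_train, y_train)

# a new application is decided automatically, end to end
decision = model.predict(np.array([[0.85, 0.7]]))[0]
print("approve" if decision == 1 else "reject")  # -> approve
```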

It would be intriguing to observe how this plays out when the Digital India Act is released, since that legislation is proposed to cover high-risk, medium-risk and low-risk AI systems.


Artificial Juristic Person


Furthermore, the definition of "every artificial juristic person" in sub-section 2(s)(vii) of the Act is interesting, considering that the Act uses the word "person" more than 30 times. It is important because it helps clarify what types of AI systems could be considered "legal persons" for the purposes of the law.


The definition states that "artificial juristic person" covers every artificial juristic person not falling within any of the preceding sub-clauses (which cover individuals, companies, firms, associations of persons and the like). This means that AI systems not explicitly captured by those sub-clauses may still be considered "artificial juristic persons" if they have the capacity to acquire rights and incur liabilities.


The wording is important to notice because it allows the Act to apply to AI systems that are not traditionally considered "legal persons." This matters because AI systems are becoming increasingly sophisticated and are capable of making decisions that have a significant impact on people's lives. By leaving room to classify AI systems as "legal persons," the Act helps ensure that these systems can be held accountable for their actions and subjected to the same legal protections as humans.

It could be argued that the definition of "artificial juristic person" in the DPDPA will evolve as AI technology continues to develop and AI-related personhood issues come up for law and policy stakeholders to address.


To add, the definition of an artificial juristic person clearly points to an ad hoc (or specific) understanding of the legal recognition or legal affirmation that could be granted to AI systems. This is in line with the ISAIL Classifications on Artificial Intelligence, especially the CEI Classification Method. As per the classifications defined in the 2020 Handbook on AI and International Law, the CEI Method classifies AI as a Concept, an Entity or an Industry. The reference to "artificial juristic persons" can be directly linked to the classification of an AI system as a Juristic Entity. Here is an excerpt from the 2020 Handbook (pages 45 and 47), explaining the concept of a Juristic Entity in the case of artificial intelligence:

On the question of the entitative status of AI, under jurisprudence, there can be 2 distinctions on a prima facie basis: (1) the legal status; and (2) the juristic status. […] In both the cases, it is suitable to establish the substantive attributes of AI both as legal and juristic entities. There can be disagreements on the procedural attributes here due to the simple reasons that there at procedural levels, it is not practically possible to have similar legal standards of different kinds of products and services which involve AI directly or indirectly.

Here are some examples of AI systems that could be considered to be "artificial juristic persons" under the Act:

  • Self-driving cars: These cars are capable of making decisions about how to navigate roads and avoid obstacles without human intervention.

  • Virtual assistants: These assistants can understand and respond to human language, and they can be used to perform a variety of tasks, such as booking appointments, making travel arrangements, and playing music.

  • Chatbots: These bots can engage in conversations with humans, and they can be used to provide customer service, answer questions, and even write creative content.


AI as a Consent Manager?


Nevertheless, let's examine where the use of the term "artificial juristic persons" gets intriguing. Let's begin with the concept of a "Consent Manager". Of course, Section 2(g) states that a Consent Manager is a person registered with the Data Protection Board of India acting as the single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform.


This means that a Consent Manager can be any person, including an individual, a company, or an AI system. However, in order to be registered with the Board, a Consent Manager must meet certain requirements. In the context of AI, the definition of "Consent Manager" could be interpreted to mean that an AI system could be registered as a Consent Manager. However, it is important to note that the AI system must meet the same requirements as any other Consent Manager, such as having the necessary technical expertise and experience to manage consent effectively. The function of the Consent Managers could also be explained in the context of the following sub-sections of Section 5 of the Act:

(7) The Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager.

(8) The Consent Manager shall be accountable to the Data Principal and shall act on her behalf in such manner and subject to such obligations as may be prescribed.

(9) Every Consent Manager shall be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed.

Now, Section 5(7) states that the Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager. This means that a Data Principal can use an AI Consent Manager to manage their consent to the processing of their personal data by a Data Fiduciary.

It is interesting to note that Section 5(8) makes the Consent Manager accountable to the Data Principal, acting on her behalf in such manner and subject to such obligations as may be prescribed. An AI Consent Manager must therefore be designed and used in a way that ensures it acts in the best interests of the Data Principal, which includes being transparent about how it uses personal data and being able to explain its decisions to the Data Principal.

Finally, Section 5(9) requires every Consent Manager to be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed. Any AI Consent Manager that wants to operate in India must therefore be registered with the Data Protection Board (DPB), which will set out the technical, operational, financial and other conditions that AI Consent Managers must meet. Here are some specific ways that AI could be used to support the functions of Consent Managers:

  • Automating consent management: AI could be used to automate the process of giving, managing, reviewing and withdrawing consent. This would make it easier for Data Principals to control their personal data and it would also reduce the risk of human error.

  • Providing personalised consent experiences: AI could be used to personalise the consent experience for each Data Principal. This would involve understanding the Data Principal's individual needs and preferences and tailoring the consent process accordingly.

  • Ensuring transparency and accountability: AI could be used to ensure that consent is transparent and accountable. This would involve tracking how consent is given, managed, reviewed and withdrawn, and it would also involve providing Data Principals with clear and concise information about how their personal data is being used.

Additionally, the AI system must be designed in a way that ensures that it is acting in the best interests of Data Principals. This means that the AI system must be transparent about how it is using personal data and it must be able to explain its decisions to Data Principals.
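To ground these functions, here is a minimal sketch of how a Consent Manager platform might record the give/review/withdraw lifecycle contemplated by Section 5(7), with an append-only audit trail for transparency. All class names and fields are illustrative assumptions; the Act, and the rules to be prescribed under it, do not mandate any particular implementation.

```python
# A minimal sketch of the Section 5(7) consent lifecycle with an auditable trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    principal_id: str   # the Data Principal whose consent this is
    fiduciary_id: str   # the Data Fiduciary seeking to process data
    purpose: str        # the purpose the consent is tied to
    granted: bool       # True on giving consent, False on withdrawal
    at: datetime

@dataclass
class ConsentManagerPlatform:
    audit_trail: list = field(default_factory=list)  # append-only log

    def give(self, principal: str, fiduciary: str, purpose: str) -> None:
        self._log(principal, fiduciary, purpose, granted=True)

    def withdraw(self, principal: str, fiduciary: str, purpose: str) -> None:
        self._log(principal, fiduciary, purpose, granted=False)

    def review(self, principal: str) -> list:
        # Section 5(7): the Data Principal may review her consents at any time
        return [r for r in self.audit_trail if r.principal_id == principal]

    def _log(self, principal, fiduciary, purpose, granted) -> None:
        self.audit_trail.append(
            ConsentRecord(principal, fiduciary, purpose, granted,
                          datetime.now(timezone.utc)))

cm = ConsentManagerPlatform()
cm.give("DP-01", "DF-bank", "KYC verification")
cm.withdraw("DP-01", "DF-bank", "KYC verification")
print(len(cm.review("DP-01")))  # -> 2 entries: one grant, one withdrawal
```

The append-only trail is what would let such a platform demonstrate the transparency and accountability Section 5(8) demands.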


Now, on the rights of Data Principals (data subjects) to grievance redressal, the role of AI as Consent Manager could become interesting. Section 13(1) of the Act states that a Data Principal shall have the right to readily available means of grievance redressal provided by a Data Fiduciary or Consent Manager, in respect of any act or omission of such Data Fiduciary or Consent Manager regarding the performance of its obligations in relation to the personal data of such Data Principal or the exercise of her rights under the provisions of this Act and the rules made thereunder. This means that a Data Principal can use an AI Consent Manager to file a grievance if they are unhappy with the way their personal data is being handled by a Data Fiduciary. Meanwhile, Section 13(2) states that the Data Fiduciary or Consent Manager shall respond to any grievance referred to in sub-section (1) within such period as may be prescribed from the date of its receipt, for all or any class of Data Fiduciaries. This means that an AI Consent Manager must be designed and used in a way that ensures it can respond to grievances in a timely and effective manner.


Here are some use cases of AI in consent management that are worth exploring:

  1. Personalised consent experiences: AI can be used to personalise the consent experience for each individual user. This can be done by understanding the user's individual needs and preferences, and tailoring the consent process accordingly. For example, AI could be used to suggest relevant consent options to users, or to provide users with more detailed information about how their data will be used.

  2. Automated consent management: AI can be used to automate the process of giving, managing, reviewing and withdrawing consent. This can make it easier for users to control their data, and it can also reduce the risk of human error. For example, AI could be used to send automatic reminders to users about their consent preferences, or to automatically revoke consent when a user no longer uses a particular service.

  3. Ensuring transparency and accountability: AI can be used to ensure that consent is transparent and accountable. This can be done by tracking how consent is given, managed, reviewed and withdrawn, and by providing users with clear and concise information about how their data is being used. For example, AI could be used to create an audit trail of consent activity, or to generate reports that show how users' data is being used.

  4. Grievance redressal: AI can be used to support the grievance redressal process. This can be done by automating the process of filing and tracking grievances, and by providing users with clear and concise information about the status of their grievance. For example, AI could be used to create a chatbot that allows users to file grievances without having to speak to a human representative, or to generate reports that show how grievances are being resolved (a minimal sketch of grievance tracking follows this list).

  5. Compliance with regulations: AI can be used to help organisations comply with regulations related to consent management. This can be done by tracking consent activity, generating reports, and providing users with clear and concise information about how their data is being used. For example, AI could be used to create a dashboard that shows how an organisation is complying with the General Data Protection Regulation (GDPR), or to generate reports that show how users' data is being used in accordance with the California Consumer Privacy Act (CCPA).
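Since Section 13(2) ties grievance responses to a period "as may be prescribed", here is a minimal sketch of how such tracking could be automated. The seven-day figure below is purely a placeholder assumption; the actual period is left to delegated legislation under the Act.

```python
# A minimal sketch of grievance tracking under Section 13 of the DPDPA.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

PRESCRIBED_PERIOD = timedelta(days=7)  # placeholder; actual period to be prescribed

@dataclass
class Grievance:
    principal_id: str
    description: str
    filed_at: datetime
    resolved_at: Optional[datetime] = None

    def deadline(self) -> datetime:
        # Section 13(2): respond within the prescribed period from receipt
        return self.filed_at + PRESCRIBED_PERIOD

    def is_overdue(self, now: datetime) -> bool:
        return self.resolved_at is None and now > self.deadline()

g = Grievance("DP-01", "Erasure request not honoured",
              filed_at=datetime.now(timezone.utc))
print(g.deadline(), g.is_overdue(datetime.now(timezone.utc)))  # deadline, False
```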

Processing as Defined in the Act


The term “processing” in relation to personal data, includes the following:

  • It is a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as:

    • collection,

    • recording,

    • organisation,

    • structuring,

    • storage,

    • adaptation,

    • retrieval,

    • use,

    • alignment or combination,

    • indexing,

    • sharing,

    • disclosure by transmission, dissemination or otherwise making available,

    • restriction,

    • erasure or destruction;

Now, the context of digital rights management (DRM), and the use of artificial intelligence technology through Data Processors on CMS-based platforms, would have to be observed carefully. Certain activities, such as collection, recording, storage and organisation, could easily be covered by automated intelligence systems, or by AI systems with narrow use cases. Since nothing is clearly stated on the role of Data Processors, and the burden is on the Data Fiduciary to ensure compliance with the Act, it becomes a matter of contract how companies will engage Data Processors to ensure compliance and redressal among themselves, especially on the limited and specific use of AI. Moreover, given the processing capabilities of AI systems, preceded by their computational capabilities (Generative AI systems, for example), it remains to be seen how the prescribed regulations, bye-laws, circulars and industry-based self-regulatory and regulatory measures will work. Data Processors that use artificial intelligence systems would certainly have to clarify that use in their contracts with Data Fiduciaries, to keep liability and accountability issues clearly explained.
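To illustrate the compliance bookkeeping such contracts might require, here is a minimal sketch of an operations log a Data Processor could maintain for its Data Fiduciary, keyed to the operations enumerated in Section 2(x). The log format, including the flag marking AI-performed operations, is an illustrative assumption of ours, not a statutory requirement.

```python
# A minimal sketch: logging Section 2(x) processing operations, with a flag
# recording whether an AI system performed the operation.
from enum import Enum
from datetime import datetime, timezone

class Operation(Enum):
    COLLECTION = "collection"
    RECORDING = "recording"
    ORGANISATION = "organisation"
    STRUCTURING = "structuring"
    STORAGE = "storage"
    ADAPTATION = "adaptation"
    RETRIEVAL = "retrieval"
    USE = "use"
    ALIGNMENT_OR_COMBINATION = "alignment or combination"
    INDEXING = "indexing"
    SHARING = "sharing"
    DISCLOSURE = "disclosure by transmission"
    RESTRICTION = "restriction"
    ERASURE = "erasure or destruction"

def log_entry(processor_id: str, op: Operation, by_ai: bool) -> dict:
    """Record which operation ran, who ran it, and whether an AI system did."""
    return {
        "processor": processor_id,
        "operation": op.value,
        "performed_by_ai": by_ai,  # supports the contractual clarity discussed above
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

print(log_entry("processor-42", Operation.COLLECTION, by_ai=True))
```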


Role of AI Usage in Shaping Rights of Data Principals


The rights of data principals under Sections 11 to 14 of the DPDP Act are important in the context of the commercial and technical use cases of AI applications, especially generative AI applications. Let's decipher that.

  • Section 11: The right to obtain information about personal data being processed by a data fiduciary is essential for data principals to understand how their data is used by AI applications. This information can help data principals make informed decisions about whether or not to use an AI application, and it can also help them identify and address potential privacy concerns. However, to complement the earlier discussion of AI as Consent Manager, or of AI's involvement in consent management, the roles of technology-enabled and human-monitored elements in the processing of personal data would have to be explained.

  • Section 12: The right to correct, complete, update, or erase personal data is also important in the context of AI applications. This is because AI applications can often make mistakes when processing personal data, and these mistakes can have a significant impact on data principals. For example, an AI application that is used to make lending decisions could make a mistake and deny a loan to a data principal who is actually eligible for the loan. The data principal's right to correct the mistake is essential to ensuring that they are not unfairly discriminated against.

  • Section 13: The right to have readily available means of grievance redressal is also important in the context of AI applications. This is because AI applications can be complex and it can be difficult for data principals to understand how their data is being used. If data principals believe that their rights under the DPDP Act have been violated, they should be able to easily file a complaint with the data fiduciary or consent manager.

  • Section 14: The right to nominate another individual to exercise one's rights under the DPDP Act is also important in the context of AI applications. This is because AI applications can be used to collect and process personal data about individuals who are not able to exercise their own rights, such as children or people with disabilities. The right to nominate another individual to exercise one's rights ensures that these individuals' rights are still protected.

In addition to the rights listed above, data fiduciaries that use generative AI applications must also take steps to safeguard the privacy of data principals. This includes using appropriate security measures to protect personal data, and ensuring that generative AI applications are not used to create content that is harmful or discriminatory. Here are some specific safeguards that data fiduciaries can implement to protect the privacy of data principals when using generative AI applications:

  • Implement access controls to restrict who can access personal data.

  • Use anonymisation techniques to remove personally identifiable information from personal data (a minimal sketch follows this list).

  • Monitor generative AI applications for bias and discrimination.

  • Educate data principals about their privacy rights.
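As a concrete (and deliberately simplified) example of the anonymisation safeguard above, here is a minimal sketch that masks common personally identifiable fields before text reaches a generative AI pipeline. The regex patterns are illustrative assumptions; production systems would rely on NER-based redaction and stronger anonymisation guarantees.

```python
# A minimal sketch: masking obvious PII spans before downstream AI processing.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace recognised PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach Asha at asha@example.com or +91 98765 43210 today."))
# -> "Reach Asha at [EMAIL REDACTED] or [PHONE REDACTED] today."
```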


Conclusion & Emerging Policy Dilemmas


Overall, this Act is not a disappointing piece of legislation from the Union Government. However, it is not groundbreaking either, as India's political and technological viewpoints on data and AI regulation are still emerging. The legislation is clearly emblematic of the fact that merely having data protection laws does not ensure the regulatory malleability and proficiency needed to tackle data-related issues in commercial, technology and public law. Regulatory subterfuge in matters of data law can easily happen when laws are not specific or rooted enough to be mechanical. The DPDPA is mechanically suitable as a law, and considering India's digital economy-related trade negotiations at the WTO and beyond, it will serve its intended and general purpose. Of course, the law may be challenged in the Supreme Court, and the upcoming bye-laws, regulations and circulars under its provisions will be subject to transformation. However, the competition law and trade law-centric approach to data regulation and digital connectivity is not shifting to purely civil law and public law issues anytime soon.


In the case of artificial intelligence and law, there are certain legal and policy dilemmas that are surely going to emerge:


The Rise of International Algorithmic Law is inevitable

Whatever may be argued about the view that over-focus on data-related trade issues deviates from larger data law issues, the proper way to resolve and frame problem-solving legal and policy prescriptions for data law could come from developing a soft law approach within a newer field of global governance and international law: International Algorithmic Law. Here is an excerpt on the definition of International Algorithmic Law from my paper on the same:

The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.

It could easily be argued that data law issues must be addressed by default, and there is no doubt that they should be. However, data protection laws lack the shared legal understanding needed to tackle the economics behind data colonialism and exploitation. Domestic regulators will also have to develop principled, rules-based economic law tools, because regulation by enforcement and endless reliance on trade negotiations alone will never help if a privacy-centric digital economy is to be achieved at the domestic level. Hence, beyond certain general compliances and issues where data protection laws across the world can have impact, developing regulatory tendencies around the anthropomorphic use of artificial intelligence could surely be the best way forward.

I would even argue that AI-related 'treaties' could be possible as well. However, those 'treaties' would not be about some comic-book or sci-fi utopia of political control. They could address basic ethics issues, data processing issues, or issues related to the optimal explainability of algorithms, their neural networks and their models. Such an instrument could work like a treaty owing to its purpose-based legal workflows and use cases.

Blending Legal Prescriptions of Data Jurisprudence & Intellectual Property Law


Now, this is a controversial suggestion, but in VLiGTA-TR-002, our report on Generative AI applications, I proposed that in certain intellectual property issues, the protections claimed over proprietary information produced by Generative AI applications could be used by companies to justify manufacturing the consent of data principals at every stage of prompting, by virtue of the technology's design. In such a case, I proposed that in copyright law, invoking data protection law could be helpful. Now, considering market trends, I would state that data protection law could also be invoked in the case of trade secrets. Here is an excerpt from the report (page 127):

Regulators would need to develop a better legal recognition regime, where based on the nature of use cases, copyright-related concerns could be addressed or averted. In this case, we have to consider the role of data protection and privacy laws, when it comes to the data subject.

However, invoking the data protection rights of Data Principals to address the justifications technology companies offer when invoking IP rights over proprietary information must be done to achieve specific remedies. AI developers and data scientists, for their part, will have to address the bias-variance tradeoff in their AI models, especially large language models. Here is an excerpt from an article in Analytics India Magazine:

Bias and variance are inversely connected and it is practically impossible to have an ML model with a low bias and a low variance. When we modify the ML algorithm to better fit a given data set, it will in turn lead to low bias but will increase the variance. This way, the model will fit with the data set while increasing the chances of inaccurate predictions. The same applies while creating a low variance model with a higher bias. [...] Models like GPT have billions of parameters, enabling them to process vast amounts of data and learn intricate patterns in language. However, these models are not immune to the bias-variance tradeoff. Moreover, it is possible that the larger the model, the chances of showing bias and variance is higher. [...] To tackle underfitting, especially when the training data contains biases or inaccuracies it is important to include as many examples as possible. [...] On the other hand, over-explanation to models to perfectly align with human values can lead to an overfit model that shows mundane and results that represent only one point of view. This often happens because of RLHF, the key ingredient for LLMs like OpenAI’s GPT, which has often been criticised to be too politically correct when it shouldn’t be. To mitigate overfitting, various techniques are employed, such as regularisation, early stopping, and data augmentation. LLMs with high bias may struggle to comprehend the complexities and subtleties of human language. They may produce generic and contextually incorrect responses that do not align with human expectations.
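The tradeoff the excerpt describes can be seen in a few lines of code. Below is a minimal numpy sketch (the data, noise level and polynomial degrees are illustrative assumptions): a degree-1 fit underfits a noisy sine curve (high bias), while a degree-12 fit drives training error below the noise floor by fitting the noise itself (high variance).

```python
# A minimal sketch of the bias-variance tradeoff with polynomial regression.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y_true = np.sin(2 * np.pi * x)                    # the underlying signal
y = y_true + rng.normal(scale=0.25, size=x.size)  # noisy training samples

for degree in (1, 12):
    coeffs = np.polyfit(x, y, degree)             # least-squares fit
    y_hat = np.polyval(coeffs, x)
    train_mse = np.mean((y - y_hat) ** 2)         # error on the noisy samples
    signal_mse = np.mean((y_true - y_hat) ** 2)   # error against the true signal
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, "
          f"signal MSE {signal_mse:.3f}")

# The degree-1 model is poor on both measures (bias); the degree-12 model
# pushes training error below the noise variance by fitting noise (variance).
```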

To conclude, the economics of AI explainability under India's Digital Personal Data Protection Act can be developed by the market in India. If we achieve economics that make AI explainable, accountable and responsible enough to enable sustainable business models, then much can be achieved on the front of data protection ethics and standards, enabling a privacy-compliant ecosystem.





