
AIACT.IN

India's inaugural private AI regulation proposal, authored by Abhivardhan.

What is aiact.in?

AIACT.IN (also known as the Draft Artificial Intelligence (Development & Regulation) Bill, 2023) is India's first private proposal on the regulation of artificial intelligence technologies.

This bill was drafted and proposed by our Founder, Abhivardhan.

In addition, a New Artificial Intelligence Strategy for India, 2023, authored by Abhivardhan & Akash Manwani, was also proposed.

  • Version 2.0, March 14, 2024 [Read]

  • Version 1.0 (Original), November 7, 2023 [Read]


Draft Artificial Intelligence (Development & Regulation) Act

Version 2.0

March 14, 2024

Author: Abhivardhan, Managing Partner, Indic Pacific Legal Research

AIACT.IN Video Explainer

Hang on! If you think AIACT.IN is a long document, here is a detailed video explainer by the author of this draft, Abhivardhan.


Chapter I: PRELIMINARY


Section 1 - Short Title and Commencement


(1) This Act may be called the Artificial Intelligence (Development & Regulation) Act, 2023.

(2) It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint and different dates may be appointed for different provisions of this Act and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision.


 

Section 2 – Definitions


[Please note: we have not provided all the definitions that may be required in this Bill, only those most essential to signifying its legislative intent.]


In this Bill, unless the context otherwise requires—​


(a)   “Artificial Intelligence”, “AI”, “artificial intelligence technology”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. Such a system constitutes a diverse class of technology that includes various sub-categories of technical, commercial, and sectoral nature, in accordance with the means of classification set forth in Section 3.


(b)   “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an artificial intelligence technology, including but not limited to text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application;


(c)    “Algorithmic Bias” includes –


(i)    the inherent technical limitations within an artificial intelligence product, service or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results; and


(ii)   the technical limitations within artificial intelligence products, services and systems that emerge from the design, development, and operational stages of AI, including but not limited to:


(a)    programming errors;

(b)   flawed algorithmic logic; 

(c)    improper data handling; and 

(d)   deficiencies in model training and validation.


(d)   “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;


(e)    "Combinations of intellectual property protections" means the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of artificial intelligence systems;


(f)    “Content Provenance” means the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history, including:


(i)    The source data, models, and algorithms used to generate the content;

(ii)   The individuals or entities involved in the creation, modification, and distribution of the content;

(iii) The date, time, and location of content creation and any subsequent modifications;

(iv)  The intended purpose, context, and target audience of the content;

(v)   Any external content, citations, or references used in the creation of the AI-generated content, including the provenance of such external sources; and

(vi)  The chain of custody and any transformations or iterations the content undergoes, forming a content and citation/reference loop that enables traceability and accountability.


(g)   “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated or augmented means;


(h)   “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;


(i)    "Data portability" means the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary, where:


(i)    The personal data has been provided to the data fiduciary by the data principal;

(ii)   The processing is based on consent or the performance of a contract; and

(iii)  The processing is carried out by automated means.


(j)    “Data Principal” means the individual to whom the personal data relates and where such individual is—


(i)    a child, includes the parents or lawful guardian of such a child;

(ii)   a person with disability, includes her lawful guardian, acting on her behalf;


(k)   “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;


(l)    “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;


(m)  “Digital personal data” means personal data in digital form;


(n)   "Digital Public Infrastructure" or "DPI" means the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including but not limited to:


(i)    Digital identity systems that provide secure and verifiable identification for individuals and businesses;

(ii)   Digital payment systems that facilitate efficient, transparent, and inclusive financial transactions;

(iii)  Data exchange platforms that enable secure and interoperable sharing of data across various sectors and stakeholders;

(iv)  Digital registries and databases that serve as authoritative sources of information for various public and private services;

(v)   Open application programming interfaces (APIs) and standards that promote innovation, interoperability, and collaboration among different actors in the digital ecosystem.


(o)   “Knowledge asset” includes, but is not limited to:


(i)    Intellectual property rights including but not limited to patents, copyrights, trademarks, and industrial designs;

(ii)   Documented knowledge, including but not limited to research reports, technical manuals and industrial practices & standards;

(iii)  Tacit knowledge and expertise residing within the organization's human capital, such as specialized skills, experiences, and know-how;

(iv)  Organizational processes, systems, and methodologies that enable the effective capture, organization, and utilization of knowledge;

(v)   Customer-related knowledge, such as customer data, feedback, and insights into customer needs and preferences;

(vi)  Knowledge derived from data analysis, including patterns, trends, and predictive models; and

(vii)  Collaborative knowledge generated through cross-functional teams, communities of practice, and knowledge-sharing initiatives.


(p)   “Knowledge management” means the systematic processes and methods employed by organisations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of artificial intelligence systems;


(q)   “IDRC” means IndiaAI Development & Regulation Council, a statutory and regulatory body established to oversee the development and regulation of artificial intelligence systems across government bodies, ministries, and departments;


(r)    “Inherent Purpose” means the underlying technical objective for which an artificial intelligence technology is designed, developed, and deployed, and that it encompasses the specific tasks, functions, and capabilities that the artificial intelligence technology is intended to perform or achieve; 


(s)    “Insurance Policy” means measures and requirements concerning insurance for research and development, production, and implementation of artificial intelligence technologies;


(t)    "Interoperability considerations" means the technical, legal, and operational factors that enable artificial intelligence systems to work together seamlessly, exchange information, and operate across different platforms and environments. These considerations may include –


(i)    Ensuring that the combinations of intellectual property protections, including but not limited to copyrights, patents, trademarks, and design rights, do not unduly hinder the interoperability of AI systems and their ability to access and use data and knowledge assets necessary for their operation and improvement;

(ii)   Balancing the need for intellectual property protections to incentivize innovation in AI with the need for transparency, explainability, and accountability in AI systems, particularly when they are used in decision-making processes that affect individuals and public good;

(iii) Developing technical standards, application programming interfaces (APIs), and other mechanisms that facilitate the seamless integration and communication between AI systems, while respecting intellectual property rights and maintaining the security and integrity of the systems;

(iv)  Addressing the legal and ethical implications of using copyright-protected works including but not limited to music, images, and text, in the training of AI models, and ensuring that such use is consistent with existing frameworks of intellectual property rights; and

(v)   Promoting the development of open and interoperable AI frameworks, libraries, and tools that enable developers to build upon existing AI technologies and create new applications, while respecting intellectual property rights and fostering a vibrant and competitive AI ecosystem.


(u)   "Open Source Software" means computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.


(v)   “National Registry of Artificial Intelligence Use Cases” means a national-level digitised registry of use cases of artificial intelligence technologies based on their technical & commercial features and inherent purpose, maintained by the Central Government for the purposes of standardisation and certification of use cases of artificial intelligence technologies;


(w)  “Person” includes—


(i)    an individual;

(ii)   a Hindu undivided family;

(iii)  a company;

(iv)   a firm;

(v)    an association of persons or a body of individuals, whether incorporated or not;

(vi)   the State; and

(vii)  every artificial juristic person not falling within any of the preceding sub-clauses, including as otherwise referred to in clause (r);


(x)   "Post-Deployment Monitoring" means all activities carried out by Data Fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service;


(y)   “Quality Assessment” means the evaluation and determination of the quality of AI systems based on their technical, ethical, and commercial aspects;


(z)   “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;


(aa) "Spatial aspects of AI systems" means the unique characteristics and dimensions of artificial intelligence technologies that distinguish them from traditional intellectual property, including their ability to:


(i)    Dynamically adapt and generate novel outputs based on changing inputs and environments;

(ii)   Operate with varying levels of autonomy in decision-making and task execution;

(iii)  Infer and reason about spatial relationships, including but not limited to 3D spatial awareness, object recognition, and scene understanding;

(iv)   Integrate and analyse data from multiple spatial and temporal sources, including but not limited to geospatial data, sensor data, and real-time information; and

(v)    Enable location-based services, autonomous systems, and immersive experiences through spatial computing technologies;


(bb)  “State” shall be construed as the State defined under Article 12 of the Constitution of India;


(cc) “Strategic sector” means a strategic sector as defined in the Foreign Exchange Management (Overseas Investment) Directions, 2022, and includes any other sector or sub-sector as deemed fit by the Central Government;


(dd)  “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;


(ee) “testing data” means data used for providing an independent evaluation of the artificial intelligence system subject to training and validation to confirm the expected performance of that artificial intelligence technology before its placing on the market or putting into service;


(ff)   “use case” means a specific application of an artificial intelligence technology, subject to their inherent purpose, to solve a particular problem or achieve a desired outcome;


(gg)  “Whole-of-Government Approach” means a collaborative and integrated method of governance where all government entities, including ministries, departments, and agencies, work in a coordinated manner to achieve unified policy objectives, optimize resource utilization, and deliver services effectively to the public.


Chapter II: CATEGORIZATION AND PROHIBITION


Section 3 - Stratification of Artificial Intelligence


(1) All artificial intelligence technologies are categorised on the basis of the means of classification provided as follows –


(a) Conceptual methods of classification: These methods categorize artificial intelligence technologies through a conceptual assessment of their utilization, development, maintenance, regulation, and proliferation to examine & recognise their inherent ethical purpose. This classification is further categorised as –


(i)     Issue-to-Issue Concept Classification involves the method to determine the inherent purpose of artificial intelligence technologies on a case-to-case basis;

(ii)   Ethics-Based Concept Classification involves the method of recognising the ethics-based relationship of artificial intelligence technologies in sector-specific & sector-neutral contexts;

(iii)  Phenomena-Based Concept Classification involves the method of addressing rights-based issues associated with the use and dissemination of artificial intelligence technologies; and

(iv)  Anthropomorphism-Based Concept Classification involves the method of evaluating scenarios where AI systems conceptually simulate, mimic, imitate, replicate, or emulate human attributes, including autonomy, perception, and behavioural tendencies.


(b) Technical methods of classification: These methods classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations as –


(i)    General intelligence applications with multiple stable use cases as per relevant sector-specific & sector-neutral industrial and regulatory standards;

(ii)   General intelligence applications with multiple short-run or unclear use cases as per relevant sector-specific & sector-neutral industrial and regulatory standards; and

(iii)  Artificial intelligence applications with one or more associated standalone use cases or test cases.


(c) Commercial methods of classification: These methods involve the categorisation of commercially and industrially produced and disseminated artificial intelligence technologies subject to their inherent purpose as –


(i)     Artificial Intelligence as a Product (AI-P);

(ii)   Artificial Intelligence as a Service (AIaaS);

(iii)  Artificial Intelligence as a Component (AI-C) which includes artificial intelligence technologies directly integrated into existing products, services & system infrastructure;

(iv)  Artificial Intelligence as a System (AI-S), which includes layers or interfaces in AIaaS offerings that facilitate the integration of the capabilities of artificial intelligence technologies into existing systems, in whole or in part; and

(v)   Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital public infrastructure. 


(d) Risk-centric methods of classification: These methods classify artificial intelligence technologies based on their outcome and impact-based risks – 

(i)     Narrow risk AI systems may include artificial intelligence technologies that pose a low level of risk to individuals, society, or the environment subject to their scale, inherent purpose, technical features and technical limitations;

(ii)   Medium risk AI systems may include artificial intelligence technologies that pose a moderate level of risk to individuals, society, or the environment subject to their technical features and technical limitations;

(iii)  High risk AI systems may include artificial intelligence technologies that pose a significant level of risk to individuals, society, or the environment regardless of their scale, inherent purpose, technical features and technical limitations; and

(iv)  Unintended risk AI systems may include artificial intelligence technologies that may not be deliberately designed or developed but emerge from the complex interactions of their components & system infrastructure and may pose unforeseen risks.


(2) All artificial intelligence technologies, except as stated otherwise in sub-section (1) of Section 5, shall be examined on the basis of their outcome and impact-based risks, in accordance with the means of classification set forth in sub-section (1) and the following criteria to examine their risks and vulnerabilities –


(i) The extent to which the artificial intelligence technology has been utilized or is expected to be employed shall be considered by the IDRC;

(ii) The assessment shall include an examination of the potential harm or adverse impact, taking into account its severity and the number of individuals affected;

(iii) Where it is not feasible for data principals to opt out of the outcomes produced by the artificial intelligence technology, the reasons for such limitations shall be examined by the IDRC;

(iv) The assessment shall consider the vulnerability of data principals using the AI system, taking into account such foreseeable factors as may be prescribed which may limit the autonomy of data principals to realise and foresee the vulnerability of using the AI system; and

(v) It may be determined whether the outcomes produced by the AI system can be easily reversed.


(3) If the use case of an AI system not prohibited under Section 4 is associated with strategic sectors, then the use case of the AI system shall be examined as per sub-section (2), subject to the scope of legitimate uses designated in the provisions of Section 7 of the Digital Personal Data Protection Act, 2023 and otherwise as may be prescribed.


 

Section 4 - Prohibition of Unintended Risk AI Systems


The development, deployment, and use of unintended risk AI systems, classified on the basis of the means of classification of artificial intelligence technologies set forth in sub-section (1) of Section 3, is prohibited.


Chapter III: SECTOR-SPECIFIC STANDARDS FOR HIGH-RISK AI SYSTEMS

Section 5 - High-Risk AI Systems in Strategic Sectors


(1) Sector-specific standards are required to be developed for high-risk AI systems in strategic sectors as designated by the Central Government, on the basis of the means of classification of artificial intelligence technologies set forth in sub-section (1) of Section 3.

(2) These standards shall address issues such as safety, security, reliability, transparency, accountability, and ethical considerations, subject to the legitimate uses designated in the provisions of Section 7 of the Digital Personal Data Protection Act, 2023 and otherwise as may be prescribed.


Chapter IV: CERTIFICATION AND ETHICS CODE


Section 6 - Certification of AI Systems


(1) The IDRC shall establish a voluntary certification scheme for AI systems based on their industry use cases and risk levels, on the basis of the means of classification set forth in sub-section (1) of Section 3. The certification scheme shall be designed to promote responsible AI development and deployment while fostering innovation and avoiding unnecessary regulatory burdens.


(2) The IDRC shall maintain a National Registry of Artificial Intelligence Use Cases to record and track the development and deployment of AI systems across various sectors. The registry shall be used to inform the development and refinement of the certification scheme and to promote transparency and accountability in the AI ecosystem.


(3) The certification scheme shall be based on a set of clear, objective, and risk-proportionate criteria that assess the inherent purpose, technical characteristics, and potential impacts of AI systems. 


(4) AI systems that are classified as narrow or medium risk under Section 3 may be exempted from the certification requirement if they meet one or more of the following conditions:


(a) The AI system is still in the early stages of development or testing and has not yet achieved technical or economic thresholds for effective standardization;

(b) The AI system is being developed or deployed in a highly specialized or niche application area where certification may not be feasible or appropriate; or

(c) The AI system is being developed or deployed by start-ups, micro, small & medium enterprises, or research institutions.


(5) The IDRC shall provide guidance and support to help AI developers and deployers navigate the certification process and comply with the relevant requirements. This may include:


(a) Developing clear guidelines and best practices for AI development and deployment;

(b) Providing access to technical expertise, testing facilities, and other resources under the Digital India Programme;

(c) Establishing regulatory sandboxes or other mechanisms to enable experimentation and innovation in a controlled environment.


(6) The certification scheme shall undergo periodic review and updating every 12 months to ensure its relevance and effectiveness in response to technological advancements and market developments. The review process shall include meaningful consultation with sector-specific regulators and market stakeholders.


(7) The IDRC shall monitor and enforce compliance with the certification scheme through a combination of self-assessment, third-party audits, and regulatory oversight. Enforcement actions shall be proportionate to the risks posed by non-compliance and shall take into account the specific circumstances of the AI developer or deployer.

 

Section 7 - Ethics Code for Narrow & Medium Risk AI Systems


(1) An Ethics Code for the development, procurement, and commercialization of narrow and medium-risk AI systems shall be established to promote responsible AI development and utilization while addressing potential risks and fostering self-regulatory technical & market practices.


(2) The Ethics Code shall be based on the following principles:


(a) AI systems shall respect human dignity, well-being, and fundamental rights;

(b) AI systems shall be fair, non-discriminatory, and promote social justice;

(c) AI systems shall be transparent, explainable, and accountable;

(d) AI systems shall respect privacy and data protection frameworks in accordance with the provisions of the Digital Personal Data Protection Act, 2023;

(e) AI systems shall be secure, safe, and robust;

(f) AI systems shall promote environmental sustainability and minimize negative ecological impacts;

(g) AI systems shall foster human agency, oversight, and the ability for humans to make informed decisions;

(h) AI systems shall be developed and deployed with consideration for their societal and ethical implications, promoting the common good and public interest.


(3) The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific & sector-neutral laws and regulations. To this end:


(a) AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licenses, fostering transparency and enabling public scrutiny;

(b) The use of OSS in AI development shall not exempt AI systems from complying with the principles and requirements set forth in this Ethics Code;

(c) AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems;

(d) The IDRC shall support research and development initiatives under the Digital India Programme that leverage OSS to create AI tools and frameworks that prioritize ethics, safety, and inclusivity.


(4) The Ethics Code shall be reviewed and updated periodically to reflect advancements in AI technologies, and emerging best practices in responsible AI development and deployment.


(5) Compliance with the Ethics Code shall be voluntary for narrow and medium-risk AI systems. However, the IDRC may consider mandating adherence to specific provisions of the Code for AI systems deployed in sensitive domains or those with significant potential for societal impact.


Chapter V: KNOWLEDGE MANAGEMENT AND DECISION-MAKING


Section 8 - Model Standards on Knowledge Management


(1) The IDRC shall develop, document and promote comprehensive model standards on knowledge management and decision-making processes concerning the development, maintenance, and compliance of high-risk AI systems.


(2) These model standards shall encompass the following areas:


(a) Effective knowledge management practices and procedures to ensure the quality, reliability, and security of data used for training, validating, and improving AI systems.

(b) Model governance frameworks that define the roles and responsibilities of individuals, departments, or committees involved in AI model development, deployment, and monitoring.

(c) Transparent decision-making procedures, including the establishment of model selection criteria, model performance assessment, and ethical considerations in AI system operation.


(3) All entities that are engaged in the development, deployment, or utilization of high-risk AI systems shall be bound by the model standards on knowledge management and decision-making as provided by this section.


(4) Entities already operating AI systems within the jurisdiction of India shall comply with the prescribed model standards within a reasonable timeframe, as determined by the IDRC. The compliance timeline may vary based on the complexity and risk levels associated with AI systems.


(5) The Central Government shall empower the IDRC or agencies to establish a knowledge management registry process to enable the standardisation of various knowledge management practices and procedures associated with the life cycle of high-risk AI systems.


(6) The entities responsible for the development of high-risk AI systems shall be required to submit regular audit reports to the IDRC, outlining their adherence to the model standards for knowledge management and decision-making.


(7) Failure to adhere to the prescribed model standards for knowledge management and decision-making shall result in penalties and regulatory sanctions, which may include monetary fines, suspension of AI operations, or other regulatory actions as determined and prescribed by the IDRC.


(8) The IDRC shall encourage the sharing of AI-related knowledge, including datasets, models, and algorithms, through open source software repositories and platforms, subject to applicable intellectual property rights and the provisions of the Digital Personal Data Protection Act, 2023. 

 

Section 9 - IndiaAI Development & Regulation Council (IDRC)


(1) With effect from such date as the Central Government may, by notification, appoint, there shall be established, for the purposes of this Act, a Council to be called the IndiaAI Development & Regulation Council (IDRC).


(a) The Council shall be constituted as a statutory and regulatory body with a whole-of-government approach to coordinate across government bodies, ministries, and departments;

(b) The Council shall be a body corporate by the name aforesaid, having perpetual succession and a common seal, with power, subject to the provisions of this Act, to acquire, hold and dispose of property, both movable and immovable, and to contract and shall, by the said name, sue or be sued;

(c) The headquarters of the Council shall be at such place as the Central Government may notify;

(d) The Council shall consist of a Chairperson and such number of other Members as the Central Government may notify;

(e) The Chairperson and other Members shall be appointed by the Central Government in such manner as may be prescribed;

(f) The Chairperson and other Members shall be persons of ability, integrity and standing who possess special knowledge or practical experience in the fields of data & artificial intelligence governance, administration or implementation of laws related to social or consumer protection, dispute resolution, information and communication technology, digital economy, law, regulation or techno-regulation, or in any other field which in the opinion of the Central Government may be useful to the Council, and at least one among them shall be an expert in the field of law;


We have not designated the functions of the IDRC in detail, considering this draft Act to be a proposal.


Chapter VI: GUIDANCE FOR AGREEMENTS AND MONITORING


Section 10 - Guidance Principles for Agreements


(1) The following guidance principles shall apply to AI-related agreements to promote transparent, fair, and responsible practices in the development, deployment, and use of AI technologies:


(a) AI Software License Agreement (ASLA):


(i) The ASLA shall clearly define the scope of rights granted to the licensee, including any limitations on the use, modification, or distribution of the AI software.

(ii) The ASLA shall address intellectual property rights, the term and termination of the agreement, warranties, and indemnification provisions.


(b) AI Service Level Agreement (SLA):


(i) The SLA shall define the level of service to be provided, including performance metrics, service availability, and support.

(ii) The SLA shall establish mechanisms for monitoring and measuring service performance, change management, and problem resolution.


(c) AI End-User License Agreement (EULA) or AI End-Client License Agreement (ECLA):


(i) The EULA or ECLA shall clearly define the permitted uses of the AI system, user obligations, and data privacy provisions in line with the Digital Personal Data Protection Act, 2023.

(ii) The EULA or ECLA shall address intellectual property rights, warranties, and limitations of liability.


(d) AI Explainability Agreement:


(i) The AI Explainability Agreement shall require the AI provider to furnish clear and understandable explanations for the outputs of AI systems.

(ii) The agreement shall specify the format and frequency of documentation and reporting on the AI system's decision-making processes, and address the role of human review and intervention.


(2) The IDRC shall develop and publish model AI-related agreements incorporating these guidance principles, taking into account the unique characteristics and risks associated with different types of AI systems.


(3) The model agreements shall be reviewed and updated periodically to reflect advancements in AI technologies, evolving best practices, and changes in the legal and regulatory landscape.


(4) Entities engaging in the development, deployment, or use of AI systems shall be encouraged to adopt and customize the model agreements to suit their specific needs and contexts, while adhering to the core principles of transparency, fairness, and responsibility.


(5) The adoption of these guidance principles and model agreements shall be voluntary, but the IDRC may consider mandating their use for high-risk AI systems or in specific sectors where the potential risks are deemed significant. 

 

Section 11 - Post-Deployment Monitoring of High-Risk AI Systems


(1) High-risk AI systems shall be subject to ongoing monitoring and evaluation with a whole-of-government approach throughout their lifecycle to ensure their safety, security, reliability, transparency, accountability, and compliance with applicable laws, regulations, and ethical standards.


(2) The post-deployment monitoring shall be conducted by the providers, deployers, or users of the high-risk AI systems, as appropriate, in accordance with the guidelines established by the IDRC.


(3) The IDRC shall develop and establish comprehensive guidelines for the post-deployment monitoring of high-risk AI systems, which shall include, but not be limited to, the following:


(a)    Identification and assessment of potential risks, including performance deviations, malfunctions, unintended consequences, security vulnerabilities, data breaches, and adverse impacts on individuals, society, or the environment;

(b)    Evaluation of the effectiveness of risk mitigation measures and implementation of necessary updates, corrections, or remedial actions;

(c)    Continuous improvement of the AI system's performance, reliability, and trustworthiness based on real-world feedback and evolving best practices; and

(d)    Regular reporting to the IDRC on the findings and actions taken as a result of the post-deployment monitoring, including any incidents, malfunctions, or adverse impacts identified, and the measures implemented to address them.


(4) The post-deployment monitoring shall involve collaboration and coordination among providers, deployers, users, and relevant stakeholders, including regulatory authorities, subject matter experts, and civil society organizations, to ensure a comprehensive and inclusive approach to AI system oversight.


(5) The IDRC shall establish mechanisms for the independent auditing and verification of the post-deployment monitoring activities, ensuring transparency, accountability, and public trust in the governance of high-risk AI systems.


​(6) Failure to comply with the post-deployment monitoring requirements and guidelines established by the IDRC may result in penalties, regulatory sanctions, or other enforcement actions, as determined by the IDRC and in accordance with applicable laws and regulations.

Chapter VII

Chapter VII: REPORTING AND SHARING


Section 12 - Third-Party Vulnerability Reporting


(1) The IDRC shall establish a secure and accessible digitised platform for third-party vulnerability reporting of risks associated with AI systems. This platform shall allow individuals and organizations to anonymously report vulnerabilities without fear of retaliation.


(2) The IDRC shall establish a vulnerability response team to promptly review and assess reported vulnerabilities. This team shall have the expertise and resources to investigate vulnerabilities, determine their severity, and develop mitigation strategies.


(3) The IDRC shall establish a communication protocol for informing affected parties of identified vulnerabilities and coordinating mitigation efforts. This protocol shall ensure that vulnerabilities are addressed in a timely and effective manner. 

 

Section 13 - Incident Reporting and Mitigation


(1) Developers, operators, and users of AI systems shall have mechanisms in place to report incidents related to AI systems. These mechanisms shall be easily accessible and user-friendly to encourage reporting. 


(2) Priority shall be given to incidents related to high-risk AI systems.


(3) The IDRC shall establish a central repository for incident reports. 


(4) This repository shall allow for the collection, analysis, and sharing of incident data to identify trends and potential risks.


(5) The IDRC shall develop and publish guidelines for incident reporting and mitigation. 


(6) These guidelines shall provide clear and actionable steps for organizations to follow in the event of an AI-related incident.

 

Guidance Principles for Incident Reporting and Incident Mitigation


We have recommended some guidance principles for incident reporting and incident mitigation independent of the provisions of the draft Bill.

Incident Reporting   


  • Establish clear reporting mechanisms, such as a dedicated incident reporting hotline, a secure online portal, or a designated email address, where feasible

  • Encourage timely reporting to enable prompt mitigation

  • Provide clear reporting guidelines for AI-related incidents, and mention them in any AI-related agreements and bye-laws   

  • Protect the confidentiality of incident reports

Principles for Incident Mitigation

  • Assess the incident and its severity

  • Contain the incident by measures involving the isolation of AI systems, disabling specific functionalities and other measures

  • Investigate the incident

  • Remediate the incident by measures such as patching software, updating security protocols, or retraining employees

  • Communicate the incident

  • Review and improve incident response procedures
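Purely as an illustration of how an organisation might operationalise the reporting and mitigation principles above (nothing here is prescribed by the draft Bill; all class and field names are assumptions), an incident record and its lifecycle could be sketched as:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3  # incidents involving high-risk AI systems get priority

class Status(Enum):
    REPORTED = "reported"
    CONTAINED = "contained"
    REMEDIATED = "remediated"

@dataclass
class IncidentReport:
    """Illustrative incident record; fields mirror the guidance principles."""
    description: str
    severity: Severity
    confidential: bool = True          # protect the confidentiality of reports
    status: Status = Status.REPORTED
    actions: list = field(default_factory=list)

    def contain(self, measure: str) -> None:
        # e.g. isolate the AI system or disable specific functionality
        self.actions.append(f"contain: {measure}")
        self.status = Status.CONTAINED

    def remediate(self, measure: str) -> None:
        # e.g. patch software, update security protocols, retrain employees
        self.actions.append(f"remediate: {measure}")
        self.status = Status.REMEDIATED

report = IncidentReport("Model produced unsafe output", Severity.HIGH)
report.contain("disabled the affected endpoint")
report.remediate("patched the output filter")
print(report.status.value)  # prints "remediated"
```

The ordering of calls mirrors the sequence in the principles: assess, contain, then remediate, with communication and review following outside this sketch.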


 

Section 14 - Responsible Information Sharing


(1) Developers, operators, and users of AI systems shall share information in a responsible and ethical manner. This includes ensuring that information is accurate, complete, and relevant to the purpose of sharing.


(2) Information sharing shall be transparent, verifiable, and subject to appropriate safeguards to protect privacy and data security, which includes obtaining informed consent from data principals whose data is being shared.


(3) Data fiduciaries and third-party companies must describe a set of general practices on information sharing applicable to the developers, operators and users of AI systems in any agreement involving the use, development and commercialisation of artificial intelligence technologies.


(4) The IDRC shall develop guidelines for responsible information sharing in the context of AI.

Chapters VIII & IX

Chapter VIII: INTELLECTUAL PROPERTY AND STANDARDS

Section 15 - Intellectual Property Protections


(1) In recognition of the unique challenges and opportunities presented by the development and use of artificial intelligence systems, AI systems must be protected through a combination of existing intellectual property (IP) rights, such as copyright, patents, and design rights, as well as new and evolving IP concepts specifically tailored to address the spatial aspects of AI systems.


(2) The objectives of providing a combination of existing intellectual property rights are to –


(a) Encourage innovation in the development of AI systems by providing developers with secure and enforceable rights over their creations and innovations;

(b) Enhance the interoperability of AI systems by ensuring that relevant contractual arrangements are not unduly hindered by IP restrictions;

(c) Promote fair competition in the AI market by preventing the unauthorized appropriation and exploitation of IP assets generated in India;

(d) Protect the privacy and security of individuals by ensuring that the combinations of intellectual property protections do not compromise the confidentiality and integrity of personal data as per the provisions of the Digital Personal Data Protection Act, 2023;


(3) The IDRC shall establish consultative mechanisms in cooperation with the National Data Management Office and the relevant Centres of Excellence for AI for the identification, protection, and enforcement of intellectual property rights, including any intellectual property rights emanating from knowledge management procedures and practices. These mechanisms shall address issues such as:


(a) The definition and scope of combinations of intellectual property protections including their limitations as per the legitimate uses designated in the provisions of the Section 7 of the Digital Personal Data Protection Act, 2023 and otherwise as may be prescribed;

(b) The compatibility of such protections with existing IP laws;

(c) The interoperability considerations for the combinations of intellectual property protections;


(4) The use of open source software in AI systems shall be subject to the terms and conditions of the respective open source licenses. The IDRC shall provide guidance on the compatibility of different open source licenses with the intellectual property protections for AI systems. 

 

Chapter IX: SECTOR-NEUTRAL & SECTOR-SPECIFIC STANDARDS


Section 16 - Shared Sector-Neutral Standards


The IDRC shall develop shared sector-neutral standards for the responsible development, deployment, and use of AI systems based on the following set of principles:


(1) Transparency and Explainability


(a) AI systems should be designed and developed in a transparent manner, allowing users to understand how they work and how decisions are made.

(b) AI systems should be able to explain their decisions in a clear and concise manner, allowing users to understand the reasoning behind their outputs.


(2) Fairness and Bias


(a) AI systems should be regularly monitored for bias and discrimination, and appropriate mitigation measures should be implemented to address any identified issues.


(3) Safety and Security


(a) AI systems should be designed and developed with safety and security by design & default.

(b) AI systems should be protected from unauthorized access, modification, or destruction.


(4) Human Control and Oversight


(a) AI systems should be subject to human control and oversight to ensure that they are used responsibly.


(5) Open Source and Interoperability


(a) The development of shared sector-neutral standards for AI systems shall leverage open source software and open standards to promote interoperability, transparency, and collaboration.

(b) The IDRC shall encourage the participation of open source communities and stakeholders in the development of AI standards.

Chapter X

Chapter X: CONTENT PROVENANCE


Section 17 - Content Provenance and Identification


(1) AI systems that produce AI-generated content or manipulate content are required to have mechanisms in place to identify the source of the content and to maintain a record of its provenance.


(2) The mechanisms to identify the source and enable content provenance may be developed in-house, or companies may rely on tools and mechanisms developed by third parties.


(3) This record shall include information such as the date and time the content was generated or manipulated, the identity of the AI system that generated or manipulated the content, and any other relevant information as may be prescribed.


(4) AI systems shall use watermarking techniques to embed identifying information into generated or manipulated content in a manner that is robust and explainable, and that can be used to verify the authenticity of the content and to differentiate between AI-generated content and content not produced or manipulated by an AI system.


(5) The watermarking or other identifying information shall be accessible to the public in a transparent manner, which may or may not involve publishing the watermarking or identifying information in a public repository or making it available through an open API.


(6) The IDRC shall develop and publish guidelines for the implementation, licensing and use of watermarking and other identifying techniques in AI systems. These guidelines shall address issues such as –


(i) the type of information to be embedded in watermarks; 

(ii) the licensing of the watermarking techniques;

(iii) the robustness of watermarking techniques; and 

(iv) the accessibility of watermarking information.


(7) The IDRC shall certify the use of watermarking techniques in AI systems and assess the effectiveness of these techniques in preventing the misuse of AI-generated content.


(8) The provisions of this Section shall apply to all AI systems that generate or manipulate content, regardless of the purpose or intended use of the content, including those AI systems that generate text, images, audio, video or any other forms of content.
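For illustration only (this is not part of the draft Bill), the provenance record described in sub-section (3) could be sketched as a minimal data structure; all field names below are assumptions, and a hash is used simply to bind the record to the exact content:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    """Minimal sketch of a content provenance record per Section 17(3).

    Field names are illustrative assumptions, not prescribed by the Bill.
    """
    system_id: str     # identity of the AI system that generated the content
    generated_at: str  # date and time of generation or manipulation (ISO 8601)
    content_hash: str  # hash binding the record to the exact content
    action: str        # "generated" or "manipulated"

def make_record(system_id: str, content: bytes, action: str = "generated") -> ProvenanceRecord:
    return ProvenanceRecord(
        system_id=system_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
        content_hash=hashlib.sha256(content).hexdigest(),
        action=action,
    )

record = make_record("example-model-v1", b"some AI-generated text")
print(json.dumps(asdict(record), indent=2))
```

Publishing such records in a public repository or via an open API, as sub-section (5) contemplates, would then amount to exposing these serialised records for verification.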

Chapter XI

Chapter XI: EMPLOYMENT AND INSURANCE

Section 18 - Employment and Skill Security Standards


(1) The IDRC shall develop consultative guidelines on:


(a) the impact of high-risk AI systems on sector-specific employment opportunities in line with relevant sector-specific regulations and standards of employment.

(b) employment security in the context of the deployment of high-risk AI systems, emphasizing fair labour practices.


(2) Data fiduciaries employing high-risk AI systems are required to implement safeguards to examine and protect the rights and livelihood of affected employees to the extent possible.


(3) Employers deploying high-risk AI systems are required to engage in consultation with their employees and relevant employee representatives to establish fair transition plans that may include retraining, redeployment, or alternative employment opportunities.


(4) Data fiduciaries or third-party institutions actively involved in the development, application, or research of artificial intelligence technologies designated as high-risk AI systems shall facilitate skill development initiatives for their workforce and, where appropriate, offer training programs for acquiring new skills.


(5) Sector-specific skill development programs and vocational training centres must be promoted to address the evolving skill requirements arising from AI technology advancements.

(6) The IDRC shall encourage the development of skills related to open source software development and collaboration in the AI workforce through training programs, certifications, and other initiatives.

 

Section 19 - Insurance Policy for AI Technologies


(1) Any Data Fiduciary or class of Data Fiduciaries notified as a Significant Data Fiduciary under section 10 of the Digital Personal Data Protection Act, 2023, that develops, deploys, or utilises high-risk AI systems shall be mandated to obtain comprehensive insurance coverage to manage and mitigate potential risks associated with AI operations.


(2) The Insurance Regulatory and Development Authority of India (IRDAI) shall, in conjunction with relevant ministries, specify the minimum insurance coverage standards for AI technologies. The insurance coverage requirements may include –


(i) technical failures; 

(ii) data breaches; 

(iii) technical forms of algorithmic bias subject to the inherent purpose of the artificial intelligence technology;

(iv) data protection & privacy risks; and

(v) accidents.


(3) Entities deploying high-risk AI systems must maintain records of their insurance policies, ensuring that these policies cover a comprehensive spectrum of AI-related risks and liabilities.


(4) Insurance companies offering AI technology coverage shall operate within the guidelines and directives laid out by the IRDAI.


(5) The IRDAI shall establish rigorous underwriting criteria and risk assessment procedures specific to AI-related insurance policies, which include –


(i) assessment methods; 

(ii) premium calculation models; and 

(iii) claims processing standards.


(6) Insurance providers shall be responsible for presenting transparent and detailed insurance policies tailored to the unique risks of AI technologies, and they must promptly address claims and compensate policyholders as per the policy terms.


(7) Data fiduciaries deploying AI systems shall be obligated to furnish evidence of their insurance coverage when procuring or deploying high-risk AI systems.


(8) The Central Government, through the Ministry of Electronics and Information Technology, may necessitate AI entities to furnish periodic reports on their insurance policies, including claims made and settled. 

Chapters XII-XV

Chapter XII: APPEAL AND ALTERNATIVE DISPUTE RESOLUTION

Section 20 – Appeal to Appellate Tribunal


(1) The Appellate Tribunal established under the Telecom Regulatory Authority of India Act, 1997, shall also serve as the Appellate Tribunal for the purposes of this Act.


(2) Any person aggrieved by any direction, decision, or order of the IDRC under this Act may prefer an appeal to the Appellate Tribunal within a period of 60 days from the date on which a copy of the direction, decision, or order is received by the person.


(3) The Appellate Tribunal may entertain an appeal after the expiry of the said period of 60 days if it is satisfied that there was sufficient cause for not filing it within that period.


(4) On receipt of an appeal, the Appellate Tribunal may, after giving the parties to the appeal an opportunity of being heard, pass such orders thereon as it thinks fit, confirming, modifying, or setting aside the direction, decision, or order appealed against.


(5) The Appellate Tribunal shall send a copy of every order made by it to the parties to the appeal and to the IDRC.


(6) The appeal filed before the Appellate Tribunal shall be dealt with by it as expeditiously as possible, and endeavour shall be made by it to dispose of the appeal finally within 6 months from the date of receipt of the appeal.


(7) The Appellate Tribunal may, for the purpose of examining the legality, propriety, or correctness of any direction, decision, or order of the IDRC, on its own motion or otherwise, call for the records relevant to disposing of such appeal and make such orders as it thinks fit.


(8) The provisions of sections 14-I to 14K of the Telecom Regulatory Authority of India Act, 1997, shall, mutatis mutandis, apply to the Appellate Tribunal in the discharge of its functions under this Act, as they apply to it in the discharge of its functions under that Act.


(9) Any person aggrieved by any decision or order of the Appellate Tribunal may file an appeal to the Supreme Court within a period of 60 days from the date of communication of the decision or order of the Appellate Tribunal.


(10) The Appellate Tribunal shall endeavour to function as a digital office to the extent practicable, with the filing of appeals, hearings, and pronouncement of orders being conducted through digital means. 

 

Section 21 – Orders passed by Appellate Tribunal to be executable as decree


(1) An order passed by the Appellate Tribunal under this Act shall be executable by it as a decree of civil court, and for this purpose, the Appellate Tribunal shall have all the powers of a civil court.


(2) Notwithstanding anything contained in sub-section (1), the Appellate Tribunal may transmit any order made by it to a civil court having local jurisdiction and such civil court shall execute the order as if it were a decree made by that court. 

 

Section 22 – Alternate Dispute Resolution


If the IDRC is of the opinion that any complaint may be resolved by mediation, it may direct the parties concerned to attempt resolution of the dispute through mediation by such mediator as the parties may mutually agree upon, or as provided for under any law for the time being in force in India.


We have provided a list of suggested provisions that may be expected in the draft Bill but do not require substantive drafting here.

 

CHAPTER XIII: MISCELLANEOUS 


Section 23 - Power to Make Rules


(1) The Central Government may, by notification, make rules to carry out the provisions of this Act.

(2) In particular, and without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely:—


(a) The manner of appointment, qualifications, terms and conditions of service of the Chairperson and Members of the IDRC under Section 9(1)(e);


(b) The form, manner, and fee for filing an appeal before the Appellate Tribunal under Section 20(2);

(c) The procedure to be followed by the Appellate Tribunal while dealing with an appeal under Section 20(8);

(d) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by rules.


(3) Every rule made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the rule or both Houses agree that the rule should not be made, the rule shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that rule.

 

Section 24 - Power to Make Regulations


(1) The IDRC may, by notification, make regulations consistent with this Act and the rules made thereunder to carry out the provisions of this Act.


(2) In particular, and without prejudice to the generality of the foregoing power, such regulations may provide for all or any of the following matters, namely —


(a) The criteria and process for the stratification of AI systems based on their inherent outcome and impact-based risks, as specified in Section 3;

(b) The standards, guidelines, and best practices for the development, deployment, and use of AI systems, including those related to transparency, explainability, fairness, safety, security, and human oversight;

(c) The procedures and requirements for the certification of AI systems, including the criteria for exemptions and the maintenance of the National Registry of Artificial Intelligence Use Cases, as outlined in Section 6;

(d) The principles and guidelines for the Ethics Code for narrow and medium-risk AI systems, as mentioned in Section 7;

(e) The model standards on knowledge management and decision-making processes for high-risk AI systems, as specified in Section 8;

(f) The guidelines and mechanisms for post-deployment monitoring of high-risk AI systems, as outlined in Section 11;

(g) The procedures and protocols for third-party vulnerability reporting, incident reporting, and responsible information sharing, as mentioned in Sections 12, 13, and 14;

(h) The mechanisms for the identification, protection, and enforcement of intellectual property rights related to AI systems, as specified in Section 15;

(i) The shared sector-neutral standards for the responsible development, deployment, and use of AI systems, as outlined in Section 16;

(j) The guidelines and requirements for content provenance and identification in AI-generated content, as mentioned in Section 17;

(k) The consultative guidelines on employment security and skill development in the context of AI deployment, as specified in Section 18;

(l) The insurance coverage requirements and risk assessment procedures for entities developing or deploying high-risk AI systems, as outlined in Section 19;

(m) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by regulations.


(3) Every regulation made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the regulation or both Houses agree that the regulation should not be made, the regulation shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that regulation.

 

Section 25 - Protection of Action Taken in Good Faith


No suit, prosecution or other legal proceedings shall lie against the Central Government, the IDRC, its Chairperson and any Member, officer or employee thereof for anything which is done or intended to be done in good faith under the provisions of this Act or the rules made thereunder.

 

Section 26 - Offenses and Penalties [**]


CHAPTER XIV: REPEAL AND SAVINGS 


Section 27 - Savings Clause


(1) The provisions of this Act shall be in addition to, and not in derogation of, the provisions of any other law for the time being in force.


(2) Nothing in this Act shall affect the validity of any action taken or decision made by any entity in relation to the development, deployment, or use of AI systems prior to the commencement of this Act, provided such action or decision was in accordance with the laws in force at that time.


(3) Any investigation, legal proceeding, or remedy in respect of any right, privilege, obligation, liability, penalty, or punishment under any law, initiated or arising before the commencement of this Act, shall be continued, enforced, or imposed as if this Act had not been enacted.


(4) Nothing in this Act shall be construed as preventing the Central Government from making any rules or regulations, or taking any action, which it considers necessary for the purpose of removing any difficulty that may arise in giving effect to the provisions of this Act.

 

CHAPTER XV: FINAL PROVISIONS 


Section 28 - Power to Remove Difficulties


(1) If any difficulty arises in giving effect to the provisions of this Act, the Central Government may, by order published in the Official Gazette, make such provisions, not inconsistent with the provisions of this Act as may appear to it to be necessary for removing the difficulty. 


(2)    No such order shall be made under this Section after the expiry of a period of five years from the commencement of this Act.


(3)    Every order made under this Section shall be laid, as soon as may be after it is made, before each House of Parliament.

 

Section 29 - Amendment of [Other Legislation]


(1) The Digital Personal Data Protection Act, 2023 shall be amended as follows:

(a) In Section 2, after clause (x), the following clause shall be inserted:


"(xa) 'Artificial Intelligence system' shall have the same meaning as assigned to it under clause (a) of Section 2 of the Artificial Intelligence (Development & Regulation) Act, 2023."


(b) In Section 7, after sub-section (6), the following sub-section shall be inserted:


"(7) The processing of personal data by an Artificial Intelligence system shall be considered a legitimate purpose under this section, subject to compliance with the provisions of the Artificial Intelligence (Development & Regulation) Act, 2023 and the rules and regulations made thereunder."


(2) The Competition Act, 2002 shall be amended as follows:


(a) In Section 2, after clause (r), the following clause shall be inserted:


"(ra) 'Artificial Intelligence system' shall have the same meaning as assigned to it under clause (a) of Section 2 of the Artificial Intelligence (Development & Regulation) Act, 2023."


(b) In Section 19, after sub-section (6), the following sub-section shall be inserted:


"(7) While determining whether an agreement has an appreciable adverse effect on competition under sub-section (1), the Commission shall also consider the impact of the use of Artificial Intelligence systems by the parties to the agreement, in accordance with the factors specified in Section 17(2) of the Artificial Intelligence (Development & Regulation) Act, 2023."


(3) The Patents Act, 1970 shall be amended as follows:


(a) In Section 2, after clause (1)(j), the following clause shall be inserted:


"(ja) 'Artificial Intelligence system' shall have the same meaning as assigned to it under clause (a) of Section 2 of the Artificial Intelligence (Development & Regulation) Act, 2023."


(b) In Section 3, after clause (k), the following clause shall be inserted:


"(l) a computer programme per se, including an Artificial Intelligence system, unless it is claimed in conjunction with novel hardware."


(4) The Copyright Act, 1957 shall be amended as follows:


(a) In Section 2, after clause (ffc), the following clause shall be inserted:


"(ffd) 'Artificial Intelligence system' shall have the same meaning as assigned to it under clause (a) of Section 2 of the Artificial Intelligence (Development & Regulation) Act, 2023."


(b) In Section 13, after sub-section (3), the following sub-section shall be inserted:


"(3A) In the case of a work generated by an Artificial Intelligence system, the author shall be the person who causes the work to be created, unless otherwise provided by the Artificial Intelligence (Development & Regulation) Act, 2023 or the rules and regulations made thereunder."


(5) The Consumer Protection Act, 2019 shall be amended as follows:


(a) In Section 2, after clause (1), the following clause shall be inserted:


"(1A) 'Artificial Intelligence system' shall have the same meaning as assigned to it under clause (a) of Section 2 of the Artificial Intelligence (Development & Regulation) Act, 2023."


(b) In Section 2, after clause (47), the following clause shall be inserted:


"(47A) 'Unfair trade practice' includes the use of an Artificial Intelligence system in a manner that violates the provisions of the Artificial Intelligence (Development & Regulation) Act, 2023 or the rules and regulations made thereunder, and causes loss or injury to the consumer."


New Artificial Intelligence Strategy for India

Version 1.0

November 7, 2023

Authors: Abhivardhan & Akash Manwani, Indic Pacific Legal Research

AI Policy

#1

Strengthen and empower India’s Digital Public Infrastructure to transform its potential to integrate governmental and business use cases of artificial intelligence at a whole-of-government level.

Notes

Whole-of-government approach

A whole-of-government approach to AI is essential for ensuring that AI is used effectively and efficiently across government. This requires coordination and collaboration between different government agencies. Such an approach to AI can help to avoid duplication of effort, ensure consistency of approach, and maximize the benefits of AI in a flexible and coordinated manner.


 

#2

Transform and rejuvenate forums of judicial governance and dispute resolution to keep them effectively prepared to address and resolve disputes related to artificial intelligence, spanning issues from data protection & consent to algorithmic activities & operations and corporate ethics.

Notes

Effective preparedness of courts, tribunals and dispute resolution forums

Forums of judicial governance and dispute resolution play a crucial role in ensuring that AI is used in a fair and just manner. These forums provide a platform for individuals and businesses to seek redress in the event of disputes related to AI.

It would become necessary for courts, tribunals and dispute resolution forums to address the interpretability and maintainability of technology law disputes at various levels, as proposed:

  • Level 1: Data Protection / Privacy / Consent / Processing Issues

  • Level 2: Level 1 + Sector-specific Civil Law Issues

  • Level 3: Algorithmic Use and Ethics Issues

  • Level 4: Level 3 + Issues related to AI Governance in Companies / Government Bodies

  • Level 5: AI and Corporate Practice Issues + Sector-specific Competition Law / Trade Law / Investment Law Issues

  • Level 6: Level 5 + Telecom Arbitration / Technology Arbitration

Reasonable Distinction of Legal and Policy Issues

For courts, tribunals and dispute resolution forums to address and resolve disputes related to artificial intelligence and law, they would need to adopt a technology-neutral approach to interpret and examine the veracity of legal issues related to artificial intelligence use, proliferation and democratisation, based on a reasonable distinction of legal and policy issues, as proposed:

  • Data Protection / Privacy / Consent Issues

  • Data Processing and Pseudonymisation Issues

  • Legitimate Use of Data-related Issues

  • Data Erasure / Right to be Forgotten Issues

  • Contractual Disputes between Data Processors, Consumers and Data Fiduciaries

  • Jurisdiction and Cross-Border Ownership and Liability Questions

  • Transboundary Flow of Data

  • Algorithmic Ethics Issues in Company Law

  • Algorithmic Transparency and Bias Issues in Commercial Law

  • Regulation and Compliance of Algorithmic Activities & Operations of AI Use Cases, subject to their Technical Features and Commercial Viability

  • Artificial Intelligence Governance Issues at Business-to-Business & Business-to-Government levels.

  • AI-related Mergers & Acquisitions Issues

  • AI-related Investment Issues

  • Arbitrability of Telecom Disputes Arising out of use of Artificial Intelligence Technologies

AI Diplomacy

#3

Focus on the socio-technical empowerment and skill mobility for businesses, professionals, and academic researchers in India and the Global South to mobilize and prepare for the proliferation of artificial intelligence & its versatile impact across sectors.

Notes

 

Provide training and education on AI preparedness

Educate businesses, professionals, and academic researchers in India and the Global South so that they are prepared for the risks and proliferation of artificial intelligence technologies.

Promote AI adoption

Enable AI learning and mobilization among businesses, professionals and academic researchers beyond preparedness, so that they can adopt and utilise relevant AI use cases. This, in turn, enables them to assist regulators in India and Global South countries in developing reasonable compliance frameworks and industrial standardisation ecosystems.


 

#4

Enable safer and commercially productive AI & data ecosystems for startups, professionals and MSMEs in the Global South countries.

Notes

Enable Safer AI & Data Ecosystems

  • Aid start-ups, professionals, and MSMEs in the Global South to navigate the complexities of AI with confidence and security.

  • Promote risk mitigation, ensuring that these entities can explore AI and data-driven ventures without excessive threats to their businesses.

  • Foster innovation by creating an environment where start-ups, professionals, and MSMEs can experiment with AI solutions, driving economic growth.

  • Encourage foreign and domestic investments, positioning the Global South as an attractive hub for AI entrepreneurship and development.

 


 

#5

 

Bridge economic and digital cooperation with countries in the Global South to promote the implementation of sustainable regulatory and enforcement standards, where the lack of regulation of digital technologies, especially artificial intelligence, becomes an unintended systemic, economic and political risk.

Notes

Bridge Economic and Digital Cooperation to Promote Sustainable Regulatory and Enforcement Standards

 

  • Address the inherent risks posed by the absence of regulations on digital technologies, reducing systemic, economic, and political vulnerabilities.

  • Encourage knowledge exchange and best practices sharing among nations, enabling the implementation of sustainable regulatory and enforcement standards for AI and digital technologies.

  • Enhance the digital readiness of Global South countries, positioning them to tap into the opportunities presented by AI while mitigating risks and uncertainties.

  • Strengthen diplomatic and economic relationships, creating a mutually supportive environment for nations as they navigate the complexities of AI and digital ecosystems.

  • Position the Global South as a collective force in shaping AI regulations and standards, allowing its members to have a more influential and balanced role in the global AI landscape.

AI Entrepreneurship

#6

Develop and promote India-centric, locally viable commercial solutions in the form of AI products & services.

Notes

 

Promote innovation and economic growth

Developing and promoting India-centric, locally viable commercial solutions in the form of AI products and services can help to promote innovation and economic growth. AI-powered products and services can create new jobs, boost productivity, and open up new markets.

Encourage the development of locally viable AI solutions in India

This can reduce India's reliance on foreign technology, making India more resilient to external shocks and giving it more control over its own economic destiny.


 

#7

Enable the industry standardization of sector-specific technical & commercial AI use cases.

Notes

Enable Industry Standardization

  • Promote consistency and interoperability in AI applications across sectors, reducing fragmentation and enhancing efficiency.

  • Foster the development of clear benchmarks for AI use cases, facilitating seamless integration and promoting fair competition.

  • Position India to lead in sector-specific AI use cases, attracting investments and fostering innovation in targeted industries.

  • Empower professionals and businesses by offering a structured approach to AI adoption, reducing barriers to entry and risks associated with uncertainty.


 

#8

Subsidize & incentivize the availability of compute infrastructure, and technology ecosystems to develop AI solutions for local MSMEs and emerging start-ups.

Notes

Provide financial assistance to SMEs and start-ups to purchase cloud computing resources 
 
  • Provide financial assistance to SMEs and start-ups to purchase cloud computing resources, such as compute power, storage, and networking. This will make it more affordable for SMEs and start-ups to access the resources they need to develop and deploy AI solutions.

Establish AI innovation hubs

  • Establish AI innovation hubs across the country. These hubs will provide SMEs and start-ups with access to compute infrastructure, technology ecosystems, and expertise. The hubs can also help to foster collaboration between SMEs, start-ups, and other stakeholders.

Partner with universities and research institutions 

  • Partner with universities and research institutions to develop AI curriculum and to provide training to SMEs and start-ups on AI. This will help to ensure that SMEs and start-ups have the skills and knowledge they need to develop and deploy AI solutions.


 

#9

Establish a decentralized, localized & open-source data repository for AI test cases & use cases and their training models, with services to annotate & evaluate models, and develop a system of incentives encouraging users to contribute data and to annotate and evaluate models.

Notes

Establish Decentralized Data Repository

  • Facilitate accessibility to AI test cases, use cases, and training models, promoting transparency and innovation within the AI ecosystem on a sector-wide basis.

  • Encourage the development of localized, context-aware AI solutions that are adapted to the nuances and requirements of different regions and communities.

  • Foster open-source collaboration, allowing AI practitioners and developers to contribute, annotate, and evaluate models, enhancing knowledge sharing and the quality of AI systems.

  • Enhance the quality of AI models through crowdsourced annotation and evaluation, leading to better-performing, more reliable AI applications.

  • Establish a system of incentives to motivate users to actively participate in data contribution, annotation, and evaluation, creating a collaborative AI ecosystem.

  • Support the development of AI solutions that align with local requirements and cultural sensitivities, fostering the ethical and responsible deployment of AI.


 

#10

Educate stakeholders to build better-informed perspectives on AI-related investments in areas such as: (1) research & development; (2) supply chains; (3) digital goods & services; and (4) public-private partnership & digital public infrastructure.

Notes

 

Research & Development

  • Ensure that stakeholders are well-informed about AI investments in research and development, promoting effective allocation of resources.

Supply Chains

  • Enhance the understanding of AI's impact on supply chains, optimizing logistics and creating resilience in the face of disruptions.

Digital Goods & Services

  • Promote informed investment in the development of digital goods and services, aligning product offerings with market needs and emerging trends.

Public-Private Partnership & Digital Public Infrastructure

  • Facilitate the creation of robust public-private partnerships, fostering collaboration to develop digital public infrastructure that benefits society. The potential of public-private partnerships to boost the use and proliferation of India’s DPI remains untapped and AI education can address the gaps.


 

#11

Address and mitigate the risks of artificial intelligence hype by promoting net neutrality to discourage anti-competitive practices involving the use of AI at various levels and stages of research & development, maintenance, production, marketing & advertising, regulation, self-regulation, and proliferation.

Notes

Research & Development Stage

  • Encourage fair competition in AI research and development, preventing undue concentration of power and resources.

Stages of Maintenance, Production, Marketing, and Advertising

  • Reduce the risk of AI maintenance, production, marketing, and advertising becoming platforms for hype, ensuring ethical and responsible AI promotion.

 

Stages of Regulation, Self-Regulation and Proliferation

 

  • Mitigate the risk of AI proliferation without proper oversight, ensuring that AI technologies are developed and utilized responsibly and for the greater good.

AI Regulation

#12

Foster flexible and gradually compliant data privacy and human-centric explainable AI ecosystems for consumers and businesses.

Notes

 

Foster flexible and gradually compliant data privacy and human-centric explainable AI ecosystems for consumers and businesses.

A flexible and gradually compliant approach to data privacy and AI regulation can help address the challenges of privacy protection and explainability while also promoting innovation. This can ensure:

  • Reduced risk of harm from AI systems

  • Increased customer trust

  • Enhanced reputation

Specific legal and policy issues for consideration

Data Protection / Privacy / Consent Issues: Ensure the sector-neutral interpretative and adjudicatory enablement of data protection rights and enforcement mechanisms, in line with the Digital Personal Data Protection Act, 2023 & its guidelines and the Code of Civil Procedure, 1908.

Data Processing and Pseudonymisation Issues: It is important to ensure that data is processed in a fair and explainable manner and that pseudonymisation is used where appropriate to protect the privacy of individuals.

Legitimate Use of Data-related Issues: The legitimate use of personal and non-personal data must be clarified, standardized and sensitized by the efforts of regulatory, judicial & dispute resolution institutions.

Data Erasure / Right to be Forgotten Issues: Consumers have the right to have their data erased in certain circumstances. Cognizable consumer law and competition law issues will arise where failure to honour the right to be forgotten generates dark patterns; these need to be adequately addressed.

Contractual Disputes between Data Processors, Consumers and Data Fiduciaries: AI systems often involve complex contractual relationships between data processors, consumers, and data fiduciaries. It is important to ensure that these contracts are clear and fair and that consumers have access to effective dispute resolution mechanisms.

Algorithmic Ethics Issues in Company Law: AI systems can raise a number of algorithmic ethics issues. It is important to develop company law principles that promote the responsible & explainable use of AI.

Algorithmic Transparency and Bias Issues in Commercial Law: AI systems can often be opaque and difficult to understand. It is important to develop commercial law principles that promote transparency and accountability in AI systems.

 

#13

Develop regulatory sandboxes for sector-specific use cases of AI to standardize AI test cases & use cases subject to their technical and commercial viability.

Notes

 

Standardization of AI test cases and use cases via regulatory sandboxes

Regulatory sandboxes can provide a safe and controlled environment for testing and evaluating AI applications in a sector-specific context. For example, a regulatory sandbox could be established to allow healthcare providers and technology companies to test and evaluate AI-powered medical diagnostic tools. This would involve developing a set of standardized test cases and use cases that could be used to assess the accuracy, safety, and efficacy of these tools.

Improving technical and commercial viability of AI applications

Regulatory sandboxes can help to identify and address the regulatory and commercial challenges associated with the deployment of AI applications. This can make AI applications more technically and commercially viable, and accelerate their adoption. In addition, defining human autonomy and its extent for technical & commercial AI use cases could be helpful for research and commercial purposes, further standardising AI in the context of the future of work & innovation.


 

#14

Promote the sensitization of the first-order, second-order and third-order effects of using AI products and services to B2C consumers (or citizens), B2B entities and even inter- and intra-government stakeholders, which includes courts, ministries, departments, sectoral regulators and statutory bodies at both standalone & whole-of-government levels.

Notes

Sensitization for B2C Consumers

This would help inform consumers to remain vigilant against market practices that reveal dark patterns and other manipulative practices engineered and promoted through artificial intelligence systems.

Sensitization for B2B entities

To help businesses make informed decisions about the use of AI in their operations and to enhance their competitiveness.

Sensitization for inter and intra-government stakeholders

To maintain and improve the trust quotient of inter and intra-government stakeholders at two levels:

  • For standalone government and judicial institutions

  • For all organs of the government, from the judicial institutions to the executive branches, which includes statutory, cooperative, diplomatic and administrative sections of the Government of India, and the administrative branches of various state and union territory governments.


 

#15

Enable self-regulatory practices to strengthen the sector-neutral applicability of the Digital Personal Data Protection Act, 2023 and its regulations, circulars and guidelines.

Notes

Sector-neutral applicability of the Digital Personal Data Protection Act, 2023 and its regulations, circulars and guidelines.

Self-regulatory practices can