Artificial Intelligence (Development & Regulation) Bill
November 7, 2023
Chapter I: PRELIMINARY
Section 1 - Short Title and Commencement
(1) This Act may be called the Artificial Intelligence (Development & Regulation) Bill, 2023.
(2) It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint and different dates may be appointed for different provisions of this Act and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision.
Section 2 – Definitions
[Please note: we have not provided all the definitions that may be required in this Bill; only those definitions most essential to signifying its legislative intent are provided.]
In this Bill, unless the context otherwise requires,—
(a) “Artificial Intelligence”, “AI”, “artificial intelligence application”, “artificial intelligence system” and “AI systems” mean –
(i) an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. It is a diverse class of technology encompassing various sub-categories of technical, commercial, and sectoral nature, on the basis of the means of classification provided as follows –
a. Conceptual Classification: AI is conceptually classified for the ethical and legal evaluation of its use, development, maintenance, regulation, and proliferation. This classification is further categorised as –
i. Technical Concept Classification means the process of estimating the legal and policy risks associated with technical use cases of AI systems at a conceptual level;
ii. Issue-to-Issue Concept Classification means the process of assessing AI systems on an issue-specific basis to determine their conceptual nature;
iii. Ethics-Based Concept Classification means the process of shaping AI as a concept based on ethical theories, particularly in matters concerning regulation and adjudication;
iv. Phenomena-Based Concept Classification means the process of addressing rights-based issues due to the use of AI systems, focusing on natural and human-related phenomena; and
v. Anthropomorphism-Based Concept Classification means the process of evaluating scenarios where AI systems conceptually anthropomorphize human attributes and realities, thereby challenging traditional views of the symbiotic relationship between AI and human environments.
b. Technical and Commercial Classification: AI is classified as a product, service, or system in its technical and commercial aspects.
(b) “AI-Generated Content” means content, physical or digital, that has been created or significantly modified by an Artificial Intelligence (AI) system, including, but not limited to, text, images, audio, and video created through a variety of techniques, subject to the test case or the use case of the artificial intelligence application;
(c) “Appellate Tribunal” means the Telecom Disputes Settlement and Appellate Tribunal established under section 14 of the Telecom Regulatory Authority of India Act, 1997;
(d) “Content Provenance” means the identification, tracking, and watermarking of AI-generated content to establish its origin and authenticity;
(e) “Data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means;
(f) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;
(g) “Data Principal” means the individual to whom the personal data relates and where such individual is—
(i) a child, includes the parents or lawful guardian of such a child;
(ii) a person with disability, includes her lawful guardian, acting on her behalf;
(h) “Data Processor” means any person who processes personal data on behalf of a Data Fiduciary;
(i) “Data Protection Officer” means an individual appointed by the Significant Data Fiduciary under clause (a) of sub-section (2) of section 10 of the Digital Personal Data Protection Act, 2023;
(j) “Digital Office” means an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode;
(k) “Digital personal data” means personal data in digital form;
(l) “Employment and Skill Security Standards” means regulations and practices addressing risks arising from the deployment and utilization of artificial intelligence systems concerning employment and skills;
(m) “Ethics Code” means a set of principles and guidelines governing the development, procurement, and commercialization of artificial intelligence technologies with an approach fostering innovation and technology-neutral AI governance;
(n) “High Risk AI Systems” means artificial intelligence systems that pose significant potential risks and are classified as high risk as per the risk-stratification framework outlined in this Bill;
(o) “IDRC” means IndiaAI Development & Regulation Council, a statutory and regulatory body established to oversee the development and regulation of artificial intelligence systems across government bodies, ministries, and departments;
(p) “Inherent Purpose” means the underlying objective or goal for which an artificial intelligence system is designed, developed, and deployed, and that it encompasses the specific tasks, functions, or capabilities that the artificial intelligence system is intended to perform or achieve;
(q) “Insurance Policy” means measures and requirements concerning insurance for research and development, production, and implementation of artificial intelligence technologies;
(r) “Juridical Person” means an artificial intelligence technology recognized as a juridical person under the definitions of this Bill;
(s) “Medium Risk AI Systems” means artificial intelligence systems with moderate potential risks, identified in the medium risk category within the risk-stratification framework set out in this Bill;
(t) “Narrow Risk AI Systems” means artificial intelligence systems assessed to have minimal potential risks, falling within the lowest risk stratum as per the risk-stratification framework provided in this Bill;
(u) “Person” includes—
(i) an individual;
(ii) a Hindu undivided family;
(iii) a company;
(iv) a firm;
(v) an association of persons or a body of individuals, whether incorporated or not;
(vi) the State; and
(vii) every artificial juristic person not falling within any of the preceding sub-clauses, including a Juridical Person referred to in clause (r) of this section;
(v) “Post-Deployment Monitoring” means all activities carried out by data fiduciaries or third-party providers of AI systems to collect and review experience gained from the use of the artificial intelligence systems they place on the market or put into service, for the purpose of identifying any need to promptly apply preventive or corrective actions;
(w) “Quality Assessment” means the evaluation and determination of the quality of AI systems, encompassing technical, ethical, and commercial aspects;
(x) “Risk & Vulnerability Assessment” means the comprehensive analysis of potential risks and vulnerabilities associated with AI systems, particularly high-risk AI systems;
(y) “Significant Data Fiduciary” means any Data Fiduciary or class of Data Fiduciaries as may be notified by the Central Government under section 10 of the Digital Personal Data Protection Act, 2023;
(z) “State” means the State as defined under article 12 of the Constitution;
(aa) “training data” means data used for training an AI system through fitting its learnable parameters, which includes the weights of a neural network;
(bb) “Unintended Risk AI Systems” means AI systems that are prohibited under the provisions of this Bill due to their potential risks;
(cc) “testing data” means data used to provide an independent evaluation of the artificial intelligence system, after training and validation, in order to confirm the expected performance of that AI system before it is placed on the market or put into service;
(dd) “use case” means a specific application of an artificial intelligence system to solve a particular problem or achieve a desired outcome;
Chapter II: CATEGORIZATION AND PROHIBITION
Section 3 - Stratification of AI Systems
(1) AI systems shall be classified into four categories based on their inherent risk –
(a) Narrow risk AI systems: AI systems that pose a low level of risk to individuals, society, or the environment;
(b) Medium risk AI systems: AI systems that pose a moderate level of risk to individuals, society, or the environment;
(c) High risk AI systems: AI systems that pose a significant level of risk to individuals, society, or the environment; and
(d) Unintended risk AI systems: AI systems that are not deliberately designed or developed but emerge from the complex interactions of AI components and may pose unforeseen risks.
(2) All AI systems designated in the categories under sub-section (1), except as stated otherwise in Section 5(1), shall be examined on the basis of their inherent purpose, subject to the means of classification provided in clause (a)(i) of Section 2, the listing of classes of artificial intelligence systems in Schedule III, and the following risks and vulnerabilities:
(i) The extent to which the AI system has been utilized or is expected to be employed shall be considered by the IDRC;
(ii) The assessment shall include an examination of the potential harm or adverse impact, taking into account its severity and the number of individuals affected.
(iii) Where it is not feasible for data principals to opt out of the AI system's outcomes, the reasons for such limitation shall be examined by the IDRC;
(iv) The assessment shall consider the vulnerability of data principals using the AI system, taking into account such foreseeable factors, as may be prescribed, as may limit the autonomy of the data principals to realise and foresee the vulnerability of using the AI system;
(v) It shall be determined whether the outcomes produced by the AI system can be easily reversed;
(3) If the use case of an AI system not prohibited under Section 4 is associated with strategic industry sectors referred to in Schedule II, then the set of risks and vulnerabilities required to be examined under sub-section (2) shall be examined subject to the legitimate uses designated in Section 7 of the Digital Personal Data Protection Act, 2023, and otherwise as may be prescribed.
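Purely by way of non-normative illustration, independent of the provisions of the draft Bill, the four-tier stratification in sub-section (1) may be understood as a mapping from the examined factors in sub-section (2) to a regulatory category. The tier names below are the Bill's; the scoring rule and thresholds are invented solely for the sketch and carry no legal weight:

```python
from enum import Enum

class RiskTier(Enum):
    """The four categories set out in Section 3(1) of the draft Bill."""
    NARROW = "narrow"          # low risk to individuals, society, or environment
    MEDIUM = "medium"          # moderate risk
    HIGH = "high"              # significant risk; sector-specific standards apply
    UNINTENDED = "unintended"  # emergent, unforeseen risk; prohibited (Section 4)

def stratify(severity: float, persons_affected: int,
             reversible: bool, emergent: bool) -> RiskTier:
    """Toy scoring over the Section 3(2) factors: severity and scale of
    harm (factor ii), reversibility of outcomes (factor v), and emergent
    behaviour. The thresholds are illustrative assumptions only."""
    if emergent:
        return RiskTier.UNINTENDED
    score = severity * min(persons_affected, 10_000) / 10_000
    if not reversible:
        score *= 2  # irreversible outcomes weigh more heavily
    if score >= 1.0:
        return RiskTier.HIGH
    if score >= 0.3:
        return RiskTier.MEDIUM
    return RiskTier.NARROW
```

For example, a severe, irreversible harm affecting many data principals lands in the high-risk tier, while any emergent system is routed straight to the prohibited category regardless of score.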
Section 4 - Prohibition of Unintended Risk AI Systems
The development, deployment, or use of unintended risk AI systems as listed in Schedule III is prohibited.
Chapter III: SECTOR-SPECIFIC STANDARDS FOR HIGH-RISK AI SYSTEMS
Section 5 - High-Risk AI Systems in Strategic Sectors
(1) Sector-specific standards shall be developed for high-risk AI systems in strategic industry sectors as designated by the Central Government in Schedule II.
(2) These standards shall address issues such as safety, security, reliability, transparency, accountability, and ethical considerations, subject to the legitimate uses designated in Section 7 of the Digital Personal Data Protection Act, 2023, and otherwise as may be prescribed.
Chapter IV: CERTIFICATION AND ETHICS CODE
Section 6 - Certification of AI Systems
(1) A certification scheme for AI systems shall be established.
(2) This scheme shall certify AI systems that meet the requirements of this Bill and other applicable laws as referred to in Schedule I.
(3) The purpose of every certification scheme is to identify and examine the inherent purpose of an AI system, based on its risk level designated in Section 3, the means of classification provided in clause (a)(i) of Section 2, and the listing in Schedule III.
Section 7 - Ethics Code for Narrow & Medium Risk AI Systems
(1) An Ethics Code for the development, procurement, and commercialization of artificial intelligence technologies shall be established.
(2) This Ethics Code shall promote responsible AI development and utilization while addressing the potential risks associated with AI technologies.
(3) The Ethics Code shall be based on the following principles:
(a) AI systems shall respect human dignity and well-being;
(b) AI systems shall be fair and non-discriminatory;
(c) AI systems shall be transparent and explainable;
(d) AI systems shall be accountable;
(e) AI systems shall respect privacy and data protection as per the provisions of the Digital Personal Data Protection Act, 2023;
(f) AI systems shall be secure and safe.
Chapter V: KNOWLEDGE MANAGEMENT AND DECISION-MAKING
Section 8 - Model Standards on Knowledge Management
(1) The IDRC shall develop and prescribe comprehensive model standards on knowledge management and decision-making processes concerning the development, maintenance, and compliance of artificial intelligence systems for all companies operating within the jurisdiction of India, regardless of the assessed risk levels of their AI systems.
(2) These model standards shall encompass the following areas:
(a) Effective knowledge management practices and procedures to ensure the quality, reliability, and security of data used for training, validating, and improving AI systems.
(b) Model governance frameworks that define the roles and responsibilities of individuals, departments, or committees involved in AI model development, deployment, and monitoring.
(c) Transparent decision-making procedures, including the establishment of model selection criteria, model performance assessment, and ethical considerations in AI system operation.
(3) All entities, whether public or private, that are engaged in the development, deployment, or utilization of artificial intelligence systems shall be bound by the model standards on knowledge management and decision-making as provided by this section.
(4) Entities already operating AI systems within the jurisdiction of India shall comply with the prescribed model standards within a reasonable timeframe, as determined by the IDRC. The compliance timeline may vary based on the complexity and risk levels associated with the AI systems.
(5) The Central Government shall empower the IDRC or other agencies to establish a knowledge management certification process that verifies the compliance of AI entities with the model standards outlined in this section.
(6) AI entities, upon fulfilling the compliance requirements, may be awarded a knowledge management certification, signifying their adherence to industry best practices in data handling, model governance, and ethical decision-making in AI technology.
(7) AI entities shall be required to submit regular reports to the designated authorities, outlining their adherence to the model standards for knowledge management and decision-making.
(8) The Ministry of Electronics and Information Technology shall establish a regulatory oversight framework to ensure the consistent application and enforcement of these model standards. This framework may involve audits, assessments, and inspections, as deemed necessary.
(9) Failure to adhere to the prescribed model standards for knowledge management and decision-making shall result in penalties and regulatory sanctions, which may include monetary fines, suspension of AI operations, or other regulatory actions as determined by the IRDC.
(10) Repeated or severe violations of these standards may lead to escalated enforcement actions, including the revocation of AI deployment licenses or registrations.
Section 9 - IndiaAI Development & Regulation Council (IDRC)
(1) With effect from such date as the Central Government may, by notification, appoint, there shall be established, for the purposes of this Act, a Council to be called the IndiaAI Development & Regulation Council (IDRC).
(a) The Council shall be constituted as a statutory and regulatory body with a whole-of-government approach to coordinate across government bodies, ministries, and departments;
(b) The Council shall be a body corporate by the name aforesaid, having perpetual succession and a common seal, with power, subject to the provisions of this Bill, to acquire, hold and dispose of property, both movable and immovable, and to contract and shall, by the said name, sue or be sued;
(c) The headquarters of the Council shall be at such place as the Central Government may notify;
(d) The Council shall consist of a Chairperson and such number of other Members as the Central Government may notify;
(e) The Chairperson and other Members shall be appointed by the Central Government in such manner as may be prescribed;
(f) The Chairperson and other Members shall be persons of ability, integrity and standing who possess special knowledge or practical experience in the fields of data and artificial intelligence governance, administration or implementation of laws related to social or consumer protection, dispute resolution, information and communication technology, digital economy, law, regulation or techno-regulation, or in any other field which in the opinion of the Central Government may be useful to the Council, and at least one of them shall be an expert in the field of law;
[Please note: we have not designated the functions of the IDRC, as this draft Bill is a proposal.]
Chapter VI: AGREEMENTS AND MONITORING
Section 10 - Guidance Principles for Agreements
(1) The guidance principles are applicable to the following class of agreements related to the use, development and commercialisation of artificial intelligence systems -
(a) AI Software License Agreement (ASLA) –
(i) An AI Software License Agreement shall grant the licensee the right to use AI software in accordance with the terms and conditions of the agreement;
(ii) The agreement should clearly outline the rights and responsibilities of both parties involved in the software licensing process. In principle, the following essentialities may be included in the agreement –
Grant of Rights: The ASLA should clearly define the scope of rights granted to the licensee, including the right to use, modify, and distribute the AI software.
License Restrictions: The ASLA should specify any limitations on the licensee's use of the AI software, such as restrictions on commercial use, modification, or distribution.
Intellectual Property Rights: The ASLA should address ownership of intellectual property rights, including copyright, patents, and trademarks, related to the AI software.
Term and Termination: The ASLA should specify the duration of the license agreement and the conditions under which it can be terminated.
Disclaimer of Warranties: The ASLA should include a disclaimer of warranties, limiting the liability of the software vendor for any defects or errors in the AI software.
Indemnification: The ASLA should specify the extent to which an aggrieved party will be subject to indemnification, based on the terms and conditions of the agreement.
(b) AI Service Level Agreement (SLA) –
(i) An AI service level agreement (SLA) between a service provider and a customer shall define the level of service provided, including performance metrics, service availability, and service support. In principle, the following essentialities may be included in the agreement –
Service Definition: The SLA should clearly define the scope of services to be provided, including the specific AI functionalities and performance metrics.
Service Availability: The SLA should specify the level of service availability, including uptime guarantees and response times.
Service Support: The SLA should outline the level of technical support to be provided, including response times and escalation procedures.
Performance Monitoring: The SLA should establish mechanisms for monitoring and measuring service performance against agreed-upon metrics.
Change Management: The SLA should address the process for implementing changes to the AI services, including notification requirements and impact assessments.
Problem Resolution: The SLA should define the process for identifying, investigating, and resolving service issues.
(c) AI End-User License Agreement (EULA) or AI End-Client License Agreement (ECLA) –
(i) An AI end-user license agreement (EULA) or AI end-client license agreement (ECLA) between a software vendor and an end-user or a client shall respectively legitimise and mutually agree on the control and use of AI software. In principle, the following essentialities may be included in the agreement –
Scope of Use: The EULA or ECLA should clearly define the permitted uses of the AI system, including restrictions on commercial use, modification, or distribution.
User Obligations: The EULA or ECLA should outline the responsibilities of the end-user or client, such as complying with licensing terms and protecting data privacy.
Data Privacy: The EULA or ECLA should address the collection, use, and disclosure of personal data by the AI software, in line with the provisions of the Digital Personal Data Protection Act, 2023.
Intellectual Property Rights: The EULA or ECLA should acknowledge the ownership of intellectual property rights related to the AI software and restrict unauthorized use or infringement at a mutual level.
Disclaimer of Warranties: The EULA or ECLA should include a disclaimer of warranties, limiting the liability of the software vendor for any defects or errors in the AI software.
Limitation of Liability: The EULA or ECLA should limit the liability of the software vendor for damages arising from the use of the AI software.
(d) AI Explainability Agreement –
(i) An AI explainability agreement between software vendors and clients or customers shall require the company (vendor) to provide and submit explanations for the outputs of AI systems. In principle, the following essentialities may be included in the agreement –
Transparency and Explainability: The AI explainability agreement should require the software vendor to provide clear and understandable explanations for the outputs of any class of AI system as listed in Schedule III.
Documentation and Reporting: The agreement should specify the format and frequency of documentation and reporting on the AI system's decision-making processes.
Human Review and Intervention: The agreement should address the role of human review and intervention in the AI system's decision-making processes, to examine the reasonably foreseeable misuse of an AI system based on its inherent purpose, the means of classification as per clause (a)(i) of Section 2, and the risk-based designation as per Section 3.
Continuous Improvement: The agreement should establish and state measures to encourage continuous improvement of the AI system's explainability and transparency.
(2) The guidance principles are consultative and indicative in nature, as may be prescribed.
Section 11 - Post-Deployment Monitoring of High-Risk AI Systems
(1) High-risk AI systems shall be subject to ongoing monitoring and evaluation to ensure their safety, security, and compliance with applicable laws and regulations.
(2) The monitoring and evaluation shall be conducted by the developers, operators, or users of the AI systems, as appropriate.
(3) The IDRC shall develop and establish guidelines for the post-deployment monitoring of high-risk AI systems.
Chapter VII: REPORTING AND SHARING
Section 12 - Third-Party Vulnerability Reporting
(1) The IDRC shall establish a secure and accessible platform for third-party vulnerability reporting of risks associated with AI systems. This platform shall allow individuals and organizations to anonymously report vulnerabilities without fear of retaliation.
(2) The IDRC shall establish a vulnerability response team to promptly review and assess reported vulnerabilities. This team shall have the expertise and resources to investigate vulnerabilities, determine their severity, and develop mitigation strategies.
(3) The IDRC shall establish a communication protocol for informing affected parties of identified vulnerabilities and coordinating mitigation efforts. This protocol shall ensure that vulnerabilities are addressed in a timely and effective manner.
Section 13 - Incident Reporting and Mitigation
(1) Developers, operators, and users of AI systems shall have mechanisms in place to report incidents related to AI systems. These mechanisms shall be easily accessible and user-friendly to encourage reporting.
(2) Priority shall be given to incidents related to high-risk AI systems.
(3) The IDRC shall establish a central repository for incident reports.
(4) This repository shall allow for the collection, analysis, and sharing of incident data to identify trends and potential risks.
(5) The IDRC shall develop and publish guidelines for incident reporting and mitigation.
(6) These guidelines shall provide clear and actionable steps for organizations to follow in the event of an AI-related incident.
Guidance Principles for Incident Reporting and Incident Mitigation
We have recommended some guidance principles for incident reporting and incident mitigation independent of the provisions of the draft Bill.
• Establish clear reporting mechanisms, such as a dedicated incident-reporting hotline, a secure online portal, or a designated email address, where feasible
• Encourage timely reporting for timely mitigation
• Provide clear reporting guidelines for AI-related incidents, and mention them in any AI-related agreements and bye-laws
• Protect confidentiality of incident reports
Principles for Incident Mitigation
• Assess the incident and its severity
• Contain the incident through measures such as isolating AI systems or disabling specific functionalities
• Investigate the incident
• Remediate the incident by patching software, updating security protocols or retraining employees
• Communicate the incident
• Review and improve incident response procedures
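As a further non-normative sketch, independent of the provisions of the draft Bill, the reporting and triage principles above can be read as defining a minimal incident record and a priority ordering. All field names here are illustrative assumptions, not prescribed by the Bill:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Minimal incident record reflecting the reporting principles above:
    what happened, how severe it is, and whether it was contained.
    Field names are illustrative, not prescribed by the draft Bill."""
    system_id: str           # identifier of the affected AI system
    description: str         # what happened
    severity: str            # e.g. "narrow", "medium", "high"
    contained: bool = False  # whether containment measures were applied
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def triage(reports: list[IncidentReport]) -> list[IncidentReport]:
    """Order reports so that high-severity incidents are handled first,
    mirroring Section 13(2)'s priority for high-risk AI systems."""
    order = {"high": 0, "medium": 1, "narrow": 2}
    return sorted(reports, key=lambda r: order.get(r.severity, 3))
```

A central repository of the kind contemplated in Section 13(3) could then aggregate such records to identify trends across reported incidents.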
Section 14 - Responsible Information Sharing
(1) Developers, operators, and users of AI systems shall share information in a responsible and ethical manner. This includes ensuring that information is accurate, complete, and relevant to the purpose of sharing.
(2) Information sharing shall be transparent, verifiable, and subject to appropriate safeguards to protect privacy and data security, which includes obtaining informed consent from data principals whose data is being shared.
(3) Data fiduciaries and third-party companies must describe a set of general practices, applicable to the developers, operators and users of AI systems with respect to information sharing, in any agreement involving the use, development and commercialisation of artificial intelligence technologies.
(4) The IDRC shall develop guidelines for responsible information sharing in the context of AI.
Chapter VIII: INTELLECTUAL PROPERTY AND STANDARDS
Section 15 - Intellectual Property Protections
(1) In recognition of the unique challenges and opportunities presented by the development and use of artificial intelligence systems, AI systems must be protected through a combination of existing intellectual property (IP) rights, such as copyright, patents, and design rights, as well as new and evolving IP concepts specifically tailored to address the distinctive aspects of AI systems.
(2) The objectives of providing a combination of existing intellectual property rights are –
(a) Encourage innovation in the development of AI systems by providing developers with secure and enforceable rights over their creations and innovations;
(b) Enhance the interoperability of AI systems by ensuring that relevant contractual arrangements are not unduly hindered by IP restrictions;
(c) Promote fair competition in the AI market by preventing the unauthorized appropriation and exploitation of IP assets generated in India;
(d) Protect the privacy and security of individuals by ensuring that the combinations of intellectual property protections do not compromise the confidentiality and integrity of personal data as per the provisions of the Digital Personal Data Protection Act, 2023;
(3) The IDRC shall establish consultative mechanisms in cooperation with the National Data Management Office and the relevant Centres of Excellence for AI for the identification, protection, and enforcement of intellectual property rights. These mechanisms shall address issues such as:
(a) The definition and scope of combinations of intellectual property protections, including their limitations as per the legitimate uses designated in Section 7 of the Digital Personal Data Protection Act, 2023, and otherwise as may be prescribed;
(b) The compatibility of such protections with existing IP laws;
(c) The interoperability considerations for the combinations of intellectual property protections;
Chapter IX: SECTOR-NEUTRAL & SECTOR-SPECIFIC STANDARDS
Section 16 - Shared Sector-Neutral Standards
The IDRC shall establish a process for developing shared sector-neutral standards for the responsible development, deployment, and use of AI systems.
We have recommended some sector-neutral standards for AI systems independent of the provisions of the draft Bill.
Transparency and Explainability
• AI systems should be designed and developed in a transparent manner, allowing users to understand how they work and how decisions are made.
• AI systems should be able to explain their decisions in a clear and concise manner, allowing users to understand the reasoning behind their outputs.
Fairness and Bias
• AI systems should be regularly monitored for bias and discrimination, and appropriate mitigation measures should be implemented to address any identified issues.
Safety and Security
• AI systems should be designed and developed with safety and security by design & default.
• AI systems should be protected from unauthorized access, modification, or destruction.
Human Control and Oversight
• AI systems should be subject to human control and oversight to ensure that they are used responsibly.
• There should be mechanisms in place for data principals to intervene in the operation of AI systems if necessary.
Chapter X: ANTI-COMPETITIVE PRACTICES AND CONTENT
Section 17 - Assessment of Anti-Competitive Practices
(1) In recognition of the potential for artificial intelligence (AI) systems to be used for anti-competitive purposes, the framework proposed in this Section shall complement and supplement the provisions of the Competition Act, 2002.
(2) The Competition Commission of India (CCI) shall have the primary responsibility for assessing and investigating anti-competitive practices involving AI systems.
Guidance Principles on AI-related Anti-Competitive Practices
We propose a set of guidance principles for the Competition Commission of India to examine anti-competitive practices involving AI systems, independent of the contents of this draft Bill.
In assessing and investigating anti-competitive practices involving AI systems, the CCI may consider the following factors:
(a) The nature and extent of market power possessed by AI systems or their developers.
(b) The potential for AI systems to be used to collude, price fix, or engage in other anti-competitive conduct.
(c) The ability of AI systems to collect, analyze, and use data in a manner that may harm competition.
(d) The potential for AI systems to create or reinforce barriers to entry or expansion in markets.
(e) The impact of AI systems on consumer choice, innovation, and economic efficiency.
The CCI shall develop and publish guidelines for assessing and investigating anti-competitive practices involving AI systems. These guidelines shall provide guidance on the factors to consider when assessing anti-competitive conduct, the types of evidence that may be relevant, and the appropriate remedies for anti-competitive behavior.
The CCI shall monitor and review the impact of AI systems on competition and update its guidelines and enforcement practices as needed to address emerging challenges.
The CCI shall promote public awareness of the potential for anti-competitive practices involving AI systems and encourage individuals and organizations to report suspected anti-competitive conduct.
Section 18 - Content Provenance and Identification
(1) Every AI system that generates or manipulates content shall have mechanisms in place to identify the source of the content and to maintain a record of its provenance.
(2) This record shall include information such as the date and time the content was generated or manipulated, the identity of the AI system that generated or manipulated the content, and any other relevant information as may be prescribed.
(3) AI systems shall use watermarking to embed identifying information into generated or manipulated content in a manner that is robust to manipulation, that can be used to verify the authenticity of the content, and that differentiates AI-generated content from content that is not produced or manipulated by an AI system.
(4) The watermarking or other identifying information shall be accessible to the public in a transparent manner, which may involve publishing the watermarking or identifying information in a public repository or making it available through an open API.
(5) The IDRC shall develop and publish guidelines for the implementation and use of watermarking and other identifying techniques in AI systems.
(6) These guidelines shall address issues such as –
(i) the type of information to be embedded in watermarks;
(ii) the robustness of watermarking techniques; and
(iii) the accessibility of watermarking information.
(7) The IDRC shall certify the use of watermarking techniques in AI systems and assess the effectiveness of these techniques in preventing the misuse of AI-generated content.
(8) The provisions of this Section shall apply to all AI systems that generate or manipulate content, regardless of the purpose or intended use of the content, including those AI systems that generate text, images, audio, or video.
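The provenance record contemplated by sub-sections (1) to (3) above can be sketched in code. The following is a minimal illustration only, not a prescribed implementation: the field names, the use of SHA-256 content digests, and the HMAC signing scheme are all assumptions for illustration; actual formats and key management would follow whatever guidelines the IDRC prescribes.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the AI system operator; real key
# management would be specified by the IDRC's guidelines.
SECRET_KEY = b"operator-signing-key"

def make_provenance_record(content: bytes, system_id: str) -> dict:
    """Build a provenance record of the kind Section 18(2) contemplates:
    generation timestamp, identity of the generating AI system, and a
    digest of the content, bound together with an HMAC tag."""
    record = {
        "system_id": system_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the record is authentic and matches the content,
    illustrating the verification role described in Section 18(3)."""
    claimed = dict(record)
    tag = claimed.pop("tag")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

content = b"An AI-generated paragraph."
rec = make_provenance_record(content, system_id="example-model-v1")
print(verify_provenance_record(content, rec))       # authentic content
print(verify_provenance_record(b"edited", rec))     # manipulation detected
```

Note that this sketch signs a detached record rather than embedding a watermark in the content itself; robust in-content watermarking for images, audio, or text is a distinct technical problem that the IDRC's guidelines under sub-section (5) would need to address.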
Chapter XI: EMPLOYMENT AND INSURANCE
Section 19 - Employment and Skill Security Standards
(1) Employment Security
(i) Companies or entities employing high-risk AI systems shall not reduce human employment opportunities without implementing safeguards to protect the rights and livelihood of affected employees.
(ii) Employers deploying high-risk AI systems shall engage in consultation with their employees and relevant employee representatives to establish fair transition plans that may include retraining, redeployment, or alternative employment opportunities.
(iii) The Ministry of Labour and Employment, in collaboration with the Ministry of Electronics and Information Technology, shall prescribe detailed guidelines for employment security in the context of AI technology deployment, emphasizing fair labour practices.
(2) Skill Security
(i) Companies or institutions actively involved in the development, application, or research of AI technologies shall facilitate skill development initiatives for their workforce and, where appropriate, offer training programs for acquiring new skills.
(ii) Sector-specific skill development programs and vocational training centres shall be promoted to address the evolving skill requirements arising from AI technology advancements.
(iii) The National Skill Development Corporation (NSDC) and sector-specific Skill Councils shall collaborate with the Ministry of Skill Development and Entrepreneurship to define skill development standards and certifications tailored to the AI sector's demands subject to the National Occupational Standards and Qualification Packs, and even otherwise as may be prescribed.
Section 20 - Insurance Policy for AI Technologies
(1) Any organization or entity that develops, deploys, or utilizes high-risk AI systems shall be mandated to obtain comprehensive insurance coverage to manage and mitigate potential risks associated with AI operations.
(2) The Insurance Regulatory and Development Authority of India (IRDAI) shall, in conjunction with relevant ministries, specify the minimum insurance coverage standards for AI technologies. The insurance coverage requirements shall encompass –
(i) technical failures;
(ii) data breaches;
(3) Entities deploying AI systems must maintain records of their insurance policies, ensuring that these policies cover a comprehensive spectrum of AI-related risks and liabilities.
(4) Insurance companies offering AI technology coverage shall operate within the guidelines and directives laid out by the IRDAI.
(5) The IRDAI shall establish rigorous underwriting criteria and risk assessment procedures specific to AI-related insurance policies, which shall encompass –
(i) assessment methods;
(ii) premium calculation models; and
(iii) claims processing standards.
(6) Insurance providers shall be responsible for presenting transparent and detailed insurance policies tailored to the unique risks of AI technologies, and they must promptly address claims and compensate policyholders as per the policy terms.
(7) Entities deploying AI systems shall be obligated to furnish evidence of their insurance coverage when procuring or deploying high-risk AI systems.
(8) The Central Government, through the Ministry of Electronics and Information Technology, may require AI entities to furnish periodic reports on their insurance policies, including claims made and settled.
(9) Failure to comply with the insurance requirements outlined in this section shall subject the AI entity to monetary fines and potential regulatory sanctions as per the discretion of the Central Government.
Chapter XII: APPEAL AND ALTERNATIVE DISPUTE RESOLUTION
Section 21 – Appeal to Appellate Tribunal
(1) Any person aggrieved by an order or direction made by the IDRC under this Bill may prefer an appeal before the Appellate Tribunal.
(2) Every appeal under sub-section (1) shall be filed within a period of ninety days from the date of receipt of the order or direction appealed against and it shall be in such form and manner and shall be accompanied by such fee as may be prescribed.
(3) The Appellate Tribunal may entertain an appeal after the expiry of the period specified in sub-section (2), if it is satisfied that there was sufficient cause for not preferring the appeal within that period.
(4) On receipt of an appeal under sub-section (1), the Appellate Tribunal may, after giving the parties to the appeal an opportunity of being heard, pass such orders thereon as it thinks fit, confirming, modifying or setting aside the order appealed against.
(5) The Appellate Tribunal shall send a copy of every order made by it to the Board and to the parties to the appeal.
(6) The appeal filed before the Appellate Tribunal under sub-section (1) shall be dealt with by it as expeditiously as possible and endeavour shall be made by it to dispose of the appeal finally within nine months from the date on which the appeal is presented to it.
(7) Where any appeal under sub-section (6) could not be disposed of within the period of nine months, the Appellate Tribunal shall record its reasons in writing for not disposing of the appeal within that period.
(8) Without prejudice to the provisions of section 14A and section 16 of the Telecom Regulatory Authority of India Act, 1997, the Appellate Tribunal shall deal with an appeal under this section in accordance with such procedure as may be prescribed.
(9) Where an appeal is filed against the orders of the Appellate Tribunal under this Bill, the provisions of section 18 of the Telecom Regulatory Authority of India Act, 1997 shall apply.
(10) In respect of appeals filed under the provisions of this Bill, the Appellate Tribunal shall, as far as practicable, function as a digital office, with the receipt of appeal, hearing and pronouncement of decisions in respect of the same being digital by design.
Section 22 – Orders passed by Appellate Tribunal to be executable as a decree
(1) An order passed by the Appellate Tribunal under this Bill shall be executable by it as a decree of civil court, and for this purpose, the Appellate Tribunal shall have all the powers of a civil court.
(2) Notwithstanding anything contained in sub-section (1), the Appellate Tribunal may transmit any order made by it to a civil court having local jurisdiction and such civil court shall execute the order as if it were a decree made by that court.
Section 23 – Alternate Dispute Resolution
If the IDRC is of the opinion that any complaint may be resolved by mediation, it may direct the parties concerned to attempt resolution of the dispute through such mediation by such mediator as the parties may mutually agree upon, or as provided for under any law for the time being in force in India.
Below is a list of suggested provisions that may be expected in the draft Bill but do not need to be drafted in substantive detail.
Chapters XIII-XV & SCHEDULES
CHAPTER XIII: MISCELLANEOUS
Section 24 - Power to Make Rules
Section 25 - Power to Make Regulations
Section 26 - Protection of Action Taken in Good Faith
Section 27 - Offences and Penalties
CHAPTER XIV: REPEAL AND SAVINGS
Section 28 - Savings Clause
CHAPTER XV: FINAL PROVISIONS
Section 29 - Power to Remove Difficulties
Section 30 - Amendment of [Other Legislation]
We propose that the Schedules to the draft Bill cover the following contents:
Schedule I: Applicable Laws which are sector-specific, sector-neutral and related to the substantive aspects of the draft Bill.
Schedule II: List of Strategic Industry Sectors
Schedule III: List of Classes of Artificial Intelligence Technologies in a Table
New Artificial Intelligence Strategy for India, 2023
November 7, 2023
We suggest that in a reinvented AI strategy for India, the four pillars of India's position on artificial intelligence must be AI policy, AI diplomacy, AI entrepreneurship and AI regulation. These are the most specific commitments in the four key areas that could be achieved within 5-10 years. The rationale and benefits of adopting each point of the policy proposal are explained on a point-by-point basis.
Strengthen and empower India’s Digital Public Infrastructure to transform its potential to integrate governmental and business use cases of artificial intelligence at a whole-of-government level.
A whole-of-government approach to AI is essential for ensuring that AI is used effectively and efficiently across government. This requires coordination and collaboration between different government agencies. Such an approach to AI can help to avoid duplication of effort, ensure consistency of approach, and maximize the benefits of AI in a flexible and coordinated manner.
Transform and rejuvenate forums of judicial governance and dispute resolution to keep them effectively prepared to address and resolve disputes related to artificial intelligence, spanning issues from data protection & consent to algorithmic activities & operations and corporate ethics.
Effective preparedness of courts, tribunals and dispute resolution forums
Forums of judicial governance and dispute resolution play a crucial role in ensuring that AI is used in a fair and just manner. These forums provide a platform for individuals and businesses to seek redress in the event of disputes related to AI.
It would become necessary for the courts, tribunals and dispute resolution forums to address the interpretability and maintainability of technology law disputes at various levels, as proposed:
Level 1: Data Protection / Privacy / Consent / Processing Issues
Level 2: Level 1 + Sector-specific Civil Law Issues
Level 3: Algorithmic Use and Ethics Issues
Level 4: Level 3 + Issues related to AI Governance in Companies / Government Bodies
Level 5: AI and Corporate Practice Issues + Sector-specific Competition Law / Trade Law / Investment Law Issues
Level 6: Level 5 + Telecom Arbitration / Technology Arbitration
Reasonable Distinction of Legal and Policy Issues
For courts, tribunals and dispute resolution forums to address and resolve disputes related to artificial intelligence and law, they would need to adopt a technology-neutral approach to interpreting and examining legal issues related to artificial intelligence use, proliferation and democratisation, based on a reasonable distinction between legal and policy issues, as proposed:
Data Protection / Privacy / Consent Issues
Data Processing and Pseudonymisation Issues
Legitimate Use of Data-related Issues
Data Erasure / Right to be Forgotten Issues
Contractual Disputes between Data Processors, Consumers and Data Fiduciaries
Jurisdiction and Cross-Border Ownership and Liability Questions
Transboundary Flow of Data
Algorithmic Ethics Issues in Company Law
Algorithmic Transparency and Bias Issues in Commercial Law
Regulation and Compliance of Algorithmic Activities & Operations of AI Use Cases, subject to their Technical Features and Commercial Viability
Artificial Intelligence Governance Issues at Business-to-Business & Business-to-Government levels.
AI-related Mergers & Acquisitions Issues
AI-related Investment Issues
Arbitrability of Telecom Disputes Arising out of use of Artificial Intelligence Technologies
Focus on the socio-technical empowerment and skill mobility for businesses, professionals, and academic researchers in India and the Global South to mobilize and prepare for the proliferation of artificial intelligence & its versatile impact across sectors.
Provide training and education on AI preparedness
Educate businesses, professionals, and academic researchers in India and the Global South to prepare them for the risks and proliferation of artificial intelligence technologies.
Promote AI adoption
Enable AI learning and mobilization among businesses, professionals and academic researchers beyond preparedness, so that they can adopt and utilise relevant AI use cases. This, in turn, enables them to help regulators in India and Global South countries develop reasonable compliance frameworks and industrial standardisation ecosystems.
Enable safer and commercially productive AI & data ecosystems for startups, professionals and MSMEs in the Global South countries.
Enable Safer AI & Data Ecosystems
Aid start-ups, professionals, and MSMEs in the Global South to navigate the complexities of AI with confidence and security.
Promote risk mitigation, ensuring that these entities can explore AI and data-driven ventures without excessive threats to their businesses.
Foster innovation by creating an environment where start-ups, professionals, and MSMEs can experiment with AI solutions, driving economic growth.
Encourage foreign and domestic investments, positioning the Global South as an attractive hub for AI entrepreneurship and development.
Bridge economic and digital cooperation with countries in the Global South to promote the implementation of sustainable regulatory and enforcement standards, where the lack of regulation of digital technologies, especially artificial intelligence, becomes an unintended systemic, economic and political risk.
Bridge Economic and Digital Cooperation to Promote Sustainable Regulatory and Enforcement Standards
Address the inherent risks posed by the absence of regulations on digital technologies, reducing systemic, economic, and political vulnerabilities.
Encourage knowledge exchange and best practices sharing among nations, enabling the implementation of sustainable regulatory and enforcement standards for AI and digital technologies.
Enhance the digital readiness of Global South countries, positioning them to tap into the opportunities presented by AI while mitigating risks and uncertainties.
Strengthen diplomatic and economic relationships, creating a mutually supportive environment for nations as they navigate the complexities of AI and digital ecosystems.
Position the Global South as a collective force in shaping AI regulations and standards, allowing its members to have a more influential and balanced role in the global AI landscape.
Develop and promote India-centric, locally viable commercial solutions in the form of AI products & services.
Promote innovation and economic growth
Developing and promoting India-centric, locally viable commercial solutions in the form of AI products and services can help to promote innovation and economic growth. AI-powered products and services can create new jobs, boost productivity, and open up new markets.
Encourage the development of locally viable AI solutions in India
This can help to reduce India's reliance on foreign technology, making India more resilient to external shocks and giving it more control over its own economic destiny.
Enable the industry standardization of sector-specific technical & commercial AI use cases.
Enable Industry Standardization
Promote consistency and interoperability in AI applications across sectors, reducing fragmentation and enhancing efficiency.
Foster the development of clear benchmarks for AI use cases, facilitating seamless integration and promoting fair competition.
Position India to lead in sector-specific AI use cases, attracting investments and fostering innovation in targeted industries.
Empower professionals and businesses by offering a structured approach to AI adoption, reducing barriers to entry and risks associated with uncertainty.
Subsidize & incentivize the availability of compute infrastructure, and technology ecosystems to develop AI solutions for local MSMEs and emerging start-ups.
Provide financial assistance to SMEs and start-ups to purchase cloud computing resources
Provide financial assistance to SMEs and start-ups to purchase cloud computing resources, such as compute power, storage, and networking. This will make it more affordable for SMEs and start-ups to access the resources they need to develop and deploy AI solutions.
Establish AI innovation hubs
Establish AI innovation hubs across the country. These hubs will provide SMEs and start-ups with access to compute infrastructure, technology ecosystems, and expertise. The hubs can also help to foster collaboration between SMEs, start-ups, and other stakeholders.
Partner with universities and research institutions
Partner with universities and research institutions to develop AI curriculum and to provide training to SMEs and start-ups on AI. This will help to ensure that SMEs and start-ups have the skills and knowledge they need to develop and deploy AI solutions.
Establish a decentralized, localized & open-source data repository for AI test cases & use cases and their training models, with services to annotate & evaluate models and develop a system of incentives to encourage users to contribute data and to annotate and evaluate models.
Establish Decentralized Data Repository
Facilitate accessibility to AI test cases, use cases, and training models, promoting transparency and innovation within the AI ecosystem on a sector-wide basis.
Encourage the development of localized, context-aware AI solutions that are adapted to the nuances and requirements of different regions and communities.
Foster open-source collaboration, allowing AI practitioners and developers to contribute, annotate, and evaluate models, enhancing knowledge sharing and the quality of AI systems.
Enhance the quality of AI models through crowdsourced annotation and evaluation, leading to better-performing, more reliable AI applications.
Establish a system of incentives to motivate users to actively participate in data contribution, annotation, and evaluation, creating a collaborative AI ecosystem.
Support the development of AI solutions that align with local requirements and cultural sensitivities, fostering the ethical and responsible deployment of AI.
Promote better-informed perspectives on AI-related investments in areas such as:
research & development,
digital goods & services and
public-private partnership & digital public infrastructure.
Research & Development
Ensure that stakeholders are well-informed about AI investments in research and development, promoting effective allocation of resources.
Enhance the understanding of AI's impact on supply chains, optimizing logistics and creating resilience in the face of disruptions.
Digital Goods & Services
Promote informed investment in the development of digital goods and services, aligning product offerings with market needs and emerging trends.
Public-Private Partnership & Digital Public Infrastructure
Facilitate the creation of robust public-private partnerships, fostering collaboration to develop digital public infrastructure that benefits society. The potential of public-private partnerships to boost the use and proliferation of India’s DPI remains untapped and AI education can address the gaps.
Address and mitigate the risks of artificial intelligence hype by promoting net neutrality to discourage anti-competitive practices involving the use of AI at various levels and stages of:
research & development,
maintenance & production,
marketing & advertising, and
regulation, self-regulation & proliferation.
Research & Development Stage
Encourage fair competition in AI research and development, preventing undue concentration of power and resources.
Stages of Maintenance, Production, Marketing, and Advertising
Reduce the risk of AI maintenance, production, marketing, and advertising becoming platforms for hype, ensuring ethical and responsible AI promotion.
Stages of Regulation, Self-Regulation and Proliferation
Mitigate the risk of AI proliferation without proper oversight, ensuring that AI technologies are developed and utilized responsibly and for the greater good.
Foster flexible and gradually compliant data privacy and human-centric explainable AI ecosystems for consumers and businesses.
A flexible and gradually compliant approach to data privacy and AI regulation can help to address these challenges while also promoting innovation. This can ensure:
Reduced risk of harm from AI systems
Increased customer trust
Specific legal and policy issues for consideration
Data Protection / Privacy / Consent Issues: Ensure the sector-neutral interpretative and adjudicatory enablement of data protection rights and enforcement mechanisms in line with the Digital Personal Data Protection Act, 2023 & its guidelines and the Code of Civil Procedure, 1908.
Data Processing and Pseudonymisation Issues: It is important to ensure that data is processed in a fair and explainable manner and that pseudonymisation is used where appropriate to protect the privacy of individuals.
Legitimate Use of Data-related Issues: The legitimate use of personal and non-personal data must be clarified, standardized and sensitized by the efforts of regulatory, judicial & dispute resolution institutions.
Data Erasure / Right to be Forgotten Issues: Consumers have the right to have their data erased in certain circumstances. There will be tangible consumer law and competition law issues where failure to honour the right to be forgotten generates dark patterns, which need to be adequately addressed.
Contractual Disputes between Data Processors, Consumers and Data Fiduciaries: AI systems often involve complex contractual relationships between data processors, consumers, and data fiduciaries. It is important to ensure that these contracts are clear and fair and that consumers have access to effective dispute resolution mechanisms.
Algorithmic Ethics Issues in Company Law: AI systems can raise a number of algorithmic ethics issues. It is important to develop company law principles that promote the responsible & explainable use of AI.
Algorithmic Transparency and Bias Issues in Commercial Law: AI systems can often be opaque and difficult to understand. It is important to develop commercial law principles that promote transparency and accountability in AI systems.
Develop regulatory sandboxes for sector-specific use cases of AI to standardize AI test cases & use cases subject to their technical and commercial viability.
Standardization of AI test cases and use cases via regulatory sandboxes
Regulatory sandboxes can provide a safe and controlled environment for testing and evaluating AI applications in a sector-specific context. For example, a regulatory sandbox could be established to allow healthcare providers and technology companies to test and evaluate AI-powered medical diagnostic tools. This would involve developing a set of standardized test cases and use cases that could be used to assess the accuracy, safety, and efficacy of these tools.
Improving technical and commercial viability of AI applications
Regulatory sandboxes can help to identify and address the regulatory and commercial challenges associated with the deployment of AI applications. This can help to make AI applications more technically and commercially viable, and to accelerate their adoption. In addition, defining human autonomy and its extent for AI use cases, technical & commercial, could be helpful for research and commercial purposes, to further standardise AI in the context of the future of work & innovation.
Promote the sensitization of the first-order, second-order and third-order effects of using AI products and services to B2C consumers (or citizens), B2B entities and inter- and intra-government stakeholders, including courts, ministries, departments, sectoral regulators and statutory bodies, at both standalone & whole-of-government levels.
Sensitization for B2C Consumers
This would help inform consumers to be vigilant against market practices that reveal dark patterns and other forms of manipulative practices engineered and promoted through artificial intelligence systems.
Sensitization for B2B entities
To help businesses make informed decisions about the use of AI in their businesses and to enhance their competitiveness.
Sensitization for inter and intra-government stakeholders
To maintain and improve the trust quotient of inter and intra-government stakeholders at two levels:
For standalone government and judicial institutions
For all organs of the government, from the judicial institutions to the executive branches, which includes statutory, cooperative, diplomatic and administrative sections of the Government of India, and the administrative branches of various state and union territory governments.
Enable self-regulatory practices to strengthen the sector-neutral applicability of the Digital Personal Data Protection Act, 2023 and its regulations, circulars and guidelines.
Sector-neutral applicability of the Digital Personal Data Protection Act, 2023 and its regulations, circulars and guidelines.
Self-regulatory practices can also help to ensure that the Act is applied in a sector-neutral manner, meaning that it applies to all organizations, regardless of their sector of activity.
Promote and maneuver intellectual property protections for AI entrepreneurs & research ecosystems in India.
Promote IP Protections for AI Entrepreneurs in India
Encourage AI innovation by safeguarding intellectual property rights, providing creators with a competitive advantage.
Promote collaboration between academia, industry, and government, resulting in knowledge-sharing and cross-pollination of ideas.
Maneuver IP Protections for AI Entrepreneurs in India
Foster a thriving AI research ecosystem by protecting inventors' discoveries, fostering a culture of creativity and entrepreneurship.
Boost economic development by enabling AI start-ups and entrepreneurs to monetize their innovations and create revenue streams.
Abhivardhan is honoured to serve as the Chairperson & Managing Trustee of the Indian Society of Artificial Intelligence and Law and as the Managing Partner at Indic Pacific Legal Research. Throughout his journey, he has gained valuable experience in international technology law, corporate innovation, global governance, and cultural intelligence.
With deep respect for the field, Abhivardhan has been fortunate to contribute to esteemed law, technology, and policy magazines and blogs. His book, “Artificial Intelligence Ethics and International Law: An Introduction” (2019), modestly represents his exploration of the important connection between artificial intelligence and ethical considerations. Emphasizing the significance of an Indic approach to AI Ethics, Abhivardhan aims to bring diverse perspectives to the table.
Abhivardhan remains humbled by the opportunity to share knowledge through various papers on international technology law. Alongside his consulting and policy advocacy, he has been involved in both authoring and editing books, focusing on public international law and its relationship with artificial intelligence. Some of his notable works also include the 2020 Handbook on AI and International Law, the 2021 Handbook on AI and International Law and the technical reports on Generative AI, Explainable AI and Artificial Intelligence Hype.
Abhivardhan is the drafter and author of the Artificial Intelligence (Development & Regulation) Bill.
Akash Manwani is honoured to serve as the Chief Innovation Consultant of the Indian Society of Artificial Intelligence and Law. His work embodies a fusion of legal rigor and forward-thinking, positioning him as a vital voice in shaping the dialogue around artificial intelligence in the legal sphere. His notable works include the 2020 Handbook on AI and International Law and the technical reports on AI Auditing and Web3.
Both Akash and Abhivardhan have authored the New Artificial Intelligence Strategy for India.
The contents of the Artificial Intelligence (Development & Regulation) Bill, 2023 and the New Artificial Intelligence Strategy for India, 2023 are proposals submitted to the Ministry of Electronics & Information Technology, Government of India.
You may use the contents for personal and non-commercial purposes only. If you use any content from this website in your own work, you must properly attribute the source, by including a link to this website and citing the name of the page.
Here is a sample format to cite this document (we have used the OSCOLA citation format as an example):
Indic Pacific Legal Research LLP, 'AIACT.IN' (Indic Pacific Legal Research, 2023) <https://www.indicpacific.com/ai>
You are not authorised to reproduce, distribute, or modify the contents without the express written permission of a representative of Indic Pacific Legal Research. Critical suggestions or reviews of the contents of "AIACT.IN" do not count as reproduction, provided the contents of "AIACT.IN" are correctly attributed.
The Firm makes no representations or warranties about the accuracy or completeness of the contents of "AIACT.IN". The Firm disclaims all liability for any errors or omissions in the contents of "AIACT.IN".
You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the contents of "AIACT.IN".