
© Indic Pacific Legal Research LLP. 

The works published on this website are licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.

For articles published in VISUAL LEGAL ANALYTICA, you may refer to the editorial guidelines for more information.

[Draft] Artificial Intelligence Act for India, Version 2



The Artificial Intelligence (Development & Regulation) Bill, 2023 (AIACT.IN) Version 2, released on March 14, 2024, builds upon the framework established in Version 1 while introducing several new provisions and amendments. This draft legislation, proposed by our Founder, Mr Abhivardhan, aims to promote responsible AI development and deployment in India through a comprehensive regulatory framework.


Please note that draft AIACT.IN (Version 2) is an Open Proposal developed by Mr Abhivardhan and Indic Pacific Legal Research, and is not a draft legislation proposed by any Ministry of the Government of India.


You can access and download Version 2 of AIACT.IN by clicking below.

Key Features of the Artificial Intelligence Act for India [AIACT.IN] Version 2


  1. Categorization of AI Systems: Version 2 introduces a detailed categorization of AI systems based on conceptual, technical, commercial, and risk-centric methods of classification. This stratification helps in identifying and regulating AI technologies according to their inherent purpose, technical features, and potential risks.

  2. Prohibition of Unintended Risk AI Systems: The development, deployment, and use of unintended risk AI systems, as classified under Section 3, is prohibited in Version 2. This provision aims to mitigate the potential harm caused by AI systems that may emerge from complex interactions and pose unforeseen risks.

  3. Sector-Specific Standards for High-Risk AI: Version 2 mandates the development of sector-specific standards for high-risk AI systems in strategic sectors. These standards will address issues such as safety, security, reliability, transparency, accountability, and ethical considerations.

  4. Certification and Ethics Code: The IDRC (IndiaAI Development & Regulation Council) is tasked with establishing a voluntary certification scheme for AI systems based on their industry use cases and risk levels. Additionally, an Ethics Code for narrow and medium-risk AI systems is introduced to promote responsible AI development and utilization.

  5. Knowledge Management and Decision-Making: Version 2 emphasizes the importance of knowledge management and decision-making processes for high-risk AI systems. The IDRC is required to develop comprehensive model standards in these areas, and entities engaged in the development or deployment of high-risk AI systems must comply with these standards.

  6. Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to address the spatial aspects of AI systems. The IDRC is tasked with establishing consultative mechanisms for the identification, protection, and enforcement of intellectual property rights related to AI.


Comparison with AIACT.IN Version 1


  1. Expanded Scope: Version 2 expands upon the regulatory framework established in Version 1, introducing new provisions and amendments to address the evolving landscape of AI development and deployment.

  2. Detailed Categorization: While Version 1 provided a basic categorization of AI systems, Version 2 introduces a more comprehensive and nuanced approach to classification based on conceptual, technical, commercial, and risk-centric methods.

  3. Sector-Specific Standards: Version 2 places a greater emphasis on the development of sector-specific standards for high-risk AI systems in strategic sectors, compared to the more general approach taken in Version 1.

  4. Knowledge Management and Decision-Making: The importance of knowledge management and decision-making processes for high-risk AI systems is highlighted in Version 2, with the IDRC tasked with developing comprehensive model standards in these areas. This aspect was not as prominently featured in Version 1.

  5. Intellectual Property Protections: Version 2 recognizes the need for a combination of existing intellectual property rights and new IP concepts tailored to AI systems, whereas Version 1 did not delve into the specifics of intellectual property protections for AI.


Detailed Description of the Features of AIACT.IN Version 2


Significance of Key Section 2 Definitions


Section 2 of AIACT.IN provides essential definitions that signify the legislative intent of the Act. Some of the key definitions are:

  1. Artificial Intelligence: The Act defines AI as an information system that employs computational, statistical, or machine-learning techniques to generate outputs based on given inputs. This broad definition encompasses various subcategories of technical, commercial, and sectoral nature, as set forth in Section 3.

  2. AI-Generated Content: This refers to content, physical or digital, that has been created or significantly modified by an artificial intelligence technology. This includes text, images, audio, and video created through various techniques, subject to the test case or use case of the AI application.

  3. Algorithmic Bias: The Act defines algorithmic bias as inherent technical limitations within an AI product, service, or system that lead to systematic and repeatable errors in processing, analysis, or output generation, resulting in outcomes that deviate from objective, fair, or intended results. This includes technical limitations that emerge from the design, development, and operational stages of AI.

  4. Combinations of Intellectual Property Protections: This refers to the integrated application of various intellectual property rights, such as copyrights, patents, trademarks, trade secrets, and design rights, to safeguard the unique features and components of AI systems.

  5. Content Provenance: The Act defines content provenance as the identification, tracking, and watermarking of AI-generated content using a set of techniques to establish its origin, authenticity, and history. This includes the source data, models, and algorithms used to generate the content, as well as the individuals or entities involved in its creation, modification, and distribution.

  6. Data: The Act defines data as a representation of information, facts, concepts, opinions, or instructions in a manner suitable for communication, interpretation, or processing by human beings or by automated or augmented means.

  7. Data Fiduciary: A data fiduciary is any person who alone or in conjunction with other persons determines the purpose and means of processing personal data.

  8. Data Portability: Data portability refers to the ability of a data principal to request and receive their personal data processed by a data fiduciary in a structured, commonly used, and machine-readable format, and to transmit that data to another data fiduciary.

  9. Data Principal: The data principal is the individual to whom the personal data relates. In the case of a child, this includes the parents or lawful guardian; in the case of a person with a disability, it includes the lawful guardian acting on their behalf.

  10. Data Protection Officer: A data protection officer is an individual appointed by the Significant Data Fiduciary under the Digital Personal Data Protection Act, 2023.

  11. Digital Office: A digital office is an office that adopts an online mechanism wherein the proceedings, from receipt of intimation or complaint or reference or directions or appeal, as the case may be, to the disposal thereof, are conducted in online or digital mode.

  12. Digital Personal Data: Digital personal data refers to personal data in digital form.

  13. Digital Public Infrastructure (DPI): DPI refers to the underlying digital platforms, networks, and services that enable the delivery of essential digital services to the public, including digital identity systems, digital payment systems, data exchange platforms, digital registries and databases, and open application programming interfaces (APIs) and standards.

  14. Knowledge Asset: A knowledge asset includes intellectual property rights, documented knowledge, tacit knowledge and expertise, organizational processes, customer-related knowledge, knowledge derived from data analysis, and collaborative knowledge.

  15. Knowledge Management: Knowledge management refers to the systematic processes and methods employed by organizations to capture, organize, share, and utilize knowledge assets related to the development, deployment, and regulation of AI systems.

  16. IDRC: IDRC stands for IndiaAI Development & Regulation Council, a statutory and regulatory body established to oversee the development and regulation of AI systems across government bodies, ministries, and departments.

  17. Inherent Purpose: The inherent purpose refers to the underlying technical objective for which an AI technology is designed, developed, and deployed, encompassing the specific tasks, functions, and capabilities that the AI technology is intended to perform or achieve.

  18. Insurance Policy: Insurance policy refers to measures and requirements concerning insurance for research and development, production, and implementation of AI technologies.

  19. Interoperability Considerations: Interoperability considerations are the technical, legal, and operational factors that enable AI systems to work together seamlessly, exchange information, and operate across different platforms and environments.

  20. Open Source Software: Open source software is computer software that is distributed with its source code made available and licensed with the right to study, change, and distribute the software to anyone and for any purpose.

  21. National Registry of Artificial Intelligence Use Cases: The National Registry of Artificial Intelligence Use Cases is a national-level digitized registry of use cases of AI technologies based on their technical & commercial features and inherent purpose, maintained by the Central Government for the purposes of standardization and certification of use cases of AI technologies.


These definitions provide a clear understanding of the scope and intent of AIACT.IN, ensuring that the Act effectively addresses the complexities and challenges associated with the development and regulation of AI systems in India.


Here are some frequently asked questions (FAQs), addressed in detail below.


Is AIACT.IN Version 2.0 friendly towards promoting AI innovation?

Yes, the Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2.0 appears to strike a balance between promoting AI innovation and ensuring responsible development and deployment of AI technologies. Here are some key aspects that make AIACT.IN Version 2.0 friendly towards AI innovation, explained in simple terms:

  1. Voluntary certification for low-risk AI: The Act establishes a voluntary certification scheme for narrow and medium-risk AI systems based on their use cases. This allows AI developers and companies to innovate freely in low-risk applications without burdensome regulatory requirements, while still having the option to get certified to build trust with users.

  2. Exemptions for startups and research: Narrow or medium risk AI systems developed by startups, SMEs, or research institutions may be exempted from certification requirements in certain cases. This provides flexibility for smaller players and academia to experiment with AI and push the boundaries of innovation.

  3. Regulatory sandboxes: The IndiaAI Development & Regulation Council (IDRC) is tasked with providing regulatory sandboxes and other mechanisms to enable experimentation and innovation in a controlled environment. Sandboxes allow testing of novel AI applications in a safe space without being constrained by regulations.

  4. Open source software: The Act encourages the use of open source software in developing narrow and medium-risk AI to promote transparency, collaboration and innovation. Leveraging the power of open source can accelerate AI development and lower barriers to entry for new players.

  5. Interoperability focus: AIACT.IN recognizes the importance of AI systems being able to work together seamlessly. It calls for developing standards, open APIs and mechanisms to facilitate interoperability between AI systems while respecting IP rights. Interoperable AI fosters an innovative ecosystem.

  6. Tailored IP protections: The Act acknowledges that AI systems require a combination of traditional IP rights and new, tailored concepts to address their unique spatial characteristics. Having a robust yet flexible IP framework provides the necessary protections and incentives for AI innovators.

  7. Collaborative standards development: The IDRC is responsible for developing shared sector-neutral standards for responsible AI development in consultation with open source communities and stakeholders. Collaborative standard-setting ensures that the voice of the industry, including innovative players, is heard.

How does AIACT.IN Version 2 address deepfakes and untested AI models?

Deepfakes:

  • The Act defines "AI-Generated Content" as content that has been created or significantly modified by AI, including text, images, audio, and video, subject to the use case of the AI application. This would cover deepfakes.

  • Content Provenance techniques are required to identify, track and watermark AI-generated content to establish its origin, authenticity and history. This aims to make deepfakes detectable.

  • AI systems that produce or manipulate content are required to have mechanisms to identify the source of the content and maintain a record of its provenance. The identifying information must be accessible to the public.

  • The IDRC will develop guidelines for implementing watermarking and other identifying techniques in AI systems that generate content like deepfakes.
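The provenance requirements above can be pictured with a minimal sketch. This is purely an illustration, not a technique prescribed by the Act: the record fields, function names, and the use of a SHA-256 fingerprint are all hypothetical assumptions standing in for whatever watermarking and tracking standards the IDRC would eventually specify.

```python
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance metadata attached to AI-generated content."""
    content_hash: str  # fingerprint of the generated content
    model_id: str      # model that produced the content
    creator: str       # entity responsible for generation
    created_at: str    # ISO-8601 timestamp of creation

def make_record(content: bytes, model_id: str, creator: str, created_at: str) -> ProvenanceRecord:
    # Fingerprint the content at generation time so its origin can be established later
    digest = hashlib.sha256(content).hexdigest()
    return ProvenanceRecord(digest, model_id, creator, created_at)

def verify(content: bytes, record: ProvenanceRecord) -> bool:
    # Re-hash the content and compare against the stored fingerprint;
    # any modification after generation changes the hash and fails the check
    return hashlib.sha256(content).hexdigest() == record.content_hash

record = make_record(b"synthetic image bytes", "model-x", "Example AI Co", "2024-03-14T00:00:00Z")
public_view = asdict(record)  # identifying information disclosed alongside the content
print(verify(b"synthetic image bytes", record))  # prints True: unmodified content checks out
print(verify(b"tampered image bytes", record))   # prints False: tampering is detectable
```

A real content-provenance scheme would embed the watermark in the content itself rather than alongside it, but the record-and-verify pattern conveys the Act's intent: origin, authenticity, and history must be checkable by the public.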

Untested AI Models:

  • The Act prohibits the development, deployment and use of "unintended risk AI systems" - AI that may emerge from complex interactions and pose unforeseen risks. This targets untested, unpredictable AI.

  • High-risk AI systems are subject to extensive testing, validation, and ongoing monitoring requirements throughout their lifecycle to ensure safety, reliability and compliance. Untested high-risk models would not be permitted.

  • Providers of general purpose AI models are required to perform standardized model evaluations, assess risks, and ensure cybersecurity protection. This mandates testing of foundational AI models.

  • AI systems must undergo conformity assessments before being put on the market and throughout their lifecycle. Untested systems could not be deployed.

How are AI technologies classified in AIACT.IN Version 2?

Here is a summary of how AI technologies are classified in the Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2, with some hypothetical examples:

  1. Conceptual methods of classification:

  • Issue-to-Issue Concept Classification: Determining the inherent purpose of AI on a case-by-case basis. E.g. Assessing a facial recognition system's purpose in a specific context like law enforcement vs retail stores.

  • Ethics-Based Concept Classification: Recognizing the ethics-based relationship of AI in sector-specific & sector-neutral contexts. E.g. Evaluating the ethical implications of using AI for medical diagnosis vs movie recommendations.

  • Phenomena-Based Concept Classification: Addressing rights-based issues associated with AI use and dissemination. E.g. Examining privacy rights implications of AI-based surveillance.

  • Anthropomorphism-Based Concept Classification: Evaluating scenarios where AI conceptually simulates human attributes. E.g. Analyzing the implications of AI chatbots mimicking human empathy.

  2. Technical methods of classification:

  • General intelligence applications with multiple stable use cases as per relevant standards

  • General intelligence applications with multiple short-run or unclear use cases as per relevant standards

  • AI applications with one or more associated standalone use cases or test cases

  3. Commercial methods of classification:

  • AI as a Product (AI-P)

  • AI as a Service (AIaaS)

  • AI as a Component (AI-C) integrated into existing products/services

  • AI as a System (AI-S) facilitating integration into existing systems

  • AI-enabled Infrastructure as a Service (AI-IaaS) integrated into digital public infrastructure

  4. Risk-centric methods of classification:

  • Narrow risk AI systems with low risk, e.g. AI-based spam filters

  • Medium risk AI systems with moderate risk, e.g. AI loan application reviewers

  • High risk AI systems with significant risk, e.g. AI medical diagnostic tools

  • Unintended risk AI systems with unforeseen risks from complex interactions, e.g. autonomous weapons
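The four risk tiers above, and the obligations the Act attaches to them, can be sketched as a simple lookup. This is only an illustration: the tier names and examples come from the classification above, but the helper functions and the mapping structure are hypothetical, not anything the draft Act prescribes.

```python
from enum import Enum

class RiskTier(Enum):
    NARROW = "narrow"          # low risk, e.g. spam filters
    MEDIUM = "medium"          # moderate risk, e.g. loan application reviewers
    HIGH = "high"              # significant risk, e.g. medical diagnostic tools
    UNINTENDED = "unintended"  # unforeseen risks from complex interactions

# Hypothetical mapping of the Act's own examples to tiers
EXAMPLE_TIERS = {
    "spam filter": RiskTier.NARROW,
    "loan application reviewer": RiskTier.MEDIUM,
    "medical diagnostic tool": RiskTier.HIGH,
    "autonomous weapon": RiskTier.UNINTENDED,
}

def is_prohibited(tier: RiskTier) -> bool:
    # Unintended-risk systems may not be developed, deployed, or used
    return tier is RiskTier.UNINTENDED

def requires_sector_standards(tier: RiskTier) -> bool:
    # High-risk systems attract sector-specific standards in strategic sectors
    return tier is RiskTier.HIGH

def certification_is_voluntary(tier: RiskTier) -> bool:
    # Narrow and medium-risk systems fall under the voluntary certification scheme
    return tier in (RiskTier.NARROW, RiskTier.MEDIUM)
```

The point of the sketch is that the regulatory consequence flows mechanically from the tier: once a use case is classified, the prohibition, sector-standard, or voluntary-certification treatment follows.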

Does AIACT.IN Version 2 legally support open source AI practices and systems? If yes, how?

Yes, the Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2 does support and encourage open source AI practices and systems in several ways:

  1. Ethics Code for narrow & medium risk AI systems (Section 7): The Ethics Code encourages the use of open-source software (OSS) in developing narrow and medium-risk AI systems to promote transparency, collaboration, and innovation. AI developers are encouraged to release non-sensitive components of their systems under OSS licenses.

  2. Shared sector-neutral standards (Section 16): In developing shared sector-neutral standards for responsible AI development and use, the IDRC is required to leverage open source software and open standards to promote interoperability and transparency. The IDRC shall encourage participation of open source communities in developing AI standards.

  3. Intellectual property protections (Section 15): The use of open source software in AI systems is permitted, subject to the terms of the respective open source licenses. The IDRC is tasked with providing guidance on compatibility of open source licenses with the IP protections for AI systems.

  4. Exemptions for research and open-source AI (EU AI Act): While not directly part of AIACT.IN, the EU's AI Act, which likely influenced the Indian legislation, provides exemptions to certain rules for research activities and AI components provided under open-source licenses, in order to boost AI innovation.

  5. Skill development in open source AI (Section 18): The IDRC is required to encourage development of skills related to open source software development and collaboration in the AI workforce through training programs, certifications and other initiatives.

How does AIACT.IN Version 2 help technology professionals and firms? This is not even a government-proposed draft.

The current state of AI innovation and adoption in the Indian tech ecosystem has both strengths and weaknesses.

Strengths:

  • India has a large pool of skilled IT professionals and a growing number of AI startups, providing a strong foundation for AI innovation.

  • The Indian government has launched several initiatives to promote AI research and development, such as the National Strategy for Artificial Intelligence and the National AI Marketplace.

  • The Indian tech ecosystem is seeing increased investment in AI, with several large companies establishing AI research labs and investing in AI startups.

Weaknesses:

  • Lack of high-quality data: AI models require large amounts of high-quality data to train and improve. However, India faces challenges in collecting and managing data due to issues such as data privacy, data security, and data fragmentation.

  • Limited AI talent: While India has a large pool of IT professionals, there is a shortage of AI experts with deep knowledge in areas such as machine learning, natural language processing, and computer vision.

  • Regulatory challenges: The Indian regulatory environment for AI is still evolving, and there is a lack of clear guidelines and standards for AI development and deployment. This can create uncertainty for companies looking to invest in AI.

  • Infrastructure challenges: AI requires significant computational resources and infrastructure. While India has made progress in building its digital infrastructure, there are still challenges in areas such as connectivity, power, and hardware availability.

To address these weaknesses, the Indian tech ecosystem can leverage better AI standards while protecting space for innovation. The AIACT.IN Version 2 document provides a framework for AI development and regulation that can help address some of these challenges. For example, the document emphasizes the importance of shared sector-neutral standards for responsible AI development and use, which can help promote transparency, accountability, and ethical considerations in AI development. Additionally, the document includes provisions for content provenance and identification, employment and skill security standards, and insurance policies for AI technologies, which can help mitigate risks associated with AI adoption. However, it is important to note that the AIACT.IN Version 2 document is not a government-proposed draft; it is an open proposal intended to inform India's AI policy discourse.

Here is how you can participate in the AIACT.IN discourse:


  1. Read and understand the document: The first step to participating in the discourse is to read and understand the AIACT.IN Version 2 document. This will give you a clear idea of the proposed regulations and standards for AI development and regulation in India. To submit your suggestions to us, write to us at vligta@indicpacific.com.

  2. Identify key areas of interest: Once you have read the document, identify the key areas that are of interest to you or your organization. This could include sections on intellectual property protections, shared sector-neutral standards, content provenance, employment and insurance, or alternative dispute resolution.

  3. Provide constructive feedback: Share your feedback on the proposed regulations and standards, highlighting any areas of concern or suggestions for improvement. Be sure to provide constructive feedback that is backed by evidence and data, where possible.

  4. Engage in discussions: Participate in discussions with other stakeholders in the AI ecosystem, including industry experts, policymakers, and researchers. This will help you gain a broader perspective on the proposed regulations and standards, and identify areas of consensus and disagreement.

  5. Stay informed: Keep up to date with the latest developments in the AI ecosystem, including new regulations, standards, and best practices. This will help you stay informed and engaged in the discourse, and ensure that your feedback is relevant and timely.

  6. Collaborate with others: Consider collaborating with other stakeholders in the AI ecosystem to develop joint submissions or position papers on the proposed regulations and standards. This will help amplify your voice and increase your impact in the discourse.

  7. Participate in consultations: Look out for opportunities to participate in consultations on the proposed regulations and standards. This will give you the opportunity to share your feedback directly with policymakers and regulators, and help shape the final regulations and standards. You can even participate in the committee sessions & meetings held by the Indian Society of Artificial Intelligence and Law. To participate, you may contact the Secretariat at executive@isail.co.in.




