Search Results
Results found for empty search
- Section 29 – Power to Make Rules | Indic Pacific
Section 29 – Power to Make Rules PUBLISHED Previous Next Section 29 - Power to Make Rules (1) The Central Government may, by notification, make rules to carry out the provisions of this Act. (2) In particular, and without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely:— (a) The manner of appointment, qualifications, terms and conditions of service of the Chairperson and Members of the IAIC under sub-section (6) of Section 10; (b) The form, manner, and fee for filing an appeal before the Appellate Tribunal under Section sub-section (2) of Section 26; (c) The procedure to be followed by the Appellate Tribunal while dealing with an appeal under the sub-section (8) of Section 26; (d) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by rules. (3) Every rule made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the rule or both Houses agree that the rule should not be made, the rule shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that rule. Related Indian AI Regulation Sources Information Technology Act, 2000 (IT Act 2000) October 2000
- Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) | Indic Pacific | IPLR | indicpacific.com
MeitY's January 2025 draft implementing regulations operationalizing DPDP Act 2023 provisions through detailed compliance procedures. India AI Regulation Landscape 101 This is a simple regulatory tracker consisting all information on how India is regulating artificial intelligence as a technology, inspired from a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled, "Government with Algorithms: Managing AI in India’s Federal System – Number 70 ". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact. Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) MeitY's January 2025 draft implementing regulations operationalizing DPDP Act 2023 provisions through detailed compliance procedures. Previous Next January 2025 Issuing Authority Ministry of Electronics and Information Technology (MeitY) Type of Legal / Policy Document Secondary Legislation Status Proposed Regulatory Stage Pre-regulatory Binding Value Legally binding instruments enforceable before courts Read the Document AI Regulation Visualisation Related Long-form Insights on IndoPacific.App Reimaging and Restructuring MeiTY for India [IPLR-IG-007] Learn More Averting Framework Fatigue in AI Governance [IPLR-IG-013] Learn More Decoding the AI Competency Triad for Public Officials [IPLR-IG-014] Learn More AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas Learn More Artificial Intelligence, Market Power and India in a Multipolar World Learn More Related draft AI Law Provisions of aiact.in Section 2 – Definitions Section 2 – Definitions Section 18 – Third-Party Vulnerability Reporting Section 18 – Third-Party Vulnerability Reporting Section 19 – Incident Reporting and Mitigation Protocols Section 19 – Incident Reporting and Mitigation Protocols Section 20 – Responsible Information Sharing Section 20 – Responsible Information Sharing
- National Strategy for Artificial Intelligence (#AIforAll) | Indic Pacific | IPLR | indicpacific.com
NITI Aayog's June 2018 foundational policy document establishing India's national AI strategy identifying five priority sectors including healthcare, agriculture, education, smart cities, and mobility. India AI Regulation Landscape 101 This is a simple regulatory tracker consisting all information on how India is regulating artificial intelligence as a technology, inspired from a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled, "Government with Algorithms: Managing AI in India’s Federal System – Number 70 ". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact. National Strategy for Artificial Intelligence (#AIforAll) NITI Aayog's June 2018 foundational policy document establishing India's national AI strategy identifying five priority sectors including healthcare, agriculture, education, smart cities, and mobility. Previous Next June 2018 Issuing Authority NITI Aayog Type of Legal / Policy Document National Strategies Status Enacted Regulatory Stage Pre-regulatory Binding Value Non-binding but institutionally endorsed instruments Read the Document AI Regulation Visualisation Related Long-form Insights on IndoPacific.App Reimaging and Restructuring MeiTY for India [IPLR-IG-007] Learn More Averting Framework Fatigue in AI Governance [IPLR-IG-013] Learn More Decoding the AI Competency Triad for Public Officials [IPLR-IG-014] Learn More AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas Learn More Artificial Intelligence, Market Power and India in a Multipolar World Learn More Related draft AI Law Provisions of aiact.in Section 1 – Short Title and Commencement Section 1 – Short Title and Commencement Section 2 – Definitions Section 2 – Definitions Section 14 – Model Standards on Knowledge Management Section 14 – Model Standards on Knowledge Management
- AI-based Anthropomorphization | Glossary of Terms | Indic Pacific | IPLR
AI-based Anthropomorphization Explainers The Complete Glossary AI-based Anthropomorphization Date of Addition 26 Apr 2024 AI-based anthropomorphization is the process of giving AI systems human-like qualities or characteristics. This can be done in a variety of ways, such as giving the AI system a human-like name, appearance, or personality. It can also be done by giving the AI system the ability to communicate in a human-like way, or by giving it the ability to understand and respond to human emotions. This idea was discussed in the 2020 Handbook on AI and International Law (2021), Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001 (2022), Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002 (2023) and Promoting Economy of Innovation through Explainable AI, VLiGTA-TR-003 (2023). Related Long-form Insights on IndoPacific.App Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches [ISAIL-TR-001] Learn More Deciphering Artificial Intelligence Hype and its Legal-Economic Risks [VLiGTA-TR-001] Learn More Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024 Learn More Legal Strategies for Open Source Artificial Intelligence Practices, IPLR-IG-004 Learn More Ethical AI Implementation and Integration in Digital Public Infrastructure, IPLR-IG-005 Learn More Reimaging and Restructuring MeiTY for India [IPLR-IG-007] Learn More Artificial Intelligence and Policy in India, Volume 5 [AIPI-V5] Learn More Legal-Economic Issues in Indian AI Compute and Infrastructure, IPLR-IG-011 Learn More Decoding the AI Competency Triad for Public Officials [IPLR-IG-014] Learn More The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025] Learn More Artificial Intelligence, Market Power and India in a Multipolar World Learn More Previous Term Next Term terms of use This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research , 2023) You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com
- Section 30 – Power to Make Regulations | Indic Pacific
Section 30 – Power to Make Regulations PUBLISHED Previous Next Section 30 - Power to Make Regulations (1) The IAIC may, by notification, make regulations consistent with this Act and the rules made thereunder to carry out the provisions of this Act. (2) In particular, and without prejudice to the generality of the foregoing power, such regulations may provide for all or any of the following matters, namely — (a) The criteria and process for the classification of AI systems based on their conceptual, technical, commercial, and risk-based factors, as specified in Sections 4, 5, 6, and 7; (b) The standards, guidelines, and best practices for the development, deployment, and use of AI systems, including those related to transparency, explainability, fairness, safety, security, and human oversight, as outlined in Section 13; (c) The procedures and requirements for the registration and certification of AI systems, including the criteria for exemptions and the maintenance of the National Registry of Artificial Intelligence Use Cases, as specified in Sections 11 and 12; (d) The guidelines and mechanisms for post-deployment monitoring of high-risk AI systems, as outlined in Section 17; (e) The procedures and protocols for third-party vulnerability reporting, incident reporting, and responsible information sharing, as mentioned in Sections 18, 19, and 20; (f) The guidelines and requirements for content provenance and identification in AI-generated content, as specified in Section 23; (g) The insurance coverage requirements and risk assessment procedures for entities developing or deploying high-risk AI systems, as outlined in Section 25; (h) Any other matter which is required to be, or may be, prescribed, or in respect of which provision is to be made by regulations. (3) Every regulation made under this Act shall be laid, as soon as may be after it is made, before each House of Parliament, while it is in session, for a total period of thirty days which may be comprised in one session or in two or more successive sessions, and if, before the expiry of the session immediately following the session or the successive sessions aforesaid, both Houses agree in making any modification in the regulation or both Houses agree that the regulation should not be made, the regulation shall thereafter have effect only in such modified form or be of no effect, as the case may be; so, however, that any such modification or annulment shall be without prejudice to the validity of anything previously done under that regulation. Related Indian AI Regulation Sources
- Raj Shamani v. John Doe & Ors., CS(COMM) 1233/2025, Delhi High Court, Order dated November 21, 2025 | Indic Pacific | IPLR | indicpacific.com
The Delhi High Court, through Justice Manmeet Pritam Singh Arora, issued a “John Doe” order protecting podcaster and content creator Raj Shamani’s personality rights against the unauthorised use of his name, image, likeness, voice, and podcast titles via AI, deepfake, fake endorsements, impersonation chatbots, and morphed/defamatory media. Major tech/social platforms (Google, Meta, YouTube, Telegram, etc.) were ordered to take down AI-generated infringing content within 72 hours and disclose basic subscriber data of offenders. India AI Regulation Landscape 101 This is a simple regulatory tracker consisting all information on how India is regulating artificial intelligence as a technology, inspired from a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled, "Government with Algorithms: Managing AI in India’s Federal System – Number 70 ". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact. Raj Shamani v. John Doe & Ors., CS(COMM) 1233/2025, Delhi High Court, Order dated November 21, 2025 The Delhi High Court, through Justice Manmeet Pritam Singh Arora, issued a “John Doe” order protecting podcaster and content creator Raj Shamani’s personality rights against the unauthorised use of his name, image, likeness, voice, and podcast titles via AI, deepfake, fake endorsements, impersonation chatbots, and morphed/defamatory media. Major tech/social platforms (Google, Meta, YouTube, Telegram, etc.) were ordered to take down AI-generated infringing content within 72 hours and disclose basic subscriber data of offenders. Previous Next November 2025 Issuing Authority Delhi High Court Type of Legal / Policy Document Judicial Pronouncements - National Court Precedents Status In Force Regulatory Stage Regulatory Binding Value Legally binding instruments enforceable before courts Read the Document AI Regulation Visualisation Related Long-form Insights on IndoPacific.App Related draft AI Law Provisions of aiact.in
- Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report | Indic Pacific | IPLR | indicpacific.com
Reserve Bank of India's August 2025 framework establishing seven guiding principles for responsible and ethical artificial intelligence enablement in financial services sector. India AI Regulation Landscape 101 This is a simple regulatory tracker consisting all information on how India is regulating artificial intelligence as a technology, inspired from a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled, "Government with Algorithms: Managing AI in India’s Federal System – Number 70 ". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact. Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report Reserve Bank of India's August 2025 framework establishing seven guiding principles for responsible and ethical artificial intelligence enablement in financial services sector. Previous Next August 2025 Issuing Authority Reserve Bank of India (RBI) Type of Legal / Policy Document Guidance documents with normative influence Status Enacted Regulatory Stage Pre-regulatory Binding Value Guidance documents with normative influence Read the Document AI Regulation Visualisation Related Long-form Insights on IndoPacific.App Reimaging and Restructuring MeiTY for India [IPLR-IG-007] Learn More Averting Framework Fatigue in AI Governance [IPLR-IG-013] Learn More Decoding the AI Competency Triad for Public Officials [IPLR-IG-014] Learn More AI Bias & the Overlap of AI Diplomacy and Governance Ethics Dilemmas Learn More Artificial Intelligence, Market Power and India in a Multipolar World Learn More Related draft AI Law Provisions of aiact.in Section 10 – Composition and Functions of the Council Section 10 – Composition and Functions of the Council Section 11 – Registration & Certification of AI Systems Section 11 – Registration & Certification of AI Systems Section 12 – National Registry of Artificial Intelligence Use Cases Section 12 – National Registry of Artificial Intelligence Use Cases Section 13 – National Artificial Intelligence Ethics Code Section 13 – National Artificial Intelligence Ethics Code
- Section 21-A – Data Classification and Localisation Requirements | Indic Pacific
Section 21-A – Data Classification and Localisation Requirements PUBLISHED Previous Next Section 21-A – Data Classification and Localisation Requirements (1) The Central Government shall establish a data classification and tiering system that defines storage, access, and transfer requirements based on data sensitivity and strategic importance. The system shall include the following tiers: (i) Tier 1: Critical National Security Data (a) Characteristics: Includes data with direct national security implications, sensitive government infrastructure data, critical defence information, and biometric/sensitive personal identification data. (ii) Tier 2: Strategic Sectoral Data (a) Strategic Sectors Designated: (i) Healthcare (ii) Financial Services (iii) Critical Infrastructure, and (iv) Emerging Technology Research (iii) Tier 3: Commercial and Research Data (a) Characteristics: Includes non-sensitive commercial data, academic and research collaboration data, and open-source AI training datasets. (2) To promote responsible data management and adherence to localisation requirements among companies, the Central Government shall provide incentives aligned with the entity’s AI classification under Chapter II. Incentives include: (i) Tax Benefits: Available for entities compliant with localisation protocols, with additional consideration given based on the AI system’s classification type under the commercial methods of classification in Section 6. (ii) Expedited Cross-Border Approvals: Reserved for institutions with demonstrated responsible cross-border data management, particularly those operating high-risk AI systems or classified under AI-IaaS and AI-Com as per methods of classification in Section 5 due to their integration with sensitive digital infrastructure. (iii) Recognition Certificates for Exemplary Management Practices: Granted to institutions that demonstrate best practices in data management, security, and AI governance, taking into account methods of classification in Sections 5 and 7. (3) The framework shall be rolled out in phases over 24 months and include: (i) Regular review and recalibration to adapt to emerging technological and policy challenges. (ii) Stakeholder consultation mechanisms to incorporate feedback from industry, academia, and government entities. (iii) Capacity building programs to support entities in implementing and maintaining compliance with these standards. Related Indian AI Regulation Sources
- Prompt Injection | Glossary of Terms | Indic Pacific | IPLR
Prompt Injection Date of Addition 17 October 2025 A security vulnerability classified as the #1 OWASP risk for LLMs where malicious user inputs override system instructions and safety guardrails through carefully crafted natural language commands. This attack vector exploits the fundamental inability of language models to distinguish between system-level instructions and user-provided content, enabling adversaries to manipulate model behavior, extract sensitive information, or bypass ethical constraints. Prompt injection represents a critical socio-technical challenge distinct from traditional cybersecurity vulnerabilities because it operates through semantic manipulation rather than code exploitation. Related Long-form Insights on IndoPacific.App NIST Adversarial Machine Learning Taxonomies: Decoded, IPLR-IG-016 Learn More Previous Term Next Term Explainers The Complete Glossary terms of use This glossary of terms is provided as a free resource for educational and informational purposes only. By using this glossary developed by Indic Pacific Legal Research LLP (referred to as 'The Firm'), you agree to the following terms of use: You may use the glossary for personal and non-commercial purposes only. If you use any content from the glossary of terms on this website in your own work, you must properly attribute the source. This means including a link to this website and citing the title of the glossary. Here is a sample format to cite this glossary (we have used the OSCOLA citation format as an example): Indic Pacific Legal Research LLP, 'TechinData.in Explainers' (Indic Pacific Legal Research , 2023) You are not authorised to reproduce, distribute, or modify the glossary without the express written permission of a representative of Indic Pacific Legal Research. The Firm makes no representations or warranties about the accuracy or completeness of the glossary. The glossary is provided on an "as is" basis and the Firm disclaims all liability for any errors or omissions in the glossary. You agree to indemnify and hold the Firm harmless from any claims or damages arising out of your use of the glossary. If you have any questions or concerns about these terms of use, please contact us at global@indicpacific.com
- Abhishek Bachchan v. The Bollywood Tee Shop & Ors., CS(COMM) 960/2025, Delhi High Court, Order dated September 10, 2025 | Indic Pacific | IPLR | indicpacific.com
Delhi High Court September 2025 interim injunction restraining unauthorized commercial exploitation of actor Abhishek Bachchan's personality rights. India AI Regulation Landscape 101 This is a simple regulatory tracker consisting all information on how India is regulating artificial intelligence as a technology, inspired from a seminal paper authored by Abhivardhan and Deepanshu Singh for the Forum of Federations, Canada, entitled, "Government with Algorithms: Managing AI in India’s Federal System – Number 70 ". We have also included case laws along with regulatory / governance documents, and avoided adding any industry documents or policy papers which do not reflect any direct or implicit legal impact. Abhishek Bachchan v. The Bollywood Tee Shop & Ors., CS(COMM) 960/2025, Delhi High Court, Order dated September 10, 2025 Delhi High Court September 2025 interim injunction restraining unauthorized commercial exploitation of actor Abhishek Bachchan's personality rights. Previous Next September 2025 Issuing Authority Delhi High Court Type of Legal / Policy Document Judicial Pronouncements - National Court Precedents Status In Force Regulatory Stage Regulatory Binding Value Legally binding instruments enforceable before courts Read the Document AI Regulation Visualisation Related Long-form Insights on IndoPacific.App Regularizing Artificial Intelligence Ethics in the Indo-Pacific [GLA-TR-002] Learn More Impact-Based Legal Problems around Generative AI in Publishing, IPLR-IG-010 Learn More Indo-Pacific Research Ethics Framework on Artificial Intelligence Use [IPac AI] Learn More The Global AI Inventorship Handbook, First Edition [RHB-AI-INVENT-001-2025] Learn More Related draft AI Law Provisions of aiact.in Section 21 – Intellectual Property Protections Section 21 – Intellectual Property Protections
- Section 6 – Commercial Methods of Classification | Indic Pacific
Section 6 – Commercial Methods of Classification PUBLISHED Previous Next Section 6 – Commercial Methods of Classification (1) These methods as designated in clause (iii) of sub-section (1) of Section 3 involve the categorisation of commercially produced and disseminated artificial intelligence technologies based on their inherent purpose and primary intended use, considering factors such as: (i) The core functionality and technical capabilities of the artificial intelligence technology; (ii) The main end-users or business end-users for the artificial intelligence technology, and the size of the user base or market share; (iii) The primary markets, sectors, or domains in which the artificial intelligence technology is intended to be applied, and the market influence or dominance in those sectors; (iv) The key benefits, outcomes, or results the artificial intelligence technology is designed to deliver, and the potential impact on individuals, businesses, or society; (v) The annual turnover or revenue generated by the artificial intelligence technology or the company developing and deploying it; (vi) The amount of data collected, processed, or utilized by the artificial intelligence technology, and the level of data integration across different services or platforms; and (vii) Any other quantitative or qualitative factors that may be prescribed by the Central Government or the Indian Artificial Intelligence Council (IAIC) to assess the significance and impact of the artificial intelligence technology. (2) Based on an assessment of the factors outlined in sub-section (1), artificial intelligence technologies are classified into the following categories – (i) Artificial Intelligence as a Product (AI-Pro), as described in sub-section (3); (ii) Artificial Intelligence as a Service (AIaaS), as described in sub-section (4); (iii) Artificial Intelligence as a Component (AI-Com) which includes artificial intelligence technologies directly integrated into existing products, services & system infrastructure, as described in sub-section (5); (iv) Artificial Intelligence as a System (AI-S), which includes layers or interfaces in AIaaS provided which facilitates the integration of capabilities of artificial intelligence technologies into existing systems in whole or in parts, as described in sub-section (6); (v) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) which includes artificial intelligence technologies directly integrated into existing components and layers of digital infrastructure, as described in sub-section (7); (vi) Artificial Intelligence for Preview (AI-Pre), as described in sub-section (8); (3) Artificial Intelligence as a Product (AI-Pro) refers to standalone AI applications or software that are developed and sold as individual products to end-users. These products are designed to perform specific tasks or provide particular services directly to the user; Illustrations (1) An AI-powered home assistant device as a product is marketed and sold as a consumer electronic device that provides functionalities like voice recognition, smart home control, and personal assistance. (2) A commercial software package for predictive analytics is used by businesses to forecast market trends and consumer behaviour. (4) Artificial Intelligence as a Service (AIaaS) refers to cloud-based AI solutions that are provided to users on-demand over the internet. Users can access and utilize the capabilities of AI systems without the need to develop or maintain the underlying infrastructure; Illustrations (1) A cloud-based machine learning platform offers businesses and developers access to powerful AI tools and frameworks on a subscription basis. (2) An AI-driven customer service chatbot service that businesses can integrate into their websites to handle customer inquiries and support. (5) Artificial Intelligence as a Component (AI-Com) refers to AI technologies that are embedded or integrated into existing products, services, or system infrastructures to enhance their capabilities or performance. In this case, the AI component is not a standalone product but rather a part of a larger system; Illustrations (1) An AI-based recommendation engine integrated into an e-commerce platform to provide personalized shopping suggestions to users. (2) AI-enhanced cameras in smartphones that utilize machine learning algorithms to improve photo quality and provide features like facial recognition. (6) Artificial Intelligence as a System (AI-S) refers to end-to-end AI solutions that combine multiple AI components, models, and interfaces. These systems often involve the integration of AI capabilities into existing workflows or the creation of entirely new AI-driven processes in whole or in parts; Illustrations (1) An AI middleware platform that connects various enterprise applications to enhance their functionalities with AI capabilities, such as an AI layer that integrates with CRM systems to provide predictive sales analytics. (2) An AI system used in smart manufacturing, where AI interfaces integrate with industrial machinery to optimize production processes and maintenance schedules. (7) Artificial Intelligence-enabled Infrastructure as a Service (AI-IaaS) refers to the integration of AI technologies into the underlying computing, storage, and network infrastructure to optimize resource allocation, improve efficiency, and enable intelligent automation. This category focuses on the use of AI at the infrastructure level rather than at the application or service level. Illustrations (1) An AI-enabled traffic management system that integrates with city infrastructure to monitor and manage traffic flow, reduce congestion, and optimize public transportation schedules. (2) AI-powered utilities management systems that are integrated into the energy grid to predict and manage energy consumption, enhancing efficiency and reducing costs. (8) Artificial Intelligence for Preview (AI-Pre) refers to AI technologies that are made available by companies for testing, experimentation, or early access prior to wider commercial release. AI-Pre encompasses AI products, services, components, systems, platforms and infrastructure at various stages of development. AI-Pre technologies are typically characterized by one or more of the following features that may include but not limited to: (i) The AI technology is made available to a limited set of end users or participants in a preview program; (ii) Access to the AI-Pre technology is subject to special agreements that govern usage terms, data handling, intellectual property rights, and confidentiality; (iii) The AI technology may not be fully tested, documented, or supported, and the company providing it may offer no warranties or guarantees regarding its performance or fitness for any particular purpose. (iv) Users of the AI-Pre technology are often expected to provide feedback, report issues, or share data to help the company refine and improve the technology. (v) The AI-Pre technology may be provided free of charge, or under a separate pricing model from the company’s standard commercial offerings. (vi) After the preview period concludes, the company may release a commercial version of the AI technology, incorporating improvements and modifications based on feedback and data gathered during the preview. Alternatively, the company may choose not to proceed with a commercial release. Illustration A technology company develops a new general-purpose AI system that can engage in open-ended dialogue, answer questions, and assist with tasks across a wide range of domains. The company makes a preview version of the AI system available to select academic and industry partners with the following characteristics: (1) The preview is accessible to the partners via an API, subject to a special preview agreement that governs usage terms, data handling, and confidentiality. (2) The AI system’s capabilities are not yet fully tested, documented or supported, and the company provides no warranties or guarantees. (3) The partners can experiment with the system, provide feedback to the company to help refine the technology, and explore potential applications. (4) After the preview period, the company may release a commercial version of the AI system as a paid product or service, with expanded capabilities, service level guarantees, and standard commercial terms. Related Indian AI Regulation Sources
- Section 19 – Incident Reporting and Mitigation Protocols | Indic Pacific
Section 19 – Incident Reporting and Mitigation Protocols PUBLISHED Previous Next Section 19 - Incident Reporting and Mitigation Protocols (1) All developers, operators, and users of AI systems shall establish mechanisms for reporting incidents related to such AI systems. (2) Incident reporting mechanisms must be easily accessible, user-friendly, and secure, such as a dedicated hotline, online portal, or email address. (3) Incidents involving high-risk AI systems shall be treated as a priority and reported immediately, but not later than 48 hours from becoming aware of the incident. (4) For other AI systems, incidents must be reported within 7 days of becoming aware of such incidents. (5) All incident reports shall be submitted to a central repository established and maintained by the IAIC. (6) The IAIC shall collect, analyse, and share incident data from this repository to identify trends, potential risks, and develop mitigation strategies. (7) The IAIC shall publish guidelines on incident reporting requirements, including: (i) Criteria for determining incident severity: (a) Critical: Incidents involving high-risk AI systems posing an imminent threat to human life, safety, or fundamental rights; (b) High: Incidents causing significant harm, disruption, or financial loss; (c) Medium: Incidents with moderate impact or potential for risk escalation; (d) Low: Incidents with minimal impact. (ii) Information to Provide in Incident Reports: (a) Detailed description of the incident and its impact; (b) Details of the AI system (type, use case, risk level, deployment stage); (c) For high-risk AI systems: Root cause analysis, mitigation actions, and supporting data. (iii) Timelines and Procedure for Reporting: (a) Critical incidents with high-risk AI systems must be reported within 48 hours; (b) High or medium severity incidents must be reported within 7 days if involving high-risk AI systems, and within 14 days for all other systems; (c) Low severity incidents must be reported monthly. (iv) Confidentiality measures for incident data: (a) All AI systems must ensure to have: (b) Data encryption at rest and in transit; (c) Role-based access controls for incident data; (d) Maintaining audit logs of all data access; (e) Secure communication channels for data transmission; (f) Retaining data as per requirements under cyber and data protection frameworks; (g) Regular risk assessments on data confidentiality; (h) Employee training on data protection and handling. (v) All high-risk AI systems must ensure to have: (a) Proper encryption key management practices; (b) Encryption for removable media with incident data; (c) Multi-factor authentication for data access; (d) Physical security controls for data storage; (e) Redacting/anonymizing personal information; (f) Secure data disposal mechanisms; (g) Periodic external audits on confidentiality; (h) Disciplinary actions for violations. (vi) The following measures are optional for low-risk AI systems: (a) Key management practices (recommended); (b) Removable media encryption (as needed); (c) Multi-factor authentication (recommended); (d) Physical controls (based on data sensitivity); (e) Personal data redaction (as applicable); (f) Secure disposal mechanisms (recommended). (8) All AI system developers, operators, and users shall implement the following minimum mitigation actions upon becoming aware of an incident: (i) Assess the incident severity based on IAIC guidelines; (ii) Contain the incident through isolation, disabling functions, or other measures; (iii) Investigate the root cause of the incident; (iv) Remediate the incident through updates, security enhancements, or personnel training; (v) Communicate incident details and mitigation actions to impacted parties; (vi) Review and improve internal incident response procedures. (9) For AI systems exempted from certification under sub-section (3) of Section 11, the following guidelines shall apply regarding incident reporting and response protocols: (i) Voluntary Incident Reporting: Developers, operators and users of exempted AI systems are encouraged, but not mandatorily required, to establish mechanisms for incident reporting related to such systems. (ii) Focus on High/Critical Incident: In cases where incident reporting mechanisms are established, the focus shall be on reporting high severity or critical incidents that pose a clear potential for harm or adverse impact. (iii) Reasonable Timelines: For high/critical incidents involving exempted AI systems, developers shall report such incidents to the IAIC within a reasonable timeline of 14-30 days from becoming aware of the incident. (iv) Incident Description: Incident reports for exempted AI systems shall primarily include a description of the incident, its perceived severity and impact, and details about the AI system itself (type, use case, risk classification). (v) Confidentiality Measures: Developers of exempted AI systems shall implement confidentiality measures for incident data that are proportionate to the data sensitivity and potential risks involved. (vi) Coordinated Disclosure: The IAIC shall establish coordinated disclosure programs to facilitate responsible reporting and remediation of vulnerabilities or incidents related to exempted AI systems. (vii) Knowledge Sharing: The IAIC shall maintain a knowledge base of reported incidents involving exempted AI systems and share anonymized information to promote learning and improve incident response practices. (10) The IAIC shall provide support and resources to AI entities on request for effective incident mitigation, prioritizing high-risk AI incidents. (11) The IAIC shall have the power to audit AI entities and impose penalties for non-compliance with this Section as per the provisions of this Act. Related Indian AI Regulation Sources Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules) April 2011 Reporting for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by market participants January 2019 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021) February 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules 2023) April 2023 Digital Personal Data Protection Act, 2023 (DPDPA) August 2023 Draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) January 2025

