
Section 16 – Guidance Principles for AI-related Corporate Governance

(1)   Entities involved in the development, deployment, and use of artificial intelligence (AI) techniques, tools or methods across their governance structures and decision-making processes must adhere to the following guiding principles as per the National Artificial Intelligence Ethics Code under Section 13:

(i)    Accountability and Responsibility:

(a)    Clear accountability for decisions and actions involving the use of AI techniques must be maintained within the organisation by the appropriate leadership or management.

(b)   Robust governance frameworks must be established to assign roles, responsibilities and oversight mechanisms related to the development, deployment and monitoring of AI systems used for corporate governance purposes.

 

(ii)   Transparency and Explainability:

(a)    AI systems used to aid corporate decision-making must employ transparent models and techniques that enable interpretability of their underlying logic, data inputs and decision rationales.

(b)   Comprehensive documentation must be maintained on the AI system’s architecture, training data, performance metrics and potential limitations or biases.

(c)    Entities must establish internal policies, directives and guidelines that enable impacted stakeholders to access explanations of how AI-driven decisions were made and what factors influenced those decisions.

 

(iii) Human Agency and Oversight:

(a)   The use of AI techniques in corporate governance must be subject to meaningful human control, oversight and the ability to intervene in or override AI system outputs when necessary.

(b)   Appropriate human review mechanisms must be implemented, particularly for high-stakes decisions impacting all relevant stakeholders, including employees, shareholders, customers, and the public interest.

(c)   Company or organisation policies must clearly define the roles and responsibilities of humans versus AI systems in governance and decision-making processes.

 

(iv)  Intellectual Property and Ownership Considerations:

(a)    Corporate entities should establish clear policies and processes for determining ownership, attribution, and intellectual property rights over AI-generated content, inventions, and innovations.

(b)   These policies should recognize and protect the contributions of human creators, inventors, and developers involved in the development and deployment of AI systems.

(c)    Corporations should balance the need for incentivizing innovation through intellectual property protections with the principles of transparency, accountability, and responsible use of AI technologies.

 

(v)   Encouraging Open-Source Adoption:

(a)    Companies and organisations are encouraged to leverage open-source software (OSS) and open standards in the development and deployment of AI systems, where appropriate.

(b)   The use of OSS can promote transparency, collaboration, and innovation in the AI ecosystem while ensuring compliance with applicable laws, regulations, and ethical principles outlined in Section 13.

(c)    Companies and organisations should contribute to and participate in open-source AI communities, fostering knowledge sharing and collective advancement of AI technologies.

 

(2)   For the purposes of these Guidance Principles, “artificial intelligence (AI) techniques, tools or methods across governance structures and decision-making processes” shall refer to:

(i)     AI systems that replicate or emulate human decision-making abilities through autonomy, perception, reasoning, interaction, adaptation and creativity, as evaluated under the Anthropomorphism-Based Concept Classification (ABCC) described in sub-section (5) of Section 4;

(ii)   AI systems whose development, deployment and utilisation within corporate governance structures necessitates the evaluation and mitigation of potential ethical risks and implications, in accordance with the Ethics-Based Concept Classification (EBCC) under sub-section (3) of Section 4;

(iii) AI systems that may impact individual rights such as privacy, due process, non-discrimination as well as collective rights, requiring a rights-based assessment as per the Phenomena-Based Concept Classification (PBCC) outlined in sub-section (4) of Section 4;

(iv)  General Purpose AI Applications with Multiple Stable Use Cases (GPAIS) that can reliably operate across various governance functions as per the technical classification criteria specified in sub-section (2) of Section 5;

(v)   Specific Purpose AI Applications (SPAI) designed for specialized governance use cases based on the factors described in sub-section (4) of Section 5;

(vi)  AI systems classified as high-risk under sub-section (4) of Section 7 due to their potential for widespread impact, lack of opt-out feasibility, vulnerability factors or irreversible consequences related to corporate governance processes;

(vii)  AI systems classified as medium-risk under sub-section (3) of Section 7 that require robust governance frameworks focused on transparency, explainability and accountability aspects;

(viii)  AI systems classified as narrow-risk under sub-section (2) of Section 7, where governance approaches should account for their technical limitations and vulnerabilities.

 

(3)   For AI systems exempted from certification under Section 11(3), companies and organisations may adopt a lean governance approach, focusing on:

(i)    Establishing basic incident reporting and response protocols as per Section 19, without the stringent requirements applicable to high-risk AI systems.

(ii)   Maintaining documentation and ensuring interpretability of the AI systems to the extent feasible, given their limited risk profile.

(iii) Conducting periodic risk assessments and implementing corrective measures as necessary, commensurate with the AI system’s potential impact.

 

(4)   The IAIC may mandate the application of the guidance principles outlined in this section for certain high-risk sectors, high-risk use cases as per Section 6, or types of entities, where the potential risks associated with the AI system are deemed significant.

(5)   The guidance principles shall be reviewed and updated periodically to reflect advancements in AI technologies, evolving best practices, and changes in the legal and regulatory landscape.

