Section 13 – National Artificial Intelligence Ethics Code
(1) A National Artificial Intelligence Ethics Code (NAIEC) shall be established to provide a set of guiding moral and ethical principles for the responsible development, deployment, and utilisation of artificial intelligence technologies.
(2) The NAIEC shall be based on the following core ethical principles:
(i) AI systems must respect human dignity, well-being, and fundamental rights, including the rights to privacy, non-discrimination and due process.
(ii) AI systems should be designed, developed, and deployed in a fair and non-discriminatory manner, ensuring equal treatment and opportunities for all individuals, regardless of their personal characteristics or protected attributes, including caste and class.
(iii) AI systems should be transparent in their operation, enabling users and affected individuals to understand the underlying logic, decision-making processes, and potential implications of the system’s outputs. AI systems should be able to provide clear and understandable explanations for their decisions and recommendations, in accordance with the guidance provided in sub-section (4) on intellectual property and ownership considerations related to AI-generated content.
(iv) AI systems should be developed and deployed with clear lines of accountability and responsibility, ensuring that appropriate measures are in place to address potential harms, in alignment with the principles outlined in sub-section (3) on the use of open-source software for promoting transparency and collaboration.
(v) AI systems should be designed and operated with a focus on safety and robustness, minimizing the potential for harm, unintended consequences, or adverse impacts on individuals, society, or the environment. Rigorous testing, validation, and monitoring processes shall be implemented.
(vi) AI systems should foster human agency, oversight, and the ability for humans to make informed decisions, while respecting the principles of human autonomy and self-determination. Appropriate human control measures should be implemented.
(vii) AI systems should be developed and deployed with due consideration for their ethical and socio-economic implications, promoting the common good, public interest, and the well-being of society. Potential impacts on employment, skills, and the future of work should be assessed and addressed.
(viii) AI systems that are developed and deployed using frugal prompt engineering practices should optimize efficiency, cost-effectiveness, and resource utilisation while maintaining high standards of performance, safety, and ethical compliance in alignment with the principles outlined in sub-section (5). These practices should include the use of concise and well-structured prompts, transfer learning, data-efficient techniques, and model compression, among others, to reduce potential risks, unintended consequences, and resource burdens associated with AI development and deployment.
(3) The Ethics Code shall encourage the use of open-source software (OSS) in the development of narrow and medium-risk AI systems to promote transparency, collaboration, and innovation, while ensuring compliance with applicable sector-specific and sector-neutral laws and regulations. To this end:
(i) The use of OSS shall be guided by a clear understanding of the open source development model, its scope, constraints, and the varying implementation approaches across different socio-economic and organisational contexts.
(ii) AI developers shall be encouraged to release non-sensitive components of their AI systems under OSS licenses, fostering transparency and enabling public scrutiny, while also ensuring that sensitive components and intellectual property are adequately protected.
(iii) AI developers using OSS shall ensure that their systems adhere to the same standards of fairness, accountability, and transparency as proprietary systems, and shall implement appropriate governance, quality assurance, and risk management processes.
(4) The Ethics Code shall provide guidance on intellectual property and ownership considerations related to AI-generated content. To this end:
(i) Specific considerations shall include recognizing the role of human involvement in developing and deploying the AI systems, establishing guidelines on copyrightability and patentability of AI-generated works and inventions, addressing scenarios where AI builds upon existing protected works, safeguarding trade secrets and data privacy, balancing incentives for AI innovation with disclosure and access principles, and continuously updating policies as AI capabilities evolve.
(ii) The Ethics Code shall encourage transparency and responsible practices in managing intellectual property aspects of AI-generated content across domains such as text, images, audio, video and others.
(iii) In examining IP and ownership issues related to AI-generated content, the Ethics Code shall be guided by the conceptual classification methods outlined in Section 4, particularly the Anthropomorphism-Based Concept Classification, to evaluate scenarios where AI replicates or emulates human creativity and invention.
(iv) The technical classification methods described in Section 5, such as the scale, inherent purpose, technical features, and limitations of the AI system, shall inform the assessment of IP and ownership considerations for AI-generated content.
(v) The commercial classification factors specified in sub-section (1) of Section 6, including the user base, market influence, data integration, and revenue generation of the AI system, shall also be taken into account when determining IP and ownership rights over AI-generated content.
(5) The Ethics Code shall provide guidance on frugal prompt engineering practices for the development of AI systems, ensuring efficiency, accessibility, and the equitable advancement of artificial intelligence, as follows:
(i) Encourage the use of concise and well-structured prompts that specify desired outputs and constraints, minimizing unnecessary complexity in AI interactions;
(ii) Recommend the adoption of transfer learning and pre-trained models to reduce the need for extensive fine-tuning, thereby conserving computational resources;
(iii) Promote the use of data-efficient techniques, such as few-shot learning or active learning, to decrease the volume of training data required for effective model performance;
(iv) Suggest the implementation of early stopping mechanisms to prevent overfitting and enhance model generalisation, ensuring robust performance with minimal training;
(v) Advocate for the use of techniques such as model compression, quantisation, or distillation to reduce computational complexity and resource demands, making AI development more sustainable;
(vi) Require the documentation and maintenance of records on prompt engineering practices, detailing the techniques used, performance metrics achieved, and any trade-offs between efficiency and effectiveness, to ensure transparency and accountability;
(vii) Declare that prompt engineering, as a fundamental practice for optimizing AI systems, constitutes a global commons and a shared resource for the benefit of all humanity, and as such:
(a) Shall not be monetized, commercialized, or subject to proprietary claims, ensuring that the knowledge and techniques of prompt engineering remain freely accessible to all;
(b) Shall be treated as a universal public good, akin to principles established in international agreements governing shared resources, to foster global collaboration and innovation in AI development and education.
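The Code is technology-neutral, but the practices named in clauses (i), (iii), and (vi) of sub-section (5) can be illustrated with a short, non-normative sketch. All names here (`build_frugal_prompt`, `PromptRecord`) and the sample metric values are hypothetical and carry no legal weight:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PromptRecord:
    """A minimal record of a prompt engineering practice, in the spirit of
    sub-section (5)(vi): the technique used, the performance metrics
    achieved, and any efficiency/effectiveness trade-offs."""
    technique: str
    performance_metrics: Dict[str, float]
    trade_offs: str


def build_frugal_prompt(task: str,
                        constraints: List[str],
                        examples: List[Dict[str, str]]) -> str:
    """Compose a concise, well-structured prompt (sub-section (5)(i)):
    the desired output and its constraints are stated explicitly, and a
    small number of in-context examples (few-shot learning, sub-section
    (5)(iii)) stand in for large volumes of fine-tuning data."""
    lines = [f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    for ex in examples:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
    lines.append("Input:")  # the live input is appended at inference time
    return "\n".join(lines)


prompt = build_frugal_prompt(
    task="Classify the sentiment of the sentence as POSITIVE or NEGATIVE.",
    constraints=["Answer with exactly one word.",
                 "Do not explain your reasoning."],
    examples=[{"input": "The service was excellent.",
               "output": "POSITIVE"}],
)

# Hypothetical documentation record per sub-section (5)(vi).
record = PromptRecord(
    technique="few-shot prompting with explicit output constraints",
    performance_metrics={"accuracy": 0.91},  # illustrative value only
    trade_offs="one in-context example per class instead of "
               "task-specific fine-tuning",
)
```

A compact prompt of this shape keeps token usage low while making the output contract explicit, and the accompanying record is the kind of artefact sub-section (5)(vi) asks developers to retain for transparency.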
(6) The Ethics Code shall provide guidance on ensuring fair access rights for all stakeholders involved in the AI value and supply chain, including:
(i) All stakeholders should have fair and transparent access to datasets necessary for training and developing AI systems. This includes promoting equitable data-sharing practices that ensure smaller entities or research institutions are not unfairly disadvantaged in accessing critical datasets.
(ii) Ethical use of computational resources should be promoted by ensuring that all stakeholders have transparent access to these resources. Special consideration should be given to smaller entities or research institutions that may require preferential access or pricing models to support innovation.
(iii) Ethical guidelines should ensure that ownership rights over trained models, derived outputs, and intellectual property are clearly defined and respected. Stakeholders involved in the development process must have a clear understanding of their rights and obligations regarding the usage and commercialization of AI technologies.
(iv) The benefits derived from AI technologies should be distributed equitably, ensuring that smaller players contributing critical resources, such as proprietary datasets or specialised algorithms, are fairly compensated.
(7) Adherence to the NAIEC shall be voluntary for all AI systems, including those exempted under sub-section (3) of Section 11.
(8) Strategic Sector Safeguards: AI systems deployed in strategic sectors, particularly those classified as high-risk under Section 7, shall adhere to heightened ethical standards that prioritize:
(i) Safety Imperative: Developers and operators of AI systems shall design, implement, and maintain robust safety measures that minimize potential harm to individuals, property, society, and the environment throughout the system's lifecycle;
(ii) Security by Design: AI systems shall incorporate security measures from the earliest stages of development to protect against unauthorized access, manipulation, or misuse, with particular emphasis on safeguarding data integrity and system confidentiality;
(iii) Reliability and Resilience: All AI systems shall demonstrate consistent, accurate, and dependable performance through rigorous testing, validation, and continuous monitoring, with enhanced requirements for systems in critical infrastructure or essential services;
(iv) Transparent Operations: AI systems shall implement mechanisms that enable appropriate stakeholder understanding of underlying algorithms, data sources, and decision-making processes, balancing disclosure requirements with intellectual property protections;
(v) Accountable Governance: Clear lines of responsibility shall be established for AI system outcomes, with specified channels for redress and remediation in cases of adverse impacts, particularly for systems affecting fundamental rights or public welfare;
(vi) Legitimate Purpose Alignment: AI systems shall be developed and deployed exclusively for purposes that comply with the legitimate uses framework established under Section 7 of the Digital Personal Data Protection Act, 2023 and shall not be repurposed for unauthorized applications without appropriate review.