



Abhivardhan

Artificial Intelligence Governance using Complex Adaptivity: Feedback Report, First Edition, 2024



This feedback report was developed to offer inputs on a paper published by the Economic Advisory Council to the Prime Minister (EAC-PM) of India, entitled “A Complex Adaptive System Framework to Regulate Artificial Intelligence” and authored by Sanjeev Sanyal, Pranav Sharma, and Chirag Dudani.


You can access the complete feedback report here.

This report provides a detailed examination of the EAC-PM paper "A Complex Adaptive System Framework to Regulate Artificial Intelligence." It delves into the core principles proposed by the authors, including instituting guardrails and partitions, ensuring human control, promoting transparency and explainability, establishing distinct accountability, and creating a specialized, agile regulatory body.

Through a series of infographics and concise explanations, the report breaks down the intricate concepts of complex adaptivity and its application to AI governance. It offers a fresh perspective on viewing AI systems as complex adaptive systems, highlighting the challenges of traditional regulatory approaches and the need for adaptive, responsive frameworks.





Key Highlights:

  1. In-depth analysis of the EAC-PM paper's recommendations for AI regulation.

  2. Practical feedback and policy suggestions for each proposed regulatory principle.

  3. Insights into the unique characteristics of AI systems as complex adaptive systems.

  4. Exploration of financial markets as a real-world example of complex adaptive systems.

  5. Recommendations for a balanced approach fostering innovation and responsible AI development.

Whether you are a policymaker, researcher, industry professional, or simply interested in the future of AI governance, this report is a valuable resource for understanding the complexities involved and the potential solutions offered by a complex adaptive systems approach.


Download the "Artificial Intelligence Governance using Complex Adaptivity: Feedback Report" today and gain a comprehensive understanding of this critical topic. Engage with the thought-provoking insights and contribute to the ongoing dialogue on responsible AI development. Stay informed, stay ahead in the era of AI governance.


 

The paper proposes a novel framework to regulate Artificial Intelligence (AI) by viewing it through the lens of a Complex Adaptive System (CAS). The authors argue that traditional regulatory approaches based on ex-ante impact analysis are inadequate for governing the complex, non-linear, and unpredictable nature of AI systems.

The paper conducts a comparative analysis of existing AI regulatory approaches across the United States, United Kingdom, European Union, China, and the United Nations, and highlights the gaps and limitations in these frameworks when dealing with AI's CAS characteristics.

To effectively regulate AI, the paper recommends a CAS-inspired framework based on five guiding principles:

  1. Instituting Guardrails and Partitions: Implement clear boundary conditions to restrict undesirable AI behaviours. Create "partitions" or barriers between distinct AI systems to prevent cascading systemic failures, akin to firebreaks in forests.

  2. Ensuring Human Control via Overrides and Authorizations: Mandate manual override mechanisms for human intervention when AI systems behave erratically. Implement multi-factor authentication protocols requiring consensus from multiple credentialed humans before executing high-risk AI actions (a minimal illustrative sketch follows this list).

  3. Transparency and Explainability: Promote open licensing of core AI algorithms for external audits. Mandate standardized "AI factsheets" detailing system development, training data, and known limitations. Conduct periodic mandatory audits for transparency and explainability.

  4. Distinct Accountability: Establish predefined liability protocols and standardized incident reporting to ensure accountability for AI-related malfunctions or unintended outcomes. Implement traceability mechanisms throughout the AI technology stack.

  5. Specialized, Agile Regulatory Body: Create a dedicated regulatory authority with a broad mandate, expertise, and agility to respond swiftly to emerging AI challenges. Maintain a national registry of AI algorithms for compliance and a repository of unforeseen events.
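
To make the human-control principle more tangible, the sketch below shows, in Python, how a guardrail ceiling (Principle 1), a multi-party human authorization check, and a manual override veto (Principle 2) might compose before a high-risk AI action is executed. This is a minimal illustration only: the class names, the risk ceiling, and the two-approver threshold are assumptions made for the sake of example, and the EAC-PM paper does not prescribe any particular implementation.

```python
# Purely illustrative sketch of Principles 1 and 2; all names, the risk
# ceiling, and the approval threshold are assumptions, not taken from the
# EAC-PM paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Approval:
    """A sign-off from one credentialed human reviewer."""
    reviewer_id: str
    approved: bool


@dataclass
class HighRiskAction:
    """A proposed AI action that must clear guardrails and human consensus."""
    description: str
    risk_score: float                                 # 0.0 (benign) to 1.0 (severe)
    approvals: List[Approval] = field(default_factory=list)


RISK_CEILING = 0.9        # guardrail: a boundary condition no approval can move
REQUIRED_APPROVALS = 2    # consensus: distinct credentialed humans needed


def authorize(action: HighRiskAction, manual_override: bool = False) -> bool:
    """Return True only if the proposed action may be executed."""
    # A manual override lets a human halt the action outright when the
    # system behaves erratically; it never relaxes the guardrail.
    if manual_override:
        return False
    # Guardrail check: reject anything beyond the boundary condition.
    if action.risk_score >= RISK_CEILING:
        return False
    # Consensus check: count distinct reviewers who approved.
    approvers = {a.reviewer_id for a in action.approvals if a.approved}
    return len(approvers) >= REQUIRED_APPROVALS


if __name__ == "__main__":
    action = HighRiskAction("Deploy updated credit-scoring model", risk_score=0.6)
    action.approvals.append(Approval("reviewer-a", approved=True))
    print(authorize(action))   # False: only one approval so far
    action.approvals.append(Approval("reviewer-b", approved=True))
    print(authorize(action))   # True: under the guardrail, consensus reached
```

In practice, such checks would likely sit below the application layer, closer to the model-serving infrastructure, so that no amount of downstream consensus can relax the boundary condition at runtime.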

The paper draws insights from the regulation of financial markets, which exhibit CAS characteristics with emergent behaviours arising from diverse interacting agents. It highlights regulatory mechanisms like dedicated oversight bodies, transparency requirements, control chokepoints, and personal accountability measures that can inform AI governance.
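
On the technical side, the transparency principle and the proposed national registry both presuppose a standardized, machine-readable description of each AI system. The sketch below shows one hypothetical, minimal form such an "AI factsheet" could take; the field names and the JSON serialisation are illustrative assumptions, since the paper does not define a schema.

```python
# Hypothetical, minimal "AI factsheet" (Principle 3) that a national registry
# of AI algorithms (Principle 5) could store. Field names are assumptions.
from dataclasses import dataclass, asdict, field
from typing import List
import json


@dataclass
class AIFactsheet:
    system_name: str
    developer: str
    version: str
    intended_use: str
    training_data_summary: str                     # provenance and scope of training data
    known_limitations: List[str] = field(default_factory=list)
    last_audit_date: str = ""                      # ISO date of most recent mandatory audit


if __name__ == "__main__":
    factsheet = AIFactsheet(
        system_name="credit-scoring-model",
        developer="Example Lender Ltd.",
        version="2.3.1",
        intended_use="Retail credit risk scoring",
        training_data_summary="Anonymised loan records, 2015-2023, India only",
        known_limitations=["Not validated for thin-file applicants"],
        last_audit_date="2024-01-15",
    )
    # Serialise to JSON so a registry, auditor, or incident report can
    # reference a specific system name and version programmatically.
    print(json.dumps(asdict(factsheet), indent=2))
```

Keying incident reports and audit findings to a record of this kind would also support the paper's traceability and distinct-accountability recommendations.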

