

Communications Team

Deciphering Australia’s Safe and Responsible AI Proposal



This insight by Visual Legal Analytica is a public submission in response to the Australian Government's Proposals Paper on introducing mandatory guardrails for AI in high-risk settings, published in September 2024. It is authored by Mr Abhivardhan, our Founder, and submitted on behalf of Indic Pacific Legal Research LLP.


 

Key Definitions Used in the Paper


The key definitions provided in the Australian Government's September 2024 proposals paper on high-risk AI regulation reflect a comprehensive and nuanced approach to AI governance:


Broad Scope and Lifecycle Perspective


The definitions cover a wide range of AI systems and models, from narrow AI focused on specific tasks to general-purpose AI (GPAI) capable of being adapted for various purposes. They also consider the entire AI lifecycle, from inception and design to deployment, use, and eventual decommissioning. This broad scope ensures the regulations can be applied to diverse AI applications now and in the future.


Differentiation between AI Models and Systems


The proposal makes a clear distinction between AI models, which are the underlying mathematical engines, and AI systems, which are the ensembles of components designed for practical use. This allows for more targeted governance, recognizing that raw models and end-user applications may require different regulatory approaches.


Emphasis on Supply Chain and Roles


The definitions highlight the complex network of actors involved in the AI supply chain, including developers who create the models and systems, deployers who integrate them into products and services, and end users who interact with or are impacted by the deployed AI. By delineating these roles, the regulations can assign appropriate responsibilities and accountability at each stage.


Recognition of Autonomy and Adaptiveness


The varying levels of autonomy and adaptiveness of AI systems after deployment are acknowledged. This reflects an understanding that more autonomous and self-learning AI may pose different risks and require additional oversight compared to static, narrow AI.


Inclusion of Generative AI


Generative AI models, which can create new content similar to their training data, are specifically defined. Given the rapid advancement and potential impacts of generative AI, such as in generating realistic text, images, and other media, this inclusion demonstrates a forward-looking approach to AI governance.


Overall, the key definitions show that Australia is taking a nuanced, lifecycle-based approach to regulating AI, with a focus on the supply chain, different levels of AI sophistication, and emerging areas like generative AI.


Designated AI Risks in the Proposal


In the "AI amplifies and creates new risks" section of the paper, the Australian Government has correctly identified several key risks that AI systems can amplify or create:


Amplification of Existing Risks like Bias


The report highlights that AI systems can embed human biases and create new algorithmic biases, leading to systemic impacts on groups based on protected attributes like race or gender. This bias may arise from inaccurate, insufficient, unrepresentative or outdated training data, or from the design and deployment of the AI system itself. Identifying the risk of AI amplifying bias is important, as there are already real-world examples of this occurring, such as in AI resume screening software and facial recognition systems.


New Harms at Multiple Levels


The government astutely recognises that AI misuse and failure can cause harm to individuals (e.g. injury, privacy breaches, exclusion), groups (e.g. discrimination), organisations (e.g. reputational damage, cyber attacks), and society at large (e.g. growing inequality, mis/disinformation, erosion of social cohesion). This multilevel perspective acknowledges the wide-ranging negative impacts AI risks can have.


National Security Threats


The report identifies how malicious actors can leverage AI to threaten Australia's national security through accelerated information manipulation, AI-enabled disinformation campaigns to erode public trust, and lowering barriers for unsophisticated actors to engage in malicious cyber activity. Given the growing use of AI for influence operations and cyberattacks, calling out these risks is prudent and proactive.


Some Concrete Examples of Realised Harms


Importantly, the government provides specific instances where the risks of AI have already resulted in real-world harms, such as AI screening tools discriminating against certain ethnicities and genders in hiring. These concrete examples demonstrate that the identified risks are not just theoretical but are actively materialising.



Response to Questions for Consultation on High-Risk AI Classification


The Australian Government has outlined the following Questions for Consultation on its proposed approach to classifying high-risk artificial intelligence systems. In this insight, we offer a response to some of these questions based on how the Australian Government has designated its regulatory and governance priorities:


  1. Do the proposed principles adequately capture high-risk AI? Are there any principles we should add or remove? Please identify any:

    1. low-risk use cases that are unintentionally captured

    2. categories of uses that should be treated separately, such as uses for defence or national security purposes.

  2. Do you have any suggestions for how the principles could better capture harms to First Nations people, communities and Country?


  3. Do the proposed principles, supported by examples, give enough clarity and certainty on high-risk AI settings and high-risk AI models? Is a more defined approach, with a list of illustrative uses, needed?

    1. If you prefer a list-based approach (similar to the EU and Canada), what use cases should we include? How can this list capture emerging uses of AI?

    2. If you prefer a principles-based approach, what should we address in guidance to give the greatest clarity?


  4. Are there high-risk use cases that government should consider banning in its regulatory response (for example, where there is an unacceptable level of risk)? If so, how should we define these?


  5. Are the proposed principles flexible enough to capture new and emerging forms of high-risk AI, such as general-purpose AI (GPAI)?

  6. Should mandatory guardrails apply to all GPAI models?

  7. What are suitable indicators for defining GPAI models as high-risk? For example, is it enough to define GPAI as high-risk against the principles, or should it be based on technical capability such as FLOPS (e.g. 10^25 or 10^26 threshold), advice from a scientific panel, government or other indicators?


Response to Questions 1 & 3


The paper proposes the following definition of General Purpose Artificial Intelligence:

An AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems

Our feedback at Indic Pacific Legal Research LLP is that the definition may be read broadly enough to cover "potential risks". However, the definition should not be used as a default legal approach, for a few reasons.


Designating an intended purpose for any AI system also helps governments reflect on whether proposed use cases are substandard or advanced. Whether an AI system is substandard, or advanced enough to count as GPAI, depends entirely on how its intended purpose is established against relevant technical parameters. While the definition can be kept as it is, we recommend that the Government reconsider using the definition in a way that bypasses the requirement to determine the intended purpose of a GPAI before attributing risk to general-purpose AI systems. Since the consultation question asks whether low-risk use cases may be unintentionally captured, it would be reasonable to determine the intended purpose of a GPAI before attributing a high-risk designation to it. This would help the regulator avoid unintentionally capturing low-risk use cases.


The EU and Canada approaches highlighted in the paper's tables, which recognise artificial intelligence use cases in a containerised, list-based manner, could be adopted. However, instead of adopting the EU and Canada approach, we recommend guidance-based measures, which are better suited to keeping pace with the international regulatory landscape. The larger rationale of our feedback is that listing use cases can be achieved by creating a repository of recognised AI use cases, but the substandard or advanced nature of those use cases, and their outcomes and impacts, require effective documentation over time. Done well, this creates some transparency; done poorly, it may lead to low-risk use cases being captured only in hindsight.


Response to Question 5


The principles need guidance points for further expansion, since the examples offered for specific principles do not give a convincing or clear picture of the principles' scope. However, recognising the severity and extent of an AI risk is a substantial way to define the threshold for the purposive application of these principles, provided it is done in a transparent fashion.


Response to Question 7


The FLOPS criterion for designating GPAI as high-risk is deeply counterproductive, and it would be necessary to involve human expertise and technical indicators to designate any GPAI as high-risk. A compute threshold on its own is a blunt instrument, as the sketch below shows.
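The short sketch below is our own illustration, not drawn from the proposals paper: it shows how mechanical a compute-only criterion is, with two hypothetical models on either side of a 10^25 FLOPS line being classified entirely differently regardless of where and how they are deployed. The threshold value mirrors the figure floated in consultation question 7; the function name and model figures are assumptions for illustration only.

```python
# Illustrative sketch only: a compute-only test for "high-risk" GPAI.
# The 1e25 threshold mirrors the figure in consultation question 7;
# the function name and example values are hypothetical.

FLOPS_THRESHOLD = 1e25  # training-compute threshold from the consultation question

def flops_only_high_risk(training_flops: float) -> bool:
    """Designate a GPAI model as high-risk purely on training compute."""
    return training_flops >= FLOPS_THRESHOLD

# A model just under the threshold is never "high-risk" under this test,
# even if deployed in a critical setting; a model just over it always is,
# even if confined to low-stakes uses.
print(flops_only_high_risk(9.9e24))  # False
print(flops_only_high_risk(1.1e25))  # True
```

A threshold of this kind captures scale, but nothing about intended purpose, deployment context, or the severity and reversibility of potential harms, which is why we consider human expertise and broader technical indicators indispensable.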


By way of example, we have proposed aiact.in, a privately drafted proposal for India's first artificial intelligence regulation, submitted to the Government of India for public feedback.


You may look at the following parts of aiact.in Version 3 for reference:


 

Section 5 – Technical Methods of Classification


(1)   These methods as designated in clause (b) of sub-section (1) of Section 3 classify artificial intelligence technologies subject to their scale, inherent purpose, technical features and technical limitations such as –

(i)    General Purpose Artificial Intelligence Applications with Multiple Stable Use Cases (GPAIS) as described in sub-section (2);

(ii)   General Purpose Artificial Intelligence Applications with Multiple Short-Run or Unclear Use Cases (GPAIU) as described in sub-section (3);

(iii) Specific-Purpose Artificial Intelligence Applications with One or More Associated Standalone Use Cases or Test Cases (SPAI) as described in sub-section (4);

 

(2)   General Purpose Artificial Intelligence Systems with Multiple Stable Use Cases (GPAIS) are classified based on a technical method that evaluates the following factors in accordance with relevant sector-specific and sector-neutral industrial standards:

(i)    Scale: The ability to operate effectively and consistently across a wide range of domains, handling large volumes of data and users.

(ii)   Inherent Purpose: The capacity to be adapted and applied to multiple well-defined use cases within and across sectors.

(iii) Technical Features: Robust and flexible architectures that enable reliable performance on diverse tasks and requirements.

(iv)  Technical Limitations: Potential challenges in maintaining consistent performance and compliance with sector-specific regulations across the full scope of intended use cases.


Section 7 – Risk-centric Methods of Classification


(4)   High risk AI systems shall be designated based on a risk-centric method that examines their outcome and impact-based risks, considering the following factors:

(i)    Widespread utilization or deployment across critical sectors, domains, and large user groups, where disruptions or failures could have severe consequences;

(ii)   Significant potential for severe harm, injury, discrimination, or adverse societal impacts affecting a large number of individuals, communities, or the public interest;

(iii) Lack of feasible options for data principals or end-users to opt-out of, or exercise meaningful control over, the outcomes or decisions produced by the system;

(iv)  High vulnerability of data principals, end-users or affected entities due to inherent constraints such as information asymmetry, power imbalances, or lack of agency to comprehend and mitigate the risks associated with the system;

(v)   Outcomes or decisions produced by the system are extremely difficult, impractical or impossible to reverse, rectify or remediate in most instances, leading to potentially irreversible consequences.

(vi)  The high-risk designation shall apply irrespective of the AI system’s scale of operation, inherent purpose as determined by conceptual classifications, technical architecture, or other limitations, if the risk factors outlined above are present.

 

Illustration

 

An AI system used to control critical infrastructure like a power grid. Regardless of the system’s specific scale, purpose, features or limitations, any failure or misuse could have severe societal consequences, warranting a high-risk classification.


 

The approaches outlined in Section 5 and Section 7(4) of the proposed Indian AI regulation (aiact.in Version 3) provide a helpful framework for classifying and regulating AI systems based on their technical characteristics and risk profile. Here's why these approaches may be beneficial:


Section 5 - Technical Methods of Classification:

  1. Categorizing AI systems as GPAIS (General Purpose AI with stable use cases), GPAIU (General Purpose AI with unclear use cases), and SPAI (Specific Purpose AI) allows for tailored regulatory approaches based on the system's inherent capabilities and intended applications.

  2. Evaluating factors like scale, inherent purpose, technical features, and limitations helps assess an AI system's potential impact and reach, informing appropriate oversight measures.

  3. Aligning classification with relevant industrial standards promotes consistency and interoperability across sectors.

  4. Distinguishing between stable and unclear use cases recognizes the evolving nature of AI and the need for adaptable regulatory frameworks.


Section 7(4) - Risk-centric Methods of Classification:

  1. Focusing on outcome and impact-based risks ensures that the most potentially harmful AI systems are subject to stringent oversight, regardless of their technical characteristics.

  2. Considering factors like widespread deployment, potential for severe harm, lack of user control, and vulnerability of affected individuals helps identify high-risk applications that warrant additional safeguards.

  3. Recognizing the difficulty of reversing or remediating adverse outcomes emphasizes the need for proactive risk mitigation measures.

  4. Applying the high-risk designation irrespective of scale, purpose, or technical limitations acknowledges that even narrow or limited AI systems can pose significant risks in certain contexts.

  5. The illustrative example of an AI system controlling critical infrastructure highlights the importance of a risk-based approach that prioritizes societal consequences over technical specifications.


However, it's important to note that implementing such a framework would require significant technical expertise, ongoing monitoring, and coordination among regulators and stakeholders. Clear guidelines and standards would need to be developed to ensure consistent application and enforcement.
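For readers who prefer a concrete rendering, the sketch below is our own simplification, not text from aiact.in or the proposals paper, of how a Section 7(4)-style, outcome- and impact-based designation might be operationalised. The factor names are hypothetical shorthand for the statutory factors quoted above, and we read "if the risk factors outlined above are present" as meaning that any one factor suffices, which is only one possible reading.

```python
# Illustrative sketch of a risk-centric, outcome/impact-based designation
# loosely modelled on Section 7(4) of aiact.in Version 3. Names and the
# "any factor suffices" reading are our assumptions.

from dataclasses import dataclass

@dataclass
class RiskProfile:
    critical_sector_deployment: bool   # factor (i): widespread use in critical sectors
    severe_harm_potential: bool        # factor (ii): severe harm or adverse societal impact
    no_meaningful_opt_out: bool        # factor (iii): no feasible opt-out or user control
    vulnerable_affected_parties: bool  # factor (iv): information asymmetry, power imbalance
    irreversible_outcomes: bool        # factor (v): outcomes hard or impossible to remediate

def is_high_risk(profile: RiskProfile) -> bool:
    """High-risk if any outcome/impact factor is present, irrespective of
    scale, purpose or technical architecture (factor (vi))."""
    return any([
        profile.critical_sector_deployment,
        profile.severe_harm_potential,
        profile.no_meaningful_opt_out,
        profile.vulnerable_affected_parties,
        profile.irreversible_outcomes,
    ])

# The power-grid illustration from Section 7: critical-sector deployment and
# irreversible consequences alone warrant the high-risk designation.
power_grid_controller = RiskProfile(
    critical_sector_deployment=True,
    severe_harm_potential=True,
    no_meaningful_opt_out=True,
    vulnerable_affected_parties=False,
    irreversible_outcomes=True,
)
print(is_high_risk(power_grid_controller))  # True
```

The point of the sketch is simply that the designation turns on outcomes and impacts rather than on technical specifications such as parameter count or training compute.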


We have also responded to some of Consultation Questions 8 to 12, set out below:


  • Do the proposed mandatory guardrails appropriately mitigate the risks of AI used in high-risk settings? Are there any guardrails that we should add or remove?

  • How can the guardrails incorporate First Nations knowledge and cultural protocols to ensure AI systems are culturally appropriate and preserve ICIP?

  • Do the proposed mandatory guardrails distribute responsibility across the AI supply chain and throughout the AI lifecycle appropriately? For example, are the requirements assigned to developers and deployers appropriate?

  • Are the proposed mandatory guardrails sufficient to address the risks of GPAI? How could we adapt the guardrails for different GPAI models, for example low-risk and high-risk GPAI models?

  • Do you have suggestions for reducing the regulatory burden on small-to-medium sized businesses applying guardrails?


Response to Questions 8 & 11


We agree with the rationale behind the mandatory guardrails, and with all ten required guardrails, from the larger standpoint of incident response protocols and the attribution of accountability in AI management.


Feedback on Guardrail 1


Wherever it is discerned that GPAI is being used in public-private partnerships, we recommend that the requirements under Guardrail 1 include mandatory disclosure and publication of the accountability frameworks. However, this should not apply to low-risk AI use cases; if intended purpose remains an indicator of evaluation, the obligations under Guardrail 1 must therefore be applied with a sense of proportionality.


Feedback on Guardrails 2, 3, 8 & 9


We agree with the approach in these guardrails, which obligates AI companies to ensure that "where elimination is not possible, organisations must implement strategies to contain or mitigate any residual risks". It is noteworthy that the paper recognises that any type of risk mitigation will depend on the organisation's role in the AI supply chain or AI lifecycle and on the circumstances, which makes the implementation of these guardrails effective and focused. Stakeholder participation is also necessary, as outlined in the interim International Scientific Report on the Safety of Advanced AI released ahead of the AI Seoul Summit 2024.


Feedback on Guardrail 8


We agree with the approach in this Guardrail, which focuses on transparency between developers and deployers of high-risk AI systems, for several reasons (a sketch of what such a disclosure could look like follows the list):


  1. Enabling effective risk management: By requiring developers to provide deployers with critical information about the AI system, including its characteristics, training data, key design decisions, capabilities, limitations, and risks, deployers can better understand how the system works and identify potential issues. This transparency allows deployers to proactively respond to risks as they emerge during the deployment and use of the AI system.

  2. Promoting responsible deployment: Deployers need to have a clear understanding of how to properly use the high-risk AI system to ensure it is deployed in a safe and responsible manner. By mandating that developers provide guidance on interpreting the system's outputs, deployers can make informed decisions and avoid misuse or misinterpretation of the AI system's results.

  3. Addressing information asymmetry: There is often a significant knowledge gap between developers, who have intimate understanding of the AI system's inner workings, and deployers, who may lack technical expertise. This guardrail helps bridge that gap by ensuring that deployers have access to the necessary information to effectively manage the AI system and mitigate risks.

  4. Balancing transparency and intellectual property: The guardrail acknowledges the need to protect commercially sensitive information and trade secrets, which is important to maintain developers' competitive advantage and incentivize innovation. By striking a balance between transparency and protection of proprietary information, the guardrail ensures that deployers receive the information they need to manage risks without compromising developers' intellectual property rights.
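As a concrete, hedged illustration of the transparency obligation discussed above, the sketch below shows one form the developer-to-deployer disclosure could take. It is our own illustration, not a format prescribed by the proposals paper, and every field name is a hypothetical placeholder.

```python
# Illustrative sketch of a structured developer-to-deployer disclosure under
# Guardrail 8. Field names are hypothetical; the paper does not prescribe a format.

from dataclasses import dataclass, field

@dataclass
class DeveloperDisclosure:
    system_name: str
    intended_purpose: str
    training_data_summary: str           # provenance and known gaps in training data
    key_design_decisions: list[str]      # e.g. model choice, safety fine-tuning
    capabilities: list[str]
    known_limitations: list[str]
    residual_risks: list[str]            # risks that could not be eliminated
    output_interpretation_guidance: str  # how deployers should read and act on outputs
    redacted_items: list[str] = field(default_factory=list)  # commercially sensitive material,
                                                             # noted rather than silently omitted

# A deployer could treat incomplete or redacted fields as a prompt to request
# further information before putting the system into a high-risk setting.
```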


Response to Questions 10 & 12


The proposed mandatory guardrails in the Australian Government's paper do a good job of distributing responsibility between developers and deployers across the AI supply chain and the AI lifecycle.


However, the current approach could be improved in a few ways:

  • More guidance may be needed on how responsibilities are divided when an AI system involves multiple developers and deployers. The complexity of modern AI supply chains can make accountability challenging.

  • Some guardrails, like enabling human oversight and informing end-users, may require different actions from developers vs. deployers. The requirements could be further tailored to each role.

  • Feedback from developers and deployers should be proactively sought to identify any misalignment between the assigned responsibilities and their actual capabilities to manage risks at different lifecycle stages.


To reduce the regulatory burden on small-to-medium enterprises (SMEs), we suggest:

  1. Providing templates, checklists, and examples to help SMEs efficiently implement the guardrails. Ready-made accountability process outlines, risk assessment frameworks, and testing guidelines would be valuable.

  2. Offering tiered requirements based on SME size and AI system risk level. Lower-risk systems or smaller businesses could have simplified record-keeping and less frequent conformity assessments.

  3. Establishing a central AI authority (maybe on an interlocutory or nodal basis) to provide guidance, tools, and oversight. This one-stop-shop would reduce the burden of dealing with multiple regulators.

  4. Facilitating access to shared testing facilities, data governance tools, and expert advisors. Pooled resources and support would help SMEs meet the guardrails cost-effectively.

  5. Phasing in guardrail requirements gradually for SMEs. An extended timeline with clear milestones would ease the transition.

  6. Providing financial support, such as tax incentives or grants, to help SMEs invest in AI governance capabilities. Subsidised training would also accelerate adoption.


Finally, we have also responded to some of Consultation Questions 13 to 16, set out below:


  • Which legislative option do you feel will best address the use of AI in high-risk settings? What opportunities should the government take into account in considering each approach?

  • Are there any additional limitations of options outlined in this section which the Australian Government should consider?

  • Which regulatory option/s will best ensure that guardrails for high-risk AI can adapt and respond to step-changes in technology?

  • Where do you see the greatest risks of gaps or inconsistencies with Australia’s existing laws for the development and deployment of AI? Which regulatory option best addresses this, and why?


Response to Question 13


We propose that Option 3 would be a reasonable choice and would best address the use of AI in high-risk settings in Australia. Here are the key reasons:

  1. Comprehensive coverage: A dedicated AI Act would provide consistent definitions of high-risk AI and mandatory guardrails across the entire economy. This avoids the gaps and inconsistencies that could arise from a sector-by-sector approach under Option 1. The AI Act could also extend obligations to upstream AI developers, not just deployers, enabling a more proactive and preventative regulatory approach.

  2. Regulatory efficiency: Developing a single AI-specific Act is more efficient than individually amending the multitude of existing laws that touch on AI under Option 1 or Option 2's framework legislation. It allows for a cohesive, whole-of-economy approach.

  3. Dedicated enforcement: An AI Act could establish an independent AI regulator to oversee compliance with the guardrails. This dedicated expertise and enforcement would be more effective than relying on existing sector-specific regulators who may lack AI-specific capabilities.


However, the government should consider the following when designing an AI Act:


  • Interaction with existing laws: The AI Act should include carve-outs where sector-specific laws already impose equivalent guardrails, to avoid duplication. Close coordination between the AI regulator and existing regulators will be essential.

  • Compliance burden: The AI Act should incorporate mechanisms to reduce the regulatory burden on SMEs, such as tiered requirements based on risk levels. Practical guidance and shared resources would help SMEs build governance capabilities.

  • Responsive regulation: The AI Act should be adaptable to the rapid evolution of AI. Regular reviews, expert input, and agile rulemaking will ensure it remains fit-for-purpose.


