
The French, Italian and German Compromise on Foundation Models of GenAI

The author is pursuing law studies at National Law University, Odisha and is a former Research Intern at the Indian Society of Artificial Intelligence and Law.

 
 

Almost every major economy is trying to put guardrails around AI technology, which is developing rapidly and being used by an ever-growing number of people. The European Union (EU) is trying both to lead the world in developing AI and to devise an efficient and effective way to regulate it. In mid-2023, the European Parliament adopted its draft of the EU AI Act, one of the first major legislative frameworks for AI, which would impose restrictions on the technology's riskiest uses and serve as a reference point for policymakers elsewhere.


Unlike the United States, which has taken up this challenge only recently, the EU has been working on such a framework for more than two years, and it did so with greater urgency after the release of ChatGPT in 2022.


On 18 November 2023, Germany, France, and Italy reached an important agreement on AI regulation and released a joint non-paper countering some of the basic approaches taken in the EU AI Act, suggesting alternatives they claim would be more feasible and efficient. The non-paper underlines that the AI Act must aim to regulate the application of AI rather than the technology itself, because the inherent risks lie in how AI is used, not in the underlying models.


The joint non-paper highlights the key areas in which the three countries differ from the position taken in the Parliament's version of the AI Act:


  1. Fostering innovation while balancing it with responsible AI adoption within the EU.

  2. The non-paper pushes for mandatory self-regulation of foundation models through codes of conduct, aiming to enhance accountability and transparency in the AI development process.

  3. While the EU AI Act targets only the major AI producers, the non-paper advocates universal adherence, so that trust in the security of smaller EU companies is not compromised.

  4. Immediate sanctions for those who default on the codes of conduct are excluded, but a future sanction system is proposed.

  5. The focus is on regulating the application of AI and not the AI technology itself. Therefore, the development process of AI models should not be subject to regulation.


What are Foundation Models of AI?


Foundation models, also called general-purpose AI, are AI systems that can perform a wide range of tasks across fields, such as understanding language, generating text and images, and conversing in natural language, often without major modification or task-specific fine-tuning. They are large deep-learning neural networks that have changed how machine learning systems are built: instead of training a model from scratch, data scientists use a foundation model as a starting point and adapt it to new applications, which makes development faster and more cost-effective.
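
To make the "starting point" idea concrete, the sketch below shows, in rough terms, how a developer might adapt a pre-trained foundation model to a narrow task instead of training a model from scratch. It uses the open-source Hugging Face transformers and datasets libraries; the model name, dataset, and training settings are purely illustrative assumptions, not anything drawn from the non-paper or the AI Act.

```python
# Minimal sketch (illustrative only): adapting a pre-trained foundation model
# to a downstream task instead of training a model from scratch.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base_model = "bert-base-uncased"  # assumption: any general-purpose pre-trained model
tokenizer = AutoTokenizer.from_pretrained(base_model)
# Reuse the pre-trained weights; only a small classification head is new.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# A small task-specific dataset is enough because the heavy lifting
# (general language understanding) was already done during pre-training.
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # fine-tuning takes hours on modest hardware, not weeks of pre-training
```

The design point is the one the paragraph above makes: the costly, compute-intensive step (pre-training) is done once by the foundation model developer, and downstream developers only adapt the result, which is why regulation aimed at the model versus regulation aimed at its applications lands on very different actors.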


The European Union AI Act


The EU AI Act is a comprehensive legal framework governing the sale and use of AI in the EU. It sets consistent standards for AI systems across the Union and seeks to address the risks of AI through obligations and standards intended to safeguard the safety and fundamental rights of people in the EU and beyond.


It operates as part of a wider legal and policy framework regulating different aspects of the digital economy, which includes the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act, and it moves away from a "one law fixes all" approach to the emerging AI regime.


Risk-based Approach of the AI Act versus the joint pact of France, Italy, and Germany


One of the key aspects of the AI Act is regulating the foundation models that underpin a wide range of AI applications. The most prominent AI companies, such as OpenAI, Google DeepMind, and Meta, develop such models. The AI Act aims to regulate these models by requiring safety testing and by having developers apprise governments of the results, in order to ensure accountability and transparency and to mitigate risks. The recent U.K. AI Safety Summit likewise focused on the risks associated with the most advanced foundation models. In short, it is the technology itself that is regulated in order to manage risk.


The joint non-paper aims to shift this narrative from regulating the technology to regulating its application. Developers of foundation models would have to publish what testing was done to ensure their models are safe. Companies that failed to publish this information under the code of conduct would face no sanctions initially, but the non-paper suggests a sanction system could be set up in the future.


The three countries oppose the "two-tier" approach to foundation model regulation originally proposed by the EU, under which stricter rules would apply to the most capable models expected to have the largest impact. In their view, this would hobble innovation and hamper the growth of AI technologies. France's Mistral AI and Germany's Aleph Alpha, which are among Europe's most prominent AI companies, are opposed to this approach to risk management.


Deciding how regulation should happen and adopting an optimal strategy requires a close look at the pros and cons of each approach, i.e. regulation of the technology itself (the EU's two-tier approach) and regulation of the applications of the technology (the Franco-German-Italian view).


Regulating the technology itself


Pros:

  1. Directly regulating the technology establishes uniform and consistent standards that all applications must meet, giving clarity in compliance.

  2. Technological regulation can prevent the creation and use of harmful and malicious AI applications.

Cons:

  1. Strict regulations on the technology can stifle innovation by limiting the exploration of potentially beneficial applications.

  2. Technological advancements happen much faster than prescriptive regulations. The very purpose of regulation is defeated if it fails to cover more advanced versions of the technology.


Regulating the application of technology


Pros:

  1. Application-specific regulation allows a flexible approach in which risk-mitigation rules can be tailored to different applications and their uses, instead of an impractical one-size-fits-all approach.

  2. Application-based regulation can focus on the responsible use of the technology and on accountability measures in case potential risks materialise, while still permitting innovation to flow smoothly.

Cons:

  1. If regulation focuses solely on applications, there is a risk of ignoring certain aspects of the AI technology itself, which may leave room for misuse or unintended effects.


Ideally, a balanced approach that combines elements of both technology-focused and application-focused regulation can be most effective. This can be done by setting rules and standards for the technology itself while also framing regulations specific to the potential risks associated with its applications.


Still, a case can be made that regulating the application of technology is the better way forward. In a world where technology is part of everyday life for many, stifling innovation, which is the likely result of regulating the technology itself, is a bad idea. Technological advancement matters because people naturally want to get their work done easily and quickly, especially as society evolves towards greater employment and involvement of people in skilled work.


The focus must be on ensuring that people at large benefit from AI technology while minimising incentives for bad actors through regulation of how AI tools are used. This means increasing accountability for their use and providing clear guidance on how to use them efficiently and ethically and how to prevent the harms that can arise from uninformed or irresponsible use.


The Model Cards requirement under mandatory self-regulation


The non-paper proposes regulating specific applications rather than foundation models themselves, aligning more closely with a risk-based approach. As a means to achieve this, it makes the definition of model cards a mandatory element of self-regulation: foundation model developers would have to produce model cards, i.e. technical documentation that presents information about trained models in an accessible way, following best practices within the developer community.


Defining model cards promotes the principle of AI transparency. Model cards must include the limits of intended uses, potential limitations, biases, and security assessments. However, they only advise users in deciding whether or not to purchase a system. When it comes to assessing transparency, accountability, and responsible-AI criteria, many users may find the information highly complex to comprehend because of the technical nature of AI applications, and they will not be able to adequately interpret what a model card says.
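
As a rough illustration of what such a card might contain, the sketch below lays out the kinds of fields the non-paper's description implies (intended uses, limitations, biases, security assessments). The field names and values are hypothetical assumptions made for illustration, not a format prescribed by the non-paper or by any developer community standard.

```python
# Hypothetical sketch of a model card's contents; field names and values
# are illustrative assumptions, not a prescribed format.
model_card = {
    "model": "example-foundation-model-v1",  # hypothetical identifier
    "intended_uses": ["text summarisation", "question answering"],
    "out_of_scope_uses": ["unsupervised medical or legal advice"],
    "limitations": ["weaker performance on low-resource languages"],
    "known_biases": ["training data skews towards English-language web text"],
    "security_assessments": ["internal red-teaming for prompt injection"],
    "evaluation_summary": "see accompanying technical report",
}

# A downstream buyer can read the card before deciding whether to adopt the
# model, but interpreting fields such as the bias and security entries still
# presupposes technical familiarity with AI systems.
```

Even in this simple form, the card mainly serves readers who already know what terms like "red-teaming" or "low-resource languages" imply, which is the comprehension gap discussed next.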


Model cards are most accessible to developers and researchers with advanced training in AI, so an imbalance of power arises between developers and users in their understanding of AI systems. Standardisation of the information also remains elusive if model cards are relied upon, and a high volume of information in a model card may confuse users; maintaining a balance between transparency and simplicity is crucial. Users may also be unaware that model cards exist or may not take the time to review them, especially where AI systems are very complex.


The model card requirement may also lack feasibility because there is no scope for external monitoring of its elements. It is inflexible to the pace at which technology develops, so its information can become outdated, ultimately stifling innovation by binding new technologies to outdated compliance requirements and information.

