Foundation Models Pose a Hurdle
EU lawmakers are struggling to agree on how to regulate artificial intelligence (AI) systems like ChatGPT, threatening landmark legislation aimed at keeping the technology in check.
As negotiators meet on Friday for crucial discussions ahead of final talks scheduled for Dec. 6, 'foundation models', the large AI systems that underpin generative tools such as ChatGPT, have become the main hurdle in talks over the European Union's proposed AI Act, according to sources who declined to be identified because the discussions are confidential.
What are Foundation Models?
Foundation models are AI systems trained on broad datasets at scale and adaptable to a wide range of downstream tasks. They are widely seen as the basis for the next generation of AI, but they also pose risks, such as being used to generate harmful content or to manipulate people.
EU AI Act
The EU AI Act is a proposed law that would regulate the development and use of AI in the EU. It aims to ensure that AI is used safely and ethically, and to protect people’s rights.
Negotiations on the AI Act
Negotiations on the AI Act have been ongoing for two years. The bill was approved by the European Parliament in June. The draft rules must now be agreed in three-way talks between representatives of the European Parliament, the Council and the European Commission.
Challenges to Reaching an Agreement
One of the main challenges to reaching an agreement on the AI Act is how to regulate foundation models. Some experts and lawmakers have proposed a tiered approach, with stricter rules for high-risk models. However, others have argued that all foundation models should be subject to the same rules.
Another challenge is the use of AI by law enforcement agencies. Some lawmakers are concerned that AI could be used to violate people’s rights, such as by using facial recognition to track people without their consent.
Impact of a Delayed Agreement
If EU lawmakers cannot agree on the AI Act by the end of the year, it could be delayed or even shelved. This would be a major setback for the EU’s efforts to regulate AI.
The Future of AI Regulation
The future of AI regulation is uncertain. However, the EU AI Act is a significant step forward in the effort to establish global standards for AI. The outcome of the negotiations on the AI Act will have a major impact on the development and use of AI in the EU and around the world.
In addition to the challenges mentioned above, EU lawmakers are also divided over the definition of AI, the fundamental rights impact assessment, law enforcement exceptions, and national security exceptions.
The definition of AI is a key issue because it will determine which systems are subject to the AI Act. Some lawmakers have proposed a broad definition that would cover a wide range of systems, while others have proposed a narrower definition that would only cover systems that pose a high risk of harm.
The fundamental rights impact assessment is another important issue because it will require companies to assess the impact of their AI systems on people’s fundamental rights. This could include rights such as privacy, freedom of expression, and non-discrimination.
Law enforcement exceptions are also a contentious issue. Some lawmakers have proposed that law enforcement agencies should be exempt from some of the requirements of the AI Act, while others have argued that all AI systems should be subject to the same rules.
National security exceptions are another area of disagreement. Some lawmakers have proposed that national governments should be able to exempt AI systems from the AI Act if they are used for national security purposes, while others have argued that national governments should not be able to override the requirements of the AI Act.
Whatever the final shape of the bill, it represents one of the first comprehensive attempts to set binding standards for AI, and its outcome will influence how the technology is developed and deployed well beyond the EU's borders.
FAQ
What is the EU AI Act?

The EU AI Act is a proposed law that would regulate the development and use of artificial intelligence (AI) in the EU. It aims to ensure that AI is used safely and ethically, and to protect people's rights.

What are foundation models?

Foundation models are AI systems trained on broad datasets at scale and adaptable to a wide range of downstream tasks. They are seen as the future of AI, but they also pose potential risks, such as being used to generate harmful content or to manipulate people.

Why are foundation models a sticking point in the negotiations?

One of the main challenges to reaching an agreement on the EU AI Act is how to regulate foundation models. Some experts and lawmakers have proposed a tiered approach, with stricter rules for high-risk models. However, others have argued that all foundation models should be subject to the same rules.

What happens if lawmakers cannot agree by the end of the year?

If EU lawmakers cannot agree on the AI Act by the end of the year, it could be delayed or even shelved. This would be a major setback for the EU's efforts to regulate AI.

What does the AI Act mean for AI regulation globally?

The future of AI regulation is uncertain. However, the EU AI Act is a significant step forward in the effort to establish global standards for AI. The outcome of the negotiations will have a major impact on the development and use of AI in the EU and around the world.

Why does the definition of AI matter?

The definition of AI under the EU AI Act is a key issue because it will determine which systems are subject to the law. Some lawmakers have proposed a broad definition that would cover a wide range of systems, while others have proposed a narrower definition that would only cover systems that pose a high risk of harm.