The European Commission has detailed the central role its new AI office will play in overseeing how AI is deployed in Europe and in enforcing the new EU AI Act, which has passed through several legislative stages on its way to a final vote in the European Parliament, scheduled for April.
The act is designed to give AI developers and organisations deploying AI clear requirements and obligations regarding its uses. It promises to be the world’s most significant legislation governing the sector to date, operating at a supranational level across the bloc – and with ramifications for how AI is used globally.
Legal experts say the EU’s AI Act echoes the framework of the bloc’s existing General Data Protection Regulation (GDPR) as applied to AI, imposing requirements on companies that could face multi-million-euro penalties if they fail to comply.
The act covers far more than just headline-grabbing technology, such as ChatGPT, and is intended to foster “trustworthy” AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles. As such, it will affect a range of industries from the energy sector to manufacturing and finance. The legislation takes a risk-based approach, with AI systems placed into one of four categories from unacceptable risk via high risk and limited risk down to minimal risk. The EU stresses it expects most uses of AI to fall into the minimal risk category.
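By way of illustration, the short Python sketch below shows how an organisation auditing its AI portfolio might record which of the act’s four tiers each system falls into. The system names and tier assignments are invented for this example, and the one-line tier summaries paraphrase commonly cited obligations rather than the act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers; summaries are paraphrased, not legal text."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical internal inventory: names and tier assignments are
# invented for illustration, not classifications taken from the act.
inventory = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} ({tier.value})")
```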
The AI office will be a key point of contact for companies seeking clarification on their responsibilities under the act and on how it will be enforced. Among its many tasks, the office will contribute to the coherent application of the AI Act, develop tools to evaluate and classify AI models, draw up codes of practice to flesh out rules in cooperation with AI developers and scientists, and monitor compliance with the regulation. The AI office is also tasked with strengthening the development and use of trustworthy AI and fostering international cooperation.
Due to its complexity and scope, the legislation has had a protracted three-year evolution, with some EU states seeking more time to examine how it would work in practice. However, a political agreement was reached on the legislation in December, a final version of the text was formally presented in late January, and ambassadors from all 27 EU member states approved it in February.
Assuming it is voted into law by the European Parliament in April, practices prohibited by the act will be banned six months later, with other rules coming into force over the following three years.
SME support package
Mindful of the complexities faced by smaller companies seeking to develop and deploy AI solutions, the EU has also launched a package of measures to support European startups and SMEs in developing trustworthy AI that “respects EU values and rules”.
These include a proposal to make Europe's supercomputers available to innovative European AI startups to train their AI models in line with best practice, and the launch of “AI Factories” to serve as one-stop shops for startups to develop advanced AI models and industrial applications.
“We are committed to innovation of AI and innovation with AI. And we will do our best to build a thriving AI ecosystem in Europe,” Margrethe Vestager, executive vice-president for a Europe Fit for the Digital Age, said on launching the measures.
EU institutions are striving to ensure the bloc is well-placed to take a lead in developing AI technology and standards. In January, TwinEU, a €25m, three-year project to develop a digital twin of the entire European electricity grid, backed by the EU’s Horizon Europe programme, was formally inaugurated, paving the way for collaboration between 75 partners across 11 countries.
(Photo: European Commission, Brussels/Shutterstock)