The EU needs to emulate the UK's AI regulation to maintain its competitive edge
The European Union (EU) and the United Kingdom (UK) are taking different approaches to regulating artificial intelligence (AI), each with its unique advantages and potential drawbacks.
The EU's AI Act, a comprehensive legal framework, focuses on safety, fundamental rights, and ethical standards. It categorises AI systems into four tiers (unacceptable, high, limited, and minimal risk) and imposes stringent requirements on high-risk use cases. The Act applies extra-territorially and aims to harmonise regulation across member states, but its complexity and strictness may stifle innovation.
On the other hand, the UK's pro-innovation regulation emphasises adaptability, transparency, accountability, fairness, and contestability. It supports AI development through flexible rules, allowing automated decision-making in some contexts. The UK's approach encourages rapid AI innovation and investment, but its protections may prove less robust than the EU's.
A study benchmarking countries on their AI development ranks the UK higher than any of the EU's 27 member states. The EU risks losing further ground due to its heavy-handed approach to tech regulation. The UK's AI white paper also adopts a more flexible definition of AI, based on features such as training on data, inference, and output generation with little to no human oversight.
The EU designates entire sectors as "high risk" and enforces the rules through a supranational, rather than a specialised sectoral, authority. In contrast, the UK assigns enforcement responsibility to sectoral regulators, such as the Health and Safety Executive or the Financial Conduct Authority.
Both frameworks have their flaws. The EU's AI Act contains impractical requirements such as "error-free" data sets and impossible interpretability requirements. Meanwhile, the UK's approach risks insufficient regulation leading to potential ethical or safety issues.
The UK government needs to recognise that no technology is risk-free and clarify that the level of risk tolerated for AI systems should be comparable to that permitted for other products on the market. The EU, for its part, should align its approach more closely with the UK's to enable interoperability and limit damage to AI development and adoption.
The UK's pro-innovation regulation of digital technologies may provide unique benefits like regulatory sandboxes and light-touch regulation to companies that settle on its shores. The UK's approach focuses on addressing concerns through existing legislation, unlike the EU's rush to create a new law.
The EU finds itself at a critical juncture regarding the future of AI and needs to get its policy right. It should focus on applying existing regulations to AI, monitoring how the technology develops, and further studying the potential negative impacts of hasty regulation of this emerging technology.
In essence, the EU AI Act's strength lies in its comprehensive, rights-focused, and harmonised regulatory framework aimed at trustworthy AI, but it risks slowing innovation through its strictness and complexity. Conversely, the UK's approach prioritises fostering innovation with lighter, adaptive regulation, potentially boosting growth and investment but with a trade-off in the robustness of protections and international regulatory consistency.