The EU will soon introduce its AI Act, the world’s first regulation of rapidly developing AI technology, which is expected to take full legal effect from Spring 2026. The UK, on the other hand, has been less eager to introduce legislation to regulate AI due to concerns that a prescriptive approach would ‘stifle innovation’. Conventional wisdom suggests that the AI Act could have a ‘Brussels effect’ – the indirect influence on other jurisdictions to align with EU law – similar to the EU’s GDPR rules, which took full effect in 2018 and have become the global gold standard for data protection.

That said, the UK has diverged from the EU in a number of policy areas post-Brexit, and on AI the jury is still out on whether it will converge with the AI Act. The UK has embraced a ‘pro-innovation’ approach which, compared with the EU’s more rigid regulatory framework, aligns in some ways and differs in others.

The UK and EU are similar insofar as both adopt a ‘risk-based’ approach to regulating AI, although the EU’s AI Act provides a prescriptive legal framework which categorises risk levels, whereas the UK’s principles are far looser and rely on regulators to assess AI-specific risks as they see fit.

The UK policy takes a decentralised and vertical approach, consisting of five ‘principles’, with an emphasis on safety and transparency, that guide sector-specific regulators in managing AI within their areas of expertise. For example, the Financial Conduct Authority (FCA) will be expected to regulate AI in the financial services sector. This is a relatively loose, non-legislative approach to regulating AI compared to the EU AI Act’s introduction of four risk categories for AI models: ‘minimal/none’, ‘limited’, ‘high’ and ‘unacceptable’, the last of which will be banned.

The AI Act, on the other hand, adopts a centralised and horizontal approach, meaning authority largely rests at the EU rather than the Member State level, and it outlines rules for AI across all sectors. The latest draft of the AI Act introduces maximum fines of up to 7% of global annual turnover, or €30 million, for the most severe breaches of the provisions on prohibited AI practices. Meanwhile, the UK’s approach to regulating AI does not include any financial repercussions for violations.

The magnitude of AI developments is difficult to quantify, and the technology is expected to impact almost every sector, not least agrifoods. The European Parliament signed off on the AI Act on 13th March, meaning it remains only for the Council to provide its seal of approval – expected in the coming weeks – for the regulation to be finalised. In the UK, sector-specific regulators will publish their AI strategy plans by 30th April, providing companies with clarity on their obligations.

With rapid innovations in AI technology in the coming months and years, it is certain that the application of the UK and EU’s regulatory approaches will have extensive ramifications for a vast array of businesses on both sides of the Channel.


by Grant Dunnery, Associate Consultant