
Written by Joanne Hughes, Policy & Compliance Specialist at Policy Pros
Last reviewed:
EU AI Act and UK Businesses - What You Need to Do Before August 2026
The EU Artificial Intelligence Act has been in phased force since 2 February 2025. Prohibited practices and AI literacy obligations applied first, general-purpose AI model duties followed in August 2025, and the most demanding tier - obligations for high-risk AI systems - applies from 2 August 2026.
UK businesses are not exempt. The Act has deliberate extraterritorial reach: a UK company whose AI system, model, or output is placed on the EU market or used in the EU is in scope, regardless of where the company is registered.
Layered on top of that, the UK runs its own principles-based regime, enforced through existing regulators, and ISO/IEC 42001 has emerged as the practical management-system standard that bridges the two. This guide sets out what UK businesses actually need to do.
This article works alongside our existing AI governance policies and artificial intelligence policy guidance, and complements our coverage of the UK Cyber Resilience Pledge launched in April 2026.
Why the EU AI Act Applies to UK Businesses
The EU AI Act's territorial scope works the same way as GDPR. The regulation reaches non-EU providers and deployers whose AI systems or outputs touch the EU market. In practice, a UK business is in scope if any of the following are true:
- It places an AI system on the EU market (sells, licenses, or makes available)
- It puts an AI system into service in the EU
- The output of its AI system is used in the EU - even if the system itself sits in the UK
- It deploys an AI system and has an establishment or presence located in the EU
The output-based route into scope catches more UK businesses than expected. A UK SaaS product whose recommendation engine generates results consumed by an EU customer is in scope.
A UK consultancy using an AI model to produce reports delivered to EU clients is in scope. The Act does not require an EU entity, EU servers, or EU staff.
The Four Risk Tiers Under the EU AI Act
The Act classifies AI systems by risk and imposes obligations proportional to that risk.
Prohibited AI
Social scoring, real-time biometric identification in public spaces, and manipulative or exploitative AI are banned. These prohibitions have been in force since 2 February 2025.
High-Risk AI
Recruitment and HR systems, credit scoring, insurance underwriting, education assessment, critical infrastructure, and law enforcement support all fall into the high-risk tier. Obligations include conformity assessment, a quality management system, technical documentation, post-market monitoring, and EU database registration. High-risk obligations apply from 2 August 2026.
Limited-Risk AI
Chatbots, emotion-recognition systems, and AI-generated content fall into the limited-risk tier. The main obligations are transparency duties: disclosing AI use to users and labelling AI-generated content.
Minimal-Risk AI
Spam filters, AI in video games, and basic recommendation systems sit in the minimal-risk tier with no mandatory obligations.
General-purpose AI models (foundation models) sit alongside this with their own dedicated regime that has applied since 2 August 2025. For most UK SMEs, the practical question is whether any AI use falls into the high-risk tier - particularly recruitment, HR, and credit-related uses, which are the most common entry points.
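To make the tiering logic above concrete, the sketch below maps a handful of indicative use cases to the Act's tiers. The use-case labels and the lookup approach are illustrative assumptions only; real classification turns on the Act's Annex III categories and their exemptions, not a simple dictionary lookup.

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative mapping only - simplified from the tiers described above.
# Real classification depends on Annex III definitions and exemptions.
EXAMPLE_TIERS = {
    "social scoring": Tier.PROHIBITED,
    "recruitment screening": Tier.HIGH,
    "credit scoring": Tier.HIGH,
    "customer chatbot": Tier.LIMITED,
    "spam filter": Tier.MINIMAL,
}

def classify(use_case: str) -> Tier:
    """Look up an example use case; anything unknown needs a proper assessment."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"'{use_case}' needs a case-by-case assessment")

print(classify("recruitment screening").value)  # high-risk
```

Even in this toy form, the structure makes the practical point: the first question for any AI system is which tier it lands in, because the tier determines everything that follows.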
What High-Risk Obligations Actually Require
The high-risk tier is where the compliance burden bites. Providers and deployers of high-risk AI systems must put in place:
- A conformity assessment before placing the system on the market
- A quality management system covering AI development, deployment, and monitoring
- Technical documentation sufficient to demonstrate compliance
- Post-market monitoring of system performance and incidents
- EU database registration for the system
- A risk management system running across the lifecycle
- Data governance covering training, validation, and test data quality
- Human oversight mechanisms appropriate to the system's risk profile
- Accuracy, robustness, and cybersecurity measures
- Logging and transparency to enable supervisory authority oversight
Penalties for the most serious breaches reach €35 million or 7% of global annual turnover, whichever is higher - among the steepest in EU law.
The UK Regime: Five Principles, Existing Regulators
The UK has not introduced a single AI Act. Instead, it operates a principles-based regime in which existing regulators apply five cross-sectoral principles within their existing remits.
The five UK AI principles:
- Safety, security and robustness - AI systems should function in a robust, secure, and safe way throughout the AI lifecycle.
- Appropriate transparency and explainability - users and affected parties should be able to understand how AI systems make decisions.
- Fairness - AI systems should not undermine legal rights, discriminate unfairly, or produce unfair commercial outcomes.
- Accountability and governance - effective oversight with clear lines of accountability across the supply chain.
- Contestability and redress - affected parties should be able to contest AI decisions and access meaningful routes to redress.
The regulators applying them are the ICO (data protection), the FCA (financial services), the MHRA (medical devices and health products), the CMA (competition and consumer), and the EHRC (equality and human rights). Each regulator already has the statutory powers to enforce against AI failures within its remit through UK GDPR, the Equality Act 2010, the Consumer Rights Act 2015, and sector-specific regimes.
Do not be misled by the "voluntary" framing of the UK regime. There is no AI Act, but there is no enforcement gap either - existing regulators can already investigate and penalise AI failures.
Layered on this is the AI Code of Practice Regulations 2026 (SI 2026/425), which came into force on 12 May 2026 and requires the Information Commissioner to prepare a code of practice on AI and automated decision-making, with specific provisions for children's data and Article 22C UK GDPR.
Source: EfficiencyAI - UK AI regulatory principles.
ISO/IEC 42001: The Practical Bridge
ISO/IEC 42001:2023 is the world's first AI management system standard. For UK businesses navigating both the EU AI Act and the UK principles regime, it is the most practical framework for evidencing governance.
The standard:
- Follows a Plan-Do-Check-Act structure across 10 clauses (clauses 1-3 are introductory; clauses 4-7 cover context, leadership, planning, and support; clauses 8-10 cover operation, performance evaluation, and improvement)
- Includes 38 Annex A controls grouped under 9 control objectives
- Covers AI risk assessment, operational controls, post-market monitoring, incident reporting, and continuous improvement
- Addresses fairness, explainability, transparency, and data governance directly
- Is sector-agnostic and works at any scale
For EU AI Act compliance, ISO 42001 maps closely to the high-risk tier obligations - particularly the quality management system, post-market monitoring, and risk management requirements. It does not replace the conformity assessment, but it provides the management-system spine on which conformity is built.
For UK regime compliance, ISO 42001 directly evidences the accountability and governance principle and provides documentation that the ICO, FCA, and other regulators expect when investigating AI use.
Source: ISO 42001 explained.
What UK Businesses Need to Document
Whether or not you formally certify to ISO 42001, the following documentation is the minimum operational baseline for businesses using AI in 2026.
AI Governance Policy
Sets out how the business identifies, assesses, and manages AI risk; who is accountable; and what approval is required before AI is deployed.
AI Use Register
Inventory of every AI system used or deployed, including purpose, data inputs, decision outputs, and risk classification under the EU AI Act tiers.
AI Risk Assessment Template
Completed for each AI system before deployment and reviewed periodically. Maps to the EU AI Act conformity assessment for high-risk systems.
Data Governance Documentation
Covers training data, validation data, and test data quality. The EU AI Act high-risk tier explicitly requires this; the UK principles regime makes it a fairness obligation.
Human Oversight Procedure
Documents the human-in-the-loop or human-on-the-loop arrangements for material AI decisions. Particularly important for HR, recruitment, and customer-facing applications.
Incident Response Procedure
Covers AI-specific incidents (model drift, biased outputs, harmful generations) as well as general security incidents.
Transparency and Disclosure Statements
What users are told about AI use. The EU AI Act limited-risk tier requires disclosure; the UK transparency and explainability principle expects it.
Vendor and Supplier Due Diligence
Covers third-party AI tools, models, and APIs used by the business. Most AI compliance failures originate in third-party tools - and your AI usage policies should reference this directly.
Where This Fits in the Wider 2026 Picture
The EU AI Act and the UK principles regime do not operate in isolation. UK businesses are also navigating:
- The AI Code of Practice Regulations 2026 (in force from 12 May 2026)
- The Crime and Policing Act 2026, which brings AI chatbots into illegal content rules and imposes 48-hour content takedown duties
- The UK Cyber Resilience Pledge launched at CYBERUK on 22 April 2026, which expects board-level governance evidence for cyber risk - increasingly extended to AI
- A potential UK AI Bill, currently expected (but not confirmed) for the May 2026 King's Speech
The common thread across all of these is documented governance. Businesses with a single, coherent AI governance framework will satisfy multiple regimes simultaneously. Businesses building separate compliance silos for each regulation will not.
Three Practical Actions to Take Now
- Build an AI use register. You cannot manage what you have not inventoried. The first step for every business is a complete list of AI systems used or deployed, including third-party tools embedded in other software.
- Classify each system against the EU AI Act tiers. Identify any high-risk uses (HR, recruitment, credit, insurance, education, critical infrastructure). Anything in this tier needs structured compliance work before 2 August 2026.
- Adopt ISO 42001 as your governance backbone. Even without formal certification, using ISO 42001 as the structure for your AI policies, risk assessments, and management-system documentation gives you a defensible position under both the EU AI Act and the UK principles regime.
How Policy Pros Can Help
Policy Pros writes the AI governance documentation businesses need to operate under both the EU AI Act and the UK principles regime. We produce AI governance policies, AI use registers, risk assessment templates, human oversight procedures, and the supplier due diligence documents that catch the most common compliance gap.
If your existing artificial intelligence policies need updating to reflect the EU AI Act, the UK principles regime, or ISO 42001 alignment, our policy review service can identify what needs changing and deliver updated documents on a fixed-price basis.