Is your AI use case going to get you in trouble?
Describe one use case. Get the EU AI Act tier, the obligations that apply, the UK GDPR overlap, and the first 30 days of work. Written for UK SMBs, not enterprise transformation programmes.
The four risk tiers, in plain English
Minimal risk
Back-office productivity, spam filters, internal drafting aids with a human in the loop. No specific obligations beyond the defaults you'd expect from any software: reasonable testing, basic transparency with staff, no weird data practices. This is where most SMB AI usage actually lives.
Limited risk
Customer-facing chatbots, AI-generated content shown publicly, deep fakes. The main obligation is transparency: users must know they’re interacting with AI, and AI-generated content must be labelled. Proportionate, not onerous.
High risk
Systems that make or materially shape decisions about people in credit, employment, education, essential services, law enforcement, migration, or safety. Here the Act bites: a risk management system, technical documentation, human oversight, logging, conformity assessment before launch. Most SMBs who think they’re high risk are actually limited risk; most who are genuinely high risk have usually misjudged the scope.
Unacceptable
Social scoring, manipulative subliminal techniques, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, untargeted scraping for facial recognition. Banned. If the tool puts you here, something's gone wrong with the use case definition; re-read the output, re-run with better inputs.
What a defensible AI deployment looks like
Whatever tier you land in, a few things are worth doing regardless:
- A one-page AI use register. What systems, for what purpose, with what data, reviewed by whom, last updated when.
- A human-in-the-loop rule with teeth. Not “we’ll check sometimes” but a policy that says which outputs require review before action and who's accountable.
- Vendor paperwork. DPAs with OpenAI, Anthropic, or whichever provider you’re using. Check data retention terms and whether training-on-prompts is switched off.
- Lightweight evals. A small golden set of representative inputs you run every time you change models or prompts. Catches regressions that matter.
Questions buyers actually ask
Directly, only if your AI is placed on the EU market, serves EU users, or processes data about EU persons. Indirectly, it applies to almost everyone selling into medium or large UK customers, because procurement teams increasingly require AI Act-equivalent controls regardless of geography. UK government is also signalling alignment via the AI Safety Institute and an expected UK AI Bill. Treat it as the operating baseline rather than the exception.
Want to walk through this with a human?
20-minute call. Bring the output, I'll help you decide what to fix first and what can wait.
Book a call