The scorecard
"Our key business data lives in systems we can query (CRM, database, data warehouse), not in email threads and spreadsheets."
Rate how true this is for your business today.
Why these five categories
Most AI readiness frameworks come from enterprise consultancies and measure things SMBs don't have (“AI centre of excellence”, “cross-functional steering committee”). This scorecard covers the five things that actually kill SMB AI projects, in rough order of how often each one does.
- Data. If your customer records are in five places and none of them are authoritative, no AI will save you. It'll just hallucinate faster.
- Process. You can't automate what you can't describe. Undocumented workflows become undocumented automations that break silently.
- People. The single best predictor of a successful AI build is whether one named human on your team actually wants it to work. Committees don't ship.
- Tech. Legacy systems without APIs turn every build into a scraping project. Fixable, but expensive.
- Governance. Not the biggest day-one killer, but the biggest reputation killer. Unreviewed AI output going to clients is how you end up on LinkedIn for the wrong reasons.
What each score tier means
High readiness (60–75)
You have the operational hygiene to deploy AI automation successfully. Pick one high-leverage workflow, scope it tight, ship in four to eight weeks, measure the result before rolling out more. The failure mode at this tier is trying to do five projects at once instead of one that actually lands.
Moderate readiness (45–59)
You can start, but one or two categories will bite you. Look at your lowest-scoring category and either fix it first or scope a pilot that doesn't depend on it. A moderate score is where most UK SMBs sit; it's not a blocker, but it's a warning about which parts of the build will be painful.
Low readiness (30–44)
AI isn't your first problem. Invest in data consolidation, process documentation, or a named owner before scoping a real build. You can still run a small pilot to learn, but a full AI programme at this tier will consume its budget fixing the foundations it should have started with.
Not yet ready (15–29)
Don't spend on AI this quarter. Fix the basics first: consolidate where your data lives, write down how work actually gets done, decide who owns what. Come back when those are in place. A good agency will tell you this. A bad one will sell you the AI anyway.
How to improve your weakest category
The scorecard is only useful if you act on it. A few practical moves per category:
- Weak data? Pick one entity (customers, deals, jobs), declare a system of record, migrate the rest into it, and write down who can edit it. Don't try to clean everything; clean the one thing AI will touch first.
- Weak process? For each of your top three workflows, write the steps in one document. Not a 40-page SOP. A bulleted list a new hire could follow. If you can't write it, you don't understand it.
- Weak people? Name a single owner for AI and automation decisions. Give them a budget and a month to propose one pilot. If no one on the team will take it, you're not ready.
- Weak tech? Inventory your stack. Mark what has an API, what doesn't, and what's on the chopping block. Automate only the API-accessible ones.
- Weak governance? Write a one-page policy: what AI outputs get reviewed, by whom, and what the success metric is. If you can't answer those three, don't deploy.
Questions buyers actually ask
What score do I need before commissioning a build?
45+ is the realistic floor. Below that, you'll spend the project budget fixing foundational gaps (missing data, undocumented processes, unclear ownership) rather than building anything useful. At 60 or above, you can ship confidently. Between 45 and 59, scope a single pilot rather than a transformation programme; prove it works on one process before rolling out.
Want a second opinion on your lowest-scoring category?
20-minute call, no slides. Bring your score and I'll tell you the single next step that moves the needle, plus whether AI is the right answer at all.
Book a call