AI governance, risk & responsible AI

Use AI Safely, Without Slowing Progress

Hartz AI helps UK SMEs use AI responsibly and stay compliant with evolving UK regulations. We deliver governance audits, risk assessments, policy frameworks and compliance training. Our Head of AI Compliance, Rivka Abecasis, specialises in UK GDPR and data protection for AI systems.

What is AI governance and why does your business need it?

AI governance is the set of policies, processes and responsibilities that ensure your organisation uses artificial intelligence safely, ethically and in line with regulation. Hartz AI helps SMEs comply with UK AI regulations through governance audits that typically cover 40+ risk areas.

Without governance, AI adoption creates invisible risk. Staff paste confidential data into public tools. Automated decisions go unchecked. Regulatory obligations under GDPR and emerging AI frameworks get missed until a breach forces the conversation.

The UK government's pro-innovation approach to AI regulation expects organisations to self-govern proportionately. For SMEs, that means lightweight but deliberate controls - not enterprise-scale compliance programmes.

  • £42M+ in GDPR fines issued by the ICO since 2018 (ICO Enforcement)
  • 18% of UK businesses have an AI usage policy (CIPD 2024)
  • 79% of consumers would stop using a company that misused AI (PwC 2024)
  • 40+ risk areas covered in a Hartz AI governance audit (Hartz AI internal data)

What does UK regulation currently require for AI governance?

The UK does not yet have a single AI-specific law. Instead, it takes a principles-based approach where existing regulators - the ICO, FCA, Ofcom, and others - apply five cross-cutting principles to AI within their sectors.

Those five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The DSIT AI regulation white paper sets this framework out clearly. Businesses are expected to demonstrate they have considered each principle for every AI system they deploy.
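
One lightweight way to show you have considered each principle is a short, per-system checklist. The Python sketch below is purely illustrative - the prompt questions are our paraphrase of the white paper, not official wording - but it captures the record-keeping habit regulators expect:

    # Illustrative per-system checklist for the five cross-cutting principles.
    # The questions are a paraphrase of the white paper, not official text.
    PRINCIPLES = {
        "Safety, security & robustness": "What happens if the system fails or is misused?",
        "Transparency & explainability": "Can we explain what the system does and why?",
        "Fairness": "Could its outputs disadvantage any group of people?",
        "Accountability & governance": "Who is responsible for this system's decisions?",
        "Contestability & redress": "How can someone challenge an outcome?",
    }

    def blank_review(system_name: str) -> dict:
        """Return an empty review record for one AI system."""
        return {"system": system_name, "answers": {p: None for p in PRINCIPLES}}

    review = blank_review("CV screening assistant")
    for principle, question in PRINCIPLES.items():
        print(f"{principle}: {question}")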

GDPR remains the most immediately enforceable regulation affecting AI use. The ICO's guidance on AI and data protection requires a lawful basis for automated processing, data protection impact assessments for high-risk AI, and meaningful human review of automated decisions.

The UK AI Safety Institute was established in 2023 to evaluate AI risks and develop safety standards. The Bletchley Declaration (2023), signed by 28 countries including the UK, committed to international cooperation on AI safety.

The EU AI Act classifies AI systems by risk level, with requirements that affect UK businesses trading with EU customers. Even if you operate only in the UK, understanding this classification system helps you future-proof your governance framework.

  • 5 cross-cutting principles in the UK AI regulatory framework (DSIT White Paper)
  • £17.5M: largest ICO fine for an AI-related GDPR breach in 2023 (ICO Enforcement)

What does an AI governance framework look like for an SME?

An SME governance framework has four components: an acceptable use policy, a risk assessment process, team training, and a named person responsible for AI decisions. It does not need a committee, a dedicated department, or a six-figure budget.

Your acceptable use policy defines which AI tools are approved, what data can be shared with them, and where human review is required. It should be written in plain English and fit on two pages. Staff who never read it cannot follow it.

Risk assessment means evaluating each AI tool against a short checklist: what data does it access, what decisions does it influence, what happens if it gets something wrong, and who is accountable. The ICO's DPIA guidance provides a structured approach you can adapt.
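
To make that checklist concrete, here is a minimal sketch of a risk register entry in Python. It is our own illustration, not an ICO or Hartz AI template; the fields mirror the four questions above, and the escalation rule is a deliberately crude assumption:

    # A minimal risk register entry mirroring the four checklist questions.
    # Field names and the escalation heuristic are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIToolRisk:
        tool: str
        data_accessed: str          # what data does it access?
        decisions_influenced: str   # what decisions does it influence?
        failure_impact: str         # what happens if it gets something wrong?
        accountable_owner: str      # who is accountable?
        handles_personal_data: bool = False

    def needs_deeper_review(entry: AIToolRisk) -> bool:
        """Crude flag for entries that likely warrant a structured DPIA."""
        return entry.handles_personal_data and "customer" in entry.decisions_influenced.lower()

    entry = AIToolRisk(
        tool="Support chat assistant",
        data_accessed="Support tickets, which may include personal data",
        decisions_influenced="Customer refund recommendations",
        failure_impact="Wrong refunds, complaints, possible ICO interest",
        accountable_owner="Head of Operations",
        handles_personal_data=True,
    )
    print(needs_deeper_review(entry))  # True, so escalate to a DPIA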

Training ensures your team understands the policy and can spot common risks like data leakage through public AI tools, hallucinated outputs presented as fact, and bias in AI-generated recommendations. A half-day workshop establishes baseline competence.

What are the risks of getting AI governance wrong?

The risks fall into three categories: regulatory, operational, and reputational. For most SMEs, operational risk - staff unknowingly leaking data or making flawed decisions based on AI output - is the most immediate and likely concern.

Regulatory risk centres on GDPR non-compliance. If you process personal data through AI systems without proper safeguards, the ICO can impose significant fines. The ICO's enforcement record shows increasing scrutiny of automated decision-making across all organisation sizes.

Operational risk materialises when staff use public AI tools for confidential work, rely on AI-generated content without verification, or deploy AI in customer-facing processes without adequate testing. These failures are preventable with basic governance controls.

Reputational risk grows as customers and clients become more aware of AI use. Businesses that cannot demonstrate responsible AI practices face harder questions from procurement teams, regulators, and the public. Early governance is insurance against future scrutiny.

  • 61% of UK employees admit to using unapproved AI tools at work (DSIT Workforce Survey)
  • 3 in 5 UK businesses say AI governance is a board-level concern (UK AI Safety Institute)

How do you start building AI governance in your organisation?

Start by mapping what AI tools your organisation already uses - including the ones nobody officially approved. A simple audit of current usage gives you the baseline you need to write a proportionate policy and identify immediate risks.
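
A spreadsheet is enough for most SMEs, but to show the idea concretely, this Python sketch records what is actually in use and surfaces anything unapproved. The tools and fields are invented examples, not a standard:

    # Illustrative shadow-AI audit: record what is actually in use,
    # then surface anything not yet approved. All entries are invented.
    inventory = [
        {"tool": "Public chatbot", "approved": False, "used_by": "Marketing", "data_shared": "Draft copy"},
        {"tool": "Meeting transcriber", "approved": True, "used_by": "Ops", "data_shared": "Call audio"},
        {"tool": "Code assistant", "approved": False, "used_by": "IT", "data_shared": "Internal source code"},
    ]

    for row in inventory:
        if not row["approved"]:
            print(f"Review needed: {row['tool']} ({row['used_by']}) shares {row['data_shared']}")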

Next, draft a short acceptable use policy. It should state which tools are approved, what data must not be entered into public AI systems, and who to contact with questions. Circulate it, discuss it with your team, and update it based on their feedback.

Then run a basic risk assessment on your highest-impact AI use cases. The UK government's guide to understanding AI provides a useful starting point for identifying where AI decisions affect people, data, or business-critical processes.

Finally, invest in training. A single workshop gives your team the vocabulary, awareness, and confidence to use AI responsibly. Governance that lives only in a policy document and never reaches your people is governance in name only.

Governance snapshot

Example SME Baseline

Before Hartz AI

  • Written AI policy: Missing
  • Use of public AI tools: Untracked
  • AI risk awareness: Low & informal

After a governance sprint (4-8 weeks)

  • Clear, practical AI policy everyone understands
  • Risk register and approval rules for new AI tools
  • Team training on safe, compliant AI usage

How Does Hartz AI Help With AI Compliance?

What we help you put in place

We translate regulation, standards and best practice into manageable steps so you feel confident saying "yes" or "no" to AI in your organisation.

AI Policies & Guardrails

Clear, plain-English policies that explain where AI can be used, how data should be handled, and when to escalate decisions.

  • Acceptable use for public & internal AI tools
  • Data handling, confidentiality & privacy rules
  • Roles, responsibilities & sign-off paths

Risk Assessments & Controls

Practical assessments of your AI use cases so you can identify, prioritise and manage key risks before they become issues.

  • AI risk register & impact mapping
  • Controls for bias, hallucinations & over-reliance
  • Third-party tool and vendor assessment

Training & Culture Change

Workshops and playbooks that help your teams recognise AI risk, make good decisions and know when to slow down and check.

  • AI governance training for leaders & teams
  • Practical scenarios for your sector
  • Ongoing support as policies bed in

Core Governance & Responsible AI Services

What AI Governance Services Does Hartz AI Offer?

Choose a focused sprint, ongoing advisory, or a mix of both. Every engagement is tailored to your risk appetite, sector and existing controls.

AI Policy & Guardrails Sprint

A 4-6 week sprint to create or refresh your AI policy, usage rules and internal guidance. Built in plain English, with examples your team can actually use.

  • Workshops with key stakeholders
  • Draft & iterate policy documents
  • Internal launch & Q&A session
Discuss a policy sprint

AI Risk & Impact Assessment

A focused look at your current and planned AI use cases, highlighting key risks and proposing proportionate controls.

  • Workshops with process owners
  • AI risk register & prioritisation
  • Control recommendations & roadmap
Talk about risk assessments

AI Governance & Compliance Training

Interactive sessions that help your teams understand what 'good' looks like when using AI, and how to spot when something isn't right.

  • Custom scenarios for your sector
  • Clear "do / don't" examples
  • Follow-up resources and playbooks
Explore training options

Ongoing Governance Advisory

A steady partner to sense-check new AI ideas, support board conversations and keep your guardrails up to date.

  • Regular check-ins with leadership
  • Review of new tools and use cases
  • Updates as regulations evolve
Learn about fractional CAIO support

AI Project & Vendor Assurance

Independent, plain-speaking review of AI projects and suppliers, so you can ask sharper questions and negotiate better terms.

  • Review of vendor claims & limitations
  • Data, privacy and security considerations
  • Red-flag list and mitigation options
Request project assurance

Custom Governance & Risk Programme

A joined-up programme that blends policy, training, risk assessments and advisory support over several months.

  • Tailored roadmap for your organisation
  • Mix of workshops, sprints & advisory
  • Designed to build internal capability
Design a governance programme

Looking for Frameworks and Templates?

Alongside services, we maintain an AI Governance & Responsible AI Hub with explainers, templates and deeper articles. It gives you reference material you can share internally, even before any formal project starts.

Visit the Governance & Responsible AI Hub

Do SMEs Really Need AI Governance?

Who this is for

Most of our governance work is with:

  • UK SMEs with 10-500 staff starting to use AI at scale
  • Professional services firms handling sensitive client data
  • Charities, membership bodies and education providers
  • Leaders who want confidence before saying "yes" to AI

You don't need an internal legal or data science team. We start from where you are and design something that's realistic for your size, sector and risk appetite.

How we like to work

Governance That Fits Your Organisation

Plain-English First

We avoid jargon and write in language your teams can actually use, not just file away.

Proportionate, Not Perfectionist

We focus on the few controls that matter most for you, rather than recreating a big-tech governance function.

Capability, Not Dependency

Our job is to leave you able to run governance yourself, with the option to call us back when needed.

Related Service

Governance is part of your AI strategy

AI governance does not exist in isolation. It sits within a broader AI strategy that covers readiness, roadmapping, and implementation. Our consultancy service helps you shape the full picture so governance becomes a natural part of your AI journey.

Explore AI Consultancy
Related Service

Your CAIO should own governance

A fractional Chief AI Officer gives your organisation ongoing leadership across AI strategy, governance, and risk. If you want governance that evolves as your AI use grows, a CAIO provides the continuity and authority to make it stick.

Explore Fractional CAIO

Common questions

What Do People Ask About AI Governance?

Is there a dedicated AI law in the UK?
The UK does not have a single AI-specific law, but existing regulation applies. GDPR requires a lawful basis for automated processing and data protection impact assessments for high-risk AI. The UK regulatory framework expects organisations to demonstrate safety, transparency, fairness, accountability, and contestability when deploying AI systems.

What does an SME need at minimum?
At minimum, you need an acceptable use policy that defines approved tools and data handling rules, a basic risk assessment for your highest-impact AI use cases, and a named person responsible for AI decisions. This can be established in four to six weeks and does not require specialist legal or technical staff.

How does GDPR apply to AI?
GDPR applies whenever AI processes personal data. You need a lawful basis for processing, must conduct a data protection impact assessment for high-risk AI, and must provide meaningful human review of automated decisions that significantly affect individuals. The ICO publishes specific guidance on AI and data protection that applies to all organisation sizes.

Can we extend our existing policies instead of writing new ones?
Yes, and that is often the most practical approach for SMEs. Your data protection, acceptable use, and information security policies can be extended with AI-specific clauses. This avoids creating a separate governance structure and ensures AI rules sit alongside the policies your team already follows.

Does the EU AI Act affect UK businesses?
If you sell products or services into the EU, or use AI systems developed by EU-based providers, the EU AI Act may apply to your business. Even if you trade only in the UK, the Act is shaping global expectations for AI governance. Understanding its risk-based classification system helps you future-proof your own governance framework.

A practical first step

Book a Governance Assessment

We will review your current AI usage, identify governance gaps, and recommend proportionate next steps. No heavy legal language. No pressure. Just a clear picture of where you stand and what to do next.