AI system mapping · Risk classification (AI Act lens) · GDPR data-flow review · Engineering roadmap

AI Risk & Compliance Audit

A structured review of how your AI system actually works - models, data, decisions, and controls - aligned with the EU AI Act’s risk-based logic and GDPR data-protection requirements.

Output: a clear risk register + prioritized implementation plan your engineering team can execute (or we can).

Free intro call. Not legal advice. Engineering-first scoping.

Built by engineers (not a legal blog)

We’re a small team of senior engineers and product builders who ship software in production - including AI systems and data-heavy applications.

Our focus is practical compliance: we map real data flows, identify the risk drivers that matter, and turn findings into an implementation-ready plan (docs, controls, tickets).

  • Engineering-first, not paperwork-first
  • EU AI Act + GDPR lens, grounded in real system architecture
  • Clear outputs: risk profile, docs, backlog of actions

We don’t provide legal advice - we help teams prepare, validate, and implement.

What this audit is

  • System-first: we look at the end-to-end AI system (not only the model).
  • Risk-based: we classify risks and obligations using the EU AI Act logic (what matters depends on risk tier).
  • GDPR-aware: we analyze personal data exposure across training, inference, outputs, and logs.
  • Actionable: you get a prioritized roadmap tied to concrete engineering tasks.

What this audit is not

  • Not legal certification. We provide technical and operational support, not legal advice.
  • Not a generic code review. We focus on architecture, data flows, decisions, controls, and governance.
  • Not “fairness only”. Fairness is one dimension; we also cover privacy, security, traceability, oversight, and ops.
  • Not exploratory PoC work. This is for teams shipping production AI.

Legal boundary: We can work alongside your counsel (or introduce partners) to validate legal interpretations. Our scope is technical, operational, and implementation-focused.

What we review

Four core lenses that cover most real-world AI risk and compliance failure points, plus two supplementary areas where they apply.

1) Model & System

System boundaries, model roles, prompts/tools, RAG, integrations, decision points, failure modes.

2) Data & Privacy (GDPR)

Data categories, personal data touchpoints, lawful basis assumptions, retention, minimization, processor/vendor chain.

3) Risk & Governance (AI Act lens)

Risk classification, impact analysis, oversight, accountability, documentation needs, controls per risk tier.

4) Security & Operations

Logging/audit trails, access control, monitoring, incident response, evaluation, change management, MLOps.
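To make "audit trails" concrete: the kind of control we look for is structured, append-friendly decision logging that avoids storing raw personal data. A minimal sketch (all field and model names are hypothetical, not a standard schema):

```python
import json
import datetime

def log_model_decision(model_id: str, input_hash: str, decision: str) -> str:
    """Sketch of one structured audit record for a model decision.

    Stores a hash of the input rather than the raw input, to limit
    personal data accumulating in logs (GDPR minimization).
    """
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "input_hash": input_hash,  # hash, not raw content
        "decision": decision,
    }
    # One JSON object per line: easy to ship to any log pipeline.
    return json.dumps(record)
```

In practice this feeds whatever log pipeline you already run; the point is that each decision is traceable without the log itself becoming a personal-data liability.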

LLMs & Agents (if applicable)

Prompt injection risks, tool permissions, data leakage paths, output safeguards, evaluation strategy.
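As an illustration of the "tool permissions" control we review: agent tool calls should be gated by an explicit, deny-by-default allowlist rather than granted implicitly. A minimal sketch, with hypothetical agent and tool names:

```python
# Deny-by-default tool allowlist: an LLM-requested tool call only
# executes if the (agent, tool) pair is explicitly granted.
# All agent and tool names below are hypothetical illustrations.
ALLOWED_TOOLS = {
    "support_agent": {"search_docs", "create_ticket"},
    "billing_agent": {"lookup_invoice"},
}

def authorize_tool_call(agent: str, tool: str) -> bool:
    """Return True only for explicitly granted (agent, tool) pairs."""
    return tool in ALLOWED_TOOLS.get(agent, set())

# Denied calls should be logged for the audit trail, not silently dropped.
```

The deny-by-default shape matters: a prompt-injected request for an unlisted tool fails closed instead of relying on the model to refuse.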

Third-party AI & Vendors

Model/provider dependencies, data transfer implications, DPAs, subprocessors, contractual + technical controls.

Deliverables

Designed for CEOs/CTOs and engineering teams - usable immediately.

  • Executive summary: risk posture, key gaps, recommended plan
  • AI system map: components, data flows, decision points
  • Risk register: categorized risks with severity and rationale
  • Risk tier indication: using EU AI Act-style logic (what obligations are likely relevant)
  • GDPR exposure map: training/inference/logging touchpoints, vendors, transfers
  • Prioritized roadmap: what to fix first, with estimated effort
  • Implementation plan: concrete control backlog (tickets-ready)
  • Artifacts list: what documentation/governance you likely need next
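To make the risk register concrete: entries are structured data your team can file and track as tickets. Roughly this shape, where all fields and values are illustrative rather than a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """Illustrative shape of one risk-register entry (not a fixed schema)."""
    id: str
    category: str                 # e.g. "privacy", "security", "oversight"
    severity: str                 # e.g. "low" / "medium" / "high"
    rationale: str                # why this severity was assigned
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example entry:
entry = RiskEntry(
    id="R-001",
    category="privacy",
    severity="high",
    rationale="Raw prompts containing personal data are retained in logs "
              "without a defined retention period.",
    mitigations=[
        "Redact personal data before logging",
        "Define and enforce a retention period",
    ],
)
```

Each mitigation maps directly to a backlog item, which is what makes the register "tickets-ready" rather than a static document.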

Process & timeline

Typical engagement: 1–2 weeks. Faster if your system is well-documented.

1) Kickoff + scoping (Day 1)

Define AI system boundaries, key use cases, data sources, and stakeholders.

2) System review (Days 2–6)

Architecture walkthroughs, data-flow mapping, control review, vendor dependency analysis.

3) Risk classification + roadmap (Days 6–9)

Risk register, AI Act lens classification, GDPR exposure map, prioritized remediation plan.

4) Readout + next steps (Days 10–14)

Exec readout + engineering handoff. Optional: implementation kickoff for controls and compliance artifacts.

Common triggers

  • Preparing for enterprise procurement / vendor assessment
  • Shipping to EU customers (or adding EU user base)
  • Introducing LLM/agent features with data access
  • Security team asks for traceability, monitoring, and controls
  • Legal asks “are we high-risk?” and engineering needs a concrete plan

What happens after the audit

  • Compliance track: EU AI Act & GDPR documentation + governance
  • Engineering track: implement controls (logging, oversight, minimization, monitoring)
  • Ongoing support: release reviews + questionnaires + updates

Pricing

Fixed-scope options are best for predictability. Complex multi-product setups are scoped as custom.

AI Audit - Standard

€4k–€10k • 1–2 weeks

For a single AI product/system with clear ownership and accessible documentation.

AI Audit - Deep

€12k–€25k • 2–4 weeks

For multi-module systems, multiple vendors/models, or complex data pipelines.

Ongoing Support

€3k–€15k / month

Release reviews, questionnaires, control monitoring, and incremental improvements.

Book a consultation

Share your product context and we’ll propose the right scope in one call.

If you’re not ready yet, you can also start with the Free AI Risk Assessment.