EU AI Act & GDPR - DIY Compliance Starter Kit

A practical toolkit for AI teams who want to understand compliance risk on their own - without legal overload or guesswork.

00. How to use this kit

A fast path through risk classification, GDPR data flows, documentation, and controls.

Purpose. This kit helps you understand and reduce AI-related regulatory risk under the EU AI Act and GDPR at a practical level. It is not legal advice and does not certify compliance.

Recommended path (60–120 minutes):

  1. Classify AI risk (Section 02)
  2. Map personal data flows (Section 03)
  3. Document your AI system (Section 04)
  4. Review basic controls (Section 05)

When to stop. Stop DIY and seek validation if (a) risk is medium/high, (b) outputs affect individuals materially, (c) sensitive data is involved, (d) enterprise procurement requires formal answers, or (e) you are unsure about any key decision.

01. EU AI Act - Practical overview

Risk-based thinking, what drives obligations, and the most common misconceptions.

The EU AI Act follows a risk-based approach. Obligations depend on how AI is used in your product, not on generic labels.

What increases risk most

  • Use in regulated domains (employment, finance, healthcare, education, public services)
  • Automated or hard-to-challenge outcomes
  • Limited human oversight
  • Use of personal or sensitive data
  • Opaque or hard-to-explain outputs

Common misconception

“We use an LLM API, so compliance is the provider’s problem.” In practice, you remain responsible for how AI is used in your product and how data flows through vendors.

02. AI risk classification

A checklist + simple interpretation logic (low / medium / potentially high).

Answer yes or no to each question below. A "yes" on the regulated-domain, no-human-review, material-effect, or sensitive-data questions counts as a "High" signal (see Section 01); if you have any High signal, treat your system as potentially high risk until validated.

Checklist

  • Is AI a core function of the product?
  • Does the system make or recommend decisions about individuals?
  • Are outputs used without mandatory human review?
  • Is the system used in employment, finance, healthcare, education, or public services?
  • Can outputs materially affect access, eligibility, or opportunities?
  • Does the system process personal data?
  • Does it process sensitive data (health, biometrics, children, etc.)?
  • Is the system difficult to explain to a non-technical audience?

Interpretation

  • Mostly "no": low exposure; keep basic documentation.
  • Several "yes", no High signals: expect documentation + controls; plan a structured review.
  • Any High signal: stop DIY and validate with an audit.
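The interpretation logic above can be sketched in code. This is a hypothetical helper, not part of the EU AI Act: the question keys, the choice of which questions count as High signals, and the threshold of three "yes" answers for Medium are all assumptions for illustration.

```python
# Which checklist questions count as "High" signals is an assumption,
# drawn from the risk drivers in Section 01.
HIGH_SIGNALS = {
    "regulated_domain",   # employment, finance, healthcare, education, public services
    "no_human_review",    # outputs used without mandatory human review
    "material_effect",    # affects access, eligibility, or opportunities
    "sensitive_data",     # health, biometrics, children, etc.
}

def classify(answers: dict) -> str:
    """answers maps checklist question keys to True ('yes') or False ('no')."""
    yes = {question for question, answer in answers.items() if answer}
    if yes & HIGH_SIGNALS:
        return "potentially high: stop DIY and validate with an audit"
    if len(yes) >= 3:  # threshold is illustrative, not regulatory
        return "medium: documentation + controls, plan a structured review"
    return "low: keep basic documentation"
```

For example, `classify({"ai_core": True, "personal_data": True})` stays low, while a single sensitive-data "yes" is enough to flag the system as potentially high risk.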

03. GDPR for AI systems

Where GDPR appears in AI workflows and how to map data flows quickly.

GDPR typically applies at multiple stages: training data, prompts/inputs, inference, logs, feedback loops, and vendors.

Common blind spots

  • Prompts containing personal data
  • Logs stored indefinitely
  • Unclear data sharing with LLM vendors
  • No retention/deletion rules for AI-related data

Data flow mapping (10 minutes)

Create a simple table with columns: Data source, Personal data?, Stage, Vendor/System, Retention, Notes.
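A filled-in sketch of such a table, kept as structured data so gaps can be spotted programmatically. All entries (vendors, retention periods, notes) are hypothetical examples, not recommendations.

```python
# Hypothetical data flow map with the columns described above.
DATA_FLOWS = [
    {"source": "User prompts", "personal_data": "Yes", "stage": "Inference",
     "vendor": "LLM API provider", "retention": "30 days (vendor default)",
     "notes": "May contain names, emails"},
    {"source": "Application logs", "personal_data": "Yes", "stage": "Logging",
     "vendor": "Internal", "retention": "Undefined",
     "notes": "Full prompts logged verbatim"},
    {"source": "Training corpus", "personal_data": "No", "stage": "Training",
     "vendor": "N/A", "retention": "N/A",
     "notes": "Public documentation only"},
]

def retention_gaps(flows):
    """Flag rows holding personal data without a defined retention rule -
    one of the blind spots listed above."""
    return [f["source"] for f in flows
            if f["personal_data"] == "Yes" and f["retention"] == "Undefined"]
```

Here `retention_gaps(DATA_FLOWS)` flags the application logs, which matches the "logs stored indefinitely" blind spot.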

DPIA hint

A DPIA is more likely if AI evaluates individuals, decisions have significant effects, sensitive data is involved, or processing is large-scale. If unsure, assume “yes” until validated.
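The DPIA hint reduces to a simple rule, sketched below. Parameter names are assumptions; the "assume yes if unsure" default mirrors the hint above, and the result is a likelihood flag, not a legal determination.

```python
def dpia_likely(evaluates_individuals: bool,
                significant_effects: bool,
                sensitive_data: bool,
                large_scale: bool,
                unsure: bool = False) -> bool:
    """A DPIA is more likely if any trigger applies.
    If unsure, assume 'yes' until validated."""
    return unsure or any([evaluates_individuals, significant_effects,
                          sensitive_data, large_scale])
```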

04. AI system documentation templates

Copy/paste structures for system description and customer-facing AI FAQ.

AI System Description (copy/paste)

System name

What the system does (plain language, no marketing)

What the system does NOT do (clear boundaries)

Inputs (data sources, user input, vendors)

Outputs (what users receive)

Human oversight (who can intervene and how)

Known limitations (accuracy, bias, edge cases)

Customer-facing AI FAQ (starter questions)

  • Do you use AI in your product?
  • Is decision-making automated?
  • Is personal data processed?
  • Which AI vendors are involved?
  • Can users challenge or appeal AI outcomes?

05. Controls & governance (MVP)

Minimum viable controls: logging, oversight, monitoring, incident handling, vendor management.

Minimum controls checklist

  • Access control (who can change prompts/models/settings)
  • Logging of model outputs and key decisions
  • Human override / escalation path
  • Monitoring & periodic review (quality + safety)
  • Incident handling (how you respond to failures)
  • Vendor management (what data vendors see and retain)
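The logging control above can be sketched as an append-only audit line. This is a minimal illustration, not a prescribed schema: field names are assumptions, and it deliberately logs a hashed identifier and an output summary rather than raw data, so the audit log does not itself become a GDPR retention problem.

```python
import datetime
import json

def log_model_decision(user_id_hash: str, model: str,
                       output_summary: str, human_reviewed: bool) -> str:
    """Return one JSON line for an append-only audit log.
    Store a hash (not the raw identifier) and a summary (not the
    full output) to limit personal data in the log itself."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id_hash,
        "model": model,
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,
    }
    return json.dumps(entry)
```

Each line records who could have intervened (`human_reviewed`), which supports both the logging and the human-override controls in the checklist.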

Implementation priority

Prioritize high-risk / low-effort items first. Avoid over-engineering controls that do not reduce real risk drivers.

06. When to stop & next steps

Clear criteria for DIY vs audit + a sensible next step path.

DIY is usually enough if risk is low, there are no automated decisions affecting individuals, sensitive data is not involved, and customers do not request formal validation.

Consider an audit if risk is medium/high, sales/procurement is blocked, you need external validation, or you want confidence instead of guesses.

Credit note: if you proceed with an AI Audit, the kit fee can be credited towards the audit.

Want the full kit?

Get the complete DIY AI Compliance Starter Kit for €500 (one-time). Includes templates, checklists, and decision tools. The kit fee is credited if you later book an AI Audit.

Built by engineers (not a legal blog)

We’re a small team of senior engineers and product builders who ship software in production - including AI systems and data-heavy applications.

Our focus is practical compliance: we map real data flows, identify the risk drivers that matter, and turn findings into an implementation-ready plan (docs, controls, tickets).

  • Engineering-first, not paperwork-first
  • EU AI Act + GDPR lens, grounded in real system architecture
  • Clear outputs: risk profile, docs, backlog of actions

We don’t provide legal advice - we help teams prepare, validate, and implement.

Who this is for

AI startups & SaaS teams, founders, CTOs, product leaders, companies targeting the EU market, teams not ready for a full audit yet.

Not legal advice. Kit price is credited if you later proceed with an AI audit.