Many explain regulation. We implement it.

AI Act, GDPR, and NIS2 impose requirements on your AI systems. For most organisations, it is unclear which of them apply, what they mean in practice, and by when they must be implemented.

We create clarity and deliver implementation: risk classification, data protection, documentation, training. Legally backed by Taylor Wessing.
Why

If you want to scale AI, you need rules that scale with it

AI only creates real value when it goes beyond individual pilot projects. Into new value chains, into customer processes, into decisions with economic weight. That requires a framework: Who is allowed to do what? Which data flows where? How is documentation, classification, and training handled?

AI Act, GDPR, and NIS2 define that framework. Governance translates it into structures that keep your organisation actionable. Not as a constraint. But as a prerequisite for AI to move beyond the experiment.

Services

From classification to implementation

AI Act, GDPR, and NIS2 result in concrete requirements for your AI system. We implement them. Nine service areas that together form a complete governance framework.

Technical Documentation

The AI Act requires comprehensive documentation of high-risk AI systems. We create technical documentation: system architecture, training data, performance metrics, risk assessments. Traceable for auditors, usable for your team.

Competency Certification

Since February 2025, organisations must ensure that all employees working with AI have sufficient AI competency. Our workshop formats deliver the knowledge and conclude with a certificate.

Access Controls

Who is allowed to use, train, or modify which AI system? We define and implement authorization concepts for your AI systems. Role-based, documented, and compliant with GDPR and NIS2.
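As an illustration, the core of such a role-based concept can be sketched in a few lines of Python. The roles and actions below are invented examples, not a prescribed scheme:

```python
# Minimal role-based access control (RBAC) sketch for AI systems.
# Roles and actions are illustrative examples only.

ROLE_PERMISSIONS = {
    "viewer":   {"use"},
    "operator": {"use", "retrain"},
    "ml_admin": {"use", "retrain", "modify"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In practice, each permission check would also be logged, so that access decisions remain documented and auditable.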

Anonymization

Personal data in training data and inference is one of the most common compliance pitfalls. We implement anonymization and pseudonymization algorithms that ensure GDPR compliance while preserving data quality.
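A common pseudonymization building block is replacing direct identifiers with a keyed hash, so records remain linkable for analysis without exposing the identity. This is a simplified sketch, not a complete anonymization pipeline; the key and field names are illustrative:

```python
import hashlib
import hmac

# Pseudonymization sketch: replace a direct identifier with a keyed hash.
# The secret key must be stored separately from the data; whoever lacks
# the key cannot re-create or reverse the mapping.
SECRET_KEY = b"store-me-in-a-vault"  # illustrative only

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "C-1042", "claim_amount": 1830.50}
record["customer_id"] = pseudonymize(record["customer_id"])
```

Note that pseudonymized data still counts as personal data under GDPR; full anonymization additionally requires removing indirect identifiers and checking for re-identification risk.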

Encryption

Data in training, inference, and storage must be protected. We implement encryption across the entire AI pipeline, at rest and in transit. A core requirement under NIS2 and GDPR.

Traceability

We make AI decisions explainable and build logging and audit trails into your systems that automatically trace every output back to its inputs, model version, and parameters. So you can always understand how a result was produced.
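Schematically, one such audit-trail record could look like this; the field names are illustrative, not a fixed standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(model_version: str, params: dict, inputs: dict, output) -> dict:
    """Build one audit-trail record tying an output to its provenance."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "params": params,
        # Hash the inputs so the record proves which inputs were used
        # without storing potentially personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }

entry = audit_entry("v1.2.0", {"temperature": 0}, {"text": "hello"}, "approved")
```

Storing a hash instead of the raw inputs keeps the audit trail itself GDPR-friendly while still allowing verification against archived input data.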

Model Versioning

Which model was running when, with which data? We establish versioning and change management for models and training data. So you can always prove what was in production.
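The idea reduces to an append-only release history that can answer "what was live on day X?". A minimal sketch, with invented version labels:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRelease:
    model_version: str
    training_data_version: str
    deployed_on: date

# Append-only release history: entries are added, never rewritten.
releases: list[ModelRelease] = []

def deploy(model_version: str, data_version: str, when: date) -> None:
    releases.append(ModelRelease(model_version, data_version, when))

def model_in_production_on(day: date):
    """Return the release that was live on the given day, if any."""
    candidates = [r for r in releases if r.deployed_on <= day]
    return max(candidates, key=lambda r: r.deployed_on, default=None)

deploy("v1.0", "data-2023Q4", date(2024, 1, 10))
deploy("v1.1", "data-2024Q1", date(2024, 6, 1))
```

In production this record would live in a model registry rather than a list, but the evidentiary principle is the same: every deployment is a dated, immutable entry.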

Monitoring

AI systems change during operation through model drift, declining data quality, or performance drops. We set up monitoring systems that automatically detect and report deviations.
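One of the simplest drift checks compares the current mean of a metric against a baseline in units of the baseline's standard deviation. This is a sketch of the principle, not a production monitoring stack; the threshold is an assumed default:

```python
import statistics

def drift_alert(baseline: list[float], current: list[float],
                threshold: float = 3.0) -> bool:
    """Flag drift when the current mean deviates from the baseline mean
    by more than `threshold` baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        # Constant baseline: any change at all counts as a deviation.
        return statistics.mean(current) != mu
    z = abs(statistics.mean(current) - mu) / sigma
    return z > threshold

baseline = [10, 11, 9, 10, 10]
```

Real monitoring setups typically add distribution-level tests and alerting per feature, but even this simple check catches the gross shifts that matter for compliance reporting.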

Auditing

Compliance must be verifiable. We prepare your AI systems for audits and conduct structured assessments. Together with Taylor Wessing, we offer an AI Act Audit that combines law and technology.
Cooperation

Unique consulting for a new kind of regulation

Complex regulation requires new approaches. In cooperation with Taylor Wessing, one of the leading law firms for IT and data protection law, we combine legal classification and technical implementation. For governance that doesn't end in a report, but runs in your system.

Schedule a Call
Our Project Formats

Your path from compliance gap to audit-ready

From competency certification to conformity assessment: three formats that directly address the requirements of the AI Act and GDPR.

AI Act Crash Course

What the AI Act means for you: legally sound, technically tangible.
Understand the AI Act

AI Management Crash Course

A structured introduction to AI. Clarity instead of hype. Knowledge instead of actionism.
Build Knowledge Now

AI Compliance

IT Security, GDPR, and EU AI Act — Covered

We develop, operate, and support AI in Germany in accordance with ISO 27001. Encryption, anonymization, clear architecture, and auditable documentation ensure that data protection, IT security, and regulatory requirements are met.

AI Made in Germany - Development, Support & Hosting
GDPR compliant, AI Act compliant, NIS2 compliant
LLMs hosted in Europe
Enterprise-class security, ISO 27001 compliant

Cases

AI Governance in practice

700 Members, One AI

How an Association Made AI Document Management Affordable for 700 Members

700+

Member firms with access to AI search
Learn more

Digital Strategy for 1.2 Million Members

Digital Strategy for ADAC Hansa: When the Core Service Loses Relevance

100%

approval from sounding board and leadership
Learn more

360° Customer View for Sales

360° Customer View with AI: Data-Driven Sales for 1.2 Million Customers

2x

Doubling of sales conversion probability
Learn more

From AI Hesitation to an AI Roadmap

AI Strategy for FinTech: How a Scale-Up Built an Investor-Ready AI Roadmap

2

Intensive days AI Ideation Workshop
Learn more

Price Prediction in Seconds

From 10 Years of Transaction Data to a Binding Real-Time Price Prediction

24h → 1 Sec.

Process acceleration of valuation
Learn more

Repair Costs in Seconds

AI Prediction of Repair Costs in Motor Claims Management

93%

Faster claims processing
Learn more

Data Strategy Instead of Data Silos

Data Strategy for Financial Services: From 50 Data Sources to an AI-Ready Lakehouse Platform

6 Months

From assessment to production platform
Learn more

50 Million Euros Through Data

Procurement Optimization in Motor Insurance

~€50 million

Savings per year through AI
Learn more

Expert Knowledge at the Touch of a Button

AI Assistant in Customer Service with RAG System

100

Days from idea to MVP
Learn more

A Digital Future for the Energy Transition

AI-Driven Digital Transformation Strategy: How a Federal Enterprise Modernized Its Operations

7

Months from as-is analysis to roadmap
Learn more

AI Calculates Hail Damage

Hail Damage Calculated in Milliseconds: How AI Helps Insurers Manage Mass Claims

40,000+

hail damage claims processed via the AI system per year
Learn more

Computer Vision in Claims Management

AI Image Recognition in Motor Claims: Damage Assessment in Seconds Instead of Days

93%

Prediction accuracy in component detection, at assessor level
Learn more

Saving Lives with Data

AI in Medicine: Data Analysis in Emergency Care

1.3 Hours

faster treatment per stroke
Learn more

Remote Video Inspection of Vehicle Damage

Remote Video Inspection of a Policyholder's Vehicle Damage

€100,000

Project volume, pro bono
Learn more

Omnichannel in Insurance Sales

Achieving More Together: Omnichannel in Insurance Sales

Learn more

Questions & Answers

AI governance is the organizational and technical framework that governs how your company develops, operates, and controls AI systems. This includes responsibilities, access controls, documentation, monitoring, and auditability.

Without governance, there is no foundation to scale AI beyond pilot projects. Since the AI Act, many of these measures are legally required. At the same time, GDPR and NIS2 impose their own requirements on AI systems. Governance brings all three regulations into an actionable framework.

AI strategy defines where and why your company wants to deploy AI: goals, use cases, prioritization, roadmap. AI governance regulates how that deployment is controlled and implemented in a compliant manner: roles, policies, documentation, monitoring, auditing.

Strategy answers the question "What do we do with AI?" Governance answers "How do we make sure we do it right?" Both are connected, but governance typically becomes relevant when AI goes into production or regulatory requirements take effect.

Personal data in training data requires a legal basis under Art. 6 GDPR, typically legitimate interest or consent. Additionally, the principles of data minimization and purpose limitation apply.

In practice, we rely on anonymization and pseudonymization before data enters training. Maintaining data quality in the process is critical. Synthetic data can be an alternative when original data cannot be used. Which method fits depends on the use case and data situation.

A Data Protection Impact Assessment (DPIA) under Art. 35 GDPR is mandatory when processing is likely to result in a high risk to the rights and freedoms of individuals. With AI systems, this is frequently the case: automated decision-making, profiling, processing of sensitive data, or large data volumes are typical triggers.

In practice, most production AI systems require a DPIA. We recommend conducting it early, not just before go-live. The DPIA documents risks and countermeasures and is one of the first documents requested during a supervisory authority review.

NIS2 obligates companies in critical and important sectors to implement comprehensive cybersecurity measures. For AI systems, this means: risk management, incident reporting, access controls, encryption, and supply chain security must also cover AI components.

AI systems are particularly affected because they often rely on external APIs, cloud infrastructure, and third-party models. Each of these is a potential attack vector that NIS2 addresses. If your company falls under NIS2 and uses AI, both sets of requirements must be considered together.

Companies in critical infrastructure sectors (energy, healthcare, finance, transport, water) are subject to a triple regulation: AI Act, GDPR, and NIS2 apply simultaneously. This means higher documentation requirements, stricter demands on availability and integrity, and shorter reporting deadlines for security incidents.

For AI systems in critical infrastructure environments, additional requirements apply for resilience and traceability. Models used in critical processes need redundant monitoring systems and documented fallback mechanisms. Governance must map all three regulatory frameworks in an integrated structure.

An AI register records all AI systems that are in use or planned within your company. For each system, you document: purpose, risk class, responsible person, data used, provider, interfaces, and current compliance status.

We start with an inventory: Which AI tools are already in use, including informally? Often more systems are in use than IT is aware of. The register becomes the central control instrument for governance because it shows at a glance where action is needed. For high-risk AI systems under the AI Act, such a register is effectively mandatory.
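A register entry can be modeled as a simple record per system. The fields follow the list above; the example system and values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    name: str
    purpose: str
    risk_class: str          # e.g. "minimal", "limited", "high"
    owner: str
    data_categories: list
    provider: str
    interfaces: list
    compliance_status: str   # e.g. "compliant", "action needed"

register: list[AIRegisterEntry] = [
    AIRegisterEntry(
        name="Claims Estimator",
        purpose="Predict repair costs in motor claims",
        risk_class="high",
        owner="Head of Claims",
        data_categories=["vehicle data", "damage photos"],
        provider="in-house",
        interfaces=["claims API"],
        compliance_status="action needed",
    ),
]

def needs_action(entries):
    """List systems whose compliance status requires follow-up."""
    return [e.name for e in entries if e.compliance_status != "compliant"]
```

Whether the register lives in a spreadsheet, a GRC tool, or code matters less than keeping it complete and current; the query above is the kind of "where is action needed?" view it should support at any time.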

The AI Act applies in stages. Since February 2025, the prohibitions on unacceptable AI practices are in effect, as is the training obligation under Art. 4: all employees who operate or oversee AI systems must have sufficient AI competency. Since August 2025, obligations for providers of general-purpose AI models apply. From August 2026, the full requirements for high-risk AI systems take effect: documentation, risk management, monitoring, auditability.

GDPR has applied since 2018 to all AI systems that process personal data. The NIS2 Directive has been in force at EU level since January 2023; the deadline for transposition into national law passed in October 2024. Anyone using AI in production should build governance now, not wait until the last deadline hits.

Yes, in most cases. As soon as an external AI provider processes personal data on your behalf, a Data Processing Agreement (DPA) under Art. 28 GDPR is mandatory. This applies to cloud-based AI tools, API services, and SaaS platforms that work with your data.

For every AI tool, check: Is personal data being transmitted? Are inputs stored or used for training? Where is the data processed? For many common AI tools, GDPR compliance is not guaranteed out of the box. A DPA alone is not sufficient; it must be complemented by technical measures such as anonymization and access controls.

Ready when you are

The future begins when human intelligence develops artificial intelligence. The first step is just a click away.

Contact Sales
Apply Now

Since 2017, we have been building AI systems that transform businesses. Let's talk about yours.