Many explain regulation. We implement it.
We create clarity and deliver implementation: risk classification, data protection, documentation, training. Legally backed by Taylor Wessing.
If you want to scale AI, you need rules that scale with it
AI only creates real value when it goes beyond individual pilot projects. Into new value chains, into customer processes, into decisions with economic weight. That requires a framework: Who is allowed to do what? Which data flows where? How is documentation, classification, and training handled?
AI Act, GDPR, and NIS2 define that framework. Governance translates it into structures that keep your organization able to act. Not as a constraint, but as a prerequisite for AI to move beyond the experiment.

From classification to implementation
The AI Act, GDPR, and NIS2 result in concrete requirements for your AI systems. We implement them: nine service areas that together form a complete governance framework.
Unique consulting for a new kind of regulation

Complex regulation requires new approaches. In cooperation with Taylor Wessing, one of the leading law firms for IT and data protection law, we combine legal classification and technical implementation. For governance that doesn't end in a report, but runs in your system.
Your path from compliance gap to audit-ready
From competency certification to conformity assessment: three formats that directly address the requirements of the AI Act and GDPR.
AI Compliance
IT Security, GDPR, and EU AI Act — Covered
We develop, operate, and support AI in Germany in accordance with ISO 27001. Encryption, anonymization, clear architecture, and auditable documentation ensure that data protection, IT security, and regulatory requirements are met.
Latest on AI governance and regulation
Questions & Answers
AI governance is the organizational and technical framework that governs how your company develops, operates, and controls AI systems. This includes responsibilities, access controls, documentation, monitoring, and auditability.
Without governance, there is no foundation to scale AI beyond pilot projects. Since the AI Act, many of these measures are legally required. At the same time, GDPR and NIS2 impose their own requirements on AI systems. Governance brings all three regulations into an actionable framework.
AI strategy defines where and why your company wants to deploy AI: goals, use cases, prioritization, roadmap. AI governance regulates how that deployment is controlled and implemented in a compliant manner: roles, policies, documentation, monitoring, auditing.
Strategy answers the question "What do we do with AI?" Governance answers "How do we make sure we do it right?" Both are connected, but governance typically becomes relevant when AI goes into production or regulatory requirements take effect.
Using personal data as training data requires a legal basis under Art. 6 GDPR, typically legitimate interest or consent. In addition, the principles of data minimization and purpose limitation apply.
In practice, we rely on anonymization and pseudonymization before data enters training. Maintaining data quality in the process is critical. Synthetic data can be an alternative when original data cannot be used. Which method fits depends on the use case and data situation.
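As a rough sketch of what pseudonymization before training can look like, the Python snippet below replaces direct identifiers with keyed hashes; the field names and key handling are illustrative assumptions, not a fixed recipe.

```python
import hashlib
import hmac

# Illustrative only: the key must live in a key management system,
# separate from the training data, or the mapping remains reversible in practice.
PSEUDONYMIZATION_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable token that cannot be reversed without the key."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_training_record(record: dict) -> dict:
    """Keep only the features needed for training (data minimization),
    pseudonymize the identifier, and drop free-text fields entirely."""
    return {
        "customer_id": pseudonymize(record["customer_id"]),
        "purchase_amount": record["purchase_amount"],
    }

print(prepare_training_record(
    {"customer_id": "C-1042", "purchase_amount": 59.90, "email": "jane@example.com"}
))
```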
A Data Protection Impact Assessment (DPIA) under Art. 35 GDPR is mandatory when processing is likely to result in a high risk to the rights and freedoms of individuals. With AI systems, this is frequently the case: automated decision-making, profiling, processing of sensitive data, or large data volumes are typical triggers.
In practice, most production AI systems require a DPIA. We recommend conducting it early, not just before go-live. The DPIA documents risks and countermeasures and is one of the first documents requested during a supervisory authority review.
NIS2 obligates companies classified as essential or important entities to implement comprehensive cybersecurity measures. For AI systems, this means: risk management, incident reporting, access controls, encryption, and supply chain security must also cover AI components.
AI systems are particularly affected because they often rely on external APIs, cloud infrastructure, and third-party models. Each of these is a potential attack vector that NIS2 addresses. If your company falls under NIS2 and uses AI, both sets of requirements must be considered together.
Companies in critical infrastructure sectors (energy, healthcare, finance, transport, water) are subject to triple regulation: the AI Act, GDPR, and NIS2 apply simultaneously. This means higher documentation requirements, stricter demands on availability and integrity, and shorter reporting deadlines for security incidents.
For AI systems in critical infrastructure environments, additional requirements apply for resilience and traceability. Models used in critical processes need redundant monitoring systems and documented fallback mechanisms. Governance must map all three regulatory frameworks in an integrated structure.
An AI register records all AI systems that are in use or planned within your company. For each system, you document: purpose, risk class, responsible person, data used, provider, interfaces, and current compliance status.
We start with an inventory: Which AI tools are already in use, including informally? Often more systems are in use than IT is aware of. The register becomes the central control instrument for governance because it shows at a glance where action is needed. For high-risk AI systems under the AI Act, such a register is effectively mandatory.
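As a rough sketch of what a register entry can capture, a minimal data model might look like the following; the field names and risk classes are illustrative assumptions, not a fixed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIRegisterEntry:
    """One register entry: who runs what, on which data, under which
    risk class, and where compliance currently stands."""
    name: str
    purpose: str
    risk_class: RiskClass
    responsible_person: str
    data_categories: list[str]
    provider: str
    interfaces: list[str] = field(default_factory=list)
    compliance_status: str = "under review"

# Example entry; all values are purely illustrative
chatbot = AIRegisterEntry(
    name="Support chatbot",
    purpose="First-level answers to customer enquiries",
    risk_class=RiskClass.LIMITED,
    responsible_person="Head of Customer Service",
    data_categories=["contact data", "ticket history"],
    provider="External API provider",
    interfaces=["CRM", "ticketing system"],
)
```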
The AI Act applies in stages. Since February 2025, the prohibitions on unacceptable AI practices and the training obligation under Art. 4 have applied: all employees who operate or oversee AI systems must have sufficient AI competency. From August 2025, the obligations for general-purpose AI models take effect. From August 2026, the full requirements for high-risk AI systems apply: documentation, risk management, monitoring, auditability.
GDPR has applied since 2018 to all AI systems that process personal data. NIS2's transposition deadline for member states passed in October 2024, and the directive is still being implemented into national law. Anyone using AI in production should build governance now, not wait until the last deadline hits.
Yes, in most cases. As soon as an external AI provider processes personal data on your behalf, a Data Processing Agreement (DPA) under Art. 28 GDPR is mandatory. This applies to cloud-based AI tools, API services, and SaaS platforms that work with your data.
For every AI tool, check: Is personal data being transmitted? Are inputs stored or used for training? Where is the data processed? For many common AI tools, GDPR compliance is not guaranteed out of the box. A DPA alone is not sufficient; it must be complemented by technical measures such as anonymization and access controls.
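The three checks translate into a simple per-tool assessment. The sketch below is one possible way to record them; the field names and flags are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """Answers to the three checks above, plus the contractual basis."""
    tool: str
    transmits_personal_data: bool
    inputs_used_for_training: bool
    processing_location: str  # e.g. "EU", "US", "unknown"
    dpa_in_place: bool

def open_issues(a: AIToolAssessment) -> list[str]:
    """Flag the gaps that typically need action before rollout."""
    issues = []
    if a.transmits_personal_data and not a.dpa_in_place:
        issues.append("Personal data is transmitted without a DPA under Art. 28 GDPR.")
    if a.inputs_used_for_training:
        issues.append("Inputs are used for provider-side training; check opt-out or anonymization.")
    if a.processing_location not in ("EU", "EEA"):
        issues.append("Processing outside the EU/EEA; a transfer mechanism must be verified.")
    return issues

print(open_issues(AIToolAssessment("Example SaaS tool", True, True, "US", False)))
```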
Ready when you are
The future begins when human intelligence develops artificial intelligence. The first step is just one click away.
Since 2017, we have been building AI systems that transform businesses. Let's talk about yours.













