AI Transparency and Data Protection

How our AI system handles your data

When you use our AI services, you want to know what happens to your data. Here we explain the entire data flow, name every service involved, and link to the official sources. No marketing promises, just verifiable facts.

Data flow step by step

1. Your input

You ask a question or enter text via the web interface, the Teams bot, or the API.

2. Detection of personal data

Before the text leaves our system, it passes through five parallel detection methods:

Pattern matching for email, phone, IBAN, and ID numbers
Linguistic name recognition (spaCy, specialized for German)
Address detection
Street name detection
Abbreviation detection for companies and departments

All five run simultaneously. Total duration: approximately 10 milliseconds.
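The parallel detection step can be sketched as concurrent workers over the same text. The patterns below are simplified stand-ins for illustration, not the production rules (the real pipeline also uses spaCy name recognition and address, street, and abbreviation detectors):

```python
import re
from concurrent.futures import ThreadPoolExecutor

# Illustrative patterns only -- the production detectors are more involved.
DETECTORS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def run_detector(job):
    label, pattern, text = job
    # Each detector returns (start, end, label) spans for its category.
    return [(m.start(), m.end(), label) for m in pattern.finditer(text)]

def detect_pii(text):
    # All detectors run simultaneously; their spans are merged and sorted.
    jobs = [(label, pattern, text) for label, pattern in DETECTORS.items()]
    with ThreadPoolExecutor() as pool:
        results = pool.map(run_detector, jobs)
    return sorted(span for spans in results for span in spans)

spans = detect_pii("Write to anna@example.com or call +49 170 1234567.")
```

Because the detectors are independent, the slowest one determines the total latency, which is how the pipeline stays around 10 milliseconds.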

3. Anonymization

Detected personal data is replaced with placeholders.

Before: Michael Berg called on March 15
After: [PERSON:a1b2c3d4] called on [DATE:e5f6g7h8]

The mapping is stored temporarily (Redis, automatic deletion after 1 hour). No permanent storage.
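A minimal sketch of the replacement step, assuming spans from the detection stage. A plain dict stands in for the Redis store here; in production the mapping would be written with a 1-hour expiry (e.g. SETEX with a TTL of 3600 seconds):

```python
import uuid

# Stand-in for Redis; entries there expire automatically after 1 hour.
mapping = {}

def anonymize(text, spans):
    """Replace detected (start, end, label) spans with placeholders."""
    out, last = [], 0
    for start, end, label in sorted(spans):
        token = f"[{label}:{uuid.uuid4().hex[:8]}]"
        mapping[token] = text[start:end]   # stored only temporarily
        out.append(text[last:start])
        out.append(token)
        last = end
    out.append(text[last:])
    return "".join(out)

masked = anonymize("Michael Berg called on March 15",
                   [(0, 12, "PERSON"), (23, 31, "DATE")])
```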

4. AI processing

Only the anonymized text is sent to the AI service. The service sees no names, no addresses, no contact details.

5. Response

The AI response comes back with placeholders. Our system restores the original data before you see the answer.
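The restoration step is a straightforward reverse lookup. The mapping below is a hypothetical example of what would be read back from the temporary store:

```python
def deanonymize(text, mapping):
    """Swap placeholders in the AI response back to the original values."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Hypothetical mapping as read back from the temporary store:
mapping = {"[PERSON:a1b2c3d4]": "Michael Berg", "[DATE:e5f6g7h8]": "March 15"}
restored = deanonymize("[PERSON:a1b2c3d4] called on [DATE:e5f6g7h8]", mapping)
# restored == "Michael Berg called on March 15"
```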

6. Logging

Every AI call is logged internally (which model, how many tokens, cost). No content is stored in the log.
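A sketch of such a metadata-only log entry; the model name and cost figure are illustrative, and the point is what the entry does not contain:

```python
import json
import time

def log_ai_call(model, prompt_tokens, completion_tokens, cost_eur):
    # Only metadata is recorded -- never the request or response content.
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "cost_eur": round(cost_eur, 6),
    }
    return json.dumps(entry)

line = log_ai_call("example-model", 512, 128, 0.00042)
```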

Which AI services we use

For each service we explain: what the contract says, where the evidence is, and what it means in practice.


Google Gemini API

Paid access

Google does not use inputs and outputs from the paid API for product improvement or model training.

For users in the EEA, Switzerland, and the UK, these protections apply even on the free tier.

Log data is retained for 55 days but does not contain request content.

The Data Processing Agreement is automatically included with paid services.


Anthropic Claude API

Commercial access

API data is never used for model training. This is a blanket policy; no opt-out is required.

Inputs and outputs are deleted within 30 days by default. Exception: if safety classifiers flag content, data may be retained for up to 2 years.

The Data Processing Agreement is automatically included with the commercial terms. It includes EU Standard Contractual Clauses (SCCs), Modules 2 and 3.

Since August 2025, regional processing in the EU is configurable.

Local models

No cloud service

For certain tasks we use local AI models running on our own infrastructure at Hetzner in Germany. This data never leaves the server.

Options for elevated requirements

For customers with special compliance requirements, we offer additional configuration options.

Google Vertex AI

Same models as the Gemini API, but with explicit EU processing in Frankfurt (europe-west3) and the Google Cloud Data Processing Addendum.

Cloud DPA ↗

Langdock as AI Gateway

ISO 27001:2022 certified, SOC 2 Type II audited. 100% EU data sovereignty. Contractual agreements with all AI providers already in place.

Security and Compliance ↗

What we do not do

We do not store conversation content in AI logs.

We do not share customer data with third parties, except with the AI services listed above, and only in anonymized form.

We do not use free API tiers where data could be used for training.

We do not promise 100% anonymization. No system is perfect. That is why we use multiple layers of protection simultaneously: PII proxy, contractual guarantees, and EU infrastructure.

Our infrastructure

Servers at Hetzner in Germany
Encrypted transmission (TLS 1.2+)
Every anonymization is logged internally and automatically checked for quality
PII mappings: only temporary in Redis (1 hour), no permanent storage

Last updated: March 31, 2026

Questions?

If you have questions about data protection in our AI services, contact us.

michael@schiller-partners.de