HOVIGuard

AI Transparency Statement

Per EU AI Act (Reg. 2024/1689), Article 50 — As of April 2026

The German version at hoviguard.eu/ki-transparenz is the legally binding original.

You are interacting with an AI system

HOVIGuard is an AI gateway that routes requests to Large Language Models (LLMs) as well as image and video generators. All responses in the chat interface are produced by machine models — not by humans. We explicitly indicate this in every chat view.
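To make the notice concrete: the sketch below shows one way a gateway could attach an explicit AI-disclosure flag to every response before it reaches the chat UI. This is a minimal illustration, not HOVIGuard's actual implementation; the ChatResponse shape, field names, and model identifier are our assumptions.

```typescript
// Minimal sketch: every response leaving the gateway carries an explicit
// AI-disclosure flag that the chat UI renders as a visible notice.
// Types and field names are illustrative, not HOVIGuard's real API.

interface ModelOutput {
  modelId: string; // e.g. "mistral-large" (example identifier)
  text: string;
}

interface ChatResponse extends ModelOutput {
  aiGenerated: true;  // always set: output never comes from a human
  disclosure: string; // notice shown in every chat view (Art. 50 AI Act)
}

function toChatResponse(output: ModelOutput): ChatResponse {
  return {
    ...output,
    aiGenerated: true,
    disclosure: `This response was generated by the AI model "${output.modelId}", not by a human.`,
  };
}

// Example
console.log(toChatResponse({ modelId: "mistral-large", text: "Hello!" }));
```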

Models and providers in use

HOVIGuard does not train models itself. We route requests through curated third-party providers, listed transparently in the product under Superadmin → Models. Currently supported:

  • EU-hosted (🇪🇺): Mistral, Meta Llama on EU infrastructure in Lyon, Cloudflare Workers AI (EU PoPs)
  • Third country with safeguards: Anthropic Claude, OpenAI GPT-5.4, Google Gemini, Amazon Nova, xAI Grok, DeepSeek, Cohere, Perplexity — transferred under DPF and/or SCCs (GDPR Art. 46)
  • Locally on HOVIGuard server: Qwen3Guard-Gen-4B as safety layer, Whisper for speech-to-text

Tenant administrators can restrict their users to EU-hosted (🇪🇺) models.
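A minimal sketch of how such a restriction could work: filter the model catalog by hosting region according to the tenant's policy. The type names and catalog entries below are illustrative assumptions, not HOVIGuard's real configuration.

```typescript
// Sketch of the EU-only restriction: keep only models whose hosting region
// is compatible with the tenant's policy.

type Region = "eu" | "third-country-with-safeguards" | "local";

interface ModelEntry {
  id: string;
  region: Region;
}

interface TenantPolicy {
  euOnly: boolean; // set by the tenant administrator
}

const catalog: ModelEntry[] = [
  { id: "mistral-large", region: "eu" },
  { id: "claude", region: "third-country-with-safeguards" },
  { id: "qwen3guard-gen-4b", region: "local" },
];

function allowedModels(policy: TenantPolicy, models: ModelEntry[]): ModelEntry[] {
  if (!policy.euOnly) return models;
  // Assumption for this sketch: locally hosted models never leave the
  // HOVIGuard server, so they are treated as EU-compatible.
  return models.filter((m) => m.region === "eu" || m.region === "local");
}

console.log(allowedModels({ euOnly: true }, catalog).map((m) => m.id));
// -> [ "mistral-large", "qwen3guard-gen-4b" ]
```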

Purpose and risk classification

HOVIGuard is classified as a General-Purpose AI system (Art. 3 No. 66 AI Act) and operated in the minimal-to-limited-risk category:

  • No biometric remote identification
  • No social-scoring application
  • No automated decision-making with legal effect on data subjects (Art. 22 GDPR)
  • No manipulation of individuals
  • No use in critical infrastructure without prior DPIA with the customer

Use in high-risk areas (e.g. medical diagnostics, education, employment, justice) is contractually permitted only with additional risk assessment and approval by HOVIGuard.

Limitations and known risks

  • Hallucinations: LLMs may produce incorrect, outdated, or fabricated information. Verify results before business-critical use.
  • Training-data bias: Models reflect the distribution of their training data and may produce stereotypical or culturally biased outputs.
  • Prompt injection: Manipulated inputs can cause models to misbehave. Our safety layer filters known patterns but cannot catch everything.
  • Knowledge cutoff: Models only know about events up to their training cutoff and learn of current events only via web search (if enabled and supported by the model).
  • Copyright: Image/video generators may produce content imitating copyrighted material. Usage rights must be verified by the customer.

Safeguards by HOVIGuard

  • Multi-layer safety: Input classification via Qwen3Guard-Gen-4B (9 categories S1–S9, 119 languages), output scanner, jailbreak detection.
  • PII detection: Regex- and model-based detection of personal data before forwarding to external models (the regex stage is sketched after this list).
  • Per-tenant content policy: Admins can configure upload types, NSFW filters, allowed model categories.
  • Audit logs: All security-relevant events are recorded in a traceable audit trail.
  • EU AI Act compliance: Transparency notice in every chat, marking of AI-generated images with visible watermarks (C2PA support in preparation).
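As referenced in the PII item above, here is a minimal sketch of what the regex stage of PII detection could look like: redact obvious personal data (email addresses, phone numbers, IBANs) before a prompt is forwarded to an external model. The patterns are deliberately simplified illustrations; production detection also relies on model-based classification.

```typescript
// Sketch: redact simple PII patterns from a prompt before it leaves the
// gateway. Patterns are intentionally crude examples, not production rules.

const PII_PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  phone: /\+?\d[\d\s\/-]{7,}\d/g,
  iban: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g,
};

function redactPII(prompt: string): { text: string; findings: string[] } {
  const findings: string[] = [];
  let text = prompt;
  for (const [label, pattern] of Object.entries(PII_PATTERNS)) {
    text = text.replace(pattern, () => {
      findings.push(label); // record each hit for the audit trail
      return `[REDACTED:${label.toUpperCase()}]`;
    });
  }
  return { text, findings };
}

const { text, findings } = redactPII(
  "Contact me at jane.doe@example.com or +49 30 1234567."
);
console.log(text);     // "Contact me at [REDACTED:EMAIL] or [REDACTED:PHONE]."
console.log(findings); // [ "email", "phone" ]
```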

Marking of AI-generated content

All texts, images and videos you generate via HOVIGuard are AI outputs. When sharing or publishing this content, you are obliged to mark it as such (Art. 50(4) AI Act, which covers deep fakes and other AI-generated media). HOVIGuard automatically adds metadata to generated images (C2PA signature, in preparation).
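As an illustration of metadata tagging (not the C2PA signing that is still in preparation), the sketch below inserts a plain PNG tEXt comment chunk that marks an image as AI-generated. The chunk layout and CRC follow the PNG specification; the function names and marker text are our assumptions, not HOVIGuard's API.

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Standard CRC-32 (polynomial 0xEDB88320), as required for PNG chunks.
function crc32(buf: Uint8Array): number {
  let crc = 0xffffffff;
  for (const byte of buf) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xffffffff) >>> 0;
}

// Build a tEXt chunk: 4-byte length, type "tEXt", keyword + NUL + text,
// 4-byte CRC over type and data.
function makeTextChunk(keyword: string, text: string): Buffer {
  const data = Buffer.concat([
    Buffer.from(keyword, "latin1"),
    Buffer.from([0]),
    Buffer.from(text, "latin1"),
  ]);
  const type = Buffer.from("tEXt", "latin1");
  const length = Buffer.alloc(4);
  length.writeUInt32BE(data.length);
  const crc = Buffer.alloc(4);
  crc.writeUInt32BE(crc32(Buffer.concat([type, data])));
  return Buffer.concat([length, type, data, crc]);
}

function tagAsAIGenerated(png: Buffer): Buffer {
  // PNG layout: 8-byte signature, then chunks; IHDR is always first and has
  // 13 data bytes, so it occupies bytes 8..32. Insert our chunk after it.
  const afterIHDR = 8 + 4 + 4 + 13 + 4;
  const marker = makeTextChunk("Comment", "AI-generated (Art. 50 AI Act)");
  return Buffer.concat([png.subarray(0, afterIHDR), marker, png.subarray(afterIHDR)]);
}

// Usage:
// writeFileSync("tagged.png", tagAsAIGenerated(readFileSync("image.png")));
```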

Contact and feedback

For questions about AI use, complaints about unwanted outputs, or suspected security issues, contact datenschutz@hoviguard.eu or use the security contact form.

Frequently asked questions about AI transparency

What does AI transparency mean in the EU AI Act context?

Article 50 of the EU AI Act obliges providers and deployers to inform users that they are interacting with an AI system. In addition, AI-generated content (especially deepfakes and synthetic media) must be marked as such. HOVIGuard explicitly indicates AI use in every chat view.

Which tools does HOVIGuard provide for meeting transparency obligations?

These typically include a visible transparency notice in the chat interface, an audit trail of all AI interactions, documented model and provider lists (Superadmin → Models), automatic markers for AI outputs, and C2PA watermarks for AI-generated images (in preparation).

Who is responsible for marking AI-generated content?

When sharing or publishing, you as the user are obliged to mark AI-generated texts, images and videos as such (Art. 50(4) AI Act). HOVIGuard provides the technical tools (metadata, watermarks); the legal responsibility for publication remains with you.

Which models and providers are used?

EU-hosted models (Mistral, Meta Llama, Cloudflare Workers AI in EU PoPs), third-country models with safeguards (Anthropic, OpenAI, Google, Amazon, xAI, DeepSeek, Cohere, Perplexity — under DPF and/or SCCs) and locally operated safety models (Qwen3Guard-Gen-4B, Whisper). Tenant admins can restrict to EU-only.

Which risks should I consider with AI responses?

LLMs can hallucinate, reflect training-data bias, be manipulated by prompt injection, and have a knowledge cutoff. Image/video generators can imitate copyrighted material. Verify results before business-critical use; see the Limitations and known risks section for details.