European AI Requirements
The EU AI Act's main obligations apply from August 2026. Here's how HOVIGuard prepares your organization.
Timeline
Primary sources: Regulation (EU) 2024/1689 on EUR-Lex · European Commission: Regulatory framework for AI
Penalties
How HOVIGuard Helps
Frequently asked questions about European AI requirements
What are the main points of the EU AI Act?
The EU AI Act (Reg. 2024/1689) classifies AI systems into risk tiers (prohibited, high, limited, minimal), introduces transparency obligations (Art. 50, e.g. marking of AI-generated content and deepfakes), and defines duties for providers and deployers. The Act entered into force in August 2024; its obligations apply in phases between February 2025 and August 2026, with some transitional periods running to 2027.
How does HOVIGuard support meeting these requirements?
Typically through an audit trail of all AI interactions (traceability), transparency notices in the chat interface, documented sub-processors and model providers, configurable usage policies per team, and planned C2PA watermarks for AI-generated images. These tools can ease your own risk classification and documentation.
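To make the traceability point concrete, here is a minimal sketch of what a tamper-evident audit-trail record for AI interactions could look like. All field names and the chaining scheme are illustrative assumptions; HOVIGuard's actual log format is not described in this document.

```python
# Hypothetical sketch of an append-only audit-trail entry for AI interactions.
# Each record stores a hash of the previous one, so any later edit to an
# earlier entry breaks the chain and is detectable.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prev_hash: str, user: str, model: str, prompt: str) -> dict:
    """Build one log entry, chained to the previous record via prev_hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model_provider": model,  # the documented sub-processor handling the request
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prev_hash": prev_hash,   # links entries into a verifiable chain
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Usage: chain two records; r2 is only valid against the unmodified r1.
r1 = audit_record("0" * 64, "alice@example.eu", "example-model", "Summarize the DPA.")
r2 = audit_record(r1["entry_hash"], "alice@example.eu", "example-model", "List sub-processors.")
```

Hashing the prompt rather than storing it verbatim is one way to keep the trail useful for audits while limiting the personal data retained in logs.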
How is HOVIGuard itself classified?
HOVIGuard is classified as a General-Purpose AI system (Art. 3 No. 63 AI Act) and typically operated as minimal-to-limited risk. High-risk applications (medical diagnostics, education, employment, justice) require a separate risk assessment and contractual approval. Details are on the AI Transparency page.
Does HOVIGuard issue an AI-Act compliance guarantee?
No. We provide technical tools that can support your compliance work. The risk classification of your specific AI application, the fulfilment of deployer obligations, and the adaptation to sector-specific rules remain your responsibility. For binding assessments, we recommend consulting qualified legal counsel.
Where can I find more details?
On the AI Transparency page (Art. 50 AI Act), in the DPA (Annex B sub-processors) and in the Privacy Policy. For specific questions, contact us at datenschutz@hoviguard.eu.
