EU AI Act Compliance
Last updated: April 12, 2026
The EU Artificial Intelligence Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024 and introduces a risk-based approach to regulating AI systems, with obligations phased in between February 2025 and August 2027.
As an AI agent deployment platform, Connic is committed to full compliance with the EU AI Act and to helping our customers meet their own obligations under the regulation. This page explains how our platform, policies, and practices align with the Act's requirements.
Our Role Under the EU AI Act
The EU AI Act distinguishes between providers (organizations that develop or place AI systems on the market) and deployers (organizations that use AI systems in a professional context). Connic occupies a unique position in this framework:
- Connic as infrastructure provider: We provide the managed platform for deploying, running, and monitoring AI agents. We do not develop or supply the underlying general-purpose AI (GPAI) models that power your agents.
- Model-agnostic by design: Our customers choose their own LLM providers (OpenAI, Anthropic, Google Gemini, and others) and connect using their own API keys. GPAI model obligations under Articles 51-56 rest with those model providers directly.
- Enabling deployer compliance: When you deploy agents on Connic, you act as the deployer of the resulting AI system. Our platform provides the tooling, controls, and documentation you need to meet your deployer obligations under Article 26.
Risk Classification
The EU AI Act categorizes AI systems into four risk tiers. The obligations that apply to your agents depend on which tier they fall into:
- Unacceptable risk: Practices that are prohibited outright under Article 5 (see below).
- High risk: Systems subject to strict requirements for risk management, data governance, logging, human oversight, and robustness.
- Limited risk: Systems subject to transparency obligations, such as disclosing to users that they are interacting with AI (Article 50).
- Minimal risk: Systems with no mandatory obligations, for which voluntary codes of conduct are encouraged.
Prohibited Practices
Since February 2, 2025, the EU AI Act's prohibitions on unacceptable-risk AI practices are in effect. In alignment with Article 5 of the Act, the Connic platform must not be used to deploy agents that:
- Use subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
- Exploit vulnerabilities of specific groups (age, disability, socio-economic situation)
- Classify or score individuals based on social behavior or personal characteristics, leading to detrimental treatment (social scoring)
- Assess or predict the risk of criminal offenses based solely on profiling or personality traits
- Build or expand facial recognition databases through untargeted scraping of images
- Infer emotions of individuals in workplace or educational settings, except for medical or safety purposes
- Perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, outside of narrowly defined exceptions
These prohibitions are enforced through our Terms of Use and our acceptable use policies. Violations may result in immediate suspension of service.
AI Literacy
Article 4 of the EU AI Act requires that providers and deployers ensure their staff and anyone operating AI systems on their behalf have a sufficient level of AI literacy. This obligation has been in effect since February 2, 2025.
Connic supports AI literacy in several ways:
- Comprehensive documentation: Our documentation covers agent configuration, deployment, connectors, observability, and platform capabilities in detail, enabling teams to understand how agents work and how to operate them responsibly.
- Transparent agent behavior: The observability features in our platform make agent decision-making traceable, helping users understand what an agent did and why.
- Agent templates: Our agent template library includes examples and documentation that help teams understand best practices for responsible AI agent deployment.
Transparency
Article 50 of the EU AI Act imposes transparency obligations on AI systems that interact directly with people or generate synthetic content. As a deployer, you are responsible for ensuring your agents meet these obligations. Connic provides the building blocks to make this straightforward:
- AI disclosure support: When agents interact with end users (e.g. via webhooks, email, or Telegram), you can include AI disclosure notices in your agent's system prompt and output formatting to ensure users know they are interacting with an AI system.
- Content labeling: If your agents generate text, images, or other content that could be mistaken for human-created content, you should configure your agents to label outputs as AI-generated in line with the marking and disclosure requirements of Article 50.
- System documentation: Connic provides clear documentation about agent capabilities, limitations, and intended use cases through the agent configuration system, including model selection, tool access, and system prompt details.
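As an illustration of the disclosure pattern above, an AI notice can be composed into an agent's system prompt and outputs can be labeled before delivery. This Python sketch uses hypothetical helper names (`build_system_prompt`, `label_output`) that are not part of the Connic API:

```python
# Sketch: prepending an AI disclosure to an agent's system prompt and
# labeling generated content. Helper names are illustrative only.

AI_DISCLOSURE = (
    "You are an AI assistant. Begin every conversation by informing the "
    "user that they are interacting with an AI system."
)

def build_system_prompt(task_instructions: str) -> str:
    """Combine the disclosure notice with the agent's task instructions."""
    return f"{AI_DISCLOSURE}\n\n{task_instructions}"

def label_output(text: str) -> str:
    """Append a label so generated content is identifiable as AI-made."""
    return f"{text}\n\n[AI-generated content]"

prompt = build_system_prompt("Answer billing questions concisely.")
reply = label_output("Your invoice was issued on the 1st of the month.")
```

The same disclosure text can be reused across channels (webhooks, email, Telegram) so end users receive a consistent notice regardless of how they reach the agent.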
Human Oversight
Article 14 of the EU AI Act requires that high-risk AI systems be designed to allow effective human oversight, including the ability to understand, monitor, and intervene in the system's operation. As a deployer of high-risk AI systems, Article 26 requires you to ensure these oversight mechanisms function properly.
Connic provides several capabilities to support meaningful human oversight:
- Execution traces: Structured traces make every step of an agent's reasoning visible, so operators can understand what an agent did and why.
- Real-time alerting: Alerts on failed runs and anomalous behavior surface issues that warrant human attention.
- Approval gates: Oversight and approval gate features let you require human sign-off before agents take consequential actions.
- Rollback capability: Git-based deployments let operators revert an agent to a previous version at any time.
Data Governance
Article 10 of the EU AI Act requires that training, validation, and testing data for high-risk AI systems meets quality criteria and is subject to appropriate governance practices. While Connic does not train AI models, we support data governance across the agent lifecycle:
- No training on customer data: Data processed through Connic is never used to train, fine-tune, or improve AI models. Your data is used solely to execute your agents as configured.
- Data minimization: Agents process only the data necessary for their designated tasks. You control what data your agents access through tool configuration and environment variables.
- Data residency: Infrastructure is available across regions spanning North America, Europe, South America, Asia, and Africa. You choose your project's data region at creation time, giving you full control over where your data is processed and stored.
- Retention controls: Agent execution logs are retained according to your subscription tier. Data is deleted in accordance with our Privacy Policy and Data Processing Agreement.
- Encryption: All data is encrypted in transit (TLS 1.2+) and at rest (AES-256). See our Security page for full details.
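The data-minimization principle above can be sketched as an allowlist filter applied to incoming payloads before an agent run, so fields the agent does not need never reach it. The field names and allowlist here are hypothetical, not a Connic API:

```python
# Sketch: data minimization before an agent run. Only fields the agent's
# tools actually need are forwarded; everything else is dropped.
# Field names and the allowlist are illustrative.

ALLOWED_FIELDS = {"ticket_id", "subject", "body"}

def minimize(payload: dict) -> dict:
    """Return a copy of the payload restricted to allowlisted fields."""
    return {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}

event = {
    "ticket_id": "T-1042",
    "subject": "Refund request",
    "body": "Please refund order 8812.",
    "customer_email": "jane@example.com",   # not needed by the agent
    "payment_card_last4": "4242",           # never forwarded
}
minimal = minimize(event)  # only ticket_id, subject, body survive
```

Keeping the allowlist alongside the agent's configuration makes it easy to review, version, and audit what personal data each agent can see.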
Record-Keeping & Audit Trails
Articles 12, 19, and 26 of the EU AI Act require providers and deployers of high-risk AI systems to maintain logs and records sufficient for traceability and audit. Connic's observability system is designed with these requirements in mind:
- Comprehensive execution logs: Every agent run is logged with its full context: trigger source, input data, model used, tool calls made, outputs produced, duration, token usage, and final status.
- Structured traces: Agent executions produce hierarchical traces showing every step of the agent's reasoning process, from initial prompt through each tool call and LLM interaction to final output.
- Version history: Agent configurations are version-controlled through Git, providing a full history of changes to agent definitions, system prompts, tools, and settings.
- Usage tracking: The usage dashboard provides aggregated metrics on agent activity, costs, and performance over time.
- Data export: Logs and execution data can be exported for external auditing, compliance reporting, or integration with your existing governance tooling.
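As a sketch of what exported audit data might look like, the following Python flattens a hypothetical execution log record into one JSON line per run. The field names are illustrative, not the actual export schema; consult the platform documentation for the real format:

```python
# Sketch: shaping an agent execution log into a flat audit record for
# export to external governance tooling. The log structure shown is
# illustrative only.
import json

def to_audit_record(run: dict) -> str:
    """Flatten the fields auditors typically need into one JSON line."""
    return json.dumps({
        "run_id": run["id"],
        "trigger": run["trigger"],
        "model": run["model"],
        "tool_calls": [c["name"] for c in run["tool_calls"]],
        "status": run["status"],
        "duration_ms": run["duration_ms"],
    }, sort_keys=True)

run = {
    "id": "run_01",
    "trigger": "webhook",
    "model": "gpt-4o",
    "tool_calls": [{"name": "lookup_order", "args": {"id": "8812"}}],
    "status": "succeeded",
    "duration_ms": 2300,
}
line = to_audit_record(run)  # one JSON line per run, ready for export
```

One-record-per-line output (JSON Lines) ingests cleanly into most log aggregation and compliance reporting tools.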
Risk Management
Article 9 of the EU AI Act requires providers of high-risk AI systems to establish and maintain a risk management system. As a deployer, Article 26 requires you to use high-risk AI systems in accordance with their instructions for use and to monitor their operation. Connic supports these requirements through:
- Testing and validation: The testing framework allows you to validate agent behavior before deployment, ensuring agents perform as expected across representative scenarios.
- Environment separation: The environments system supports staging and production separation, enabling you to test agents safely before deploying them to production.
- Automated evaluation: The judges system allows you to define quality criteria and automatically evaluate agent outputs against them, catching regressions or unexpected behavior.
- Incident monitoring: Real-time alerting on failed runs and anomalous behavior helps you identify and address issues promptly.
- Rollback capability: Git-based deployments support instant rollback to previous versions if a new agent version exhibits unexpected behavior.
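The judge pattern above can be sketched as a function that evaluates each output against defined quality criteria. Judges on the platform may use an LLM as the evaluator; this rule-based Python version, with hypothetical criteria, is illustrative only:

```python
# Sketch: a rule-based "judge" that checks agent outputs against simple
# quality criteria before release. The criteria are illustrative.

def judge(output: str) -> list[str]:
    """Return the failed criteria; an empty list means the output passes."""
    failures = []
    if len(output) > 2000:
        failures.append("too_long")
    if "as an ai" not in output.lower():
        # Example criterion: customer-facing replies must disclose AI use.
        failures.append("missing_ai_disclosure")
    return failures

ok = judge("As an AI assistant, I can confirm your refund was issued.")
bad = judge("Your refund was issued.")
```

Running a judge like this over every staging run before promotion is one way to catch regressions that simple pass/fail tests miss.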
Security & Robustness
Article 15 of the EU AI Act requires high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity. Our platform's security posture is designed to support these requirements:
- Container isolation: Each customer's agents run in isolated containers with strict resource limits, preventing cross-tenant interference.
- Ephemeral execution: Agent execution environments are destroyed after use, minimizing the persistence of sensitive data.
- Secrets management: API keys and credentials are encrypted at rest and injected securely at runtime, never stored in code or logs.
- Infrastructure certifications: Our cloud providers maintain SOC 2 Type II, ISO 27001, and PCI DSS certifications.
For comprehensive details on our security measures, see our Security page.
Shared Responsibility
EU AI Act compliance is a shared responsibility between Connic and our customers. As a general guide:
| Responsibility | Connic | Customer |
|---|---|---|
| Platform infrastructure security | ✓ | — |
| Logging, observability, and audit trail tooling | ✓ | — |
| Human oversight and approval gate features | ✓ | — |
| Classifying your AI use cases by risk level | — | ✓ |
| Conducting fundamental rights impact assessments | — | ✓ |
| Ensuring appropriate AI disclosure to end users | — | ✓ |
| Configuring human oversight for high-risk uses | — | ✓ |
| Complying with GPAI model obligations (Articles 51–56) | — | — |

GPAI model obligations under Articles 51–56 rest with your chosen model provider, not with Connic or with you as deployer.
Questions About EU AI Act Compliance?
If you have questions about how the EU AI Act applies to your use of Connic, or if you need formal compliance documentation and risk assessments, please contact us: