What Article 14 Actually Says
"High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. ... Human oversight measures shall be proportionate to the risks, impacts and possible consequences of the AI system."
— EU AI Act, Article 14(1)
Article 14(4) is more specific:
"The natural persons to whom human oversight is assigned shall in particular... be able to intervene in the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure."
Autonomous trading agents — systems that make and execute financial decisions without human approval of each action — are classified as high-risk AI under Annex III. Article 14 applies to them in full.
The Three Requirements Most Teams Miss
1. Oversight must be "effective"
The regulation does not say you need an alert. It says you need effective oversight. A Slack notification that a human then has to act on — while the agent continues trading — does not meet this bar. By the time a human reads the alert and manually halts the process, the agent may have executed thousands of additional transactions.
Effective oversight means the oversight mechanism operates at the same speed as the agent. Hardware kill switches do. Human-in-the-loop dashboards often do not.
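The latency gap is easy to quantify. A minimal sketch, using hypothetical numbers for the agent's order rate and a human's alert-reaction time, shows how many trades slip through under alert-based oversight versus an inline signing gate:

```python
# Illustrative sketch with assumed, hypothetical numbers -- not
# measurements of any real system.
AGENT_RATE_PER_SEC = 200   # assumed order rate of the trading agent
HUMAN_REACTION_SEC = 90    # assumed time to read an alert and act on it

def trades_before_halt(oversight: str) -> int:
    """Trades the agent can still execute after a halt is requested."""
    if oversight == "alert":
        # A human must notice the alert and manually intervene;
        # the agent keeps trading for the entire reaction window.
        return AGENT_RATE_PER_SEC * HUMAN_REACTION_SEC
    if oversight == "inline-gate":
        # Every trade passes through a signing gate; a halted gate
        # refuses the very next trade, so nothing slips through.
        return 0
    raise ValueError(f"unknown oversight model: {oversight}")

print(trades_before_halt("alert"))        # 18000 trades slip through
print(trades_before_halt("inline-gate"))  # 0
```

The numbers are arbitrary, but the structure of the result is not: alert-based oversight scales its failure with the agent's speed, while an inline gate does not.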
2. The stop button cannot be bypassable by the AI
This is the architectural requirement regulators will scrutinize. If the stop mechanism runs in the same process as the agent — or on the same host without hardware isolation — the AI has the ability (even if not the intent) to circumvent it. An adversarial prompt injection, a bug in the halt path, or a race condition can all produce a system where the stop button fails to stop.
Hardware isolation removes this failure mode. If the signing key exists only inside an enclave and the enclave rejects all signing under a halt condition, the agent literally cannot act — regardless of what it "wants" to do.
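The shape of that guarantee can be sketched in a few lines. This is a toy stand-in, not enclave code: an in-process class with HMAC in place of real key material, showing only the invariant that every signing path checks the halt flag before the key is ever used:

```python
import hashlib
import hmac

class EnclaveSigner:
    """Toy stand-in for an in-enclave signer. The key never leaves this
    object, and a halt flag gates every signing operation."""

    def __init__(self, key: bytes):
        self._key = key
        self._halted = False

    def halt(self) -> None:
        self._halted = True

    def sign(self, tx: bytes) -> bytes:
        # The halt check precedes any use of the key, so once halted
        # there is no code path that produces a valid signature.
        if self._halted:
            raise PermissionError("kill switch engaged: signing refused")
        return hmac.new(self._key, tx, hashlib.sha256).digest()

signer = EnclaveSigner(b"key-material-only-inside-the-enclave")
sig = signer.sign(b"BUY 100 XYZ")   # permitted before the halt
signer.halt()
# After halt(), any call to sign() raises: the agent cannot transact.
```

In a real deployment the class boundary is a hardware boundary: the halt state and the key live inside the enclave, so no bug or prompt injection on the agent side can reach around the check.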
3. You must be able to demonstrate compliance
Article 14 requires documentation of oversight measures as part of the technical documentation under Article 11. Regulators will ask not just "do you have a kill switch?" but "can you prove it was active and operating correctly during this time period?"
Logs are insufficient. Logs are mutable. A motivated actor can delete or alter log entries. What regulators and auditors want is cryptographic evidence — proof that was generated at the time of the action and cannot be retroactively modified.
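The difference between a log and cryptographic evidence can be made concrete. A minimal sketch, with a hypothetical key name standing in for an enclave-resident key: each record is signed and hash-chained to its predecessor, so deleting or editing any entry breaks verification:

```python
import hashlib
import hmac
import json

KEY = b"hypothetical-enclave-resident-key"  # illustrative, not a real key

def append_record(chain: list, action: str) -> None:
    """Append a signed record whose digest commits to the previous one."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    sig = hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
    chain.append({**body, "digest": digest, "sig": sig})

def verify_chain(chain: list) -> bool:
    """Recompute every digest and signature; any tampering fails."""
    prev = "0" * 64
    for rec in chain:
        body = {"action": rec["action"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        sig_ok = hmac.compare_digest(
            rec["sig"], hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest())
        if rec["prev"] != prev or rec["digest"] != digest or not sig_ok:
            return False
        prev = digest
    return True

chain: list = []
append_record(chain, "SELL 50 ABC")
append_record(chain, "BUY 10 XYZ")
```

A plain log entry can be edited after the fact; here, altering any field invalidates that record's digest and every record chained after it.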
How Hardware Attestation Meets the Standard
AWS Nitro Enclaves provide hardware attestation documents — signed by the Nitro Security Module — that prove a specific binary is running on genuine AWS hardware. This is not a claim your software makes. It is a statement the hardware makes about your software.
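Conceptually, the verifier compares a measurement of the running enclave image against a value pinned at release time. The sketch below illustrates only that comparison, with hypothetical image bytes; a real Nitro attestation document is a structure signed by the Nitro Security Module, and a real verifier also validates that signature and its certificate chain:

```python
import hashlib

def measure(image: bytes) -> str:
    """Toy stand-in for a platform measurement: a hash of the enclave image."""
    return hashlib.sha384(image).hexdigest()

# Pinned at build/release time for the known-good enclave image
# (hypothetical image bytes for illustration).
EXPECTED_PCR0 = measure(b"sentinel-enclave-image-v1.2")

def verify_measurement(reported_pcr0: str) -> bool:
    # Only the measurement comparison is shown here; a real verifier
    # also checks the hardware signature over the attestation document.
    return reported_pcr0 == EXPECTED_PCR0
```

The point of the comparison: if even one byte of the enclave binary changes, the measurement changes, and attestation fails.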
When Sentinel evaluates a signing request, it generates a Verifiable Policy Proof that includes:
- The policy version that was active at the time
- Each constraint that was evaluated and its result
- The kill switch state
- The enclave attestation document
- A timestamp anchored to the hardware clock
This record cannot be retroactively modified. It is signed by a key that exists only inside the enclave. A regulator can verify, for any transaction, exactly what policy was active and that the policy was evaluated by attested hardware.
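The listed fields and the verification step can be sketched as follows. This is an illustrative shape, not Sentinel's actual proof format: HMAC stands in for the enclave-resident signature, the key name is hypothetical, and the timestamp here is an ordinary clock rather than a hardware-anchored one:

```python
import hashlib
import hmac
import json
import time

ENCLAVE_KEY = b"hypothetical-key-resident-only-in-the-enclave"

def make_vpp(policy_version: str, constraints: dict,
             kill_switch_engaged: bool, attestation_doc: str) -> dict:
    """Build a proof covering the fields listed above, then sign it."""
    proof = {
        "policy_version": policy_version,
        "constraints": constraints,              # each check and its result
        "kill_switch_engaged": kill_switch_engaged,
        "attestation_doc": attestation_doc,      # opaque hardware document
        "timestamp": time.time(),                # hardware-anchored in a real system
    }
    payload = json.dumps(proof, sort_keys=True).encode()
    proof["signature"] = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return proof

def verify_vpp(proof: dict) -> bool:
    """Recompute the signature over every field; any edit fails."""
    body = {k: v for k, v in proof.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["signature"])
```

Because the signature covers every field, a proof cannot be quietly edited to claim, say, that the kill switch was in a different state than it was at signing time.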
Mapping to Article 14 Requirements
| Article 14 Requirement | Sentinel Implementation |
|---|---|
| Effective human oversight | Kill switch activates at signing layer — no human latency required |
| Stop button not bypassable by AI | Key isolated inside Nitro Enclave — agent has no path to bypass |
| Proportionate to risks | Configurable rate limits, value caps, daily limits per policy |
| Documented oversight measures | VPP generated per action — cryptographically verifiable |
| Intervention capability | Real-time policy updates via control plane — effective immediately |
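The "proportionate to risks" row deserves a concrete shape. A minimal sketch of a per-policy limits check, with hypothetical field names and values rather than Sentinel's actual schema:

```python
# Hypothetical policy shape -- field names and limits are illustrative.
POLICY = {
    "version": "2025-03-01",
    "rate_limit_per_min": 60,
    "max_value_per_tx_eur": 50_000,
    "daily_value_cap_eur": 2_000_000,
}

def within_policy(tx_value_eur: int, tx_count_this_min: int,
                  value_today_eur: int) -> bool:
    """True only if the proposed transaction satisfies every limit."""
    return (
        tx_count_this_min < POLICY["rate_limit_per_min"]
        and tx_value_eur <= POLICY["max_value_per_tx_eur"]
        and value_today_eur + tx_value_eur <= POLICY["daily_value_cap_eur"]
    )
```

Proportionality then becomes a configuration decision: a higher-risk strategy gets a policy with tighter numbers, evaluated by the same gate.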
The Enforcement Timeline
The EU AI Act entered into force in August 2024. High-risk AI obligations — including Article 14 — became applicable in August 2026. National regulators in France, Germany, the Netherlands, and the Nordic countries have signaled active enforcement intent for financial AI systems.
Firms that build Article 14 compliance into their architecture now will have a provable compliance trail from the start. Firms that bolt on compliance later will face the much harder problem of reconstructing evidence for periods when their hardware attestation was absent.
Practical Next Step
The question to ask your engineering team: "If a regulator asked us to prove that our kill switch was active and operating correctly on a specific date six months ago, could we provide cryptographic evidence?"
If that answer relies on logs, dashboards, or process-level controls, it is likely "no."