EU AI Act Article 14: Don’t Treat the Delay as Permission to Wait


On 7 May 2026, the EU handed compliance teams nineteen months they did not have. The Council and Parliament’s provisional agreement under the AI Omnibus would, if adopted, push the main compliance date for certain high‑risk AI obligations from August 2026 to December 2027; many teams are already using it as a planning baseline. The temptation will be to treat it as permission to wait.


the pattern: present bias

Extensions produce deferral, and the pattern is consistent enough to plan around. The budget conversation gets pushed, the working group loses its urgency, and the item stays on the agenda without moving. By the time December 2027 becomes visible on the horizon, the sprint begins, and what gets built in a sprint is whatever clears the audit. Nineteen months just removed the pressure that would have prevented that.

Behavioral science has a name for this. Present bias describes the tendency to weight immediate relief over future cost: the extension feels like a gain today, and December 2027 feels distant enough to manage later. It is not a failure of intention. It is a predictable cognitive pattern, which means it is also a designable one. Organizations that understand why they defer can build structures that counteract it: a working group that meets whether or not a deadline is imminent, a board-level reporting cadence that keeps the item visible, a compliance calendar that treats December 2027 as a series of quarterly milestones rather than a single future event.
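That quarterly-milestone cadence can be made concrete. The sketch below is illustrative only: the function name and the idea of snapping to calendar quarter-ends are assumptions, not anything the Act prescribes. It generates the quarter-end checkpoints a compliance calendar would track between today and the December 2027 date.

```python
from datetime import date

def quarter_ends(start: date, deadline: date) -> list[date]:
    """Last calendar day of each quarter from `start` through `deadline`."""
    ends = []
    year, q = start.year, (start.month - 1) // 3 + 1
    while True:
        month = q * 3
        # Quarters end 31 Mar, 30 Jun, 30 Sep, 31 Dec.
        last = date(year, month, [31, 30, 30, 31][q - 1])
        if last > deadline:
            break
        if last >= start:
            ends.append(last)
        q += 1
        if q == 5:
            q, year = 1, year + 1
    return ends
```

Run against the provisional agreement's dates, `quarter_ends(date(2026, 5, 7), date(2027, 12, 31))` yields seven checkpoints, which is the difference between one distant event and a standing agenda item.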


we have seen this before

GDPR is instructive. Organizations had two years to prepare from the time the regulation was adopted in April 2016 to its enforcement date in May 2018. By the time that deadline arrived, around one-third of EU businesses were fully compliant, according to a 2018 IT Governance survey. Enforcement began within months. The organizations that faced early action were not ignorant of the requirement. They ran out of the time they had chosen not to use.

The dynamic is not unique to data privacy. Regulatory timelines and organizational attention do not move at the same pace. An extension changes the date. It does not change the behavior that made the extension necessary.


what the law requires

Article 14 sets a specific standard. The people assigned to oversight must be able to understand the system’s limits well enough to monitor operation, interpret outputs, and step in when something is wrong. They need a real mandate to override or stop use, and institutional backing when they do. A reviewer who flags a concern and is routinely ignored performs ceremony, not oversight.

The distinction matters because it is invisible on paper. An org chart with a named reviewer, a policy document with oversight language, a training module completed at onboarding: none of these confirm that the function works. What confirms it is whether a reviewer has ever stopped a decision, and what happened when they did. Organizations that can answer that with confidence are the exception.


what to build with time

Nineteen months is enough time to address that gap properly, but only if the work starts now. The priority is not more policy language; it is building oversight that can function under real operating pressure.

1. Define the Reviewer's Authority in Writing

A reviewer cannot exercise authority that only exists by implication. If someone is expected to question, delay, or override an output, the role needs clear thresholds, clear escalation paths, and clear backing when that authority is used. Otherwise the first challenged decision will expose the function as ceremonial, not real.

2. Build for Interpretability

A reviewer cannot challenge what they cannot read. That does not mean every reviewer needs technical depth, but it does mean they need enough information to monitor the system, understand its limits, and make sense of its outputs. In practice, that means usable documentation, clear decision logs, and interfaces or workflows that make anomalies visible rather than burying them.
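As a hypothetical illustration of what a usable decision log can look like, the sketch below records each output with a `needs_review` flag so low-confidence decisions surface instead of being buried. The field names and threshold are assumptions for the sake of the example, not anything Article 14 prescribes.

```python
import json
from datetime import datetime, timezone

def log_decision(system: str, input_ref: str, output: str,
                 confidence: float, low_conf_threshold: float = 0.6) -> dict:
    """Record one model decision in a reviewer-readable form.

    The `needs_review` flag surfaces low-confidence outputs for the
    human reviewer instead of leaving them to dig for anomalies.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,
        "output": output,
        "confidence": confidence,
        "needs_review": confidence < low_conf_threshold,
    }
    print(json.dumps(entry))  # in practice, append to a durable log store
    return entry
```

The design point is modest: the anomaly criterion lives in the log itself, so a reviewer filtering on one field sees everything the system was unsure about.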

3. Treat Overrides as Governance Data

An override is not just a moment of intervention. It is evidence. If reviewers repeatedly challenge the same kind of output, that pattern should feed monitoring, retraining, threshold changes, or limits on use. Organizations that capture those signals build oversight programs that improve over time. Organizations that do not are left with paperwork and no proof that the control actually works.
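One way to treat overrides as data rather than paperwork is to log each intervention and count repeated patterns. The schema and threshold below are hypothetical, a minimal sketch of the idea rather than a prescribed format.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Override:
    reviewer: str
    system: str
    output_category: str  # e.g. "credit_denial", "risk_score"
    reason: str

def recurring_patterns(overrides: list[Override],
                       threshold: int = 3) -> dict[tuple[str, str], int]:
    """Flag (system, output_category) pairs overridden `threshold`+ times.

    A recurring override of the same output type is a signal for
    retraining, threshold changes, or limits on use, not just a
    one-off incident.
    """
    counts = Counter((o.system, o.output_category) for o in overrides)
    return {key: n for key, n in counts.items() if n >= threshold}
```

Even a table this small turns individual interventions into a feedback loop: the output of `recurring_patterns` is exactly the evidence an auditor would ask for when checking that the control works.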

None of this can be built in the quarter before a deadline. It takes time to define the role, test the escalation path, and prove that intervention changes outcomes. The deadline may have moved. The requirement did not.


Eunomia works with organizations on behavioral governance and AI compliance, including human oversight design under the EU AI Act. If you want to use the time well, we can help you start. Contact us.
