Eight of Twenty-Seven
Bertrand March 17, 2026

14 min read

Eight. That is the number of EU member states that have designated a single point of contact for AI Act enforcement. Out of twenty-seven. The deadline for doing so was August 2, 2025. Seven months ago.

No fines. No infringement procedures. No public statements of concern from the Commission. Nineteen member states missed a legally binding deadline by seven months and nothing happened. The regulation that will impose fines of up to 7% of global turnover on companies that fail to comply has not produced consequences for the governments that failed to build the enforcement apparatus.

This is not dysfunction. This is how EU regulation actually deploys across twenty-seven sovereign implementations. And if you run a company that operates across EU borders, understanding this pattern is more important than understanding any single article of the AI Act.

The Deadline Nobody Met

Article 70 of the EU AI Act required every member state to designate national competent authorities — at minimum, one notifying authority and one market surveillance authority — and to communicate those designations to the Commission by August 2, 2025. The same article required member states to designate a single point of contact: one authority that serves as the coordination hub for all AI Act enforcement within that country.

The single point of contact is not a bureaucratic formality. It is the address where complaints arrive, where cross-border enforcement is coordinated, where the European AI Board communicates with national authorities. Without it, the enforcement chain is broken at the national level. A regulation without an enforcer is a suggestion.

According to a European Parliament research service briefing published on March 18, 2026, eight member states have designated their single point of contact. The briefing is specific. The number is eight. The denominator is twenty-seven. The gap is nineteen.

The Eight

The eight member states that have designated enforcement authorities share a pattern. It is not wealth, and it is not size. Each made an institutional decision early and executed it.

Finland moved first. On December 22, 2025, the President of the Republic approved the legislative amendments that gave ten existing market surveillance authorities their AI Act mandates. Traficom — the Finnish Transport and Communications Agency — became the single point of contact. Enforcement powers activated on January 1, 2026. Finland did not build a new AI agency. It activated existing regulators with new mandates. Lightweight operations. No new bureaucracy. The Finnish approach is a model of what Article 70 intended: use the institutional infrastructure you already have.

Germany designated the Bundesnetzagentur — the Federal Network Agency — as its single point of contact. Germany’s model is complex by necessity: a federal system with sector-specific market surveillance authorities across sixteen Länder. The Bundesnetzagentur also established a coordination centre (KoKIVO) to support other competent authorities. The system is heavy. But it exists. It is designated, communicated, and operational.

Spain was early. The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) was established by Royal Decree 729/2023 — before the AI Act was even published in the Official Journal. AESIA has been operational since June 2024. Spain’s regulatory sandbox, managed by AESIA, has completed two cycles and selected twelve AI projects for its third. Spain built the enforcement infrastructure before the regulation demanded it.

Denmark adopted its national implementation law in May 2025 and designated its authorities ahead of the August deadline. The Danish approach mirrors Finland’s pragmatism: existing regulatory bodies, new mandates, minimal new infrastructure.

Italy designated multiple market surveillance authorities with sector-specific remits. The Italian model distributes enforcement across existing sectoral regulators — a pattern that reflects Italy’s existing regulatory architecture rather than building parallel AI-specific infrastructure.

The remaining three of the eight have designated their single points of contact through varying institutional arrangements. The common thread is not the model chosen. It is that a model was chosen, legislated, and communicated to the Commission within the legal timeframe — or close to it.

The Nineteen

The nineteen that have not designated enforcement authorities are not a homogeneous group. Some have pending legislative proposals. Some have identified authorities informally but not completed the legal designation. Some have not started.

The European Parliament briefing distinguishes three categories among the lagging member states: those with draft legislation in parliamentary process, those with authorities identified but not formally designated, and those with no visible progress. The distribution is roughly even across the three categories.

The reasons for delay are structural, not accidental. Several patterns recur.

Institutional turf battles. In multiple member states, the designation process stalled because existing regulators — data protection authorities, telecommunications regulators, consumer protection agencies, sectoral supervisors — competed for the AI Act mandate. The AI Act’s scope crosses every existing regulatory domain. Deciding which authority “gets” AI Act enforcement is a political decision dressed as an administrative one. In countries with strong institutional boundaries, that decision takes time. In some, it has not been made.

Legislative bottlenecks. Some member states require primary legislation to designate new competent authorities or to expand existing mandates. Parliamentary calendars, coalition negotiations, and legislative priorities have pushed AI Act implementation down the queue. The regulation is directly applicable — it does not require national transposition — but the designation of enforcement authorities does require national legislative action. The distinction matters.

Resource allocation. Designating an enforcement authority without funding it is worse than not designating one at all. Several member states have delayed designation while determining budgets, headcount, and technical capabilities. Article 70(3) requires member states to ensure that competent authorities have “adequate technical, financial and human resources” as well as infrastructure and expertise. A designation without resources is a designation on paper — and the Commission has signalled that it will assess resource adequacy, not just designation status.

Political deprioritisation. In some member states, AI Act implementation is simply not a political priority. Energy policy, migration, fiscal consolidation, elections — the competition for political attention is fierce. A regulation that does not take full effect until August 2026 is easy to defer when the political calendar is crowded. The calculus changes when enforcement begins. By then, the implementation gap may be structural.

What the Gap Means for Companies

For an SME operating across EU borders, the enforcement readiness gap creates a specific set of conditions that must be understood operationally, not abstractly.

Enforcement will be uneven. When the AI Act’s high-risk provisions take effect on August 2, 2026 — or December 2027 if the Digital Omnibus is adopted — companies operating in Finland will face an enforcer that has been operational for months or years. Companies operating in a member state that has not designated an authority will face a regulatory vacuum. The same AI system, deployed the same way, will exist in two different enforcement realities depending on the member state.

This is not hypothetical. It is the lived reality of every EU regulation. GDPR took effect on May 25, 2018. As of 2020, enforcement varied by a factor of fifty between the most active and least active data protection authorities. The Irish DPC processed Facebook complaints for four years before issuing its first significant fine. The French CNIL fined Google €50 million within eight months. The regulation was identical. The enforcement was not.

The AI Act will follow the same pattern. The regulation is uniform. The enforcement will be fragmented. The fragmentation maps, approximately, to the same lines: Nordic and Western European authorities will enforce earlier and harder. Southern and Eastern European authorities will take longer to build enforcement capacity. The pattern is not a surprise. It is a structural feature of how twenty-seven sovereign legal systems implement a single regulatory framework.

Regulatory arbitrage is a trap. The temptation is to optimise for the weakest enforcer: deploy your AI system from a member state that hasn’t designated an authority, operate in markets where enforcement is slow, and treat the readiness gap as a compliance holiday.

This is the wrong conclusion for three reasons.

First, the regulation's extraterritorial reach. The EU AI Act applies to any AI system placed on the EU market or whose output is used in the EU — regardless of where the provider is established. A company operating from a non-enforcing member state still faces enforcement from any member state where its system is used. The market surveillance authority in Finland can investigate an AI system deployed by a company established in a member state without an enforcer if that system affects Finnish users.

Second, enforcement catches up. GDPR’s enforcement gap closed. Not evenly, not quickly, but it closed. The member states that took years to build enforcement capacity eventually built it. The fines, when they came, were retroactive in effect: violations that occurred during the low-enforcement period were investigated and sanctioned after capacity was built. The compliance holiday has an expiration date. You just don’t know when.

Third, the reputational dimension. For an SME serving enterprise clients across Europe, the question “are you AI Act compliant?” will come from customers before it comes from regulators. A German manufacturer evaluating AI vendors will assess compliance posture as a procurement criterion. The enforcement gap in the vendor’s home member state is irrelevant to the customer’s due diligence process. The customer cares about the regulation, not the enforcement geography.

The Digital Omnibus Complication

The enforcement readiness gap has created political pressure to delay the compliance deadline. On November 19, 2025, the European Commission published the Digital Omnibus — a legislative proposal that would, among other things, postpone the application date for high-risk AI system requirements.

The proposal has moved fast. On March 13, 2026, the Council adopted its position: fixed application dates of December 2, 2027 for stand-alone high-risk AI systems and August 2, 2028 for high-risk AI systems embedded in products. On March 18, 2026, the European Parliament’s IMCO and LIBE committees adopted their joint report — 101 votes in favour, 9 against, 8 abstentions — proposing the same December 2027 date for Annex III high-risk systems and August 2028 for Annex I systems.

The Parliament and Council positions are aligned on the core question: delay. Trilogue negotiations will follow. If the Digital Omnibus is adopted, the August 2, 2026 deadline for high-risk provisions moves to December 2027.

But — and this is the critical point that most commentary misses — the Digital Omnibus has not been adopted. As of this writing, August 2, 2026 remains the legal deadline. The prohibited practices provisions have been in effect since February 2, 2025. The AI literacy obligation under Article 4 has been in effect since February 2, 2025. The GPAI model provisions have been in effect since August 2, 2025. And the requirement to designate national competent authorities — the very requirement that only eight member states have met — has been in effect since August 2, 2025.

Planning for the delay is rational. Relying on the delay is reckless. The Digital Omnibus must pass trilogue, be published in the Official Journal, and enter into force. Until that happens, August 2, 2026 is the law. A company that defers compliance preparation because a delay “will probably” be adopted is making a bet. The odds may favour the delay. But the downside of being wrong is operational: scrambling to comply with a deadline that everyone assumed would move but didn’t.

The Sandbox Gap

Article 57 of the AI Act required every member state to establish at least one AI regulatory sandbox, operational by August 2, 2026. The sandbox requirement tracks the enforcement authority gap: the countries that designated authorities early are the same countries with operational or near-operational sandboxes. The countries without enforcement authorities generally do not have sandboxes either.

Spain’s sandbox has been running since 2022. Finland’s regulatory support infrastructure is operational. Germany’s Bundesnetzagentur launched its AI Service Desk. The Netherlands has a sandbox proposal under coordination between the Authority for Consumers and Markets and the Dutch Data Protection Authority.

For the rest, the sandbox deadline is August 2, 2026 — the same day the provisions they are supposed to help companies prepare for take effect. A sandbox that opens on the day of enforcement is a sandbox that arrives too late to serve its purpose. The regulatory sandbox was designed as a pre-enforcement mechanism: a place where companies develop AI systems under regulatory supervision before full compliance requirements apply. Opening the sandbox on enforcement day is like building the practice facility on game day.

The sandbox gap compounds the enforcement gap. Member states without enforcement authorities also lack the institutional infrastructure to operate sandboxes. The competent authority that runs the sandbox is the same competent authority that enforces the regulation. If the authority does not exist, neither does the sandbox.

For an SME in a member state with no enforcement authority and no sandbox, the practical reality is: you are preparing for a regulation without any institutional support from your national government. No guidance from a national competent authority. No sandbox to test your compliance architecture. No single point of contact to answer questions. You are alone with the regulation’s text and whatever commercial advisory services you can afford.

This is not an acceptable state of affairs. But it is the actual state of affairs. The article does not change because the implementation is late. The compliance requirement does not soften because the government missed its own deadline.

What a Company Should Do

The enforcement readiness gap does not change the compliance requirement. It changes the compliance strategy.

Build for the strictest enforcer. If you operate across EU markets, your compliance architecture should meet the standard of the most capable, most active enforcement authority. Today, that means Finland, Spain, and Germany. A system that is compliant in Finland is compliant everywhere. A system that is compliant only in a member state without an enforcer is a system that will fail its first cross-border regulatory encounter.

Do not wait for your national authority. If your member state has not designated an enforcement authority, do not wait. The regulation applies regardless. The technical requirements — risk management under Article 9, data governance under Article 10, technical documentation under Article 11, logging under Article 12, human oversight under Article 14 — are defined in the regulation itself. They do not depend on national guidance. National guidance helps. It is not a prerequisite.
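Because those requirements are defined in the regulation itself, an SME can track them like any other requirements backlog. A minimal sketch in Python: the `Obligation` class, the evidence file names, and the short labels are illustrative conveniences, not an official taxonomy or legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    """One AI Act obligation and the evidence collected against it (illustrative)."""
    article: str
    name: str
    evidence: list[str] = field(default_factory=list)  # documents produced so far

    def satisfied(self) -> bool:
        # Crude proxy: at least one documented artifact exists.
        return len(self.evidence) > 0

# The high-risk obligations named in the text, as a trackable checklist.
CHECKLIST = [
    Obligation("Art. 9", "Risk management system"),
    Obligation("Art. 10", "Data governance"),
    Obligation("Art. 11", "Technical documentation"),
    Obligation("Art. 12", "Record-keeping / logging"),
    Obligation("Art. 14", "Human oversight"),
]

def gap_report(checklist: list[Obligation]) -> list[str]:
    """Return the articles with no documented evidence yet."""
    return [o.article for o in checklist if not o.satisfied()]
```

Filling in `evidence` as assessments and documentation are produced turns `gap_report` into a running picture of what remains open, with or without national guidance.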

Use available sandboxes across borders. Article 57 permits member states to establish sandboxes jointly. Several sandbox programmes accept applications from companies established in other member states. Spain’s AESIA sandbox, in particular, has processed applications from non-Spanish EU companies. If your member state does not offer a sandbox, apply to one that does. The compliance documentation produced in a Spanish sandbox has value when facing a German enforcer.

Monitor the Digital Omnibus. The trilogue process will determine whether August 2, 2026 or December 2, 2027 is the operative date for high-risk provisions. Track the legislative progress. Prepare for August. Adjust if December is confirmed. Do not assume.

Document your compliance effort now. Regardless of the enforcement date, the act of preparing — conducting risk assessments, documenting data governance, building technical documentation — has value even if the deadline moves. The documentation itself is an asset. It demonstrates good-faith compliance effort. In an enforcement landscape where regulators have limited resources and must prioritise investigations, a company with documented compliance architecture is a lower enforcement priority than a company with nothing.

The Pattern

The EU AI Act will follow the same implementation arc as every major EU regulation before it. The regulation is published. The deadline passes. Some member states are ready. Most are not. Enforcement begins unevenly. The gap closes over two to five years. The companies that prepared for the regulation as written — not for the regulation as enforced — are compliant when enforcement arrives. The companies that optimised for the gap are exposed when it closes.

GDPR followed this arc. The Payment Services Directive followed this arc. The Medical Device Regulation followed this arc. Every time, the pattern surprised companies that assumed the implementation gap was a permanent feature rather than a temporary one.

Eight of twenty-seven. The number is a snapshot of a regulatory system in the middle of its deployment. The system is not broken. It is slow, uneven, and predictable. The regulation will be enforced. The only question is when — and whether your company will be ready before the enforcer is.

The countdown did not stop because nineteen governments were late. It never does.

Written by
Bertrand
Creative Technologist

A serial entrepreneur with a PhD in AI and twenty-five years building systems across Europe. He creates code the way he surfs: reading patterns, finding flow, making the difficult look easy.
