The Standards Nobody Finished
The EU AI Act's compliance machinery runs through harmonised standards. The harmonised standards do not exist.
That is not a simplification. It is the situation. The European Commission asked CEN-CENELEC to write the standards. CEN-CENELEC missed the deadline. Then it missed the revised deadline. The standards are now expected by Q4 2026. The compliance date for high-risk AI systems is August 2, 2026. The standards are scheduled to arrive after the exam.
An SME cannot build a compliance programme against a standard that has not been written. But the regulation does not wait for the standards body. Article 9 still requires a risk management system. Article 11 still requires technical documentation. Article 13 still requires transparency. These articles are specific. They tell you what to build. The standards would tell you how to demonstrate it. The “what” exists. The “how” is missing.
This is the gap. And it is wider than most companies realise.
The Standardisation Request
In May 2023, the European Commission issued a standardisation request to the European standardisation organisations CEN and CENELEC. The request asked them to develop harmonised standards covering the requirements for providers of high-risk AI systems — the technical specifications that, once published in the Official Journal of the European Union, would give companies a presumption of conformity with the AI Act’s requirements.
The mechanism is established EU regulatory practice. The regulation defines the requirements. The standards bodies translate those requirements into technical specifications. Companies that comply with the specifications are presumed to comply with the regulation. It is the same architecture that underpins the Machinery Regulation, the Medical Device Regulation, and dozens of other EU product safety frameworks.
The original deadline for delivering the standards was April 30, 2025. CEN-CENELEC did not meet it. The chair of JTC 21 — the joint technical committee responsible for the work — indicated in early 2025 that the standards would likely be completed by end of 2025. On June 23, 2025, the Commission formally revised the deadline to August 31, 2025. CEN-CENELEC did not meet that deadline either.
The standardisation request was amended in June 2025 to align with the final text of the AI Act. The work programme under JTC 21 covers approximately thirty-five work items across five working groups: foundational aspects, operational aspects, trustworthiness, engineering, and specific application domains. Three hundred experts from more than twenty countries participate. The scope is enormous. The timeline was not.
What Exists
As of April 2026, one standard has entered public enquiry. One.
prEN 18286 — Artificial Intelligence: Quality Management System for EU AI Act Regulatory Purposes — became the first AI Act harmonised standard to enter public enquiry on October 30, 2025. The enquiry ran until December 27, 2025. National standardisation bodies submitted comments. CEN is now compiling responses and resolving comments before the standard can be adopted.
prEN 18286 covers Article 17 of the AI Act: the quality management system requirement. It translates Article 17’s obligations into concrete governance, documentation, lifecycle, and evidentiary controls. It tells a provider how to structure the quality management system that the regulation requires. It is specific, detailed, and operationally useful.
It is also not yet a European Standard. It is a draft. It is subject to change. It cannot be cited as a harmonised standard until it is finalised and its reference is published in the Official Journal. The presumption of conformity — the legal benefit that makes harmonised standards valuable — does not attach until that publication. As of today, prEN 18286 provides guidance. It does not provide legal certainty.
Its companion standard, prEN 18228, covers risk management under Article 9. It entered Committee Internal Ballot in mid-2025. It has not reached public enquiry. The risk management standard — arguably the most operationally critical standard for any company deploying a high-risk AI system — is months behind the quality management standard, which is itself months from finalisation.
Beyond these two, the remaining work items span data governance, bias, cybersecurity, robustness, logging, transparency, computer vision, and natural language processing. Most are in working draft or earlier stages. The pipeline is full. The output is not.
Why the Standards Are Late
Standards bodies are not fast. This is by design, not by failure. The legitimacy of harmonised standards depends on a consensus process that includes industry, academia, civil society, regulators, and standardisation experts from across Europe. Rushing that process produces standards that lack buy-in, that miss edge cases, that fail the companies they are supposed to serve. The consensus process is slow because it is thorough.
But the AI Act created a structural timing problem that no amount of process optimisation can solve. The regulation was published in the Official Journal on July 12, 2024. The high-risk provisions take effect on August 2, 2026. The standardisation request was issued in May 2023 — before the regulation was finalised. When the final text was published, the request had to be amended to align with the actual law. Work that had begun against a draft regulation had to be revised against the final text. The clock kept running. The scope expanded.
The result is a gap between the regulatory timeline and the standardisation timeline that was built into the architecture from the beginning. The regulation moves on a political timeline. The standards move on a technical consensus timeline. The two timelines are structurally incompatible.
CEN-CENELEC recognised the problem. In October 2025, the joint Technical Boards of CEN and CENELEC adopted an exceptional package of measures to accelerate delivery. The most significant: allowing direct publication of standards after a positive enquiry vote, skipping the separate formal vote that normally follows. This is extraordinary. The formal vote is a fundamental step in the European standardisation process. Bypassing it is an acknowledgement that the normal process cannot deliver within the regulatory timeline.
Even with this acceleration, the target for the first wave of standards is Q4 2026 — after the August 2, 2026 compliance date. The acceleration measures are designed to get standards published before the end of 2026. They are not designed to get standards published before August.
The Presumption of Conformity Gap
Understanding why this matters requires understanding what harmonised standards do in EU regulatory architecture.
Article 40 of the AI Act establishes the presumption of conformity: high-risk AI systems that conform with harmonised standards — or parts thereof — whose references have been published in the Official Journal are presumed to comply with the requirements covered by those standards. This is not compliance by default. It is a legal shortcut. If a harmonised standard covers Article 9’s risk management requirements, and your system conforms with that standard, you are presumed to have met Article 9. A regulator challenging your compliance must rebut that presumption.
Without the harmonised standard, there is no presumption. You still must comply with Article 9. But you must demonstrate compliance on your own terms, using your own methodology, without the scaffolding of a standard that a regulator has pre-approved. The evidentiary burden is higher. The legal certainty is lower.
This is the practical consequence of the standardisation gap. The law applies. The requirements are clear. But the tool that makes compliance demonstrable — the harmonised standard — is not available. Companies must comply without the roadmap that the regulatory architecture was designed to provide.
The analogy is architectural. The regulation says: build a house that meets these structural requirements. The harmonised standard is the building code that specifies how to meet those requirements — the materials, the methods, the measurements. Without the building code, you still must build a structurally sound house. But you must prove it is structurally sound without reference to the code that the inspector was trained to evaluate against. You are building to requirements, not to specifications. The requirements are the same. The proof is harder.
The Common Specifications Fallback
The AI Act anticipated this problem. Article 41 empowers the Commission to adopt common specifications — implementing acts that serve as an alternative compliance pathway when harmonised standards are unavailable.
The conditions are explicit. The Commission may adopt common specifications when the request for harmonised standards has not been accepted, when the standards have not been delivered within the deadline set, or when no reference to harmonised standards has been published in the Official Journal and none is expected within a reasonable period.
The conditions are disjunctive; any one suffices. Two are met. The request was accepted, but the deadline was missed, twice. No reference has been published in the Official Journal. None is expected before August 2, 2026.
The Commission has the legal authority to adopt common specifications today. It has not done so. The reason is political, not legal. Common specifications are Commission-drafted implementing acts — they bypass the consensus-based standardisation process. The standardisation community views them as an intrusion into their domain. Industry groups prefer standards developed through consensus to specifications imposed by regulation. The Commission has historically been reluctant to use the common specifications mechanism, preferring to wait for harmonised standards.
The result: the primary compliance pathway (harmonised standards) is unavailable. The fallback pathway (common specifications) has not been activated. Companies are left with the regulation’s text and whatever they can build from it.
The Digital Omnibus and the Shifting Deadline
The standardisation gap is a primary driver of the Digital Omnibus — the legislative proposal the Commission published on November 19, 2025 to amend the AI Act’s application timeline.
The political logic is straightforward. The regulation requires compliance. Compliance is facilitated by harmonised standards. The harmonised standards are not ready. Therefore, delay the compliance deadline until the standards are ready. The Digital Omnibus proposes replacing the August 2, 2026 deadline for high-risk provisions with a mechanism linked to the availability of harmonised standards — in practice, pushing the effective date to December 2, 2027 for stand-alone high-risk AI systems (Annex III) and August 2, 2028 for AI systems embedded in products (Annex I).
The Council adopted its position on March 13, 2026. The European Parliament’s IMCO and LIBE committees adopted their joint report on March 18, 2026 — 101 votes in favour, 9 against, 8 abstentions. Both institutions support the delay. The Parliament’s plenary vote was scheduled for March 26. Trilogue negotiations between the Parliament, Council, and Commission will follow.
The political momentum is clear. Both co-legislators support the delay. The trilogue will negotiate details, not the principle.
But — and I have made this point before and will make it again — the Digital Omnibus has not been adopted. It has not been published in the Official Journal. It has not entered into force. As of April 7, 2026, August 2, 2026 is the law. Planning for December 2027 is rational. Relying on December 2027 is reckless. The difference between planning and relying is whether you have started preparing for August.
A company that has not begun compliance work because it expects the deadline to move is making a bet. The odds favour the delay. The downside of being wrong is not abstract — it is operational. It is scrambling to build a risk management system, technical documentation, logging infrastructure, and human oversight mechanisms in the weeks between a failed trilogue and August 2. The probability is low. The impact is catastrophic.
What a Company Can Do Today
The absence of harmonised standards does not mean the absence of requirements. The AI Act’s substantive provisions are self-contained. They tell you what to build. The standards would have provided a consensus methodology for building it. Without the standards, you must build the methodology yourself — but the destination is the same.
Here is what exists, and what you can do with it.
Article 9 — Risk Management. The regulation specifies the requirements for a risk management system in detail. It must be a continuous iterative process. It must identify and analyse known and reasonably foreseeable risks. It must evaluate the risks that emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse. It must adopt suitable risk management measures. The article is prescriptive. You do not need prEN 18228 to build a risk management system. You need Article 9, your system’s technical documentation, and someone with domain expertise in your AI system’s application area. The standard, when it arrives, will provide a structured methodology. The article provides the requirements that methodology must satisfy.
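The continuous, iterative character Article 9 demands can be made concrete without waiting for prEN 18228. Below is a minimal sketch of a risk register in Python; the field names, enum labels, and 1–5 scales are illustrative assumptions (the Act prescribes none of them), but the three risk sources track Article 9's categories of intended use, reasonably foreseeable misuse, and post-market data:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskSource(Enum):
    INTENDED_USE = "intended use"               # risks under the intended purpose
    FORESEEABLE_MISUSE = "foreseeable misuse"   # reasonably foreseeable misuse
    POST_MARKET = "post-market monitoring"      # risks emerging from monitoring data

@dataclass
class RiskEntry:
    """One identified risk and its mitigation, revisited on every iteration."""
    risk_id: str
    description: str
    source: RiskSource
    severity: int               # illustrative 1-5 scale; Article 9 prescribes no scale
    likelihood: int             # illustrative 1-5 scale
    mitigation: str             # the risk management measure adopted
    residual_acceptable: bool   # whether residual risk has been judged acceptable
    last_reviewed: date

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def open_items(self) -> list[RiskEntry]:
        """Risks whose residual level has not yet been judged acceptable."""
        return [e for e in self.entries if not e.residual_acceptable]
```

The structure matters less than the discipline it enforces: every risk has a source, a mitigation, a residual-risk judgement, and a review date. When prEN 18228 arrives, a register like this is adjusted, not rebuilt.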
Article 10 — Data Governance. Training, validation, and testing datasets must meet quality criteria: relevant, sufficiently representative, free of errors to the extent possible, and complete in view of the intended purpose. The article specifies data governance practices — including examination of biases, identification of gaps, and appropriate measures to address them. Document your data. Document its sources. Document your quality checks. Document how you handled bias. The standard will provide a framework for doing this. The article tells you what the framework must cover.
Article 11 — Technical Documentation. Annex IV lists the contents of the technical documentation in detail. General description, development process, monitoring and control, risk management system, lifecycle changes, harmonised standards applied, and compliance measures. The last item — harmonised standards applied — will be empty for now. Annex IV explicitly accommodates this: where no harmonised standards have been applied, the documentation must include “a list of other relevant standards and technical specifications applied.” ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 38507 — these are not harmonised standards under the AI Act, but they are relevant technical specifications that demonstrate a compliance methodology. Use them. Document their application. The technical documentation does not require harmonised standards. It requires documentation of whatever standards you did apply.
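Annex IV's required contents can be tracked as a simple completeness check. The section names below paraphrase the Annex IV headings listed above; the helper function is an illustrative sketch, not anything the regulation prescribes:

```python
# Section names paraphrase Annex IV's required contents; they are illustrative.
ANNEX_IV_SECTIONS = [
    "general_description",
    "development_process",
    "monitoring_and_control",
    "risk_management_system",
    "lifecycle_changes",
    "standards_applied",    # harmonised standards, or other specs applied (e.g. ISO/IEC 42001)
    "conformity_measures",
]

def documentation_gaps(docs: dict[str, str]) -> list[str]:
    """Return the Annex IV sections that are missing or empty in a documentation set."""
    return [s for s in ANNEX_IV_SECTIONS if not docs.get(s, "").strip()]
```

A check this crude still answers the question a regulator will ask first: which required sections exist, and which do not.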
Article 12 — Logging. The system must automatically log events. The article specifies what: period of use, reference databases, input data that produced matches, identification of persons involved in verification. This is an engineering requirement. It does not depend on a harmonised standard. Build the logging infrastructure. The standard will not change what you log. It may change how you structure and store it. Build now, refine later.
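The fields named above can be captured today as structured, append-only records. A sketch, with illustrative field names (the Act specifies the content of the log, not its format):

```python
import json
from datetime import datetime, timezone

def log_event(stream, *, reference_database: str, input_ref: str,
              match: bool, verifier_ids: list[str]) -> dict:
    """Append one structured usage record covering the logged events Article 12
    calls for: period of use (timestamp), the reference database checked, the
    input data that produced a match, and the persons involved in verification."""
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "reference_database": reference_database,
        "input_ref": input_ref,     # a reference to the input data, not the data itself
        "match": match,
        "verifier_ids": verifier_ids,
    }
    stream.write(json.dumps(record) + "\n")  # append-only JSON Lines
    return record
```

JSON Lines is an assumption, not a requirement; what matters is that every record is automatic, timestamped, and immutable once written.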
Article 13 — Transparency. The system must be designed to be sufficiently transparent to enable deployers to interpret the output and use it appropriately. Instructions for use must include: the provider’s identity, the system’s characteristics, capabilities, and limitations, the intended purpose, the level of accuracy and the relevant accuracy metrics. Write the instructions for use. The content is specified in the article. A harmonised standard might specify the format. The content is already defined.
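The required content of the instructions for use can be enforced structurally: make every item mandatory, so an incomplete document cannot be produced at all. Field names below are illustrative; Article 13 defines the content, not a schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstructionsForUse:
    """Content the instructions for use must include under Article 13.
    Every field is required; omitting one raises a TypeError at construction."""
    provider_identity: str
    characteristics: str
    capabilities_and_limitations: str
    intended_purpose: str
    accuracy_level: str
    accuracy_metrics: list[str]
```

The design choice is the point: a dataclass with no defaults turns "did we cover everything?" into a construction error rather than a review finding.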
Article 14 — Human Oversight. I have written about this at length in “The 500,000 Euro Mistake.” The requirement is clear: natural persons must be able to fully understand the system’s capacities and limitations, monitor its operation, decide not to use the system or to disregard its output, and intervene in or interrupt its operation. Build the oversight interface. Train the overseers. The standard will not change the requirement. It will provide a methodology for demonstrating it.
prEN 18286 — Quality Management System. The draft standard is publicly available through national standardisation bodies. It is not finalised. It may change. But it is the most detailed guidance available for Article 17 compliance. Read it. Use it as a reference for structuring your quality management system. When the final standard is published, align to it. The cost of aligning a system built against the draft to the final standard is adjustment. The cost of building nothing while waiting for the final standard is starting from zero.
The ISO Bridge
The absence of AI Act-specific harmonised standards does not mean the absence of standards. The international standardisation landscape offers relevant frameworks that, while they do not carry the presumption of conformity under Article 40, provide structured methodologies for addressing the AI Act’s requirements.
ISO/IEC 42001 — Artificial Intelligence Management System — provides a management system standard for organisations using or providing AI. It covers risk assessment, governance, and lifecycle management. It does not map directly to the AI Act’s requirements, but it addresses the same structural concerns. A company with an ISO 42001-certified management system has a framework that translates to AI Act compliance with adaptation, not reinvention.
ISO/IEC 23894 — Artificial Intelligence: Guidance on Risk Management — provides a risk management framework specific to AI systems. It aligns conceptually with Article 9 but was developed independently of the AI Act. A company that implements ISO 23894 has a risk management system that addresses most of what Article 9 requires — with gaps that can be filled by reading Article 9 itself.
These standards are not substitutes for harmonised standards. They do not provide the presumption of conformity. But they provide something that the missing harmonised standards cannot: structure. A risk management system built on ISO 23894 and tailored to Article 9 is demonstrably more robust than a risk management system built from scratch with no methodological framework. When a regulator evaluates compliance in the absence of harmonised standards, the company that applied a recognised international standard — and documented its application — is in a stronger position than the company that applied nothing.
The Enforcement Reality
The practical question is what happens when enforcement begins without harmonised standards. The answer depends on the enforcer.
National competent authorities that have been operational for months — Finland’s Traficom, Spain’s AESIA, Germany’s Bundesnetzagentur — have had time to develop enforcement methodologies. These authorities understand the standardisation gap. They know that no company can demonstrate conformity with a standard that does not exist. Their enforcement approach will necessarily account for the absence of harmonised standards — evaluating compliance against the regulation’s requirements rather than against technical specifications.
This is not a licence for complacency. The absence of harmonised standards does not reduce the requirements. It changes the evidentiary framework. A company must still comply with Articles 9 through 15. It must still have a risk management system, technical documentation, logging, transparency, and human oversight. What it cannot have is a harmonised standard to point to as proof. The proof must come from the substance of what the company built, documented, and implemented.
The companies that will fare best in an enforcement environment without harmonised standards are the ones that did the work. Not the ones that waited for the standard. Not the ones that hired a consultancy to produce a gap analysis. The ones that read Article 9, built a risk management system, documented it, and maintained it. The ones that read Article 11 and Annex IV, produced the technical documentation, and kept it current. The ones that treated the regulation’s text as the specification — because, in the absence of harmonised standards, it is.
The Position
The standardisation gap is real, structural, and will not close before August 2, 2026. The Digital Omnibus may push the compliance date to December 2027. The harmonised standards may arrive by Q4 2026. Both may happen. Neither has happened.
The gap does not justify inaction. It justifies a different kind of action. Build against the regulation’s text, not against a standard that does not exist. Use the draft standards as guidance, not as a compliance pathway. Apply international standards where they address the same concerns. Document everything — the methodology, the standards applied, the gaps identified, the decisions made.
The companies that navigate the standardisation gap successfully will be the ones that treated the absence of standards as a building constraint, not a building excuse. Leonardo da Vinci worked without CAD software. The constraints did not stop the work. They shaped it.
The standards will arrive. They always do. The question is whether you built something worth aligning to them — or whether you waited at the drafting table for blueprints that never came.
The blueprints are late. The building is not optional.