The August Countdown
August 2, 2026. That is the date the EU AI Act’s provisions for high-risk AI systems take full effect. Not the GPAI model provisions and governance rules — those applied from August 2, 2025. Not the prohibited practices — those took effect in February 2025. The high-risk provisions. The ones with teeth.
Five months from today.
This is not another “what the EU AI Act means for your business” article. The market has produced thousands of those. They are abstract, comprehensive, and operationally useless — designed to demonstrate the author’s familiarity with the regulation, not to help a company prepare for compliance.
This is a specific, article-by-article breakdown of what a company deploying AI systems needs to have in place by August 2, 2026. It is written for companies with 50 to 500 employees that have deployed or are deploying AI systems that may fall under high-risk classification. It assumes you have not started preparing. Five months is enough — if you start now.
Is Your System High-Risk?
Article 6 defines high-risk AI systems in two categories:
Category 1 (Article 6(1)): AI systems that are safety components of products, or are themselves products, covered by EU harmonisation legislation listed in Annex I. This includes machinery, medical devices, toys, radio equipment, civil aviation, vehicles, marine equipment, rail systems, and others. If your AI system is embedded in or acts as a safety component of a product covered by these directives, it is high-risk by default. (One nuance: under Article 113, the obligations tied to Article 6(1) apply from August 2, 2027. The August 2026 deadline governs the Annex III systems discussed below.)
Category 2 (Article 6(2) and Annex III): AI systems used in specific high-risk areas listed in Annex III. The eight areas are:
- Biometric identification and categorisation. Remote biometric identification systems, emotion recognition, biometric categorisation.
- Management and operation of critical infrastructure. AI systems used as safety components in the management of road traffic, water, gas, heating, and electricity supply.
- Education and vocational training. AI systems that determine access to education, evaluate learning outcomes, monitor prohibited behaviour during exams.
- Employment, workers’ management, and access to self-employment. AI systems for recruitment screening, job advertisement targeting, hiring decisions, task allocation, monitoring and evaluating workers’ performance, and decisions on promotion or termination.
- Access to essential private and public services. Credit scoring, insurance pricing, social benefit eligibility, emergency services dispatch.
- Law enforcement. Individual risk assessments, lie detection, evidence evaluation, profiling.
- Migration, asylum, and border control. Risk assessments, document verification, application processing.
- Administration of justice and democratic processes. AI systems that assist in legal research, case analysis, or sentencing.
For an EU SME with 50–500 employees, the most common high-risk classifications are: employment (any AI tool used in hiring, performance evaluation, or workforce management) and essential services (any AI tool used in credit decisions, insurance assessments, or benefit eligibility).
The classification is not about the model. It is about the use case. The same language model that generates marketing copy (minimal risk) becomes high-risk when it evaluates job applications. The model did not change. The use case changed. The obligations follow.
The Article 6(3) exception: AI systems listed in Annex III may be excluded from high-risk classification if they do not pose “a significant risk of harm” to health, safety, or fundamental rights. The company must document why the exception applies. If there is doubt, classify as high-risk. The cost of over-classification is compliance. The cost of under-classification is enforcement.
The Five Requirements — Article by Article
If your system is high-risk, five sets of requirements apply. Here they are, with specific operational actions for each.
Requirement 1: Risk Management System (Article 9)
You need a documented risk management system that identifies, analyses, evaluates, and mitigates the risks of your AI system. This is not a one-time risk assessment. It is a “continuous iterative process” that runs throughout the system’s lifecycle.
What this means in practice:
Before August 2: Document the known risks of your AI system. Not generic AI risks. Your specific system’s risks. What happens when it gets the answer wrong? Who is affected? How severely? What categories of error are most likely? (The model card — which I’ve written about — is your primary source for model-level risks.)
Create a risk register that maps each identified risk to a mitigation measure. The mitigation must be specific: “We conduct manual review of all automated decisions that affect individual employment” is a mitigation. “We monitor the system for risks” is not.
Establish a process for updating the risk register when the system changes, when the use case expands, or when a new risk is identified in production. The process must specify who is responsible, how often the register is reviewed, and what triggers an ad-hoc review.
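The risk register described above can live in a spreadsheet, but keeping it as structured data makes the review process enforceable. Here is a minimal sketch in Python; the field names and the 90-day review cycle are illustrative choices, not anything the Act mandates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    risk_id: str
    description: str       # the specific failure mode, not "generic AI risk"
    affected_parties: str
    severity: str          # e.g. "low" / "medium" / "high"
    mitigation: str        # must name a concrete control, with an owner
    owner: str
    last_reviewed: date

def overdue(risks, today, max_age_days=90):
    """Risks whose periodic review is overdue and need re-assessment."""
    return [r for r in risks if (today - r.last_reviewed).days > max_age_days]

# Illustrative entries for a hypothetical HR screening deployment.
register = [
    Risk("R-001",
         "Screening model rejects qualified candidates with atypical CVs",
         "job applicants", "high",
         "manual review of every automated rejection before notification",
         "head_of_hr", date(2026, 3, 1)),
    Risk("R-002",
         "Model behaviour shifts after a provider-side model update",
         "all applicants", "medium",
         "regression test suite run before adopting any new model version",
         "ml_lead", date(2026, 3, 1)),
]

print([r.risk_id for r in overdue(register, date(2026, 7, 1))])
```

The point of the `overdue` check is the "continuous iterative process" language in Article 9: a register nobody re-reads is a one-time risk assessment with extra steps.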
Estimated effort: 2–3 weeks of dedicated work for a typical SME deployment. One person, part-time, with domain expertise in the AI system’s application area.
Requirement 2: Data Governance (Article 10)
Training, validation, and testing datasets must meet quality criteria specified in the regulation. The data must be “relevant, sufficiently representative, and to the extent possible, free of errors and complete.” You must document the data’s characteristics, its source, the data collection process, and any preprocessing operations.
What this means in practice:
If you fine-tuned the model: Document the fine-tuning dataset. What data was used? Where did it come from? What quality checks were applied? Were protected characteristics (age, gender, ethnicity, disability) present in the data? If so, how were they handled? Were there known biases in the data? If so, what mitigations were applied?
If you use a pre-trained model via API: The model provider’s model card and data documentation contribute to this requirement, but you are still responsible for the data your system processes. Document the data that enters your system: customer data, operational data, the documents in your RAG pipeline. The same quality criteria apply.
If you don’t have this documentation: Start building it now. The effort is retrospective documentation of decisions that have already been made. It is tedious. It is not complex. An intern with a template and access to the data team can produce 80% of the required documentation in two weeks.
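A template makes the retrospective documentation concrete. The sketch below checks a dataset record against the questions listed above; the field names are my own shorthand for those questions, not a schema prescribed by Article 10:

```python
# Fields every dataset record should answer before it counts as documented.
REQUIRED_FIELDS = {
    "source", "collection_process", "preprocessing",
    "protected_characteristics", "known_biases", "quality_checks",
}

def missing_fields(doc: dict) -> set:
    """Return the checklist fields that are absent or still empty."""
    return REQUIRED_FIELDS - {k for k, v in doc.items() if v}

# Illustrative record for a hypothetical fine-tuning dataset.
doc = {
    "source": "internal ATS exports, 2021-2024",
    "collection_process": "quarterly export, anonymised by the data team",
    "preprocessing": "deduplication; removal of free-text salary fields",
    "protected_characteristics": "age and gender present; excluded from model features",
    "known_biases": "",   # not yet assessed -- flagged by the check below
    "quality_checks": "schema validation; 5% manual sample review",
}

print(sorted(missing_fields(doc)))
```

An empty answer is flagged the same as a missing one, which is the behaviour you want: "we have not assessed bias yet" is a gap, not a completed section.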
Estimated effort: 1–2 weeks for a system using a pre-trained model via API. 3–4 weeks for a system with custom fine-tuning.
Requirement 3: Technical Documentation (Article 11 and Annex IV)
You must produce technical documentation before the system is placed on the market or put into service. Annex IV specifies the contents in detail:
- General description of the AI system and its intended purpose
- Detailed description of the elements of the AI system and its development process
- Detailed information about the monitoring, functioning, and control of the system
- Description of the risk management system
- Description of the changes made to the system throughout its lifecycle
- A list of the harmonised standards applied
- A description of the measures put in place to ensure that the system complies with relevant requirements
What this means in practice:
This is a documentation exercise. The system already exists (or is being built). The technical documentation describes what exists. The key principle: write it so that a competent technical reviewer can understand what the system does, how it does it, what risks it poses, and what controls are in place.
The document does not need to be beautiful. It needs to be accurate, complete, and maintained. A living document that is updated when the system changes is compliant. A polished document that was accurate six months ago and has not been updated is not.
Estimated effort: 3–4 weeks for the initial documentation. Ongoing maintenance: 2–4 hours per month.
Requirement 4: Record-Keeping and Logging (Article 12)
The AI system must technically allow the automatic recording of events (logs) relevant to identifying risks and facilitating post-market monitoring. For remote biometric identification systems (Annex III, point 1(a)), Article 12 prescribes a minimum log content: the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identification of the natural persons involved in verifying the results. For other high-risk systems, the same principle applies: log enough to reconstruct and audit the system’s operation.
What this means in practice:
Your AI system must produce audit trails. Every decision the system makes (or recommends) must be logged with sufficient detail to reconstruct the decision after the fact. The log must include: the input, the output, the timestamp, and the identity of any human reviewer.
For an SME deploying a customer service AI or an HR screening tool, this means implementing structured logging in the application layer. The engineering effort is modest — most modern AI deployment frameworks support structured logging. The storage cost is proportional to volume: a system processing 500 decisions per day generates approximately 15MB of structured logs per month at moderate verbosity.
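A minimal version of that application-layer logging can be sketched in a few lines. The record schema here is illustrative (Article 12 mandates the capability, not these field names), and the JSON Lines format is one reasonable choice, not a requirement:

```python
import io
import json
import uuid
from datetime import datetime, timezone

def log_decision(log_file, *, input_summary, output, model_version,
                 reviewer=None, override=False):
    """Append one audit record per automated decision.

    A record at this verbosity is roughly 1 KB, which is where the
    ~15 MB/month figure for 500 decisions/day comes from (1 KB x 500 x 30).
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": input_summary,       # enough detail to reconstruct the decision
        "output": output,
        "human_reviewer": reviewer,   # None until a reviewer signs off
        "override": override,         # True if the reviewer changed the outcome
    }
    log_file.write(json.dumps(record) + "\n")  # JSON Lines: one record per line
    return record

# Usage with an in-memory buffer standing in for the real log sink.
buf = io.StringIO()
rec = log_decision(buf,
                   input_summary={"applicant_id": "A-1042"},
                   output="recommend_interview",
                   model_version="2026-03",
                   reviewer="hr_reviewer_7")
```

Append-only JSON Lines is deliberately boring: each line is independently parseable, which is exactly what a post-hoc audit needs.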
Estimated effort: 1–2 weeks of engineering for implementation. Minimal ongoing cost.
Requirement 5: Human Oversight (Article 14)
I have written a full article on this (“The €500,000 Mistake”). The summary: the system must be designed to be effectively overseen by natural persons. The oversight must be meaningful — independent assessment, practical override authority, sufficient time, and demonstrated variation in outcomes.
What this means in practice:
Build the review interface. Design the workflow. Train the reviewers. Monitor the override rates. All of this must be in place before August 2.
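Monitoring override rates is the one item on that list that is pure arithmetic over the Article 12 logs. A sketch, assuming audit records with illustrative `human_reviewer` and `override` fields:

```python
def override_rate(records):
    """Share of human-reviewed decisions where the reviewer changed the outcome.

    A rate pinned near zero over time suggests rubber-stamping rather than
    the "demonstrated variation in outcomes" meaningful oversight requires.
    Returns None if nothing has been reviewed yet.
    """
    reviewed = [r for r in records if r.get("human_reviewer")]
    if not reviewed:
        return None
    return sum(r["override"] for r in reviewed) / len(reviewed)

# Illustrative records: three reviewed decisions, one pending review.
logs = [
    {"human_reviewer": "r1", "override": False},
    {"human_reviewer": "r1", "override": True},
    {"human_reviewer": None, "override": False},  # not yet reviewed
    {"human_reviewer": "r2", "override": False},
]

print(override_rate(logs))   # one override out of three reviewed decisions
```

What counts as a healthy rate depends on the system; the compliance-relevant signal is a rate that never moves, because it suggests the reviewers are not exercising independent judgment.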
Estimated effort: 3–5 weeks for interface development, workflow design, and reviewer training.
The Conformity Assessment
Articles 16–22 define the obligations of providers of high-risk AI systems — the requirements a company must meet to demonstrate that its high-risk AI system is compliant.
For most SME deployments (those not covered by specific EU harmonisation legislation in Annex I), the conformity assessment is an internal assessment under Article 43(2). You do not need an external auditor. You do not need a notified body. You assess your own compliance based on the requirements in Articles 8–15, document the assessment, and issue an EU declaration of conformity (Article 47).
This is important: for most SME use cases, conformity is self-assessed. The regulation trusts the provider to evaluate its own compliance, provided the evaluation is documented, the documentation is maintained, and the system is subject to post-market monitoring. One caveat on terminology: these are provider obligations. A deployer that markets an AI system under its own name, or substantially modifies a high-risk system, takes on the provider’s obligations under Article 25.
The declaration of conformity is a one-page document that states: this AI system, used for this purpose, meets the requirements of the EU AI Act. It references the technical documentation, the risk management system, and the quality management system.
The declaration must be kept for ten years after the AI system is placed on the market.
The Registration Requirement
Article 49 requires that high-risk AI systems be registered in the EU database for stand-alone high-risk AI systems (established under Article 71) before they are placed on the market or put into service.
Registration is electronic, through the EU database set up and maintained by the European Commission (Article 71). The information required includes: the name and contact details of the provider, a description of the intended purpose, the AI system’s status (on the market, withdrawn, recalled), a description of how the system is made available, and the EU declaration of conformity.
Registration is not a gatekeeping mechanism. It is a transparency mechanism. The database is public. Registering your system demonstrates compliance intent. Not registering an operational high-risk system is itself a violation.
The Timeline
Five months. Here is a realistic timeline for an SME that has not started preparing:
Months 1–2 (March–April 2026): Risk classification. Determine whether your AI systems are high-risk under Article 6 and Annex III. Inventory all AI systems in use across the company — including tools adopted by individual teams without central IT oversight. Start the risk management documentation (Article 9) and data governance documentation (Article 10) for any system classified as high-risk.
Month 3 (May 2026): Technical documentation. Produce the Annex IV technical documentation for each high-risk system. Implement structured logging (Article 12) if not already in place. Begin developing the human oversight interface and workflow (Article 14).
Month 4 (June 2026): Human oversight implementation. Complete the review interface, train reviewers, establish the workflow. Begin the internal conformity assessment. Identify gaps and remediate.
Month 5 (July 2026): Conformity assessment completion. Issue the EU declaration of conformity. Register high-risk systems in the EU database. Establish the post-market monitoring process. Document everything.
August 2, 2026: Full provisions in effect. Your systems are compliant, registered, and monitored — or they are not, and you are operating in violation.
The Quality Management System
Article 17 requires providers of high-risk AI systems to implement a quality management system. This requirement is often overlooked in countdown articles because it sounds generic. It is not.
The quality management system must include:
- policies and procedures for implementing the AI Act’s requirements,
- techniques and procedures for the design, design control, and design verification of the high-risk AI system,
- techniques and procedures for its development, quality control, and quality assurance,
- examination, test, and validation procedures to be carried out before, during, and after development, and
- data management procedures.
For an SME, the quality management system does not need to be ISO 9001 certified. It needs to be documented, implemented, and maintained. A practical quality management system for an SME AI deployment is a 10–15 page document that specifies: who is responsible for what, how changes to the system are controlled, how the system is tested before updates, how post-market incidents are reported and investigated, and how the documentation is kept current.
The document takes approximately one week to produce. It needs to exist before August 2. It needs to be followed after August 2. The gap between having the document and following the document is the gap that enforcement actions target.
The Penalty Framework
Article 99 defines the penalties for non-compliance:
- Violations of prohibited AI practices (Article 5): up to €35 million or 7% of annual global turnover.
- Violations of high-risk requirements (Articles 8–15): up to €15 million or 3% of annual global turnover.
- Supplying incorrect information to regulators: up to €7.5 million or 1% of annual global turnover.
For SMEs, the regulation provides proportionality: Article 99(6) caps each fine for an SME at the fixed amount or the percentage of turnover, whichever is lower. But “proportional” is not “negligible.” A 3% turnover fine for a company with €10 million annual turnover is €300,000. For many SMEs, that is existential.
The regulation also provides for non-financial enforcement: orders to withdraw AI systems from the market, orders to modify AI systems, and public statements identifying non-compliant companies and their systems.
The Position
Five months is enough time to comply. Five months is not enough time to procrastinate, form a working group, commission a consulting engagement, and then comply.
The regulation is specific. The requirements are enumerable. The conformity assessment is internal. The registration is electronic. None of this requires an army of lawyers or a six-figure consulting budget.
What it requires is a decision: we will comply by August 2. That decision, made today, gives you five months of structured work. That decision, made in June, gives you five weeks of panic.
The EU AI Act is not ambiguous about what it requires. It is only ambiguous about whether you will take it seriously before the deadline arrives.
Five months. The countdown is running.