The Regulatory Sandbox Nobody Uses
Article 57 of the EU AI Act requires every member state to establish at least one AI regulatory sandbox by August 2, 2026. As of January 2026, several member states have operational or near-operational sandboxes. By August, all twenty-seven must have at least one.
A regulatory sandbox is a controlled environment where companies can develop, test, and validate AI systems under regulatory supervision — with reduced compliance burden during the testing period and direct guidance from the national competent authority. It is structured experimentation with a safety net.
Most EU SMEs have never heard of them. Among those that have, most assume sandboxes are for large enterprises, for high-risk AI systems, or for companies with legal departments large enough to navigate the application process.
All three assumptions are wrong. And the companies that discover this first will have a structural advantage that compounds.
What a Regulatory Sandbox Actually Offers
The EU AI Act’s sandbox provisions (Articles 57–63) are remarkably specific about what sandboxes must provide. This is not a vague “innovation-friendly environment” concept. The regulation defines concrete operational features:
Structured testing under supervision. A company enters the sandbox with a specific AI system and a specific use case. The national competent authority provides regulatory guidance during the testing period — not after deployment, not retroactively, but during development. This means you build the compliance architecture while building the product, with the regulator’s input at each stage.
Reduced compliance burden during testing. The sandbox framework specifies that participants benefit from a proportional compliance pathway during the testing period. The requirements are scaled to the development stage. This does not mean compliance is waived. It means the compliance pathway is phased, supervised, and iterative rather than all-at-once.
Priority processing. Article 62 requires that SMEs and startups have priority access to sandboxes. This is not a suggestion. It is a regulatory requirement. Member states must design their sandbox programmes to prioritise smaller companies. The regulation explicitly recognises that SMEs face disproportionate compliance costs and that sandboxes are a mechanism to reduce that disparity.
Data processing permissions. Article 59 provides specific provisions for processing personal data within sandboxes — subject to safeguards, but with a legal basis that may not exist outside the sandbox context. For companies developing AI systems that process personal data (which is most of them), this is a significant enabler.
Exit documentation. Companies that complete a sandbox programme receive a compliance record — a documented history of regulatory engagement that demonstrates good-faith effort and supervised development. When the AI Act’s full provisions take effect, this documentation has operational value: it shows regulators that the company’s AI system was developed in a controlled, supervised environment.
These are not theoretical benefits. They are specific regulatory provisions with legal force across all twenty-seven member states.
The SME Priority Nobody Claims
Article 62 is worth reading in full. It requires that SMEs and startups have priority access to sandboxes and that the conditions for participation do not create disproportionate barriers.
In practice, this means:
Application processes must be accessible to companies without legal departments. A sandbox application process that requires 80 pages of technical documentation and three months of legal review is a “disproportionate barrier” for a 50-person company. Member states are obligated to design application processes that SMEs can actually complete.
Costs must be proportional. If a sandbox charges participation fees (some do, most don’t), those fees must not create barriers for smaller companies. Several member states have established entirely free sandbox programmes for SMEs.
Technical support must be provided. Sandboxes are not simply a regulatory permission. They must provide guidance — operational, technical, and regulatory guidance that helps the company develop its AI system in compliance with the Act. For an SME that cannot afford a dedicated regulatory compliance team, this guidance is the most valuable feature of the sandbox.
Despite these provisions, the early sandbox programmes across Europe have been used predominantly by large enterprises and well-funded startups. Spain’s sandbox, one of the first to launch, drew most of its applications from companies with more than 250 employees. The Netherlands’ sandbox programme showed a similar pattern.
The pattern is not because sandboxes are designed for large companies. It is because large companies have dedicated regulatory affairs teams that monitor new regulatory instruments and file applications as a matter of routine. SMEs do not have these teams. The information about sandboxes, how to apply, and what benefits they offer has not reached the companies that stand to benefit most.
This is an information gap, not an access gap. The access is legally guaranteed. The information is missing.
What’s Operational Now
As of early 2026, the following member states have operational or near-operational AI regulatory sandboxes:
Spain launched its sandbox in 2022 — the first in the EU, predating the AI Act. It has completed two cycles and is in its third. The programme is managed by the Secretary of State for Digitalisation and focuses on high-risk AI systems, though it accepts applications across all risk categories. Application process: approximately 20 pages. Timeline: 6-month testing periods.
The Netherlands has its sandbox administered by the Authority for Consumers and Markets (ACM) in cooperation with the Dutch Data Protection Authority (AP). The programme focuses on AI systems in consumer markets and financial services. Notable for providing particularly detailed regulatory guidance during the testing period.
France operates through CNIL (the national data protection authority) with a specific focus on AI systems that process personal data. France’s sandbox has been particularly accessible to SMEs, with a simplified application track for companies under 250 employees.
Germany has multiple sandboxes at the federal and Länder level. The Federal Ministry for Economic Affairs (BMWK) operates a broad sandbox programme. Bavaria and North Rhine-Westphalia have sector-specific sandboxes (manufacturing AI and healthcare AI, respectively). The federal programme accepts applications in English — a practical consideration for international SMEs.
Finland operates through Traficom and the Finnish Safety and Chemicals Agency (Tukes), with a sandbox programme notable for its technical support component — participating companies receive direct technical mentoring on AI safety and testing methodology.
Other member states — Denmark, Lithuania, Austria, and Malta among them — have programmes at various stages of operational readiness. By August 2026, all twenty-seven member states must have at least one operational sandbox.
The Strategic Calculus
The strategic advantage of entering a sandbox early is threefold:
Compliance advantage. When the high-risk provisions of the EU AI Act take full effect (August 2, 2026), companies that developed their AI systems inside a sandbox will have documentation, regulatory history, and supervised compliance architecture. Companies that did not will be building compliance from scratch under full enforcement. The compliance catch-up cost for an SME deploying a high-risk AI system — which industry estimates place at €50,000 to €200,000 depending on scope and sector — is largely avoided by sandbox participation.
Knowledge advantage. Sandbox participants learn the regulatory framework by interacting with it, not by reading about it. The regulators provide interpretation guidance — how they read the Act, what they consider adequate, where the enforcement priorities lie. This operational knowledge is available nowhere else. No consulting firm has it. No webinar teaches it. It exists only in the direct interaction between sandbox participants and the national competent authority.
Relationship advantage. Companies that enter sandboxes establish a working relationship with their national regulator before enforcement begins. This relationship has practical value: when questions arise about a deployment’s compliance status, the company has a contact. When enforcement actions are considered, the company has a documented history of good-faith regulatory engagement. This is not a guarantee of lenient treatment. It is evidence of proactive compliance.
The combined effect of these three advantages creates a structural gap between sandbox participants and non-participants. As the regulatory environment matures, that gap widens.
What Holds SMEs Back
Sandboxes are accessible, beneficial, and legally prioritised for SMEs. The barriers are not about access. They are about information.
Awareness. The number one barrier is simple ignorance. The vast majority of EU SMEs with 50–250 employees are unaware that AI regulatory sandboxes exist. Among those that are aware, most believe sandboxes are “only for large companies or tech startups.” The regulatory information pipeline reaches trade associations, legal firms, and large enterprise compliance teams. It does not reach the operations director of a 120-person manufacturer in Linz.
Perceived complexity. SMEs that are aware of sandboxes often assume the application process is prohibitively complex. For some member states, this perception is outdated — early sandbox programmes had complex applications, and several have since simplified them. For others, the perception is accurate, and the member state has not yet met the Article 57(9) requirement for proportional access. The landscape is uneven.
Risk perception inversion. SMEs perceive sandbox participation as risky — exposing their AI system to regulatory scrutiny, inviting attention, potentially discovering non-compliance. This perception is inverted. The actual risk is the opposite: developing an AI system without regulatory input, discovering non-compliance after deployment, and facing enforcement action without the documented good-faith history that sandbox participation provides. The sandbox is not exposure to risk. It is managed risk reduction.
Timing confusion. Many SMEs assume they should wait until the full AI Act provisions are in effect before engaging with sandboxes. This is backwards. Sandboxes are designed for the pre-enforcement period. They exist specifically to help companies prepare. Entering a sandbox after full enforcement is possible but provides less value — the compliance architecture should already be in place.
How to Enter a Sandbox
The practical steps for an EU SME:
Step 1: Identify your national sandbox. The European Commission maintains a list of established and planned sandboxes through the AI Office. As of early 2026, this list is still being updated as member states finalise their sandbox programmes. Your national digital authority or data protection authority can confirm the status and application process for your member state.
Step 2: Define your use case. Sandbox applications require a specific AI system and a specific use case — not a general “we want to explore AI” application. The more specific the use case, the stronger the application. “We are developing a customer inquiry classification system using a fine-tuned language model that processes personal data from EU customers” is a viable application. “We want to use AI in our business” is not.
Step 3: Classify your risk level. The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Sandbox participation is most valuable for high-risk systems (Article 6), where compliance requirements are most demanding. But limited-risk and minimal-risk systems also benefit from regulatory guidance, particularly on transparency obligations and data processing requirements.
Step 4: Prepare a proportional application. Under Article 62, the application process must not create disproportionate barriers for SMEs. If your national sandbox’s application requires documentation you cannot produce, say so — the regulation is on your side. Several member states have created simplified application tracks specifically for SMEs under 250 employees.
Step 5: Allocate internal time. Sandbox participation is not passive. It requires regular interaction with the regulatory authority — progress reports, testing results, compliance documentation. For a 50–250 person company, this typically requires one person dedicating 4–8 hours per week during the sandbox period. That person does not need to be a lawyer. They need to understand the AI system being tested and be able to communicate its operation clearly.
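The risk triage in Step 3 can be sketched as a rough internal checklist. The sketch below is a hypothetical illustration only — the `UseCase` fields, the attribute names, and the tier mapping are simplifications I am introducing for illustration, not the Act’s actual test. A definitive classification requires the full Annex III analysis and, for borderline cases, regulator input — which is exactly what the sandbox provides.

```python
# Hypothetical self-triage sketch for the EU AI Act's four risk tiers.
# Not legal advice: real classification requires the full Annex III
# analysis; this only orders the questions a sandbox applicant asks first.

from dataclasses import dataclass


@dataclass
class UseCase:
    # Illustrative attributes; a real triage covers many more dimensions.
    prohibited_practice: bool    # Article 5: manipulation, social scoring, etc.
    annex_iii_domain: bool       # e.g. employment, credit, essential services
    interacts_with_humans: bool  # triggers transparency obligations


def provisional_tier(uc: UseCase) -> str:
    """Map a use case to a provisional tier for sandbox planning."""
    if uc.prohibited_practice:
        return "unacceptable"  # prohibited outright; no sandbox path
    if uc.annex_iii_domain:
        return "high"          # strongest case for sandbox participation
    if uc.interacts_with_humans:
        return "limited"       # transparency obligations apply
    return "minimal"


# Example: a CV-screening tool falls in an Annex III domain (employment).
print(provisional_tier(UseCase(False, True, True)))  # high
```

The value of writing even a crude version of this down is that it forces the specificity Step 2 demands: a sandbox application names one system and one use case, and the tier determines how much the supervised compliance pathway is worth to you.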
The Cross-Border Advantage
There is a dimension of sandbox participation that is particularly relevant for companies operating across EU member states: mutual recognition.
Article 58 of the EU AI Act specifies that national competent authorities shall cooperate and share best practices regarding sandbox outcomes. While full mutual recognition of sandbox results is not yet established (the implementing acts are still being finalised), the direction of travel is clear: sandbox participation in one member state will carry weight in regulatory interactions with other member states.
For an SME operating in multiple EU markets — which is common among Bluewaves’ clients, who serve customers across three to five countries — sandbox participation in one market creates a regulatory foundation that extends, partially, to other markets. The documentation, the risk assessment, the compliance architecture developed within the sandbox — all of these are portable.
This is not a guarantee of compliance in other jurisdictions. National competent authorities retain independent enforcement authority. But a company that can demonstrate “we developed this AI system within the French CNIL sandbox, under regulatory supervision, with this compliance documentation” has a stronger position in a German or Dutch regulatory conversation than a company that cannot demonstrate any regulatory engagement.
The cross-border advantage compounds: one sandbox participation produces compliance assets that are relevant in up to 26 other jurisdictions. The investment is in one market. The returns are EU-wide.
The Practical Reality
Let me be direct about what sandbox participation looks like in practice, because the formal descriptions make it sound more bureaucratic than it is.
At its core, a sandbox is a structured conversation between your company and your national regulator about a specific AI system. The conversation has a beginning (the application), a middle (the testing period), and an end (the compliance report). During the middle, you are building the AI system with regulatory input — not regulatory approval at each step, but regulatory guidance that helps you build the compliance architecture correctly the first time.
The regulators I have interacted with — in Portugal, France, and Germany — are not adversarial. They are doing what regulators do in sandbox programmes: helping companies understand the requirements and building institutional knowledge about how the regulation applies to real-world systems. The sandbox is a learning process for both sides. The regulator learns how AI systems work in practice. The company learns how the regulation applies in practice. Both leave the sandbox with knowledge that did not exist before.
The formality of the process depends on the member state. Some programmes are highly structured — formal applications, milestone reviews, quarterly reports. Others are more conversational — regular meetings, iterative feedback, informal guidance. The common thread is that the engagement is direct, specific, and focused on your actual AI system, not on abstract regulatory theory.
For an SME, the practical burden is modest: one person spending a few hours per week engaging with the regulatory process. The return is disproportionate: compliance architecture, regulatory relationship, and institutional knowledge that would cost tens of thousands of euros if acquired through external consultants.
The Window
The period between now and August 2, 2026, is a window. After August, the full provisions apply, the sandboxes shift from developmental to supervisory, and the compliance catch-up cost increases.
Every month of sandbox participation before August is a month of compliance architecture built under supervision rather than independently. Every interaction with the national competent authority before enforcement is an interaction that costs nothing beyond time.
The companies that use sandboxes now will be compliant in August. The companies that wait will be scrambling.
The sandbox is free. The guidance is free. The priority access is legally mandated.
The only cost is the time it takes to apply. The cost of not applying is the full compliance burden, built from scratch, under enforcement, without regulatory guidance.
The sandbox is not a risk. Not using it is.