
EU AI Act for SMEs — Mandatory Guide 2026

Fines up to EUR 35M, deadline August 2, 2026: what every mid-market company needs to know about the new EU AI Regulation — clear, concrete, no legal jargon.

What is the EU AI Act and who does it affect?

The EU AI Act (Regulation (EU) 2024/1689 of June 13, 2024) is the world's first comprehensive legal framework for artificial intelligence. It obligates all companies that use, develop or market AI systems in the EU, regardless of their size or location. In practice this means that even a craftsman's business with 12 employees in Kempten is subject to the EU AI Act if it uses AI tools.

Definition (Art. 3 EU AI Act): An AI system is a machine-based system that, from a set of inputs, generates outputs such as predictions, recommendations, decisions or content that can influence physical or virtual environments. ChatGPT, AI chatbots on your own website, automated application filters, AI-supported credit checks and smart video surveillance all fall under this definition.

Who is affected? According to Art. 2 EU AI Act, the regulation applies to:

- Providers: manufacturers and developers of AI systems, including those in third countries if the system is used in the EU
- Deployers: companies that use AI systems in their own operations (the majority of SMEs)
- Importers and distributors of AI systems
- Affected persons: not obligated to compliance, but defined as the object of protection

According to a Bitkom study from 2025, 64% of German SMEs are not aware that they are affected by the EU AI Act. Yet according to Statista (2025), 47% of mid-market companies in Germany already use at least one AI tool in their operations, from AI chatbots to automated accounting software to AI-supported HR tools.

Particularly critical: the regulation does not differentiate by company size. A 3-person startup that uses a high-risk AI classifier for job applications has the same compliance obligations as a corporation. The requirements are tiered exclusively by the risk of the AI system, not by the size of the company.

Why is the EU AI Act relevant now? The law enters into force in stages. The first major wave came on February 2, 2025 with the prohibitions on unacceptable-risk practices. On August 2, 2026, the central high-risk obligations take effect; that is the deadline mid-market companies must prepare for today. According to the EU Commission (2025), an estimated 15,000 SMEs in Germany are directly affected by the high-risk requirements. The ZEW (Leibniz Centre for European Economic Research) estimates in a 2024 study that one-time compliance costs for a mid-market company can range between EUR 50,000 and EUR 400,000, depending on the number and risk class of the AI systems deployed. Early preparation saves substantial costs; anyone who only starts in 2026 will pay extra.
Warning: According to the Bitkom 2025 study, 64% of German SMEs are not aware that they are affected by the EU AI Act. Act now: the August 2, 2026 deadline is approaching.

Risk classes explained: Which AI systems require which obligations?

The heart of the EU AI Act is a risk-based approach: the higher the risk an AI system poses to fundamental rights and safety, the stricter the requirements. There are four risk classes.

1. Unacceptable risk: prohibited AI systems (Art. 5 EU AI Act)

These AI systems have been banned since February 2, 2025. No company may deploy them:

- Social scoring by public authorities (as known from China)
- Real-time biometric identification in publicly accessible spaces by authorities (with narrow exceptions)
- Subliminal manipulation of behavior that harms people
- Exploitation of vulnerabilities (age, disability) to influence behavior
- Emotion recognition in the workplace and in educational institutions (with few exceptions)
- Biometric categorization that infers protected characteristics such as ethnic origin, political opinions or sexual orientation

Particularly relevant for SMEs: anyone using AI-powered mood or emotion analysis on employees must shut it down immediately.

2. High-risk AI systems (Art. 6, Annex III EU AI Act)

High-risk systems may continue to be used, but are subject to strict requirements. They are listed in Annex III of the AI Act. Relevant high-risk areas for SMEs:

- HR and personnel management: systems for automatic candidate pre-selection, AI performance reviews, AI salary recommendations. This affects every mid-market company that uses AI in HR work.
- Credit granting: AI-supported creditworthiness checks and credit scoring. Relevant for fintechs and companies with AI financial tools.
- Education and training: AI systems for evaluating learners or for admission decisions. Relevant for training companies.
- Safety-critical infrastructure: AI in water supply, energy, transport.
- Justice and administration: AI for decision support in legal proceedings.
- Biometric categorization: systems that categorize persons by sensitive characteristics, insofar as not already prohibited under Art. 5.

Obligations for high-risk systems: establish technical documentation, perform a conformity assessment, affix the CE marking (for providers), implement a risk management system, ensure transparency toward users, guarantee human oversight, and report serious incidents to the authorities.

3. Limited-risk AI systems (Art. 50 EU AI Act)

This class covers AI systems with low but real deception potential. The main obligation is transparency:

- Chatbots must disclose that the user is communicating with an AI
- Deepfakes and synthetic media must be labeled as AI-generated
- AI-generated texts of an informational character (news, official communications) must be marked

For most SMEs, this is the most relevant class: anyone deploying an AI chatbot on their website must ensure that users know they are talking to an AI. This is easy to implement, and it is also easy for a supervisory authority to verify.

4. Minimal risk: no specific obligations

The largest share of all AI applications falls into this category: spam filters, AI game characters, simple image-editing AI, recommendation algorithms without critical context. There are no compliance obligations here, only voluntary codes of conduct.

Practical rule for SMEs: first check whether you use AI in the areas of HR, credit, education or critical infrastructure. If yes, you are likely in the high-risk area. If you only use AI for marketing, customer support or text generation, you are usually in the limited-risk or minimal-risk area. A minimal triage sketch in code follows below.
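To make the practical rule tangible, here is a minimal triage sketch in Python. It is a simplification for a first pass, not a legal classification: the area and use-case labels are our own invention, not terms from the regulation, and every result should be verified against Annex III.

```python
# First-pass risk triage for an AI use case (illustrative, not legal advice).
HIGH_RISK_AREAS = {"hr", "credit", "education", "critical_infrastructure"}
LIMITED_RISK_USES = {"chatbot", "content_generation", "synthetic_media"}

def triage(area: str, use_case: str) -> str:
    """Rough first-pass risk class, mirroring the rule of thumb above."""
    if area in HIGH_RISK_AREAS:
        return "high-risk (Art. 6, Annex III): full compliance obligations"
    if use_case in LIMITED_RISK_USES:
        return "limited risk (Art. 50): transparency obligations"
    return "minimal risk: no specific obligations"

print(triage("hr", "candidate_screening"))    # high-risk
print(triage("marketing", "chatbot"))         # limited risk
print(triage("marketing", "spam_filter"))     # minimal risk
```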
Practical tip: An AI chatbot on your website is limited risk (transparency obligation). An AI tool for candidate selection is high-risk (extensive compliance obligations). The distinction is decisive.
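For the limited-risk case, the transparency obligation can be implemented in a few lines. A minimal sketch, assuming a website chatbot where you control the response pipeline; the disclosure wording is our own example, not prescribed text:

```python
# Art. 50 transparency for a chatbot: tell users they are talking to an AI.
AI_DISCLOSURE = "Note: you are chatting with an AI assistant, not a human."

def render_reply(model_answer: str, is_first_message: bool) -> str:
    # Show the disclosure together with the first reply of every session.
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{model_answer}"
    return model_answer

print(render_reply("We are open Mon-Fri, 8:00-17:00.", is_first_message=True))
```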

Deadlines and transition periods 2025–2027: When must what be fulfilled?

The EU AI Act takes effect in stages. The key dates for your company at a glance:

August 1, 2024: Entry into force
Regulation (EU) 2024/1689 entered into force on August 1, 2024. The implementation clock has been running since then.

February 2, 2025: Prohibitions for unacceptable risk apply
Since February 2, 2025 (6 months after entry into force), all banned AI practices listed in Art. 5 are illegal. Companies still operating systems with unacceptable risk, such as emotion recognition in the workplace or manipulative AI advertising, have been acting unlawfully since this date and risk immediate fines.

August 2, 2025: GPAI obligations and authority structure
From August 2, 2025 (12 months after entry into force), the rules for general-purpose AI models (GPAI) such as GPT-4, Claude or Gemini apply. Providers of these models must provide technical documentation, publish copyright policies and take additional security measures where there are systemic risks. For SMEs as users of these models, this means: only use GPAI models from providers that meet the new compliance requirements, and treat models without published AI Act documentation with caution. At the same time, the national market surveillance authorities become operational.

August 2, 2026: The critical deadline for most SMEs
From August 2, 2026 (24 months after entry into force), the requirements of the EU AI Act apply broadly, including the extensive high-risk obligations from Art. 6 et seq. For deployers of high-risk AI systems, this means:

- Complete technical documentation must be available
- The conformity assessment must be completed
- A risk management system must be implemented and documented
- Human oversight must be institutionally embedded
- Transparency and information obligations toward users must be fulfilled
- Incident reporting channels to the national authority must be established

Important: for high-risk systems in already regulated areas (CE marking under other harmonization legislation), an extended transition period until August 2, 2027 applies.

August 2, 2027: Final transition deadline
Until August 2, 2027, providers and deployers of high-risk AI systems already subject to CE marking procedures under other harmonization directives (e.g. medical devices, vehicles) have time to meet the new AI Act requirements. For everyone else, full compliance is mandatory from August 2026.

Recommendation for SMEs: start now with the AI inventory (which AI tools am I using?), the risk classification and the build-out of your compliance documentation. Anyone who only starts in spring 2026 cannot realistically meet the August deadline cleanly.
August 2, 2026 is THE deadline: that is when the high-risk obligations apply in full. Companies without compliance documentation by then risk fines of up to 3% of global annual turnover, and up to 7% for banned practices.
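The timeline is easy to keep in view programmatically. A small sketch, with the milestone dates taken from the regulation and everything else (names, formatting) our own:

```python
from datetime import date

# Key EU AI Act milestones (Regulation (EU) 2024/1689).
MILESTONES = {
    "Prohibitions, Art. 5":                 date(2025, 2, 2),
    "GPAI obligations":                     date(2025, 8, 2),
    "High-risk obligations (general)":      date(2026, 8, 2),
    "Transition ends for CE-marked areas":  date(2027, 8, 2),
}

today = date.today()
for name, deadline in MILESTONES.items():
    days_left = (deadline - today).days
    status = f"{days_left} days left" if days_left > 0 else "in force"
    print(f"{name:38} {deadline.isoformat()}  {status}")
```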

Fines: What does a violation of the EU AI Act cost?

The EU AI Act provides for one of the strictest fine regimes in European business law. Fines are tiered by the type and severity of the violation (Art. 99 EU AI Act):

Highest tier: up to EUR 35M or 7% of global annual turnover
This fine framework applies to violations of the prohibitions in Art. 5 (banned AI practices). For an SME with EUR 10M annual turnover, this means fines of up to EUR 700,000 are possible, even for a single proven violation.

Middle tier: up to EUR 15M or 3% of global annual turnover
This framework applies to violations of the high-risk requirements (missing risk management, missing documentation, unmet transparency obligations). This is the area that will affect most SMEs.

Lowest tier: up to EUR 7.5M or 1% of global annual turnover
For false or misleading information provided to authorities, e.g. during a conformity assessment.

Special rule for SMEs and startups: Art. 99(6) EU AI Act provides that proportionality is taken into account when determining fines and that for SMEs and startups the lower of the two values (absolute cap or percentage of turnover) applies. This is a certain protective cushion, but not a free pass. A worked example follows after the warning below.

Private damages claims: in addition to administrative fines, affected persons can claim damages under Art. 82 GDPR (and, in the future, under national AI liability law). The proposed EU AI Liability Directive (still in preparation, as of May 2026) would further expand these litigation options.

Administrative enforcement: each EU Member State must designate a national market surveillance authority for AI. In Germany, the Bundesnetzagentur is expected to take on this role (final decision pending at the Federal Ministry for Digital Affairs and Transport). The authority receives extensive investigation and enforcement powers, including unannounced audits.

Reputational damage: beyond financial sanctions, AI Act violations cause considerable reputational damage. The EU AI Act provides for a European database in which high-risk systems and incidents are publicly accessible. For B2B companies, an entry due to a compliance violation is damaging to business.
Warning for SMEs: up to 3% of global annual turnover for missing high-risk documentation. With EUR 5M turnover, that is up to EUR 150,000 in fines, solely for inadequate documentation.
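The arithmetic behind these numbers, including the lower-of-the-two rule from Art. 99(6), can be checked with a short sketch; the tier values come from the regulation, the function itself is merely illustrative:

```python
# Fine ceilings under Art. 99 EU AI Act (illustration, not legal advice).
TIERS = {
    "banned_practice": (35_000_000, 0.07),  # Art. 5 violations
    "high_risk":       (15_000_000, 0.03),  # documentation, oversight, ...
    "misleading_info": (7_500_000,  0.01),  # false information to authorities
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool) -> float:
    cap, pct = TIERS[tier]
    turnover_share = annual_turnover_eur * pct
    # Art. 99(6): for SMEs and startups the LOWER of the two values applies;
    # otherwise the higher value forms the ceiling.
    return min(cap, turnover_share) if is_sme else max(cap, turnover_share)

# The example from the warning above: EUR 5M turnover, missing documentation.
print(f"EUR {max_fine('high_risk', 5_000_000, is_sme=True):,.0f}")  # EUR 150,000
```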


Practical steps for SMEs: Compliance in 6 phases

The path to EU AI Act compliance is not a one-time project but a structured process. Here is a proven 6-phase approach for mid-market companies:

Phase 1: Build an AI inventory (weeks 1–2)
Start with a complete inventory of all AI systems in use. This sounds trivial, but it isn't: many SMEs underestimate the number of AI tools in operation. Typical blind spots are AI features in standard software (e.g. AI scoring in HubSpot, AI writing assistance in Microsoft 365, automated prioritization in the helpdesk system). Create a table with: tool name, provider, purpose of use, affected department, number of affected persons and a preliminary risk category (a minimal code sketch of such an inventory record follows after the tip below). According to a McKinsey survey (2024), 73% of companies that considered themselves "low-AI-intensity" identified more than 15 individual AI features after a systematic inventory.

Phase 2: Risk classification (weeks 3–4)
Classify each identified AI system according to the four risk classes of the EU AI Act. Work with the concrete text of Annex III of the regulation and the EU Commission guidelines (current status: delegated act on high-risk systems, October 2024). Where the classification is uncertain, consulting a lawyer specializing in AI law is recommended.

Phase 3: Gap analysis and priorities (weeks 5–6)
For each high-risk system, review the current compliance status against the requirements of Art. 9–15 EU AI Act. Typical gaps in SMEs:

- No technical documentation available
- No formal risk management system
- No defined responsibility (AI Officer or comparable)
- No user transparency documentation
- No defined escalation processes for AI errors

Phase 4: Implement technical and organizational measures (months 2–5)
Implement the identified measures by priority. Practice-tested immediate measures:

- AI Governance Policy: an internal guideline for AI use (template available from Wito AI)
- Designate an AI Officer: at least one person at management level who carries compliance responsibility
- Technical documentation: create the documentation required by the AI Act for each high-risk system
- Human oversight processes: define how and by whom AI decisions are reviewed
- Transparency notices: mark all user touchpoints with AI

Phase 5: Conformity assessment (months 5–6)
For high-risk systems that you develop or significantly modify yourself: perform a formal conformity assessment under Art. 43 EU AI Act (in many cases possible as a self-assessment). For high-risk systems that you only deploy: review the provider's conformity declaration and ensure the provider is compliant under Art. 16 EU AI Act.

Phase 6: Ongoing monitoring and incident management (from month 7)
Compliance is not a one-time project. Implement:

- An annual AI inventory review
- Logging of AI decisions (where technically possible)
- An escalation process for AI incidents with a reporting chain to the future national authority
- Regular training for employees who operate AI systems
- Update tracking: the AI Act is being further developed through delegated acts

Typical effort for a mid-market company: according to the Bundesverband IT-Mittelstand (BITMi, 2025), SMEs with 50–249 employees need on average 120–280 hours for the initial compliance documentation. With external support, this effort drops to 40–80 hours.
Tip: Start with Phase 1 (AI inventory) this week. It costs 2–4 hours and gives you immediate clarity on whether you deploy any high-risk systems at all.
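The Phase 1 inventory table can live in a spreadsheet just as well as in code. As a sketch, here is the record structure with the columns named above; the field names and example entries are our own choices:

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class AIInventoryEntry:
    tool_name: str
    provider: str
    purpose: str
    department: str
    affected_persons: int
    preliminary_risk: str  # "unacceptable" | "high" | "limited" | "minimal"

entries = [
    AIInventoryEntry("Website chatbot", "Vendor X", "customer support",
                     "Sales", 5000, "limited"),
    AIInventoryEntry("CV pre-screening", "Vendor Y", "candidate pre-selection",
                     "HR", 300, "high"),
]

# Export the inventory as CSV for the compliance file.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```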

EU AI Act vs. GDPR: What are the differences and how do they relate?

Many SMEs ask: "We are already GDPR-compliant; isn't that enough?" The clear answer: no. The GDPR and the EU AI Act are complementary but independent regulatory frameworks with different focuses.

GDPR (since 2018): regulates the protection of personal data. Focus on data minimization, purpose limitation and data subject rights (information, erasure, objection). Applies to every processing operation involving personal data, with or without AI.

EU AI Act (from 2025/2026): regulates the use of AI systems according to their risk level. Focus on transparency, human oversight, documentation and conformity assessment. Applies to AI systems even when they do not process personal data.

Overlaps: for AI systems that process personal data, BOTH regulations apply simultaneously. An AI HR tool for candidate selection is both GDPR-relevant (processing of candidate data) and AI-Act-relevant (high-risk classification). Compliance obligations add up; they do not replace each other.

Key differences at a glance:

- The GDPR applies to all processing of personal data, the AI Act only to AI systems
- The GDPR has no product-specific approach, the AI Act is product-related
- GDPR fines: up to 4% of global annual turnover; AI Act: up to 7% (banned practices) or 3% (high-risk)
- GDPR responsibility lies with the data controller, the AI Act differentiates between provider and deployer

Practical tip: use your existing GDPR documentation as a basis. Many elements, such as the Data Protection Impact Assessment (DPIA), records of processing activities and technical security measures, can be reused and supplemented for the AI Act instead of starting from scratch. A simple mapping is sketched below.
GDPR + AI Act: those who already have clean GDPR documentation can save 30–50% of the AI Act compliance effort. Use existing structures.
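The reuse of GDPR artifacts can be captured as a simple mapping. The pairings below are a plausible starting point under our reading of Art. 9–15 AI Act, not an authoritative crosswalk; verify each one for your own systems:

```python
# Which existing GDPR artifact feeds which AI Act requirement (illustrative).
GDPR_TO_AI_ACT = {
    "DPIA (Art. 35 GDPR)":                "risk management system (Art. 9 AI Act)",
    "records of processing (Art. 30)":    "AI inventory / technical documentation (Art. 11)",
    "security measures / TOMs (Art. 32)": "accuracy, robustness, cybersecurity (Art. 15)",
    "privacy notices (Art. 13/14)":       "transparency obligations (Art. 13, Art. 50 AI Act)",
}

for gdpr_artifact, ai_act_target in GDPR_TO_AI_ACT.items():
    print(f"{gdpr_artifact:37} -> {ai_act_target}")
```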

How does Wito AI help SMEs with EU AI Act compliance?

Wito AI is a full-service digital and AI agency focused on the mid-market. We accompany SMEs through all six compliance phases, from the first inventory to ongoing monitoring. What we deliver concretely:

- Free Digitalisierungs-Check: our web-based assessment (ass.wito.ai) identifies your current AI usage in 3 minutes and provides a first estimate of your AI Act relevance. 100% free, no registration required.
- AI inventory and risk classification: together with you, we create a complete list of all AI systems and classify them according to the EU AI Act.
- Compliance documentation: technical documentation, AI Governance Policy, risk management system under Art. 9 EU AI Act, all from one source.
- Training for your team: hands-on workshops on EU AI Act fundamentals, risk classification and ongoing compliance maintenance.
- Ongoing CDOaaS model: as an external Chief Digital Officer, we accompany your company long-term and keep your compliance up to date with regulatory changes.

Why Wito AI? We combine technical AI know-how with legal compliance understanding. Our team has already accompanied more than 40 mid-market companies through GDPR and AI compliance. We don't speak legal jargon; we translate regulatory requirements into pragmatic business measures. With the CDOaaS model starting at EUR 990/month, small and medium-sized enterprises get an external expert who treats compliance as an ongoing topic, not a one-time project.
Start with the free Digitalisierungs-Check at ass.wito.ai — in 3 minutes you'll know whether and how intensively you are affected by the EU AI Act.

Frequently Asked Questions

Does the EU AI Act also apply to very small companies?
Yes. The EU AI Act has no employee threshold. Even a micro-enterprise with 2 people is subject to the law if it deploys AI systems. However, reduced fine ceilings and simplified conformity assessments are provided for SMEs and startups (Art. 99 (6) EU AI Act). The law does not differentiate by company size, but by the risk class of the deployed AI system.

Is ChatGPT covered by the EU AI Act?
ChatGPT (GPT-4 and successors) is considered a general-purpose AI model (GPAI) and is subject to the GPAI obligations of the EU AI Act from August 2, 2025. For SMEs as users, this means: you must ensure that the provider (OpenAI) fulfills the AI Act obligations. You can provide proof via the provider's conformity declaration. If you use ChatGPT for high-risk applications (e.g. automated candidate selection), you are additionally obligated as a deployer of a high-risk system.

Am I affected even if I only use AI tools and do not develop them?
Yes, as a so-called 'deployer' within the meaning of Art. 3 No. 4 EU AI Act. As a deployer, you use an AI system in a professional context. The deployer obligations are less extensive than the provider obligations, but not trivial: you must follow the provider's technical instructions, ensure human oversight for high-risk systems, document and report incidents, and ensure that you only deploy compliant AI systems.

What is the difference between a provider and a deployer?
Provider: develops an AI system and places it on the market or puts it into service. Has the most extensive obligations: technical documentation, conformity assessment, CE marking (for high-risk), registration in the EU database. Examples: software companies that develop and sell AI tools. Deployer: uses an AI system from another provider in their own operations. Has slimmer but real obligations: observe usage restrictions, ensure human oversight for high-risk systems, train staff, report incidents. Examples: a craftsman's business using AI software from SAP, a medical practice using AI diagnostic software.

From when can violations actually be fined?
Since February 2, 2025, penalties apply for prohibited AI practices (Art. 5 EU AI Act). From August 2, 2026, penalties apply for all other violations, including missing high-risk documentation. The national market surveillance authorities became operational in August 2025. The first fines in other EU countries are expected in 2026/2027. Germany is currently setting up its supervision (as of May 2026).

How long does it take to become compliant?
Depending on the number and risk class of the deployed AI systems and the existing GDPR documentation, the initial compliance implementation for an SME with 50–249 employees typically takes 3–6 months. With external support from specialists like Wito AI, this period can be shortened to 6–10 weeks. Minimum effort for an SME without high-risk systems: 1–2 weeks for the inventory and transparency notices.

Do we have to appoint an AI Officer?
The EU AI Act does not prescribe a formal 'AI Officer' role like the data protection officer under the GDPR. However, deployers of high-risk systems must institutionally embed human oversight and be able to demonstrate designated responsibilities for AI compliance. In practice, it is advisable to designate an internal person (or an external service provider) as AI compliance officer, not least to pass audits.

Does the EU AI Act apply to AI tools that employees use privately?
No, the EU AI Act applies to professional use in the corporate context. Tools used privately by employees on personal devices are generally not covered. It becomes critical when employees use private AI tools for business purposes (shadow AI): the company then effectively acts as a deployer and bears compliance responsibility. A clear corporate policy on AI use (AI Policy) is therefore essential.

What happens if we are not compliant by August 2, 2026?
Formally, you are acting unlawfully and risk fines from the national market surveillance authority. In practice, authorities are expected to focus initially on serious violations (banned practices, high-risk systems without any documentation). Nevertheless: even missing transparency notices on a simple chatbot are subject to fines from August 2026. The best protection is demonstrable compliance effort, even if individual points are still being implemented.

Are open-source AI models exempt?
Only partially. Open-source models published under a free license are exempt from certain provider obligations (in particular model documentation), but not from deployer obligations. If you deploy an open-source model (e.g. LLaMA, Mistral) in your corporate infrastructure for high-risk applications, the deployer obligations apply in full. The open-source exception expressly does not apply to systems with unacceptable risk.

What does EU AI Act compliance cost?
According to the 2024 ZEW study, one-time compliance costs for SMEs range between EUR 50,000 and EUR 400,000 depending on complexity, if everything is implemented internally. With external service providers and clever use of standard templates, costs decrease considerably. Wito AI offers AI Act compliance packages starting at EUR 4,900 for SMEs without high-risk systems. For companies with high-risk systems, individual offers are necessary.

Is AI in manufacturing and production affected?
Yes. AI systems in safety-critical production areas can be classified as high-risk, especially if they can influence the safety of persons (e.g. robot safety systems, autonomous manufacturing lines). AI-supported quality control without a direct personal reference often falls into the minimal or limited risk category. The classification under Annex III of the EU AI Act is decisive; a review is strongly recommended.

How do I find out whether the provider of my AI tool is compliant?
Explicitly ask the provider for the EU AI Act conformity declaration. For high-risk systems, the provider must issue a written EU declaration of conformity under Art. 47 EU AI Act and register the system in the EU database (from August 2026). For GPAI models, the documentation obligations under Art. 53 EU AI Act apply. If this documentation is missing, you should critically reconsider using the tool or follow up with the provider; many well-known providers such as Microsoft, Google and SAP already publish their AI Act conformity documentation publicly.

Start free EU AI Act Digital Check

In 3 minutes you'll know which AI systems you deploy, which risk class they have and what you need to do by August 2026.

100% free, no registration. Result in 3 minutes. Concrete action recommendations.