Does your company use AI for recruitment? Do you have a chatbot on your website? Are you using automated lead scoring?
From 2 August 2026, all of this will be governed by binding EU law, with fines of up to 35 million euros.
Don’t panic — most Polish B2B companies are in a better position than they think. But there is one catch that almost every article on this topic misses. We’ll get to it in a moment.
What Is the AI Act and Why Does It Affect Me
Regulation (EU) 2024/1689 is the world’s first comprehensive law regulating artificial intelligence. It entered into force on 1 August 2024.
The key principle: it applies to every company that offers or uses AI in the EU — regardless of where it is based. A US startup selling you AI software? Subject to the AI Act. A Polish company using that software? Also subject to the AI Act.
The AI Act works like building regulations for AI: the higher the risk, the stricter the standards.
Timeline: What Already Applies, What Is Coming
- 1 August 2024 — AI Act enters into force
- 2 February 2025 — Ban on unacceptable-risk AI systems — already in effect
- 2 August 2025 — Rules for general-purpose AI models (GPT-4, Claude, Gemini)
- 2 August 2026 — Full application for high-risk systems — the key deadline
- 2 August 2027 — AI embedded in medical devices, industrial machinery
You have until 2 August 2026. That sounds far away, but preparing documentation for high-risk systems takes 6–12 months in practice.
Four Risk Categories — Where Does Your Company Fall
Unacceptable Risk — Absolute Ban (since February 2025)
- Systems that manipulate users subliminally
- Emotion recognition in workplaces and schools
- Social scoring of citizens by public authorities
- Predictive policing based on personal characteristics
- Creating facial recognition databases by mass scraping
If your HR team had an idea for AI to “analyse candidate mood during job interviews” — that just became illegal.
High Risk — Full Documentation by August 2026
This is the catch. High-risk systems are not limited to exotic domains like medical devices. They include very common business applications:
- Recruitment — any AI system for CV screening, candidate ranking, employee evaluation
- Credit and insurance scoring — automated creditworthiness assessment
- Employee performance management — if AI decides on bonuses, promotions, dismissals
- Critical infrastructure (energy, water, transport)
- Educational assessment systems
For these applications, full documentation, conformity assessment, registration in the EU AI Database and designated human oversight are required.
Limited Risk — Transparency Obligations
- Customer service chatbots — must inform users they are talking to AI
- Generative AI producing text, images or audio — obligation to label content as AI-generated
- Deepfakes — mandatory labelling as synthetic material
Minimal Risk — No Additional Requirements
- Spam filters
- Product recommendation tools in e-commerce
- Most productivity AI tools (AI writing assistants, Copilot)
- BI analytics and sales prediction
The Key Distinction: Provider vs. Deployer
Provider — you develop an AI system (or have one developed) and place it on the market or put it into service under your own name. Full obligations: technical documentation, conformity assessment, EU AI Database registration, CE marking, a quality management system.
Deployer — you buy a ready-made AI tool (SaaS, API) and use it in your business. Lighter obligations: use according to provider instructions, human oversight, incident monitoring and reporting, informing employees.
Most Polish B2B companies are deployers — using ChatGPT, Copilot, no-code AI tools. Good news: the scope of obligations is much smaller than for companies building their own systems.
Bad news: if you use AI for recruitment or employee assessment — you are a deployer of a high-risk system and the obligations are concrete.
What Your Company Must Do — Checklist
If you use AI for recruitment or employee assessment (high risk):
- Conduct a Fundamental Rights Impact Assessment (FRIA) before deployment
- Ensure human oversight — no decision can be fully automatic
- Inform employees and their representatives about AI use
- Retain system logs for a minimum of 6 months
- Verify that your tool provider has technical documentation and certification
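The log-retention and human-oversight items above can be sketched as a minimal decision record. This is an illustrative data shape, not a format prescribed by the regulation — the field names and the 183-day window are assumptions (the AI Act requires logs to be kept for at least six months; keep them longer if other law requires it):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# "At least six months" per the AI Act; 183 days is our assumed lower bound.
RETENTION = timedelta(days=183)

@dataclass
class AIDecisionLog:
    candidate_id: str
    model_output: str    # what the AI system recommended
    reviewed_by: str     # the human who confirmed or overrode the recommendation
    final_decision: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def purge(logs: list[AIDecisionLog]) -> list[AIDecisionLog]:
    """Drop entries older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [entry for entry in logs if entry.timestamp >= cutoff]
```

The point of the `reviewed_by` field: every automated recommendation is tied to a named human, which is the evidence you would show a regulator that decisions were not fully automatic.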
If you have a chatbot or use generative AI (limited risk):
- Add a clear notice: “You are talking to an AI assistant”
- Provide the option to connect with a human
- Label AI-generated content where published
For all companies:
- Inventory all AI tools you use and determine their risk category
- Verify that your SaaS and API providers are updating their products for AI Act compliance
- Designate someone responsible for AI Act compliance in your company
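The inventory step can be as simple as a spreadsheet, but a first-pass sketch in code shows the idea: map each tool's use case to one of the article's risk categories. The tool names and keyword rules below are illustrative assumptions, not an official mapping — a final classification should be confirmed case by case:

```python
# Use cases the article places in the high- and limited-risk categories.
HIGH_RISK_USES = {"recruitment", "employee evaluation", "credit scoring"}
LIMITED_RISK_USES = {"chatbot", "content generation"}

def classify(use_case: str) -> str:
    """First-pass AI Act risk triage by declared use case (assumed rules)."""
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal"

# Hypothetical inventory entries: (tool, declared use case)
inventory = [
    ("CV screening plugin", "recruitment"),
    ("Website assistant", "chatbot"),
    ("Spam filter", "spam filtering"),
]
for tool, use in inventory:
    print(f"{tool}: {classify(use)} risk")
```

Even this crude triage surfaces the main finding of the article: one recruitment plugin in the list is enough to put you in the high-risk deployer category.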
Fines — What They Really Cost
- Using prohibited AI systems — €35M or 7% of global annual turnover
- Breaching high-risk obligations — €15M or 3% of global annual turnover
- Providing incorrect information to authorities — €7.5M or 1% of global annual turnover
For SMEs and startups the lower of the two values applies. A company with 5M PLN annual revenue faces a maximum of around 150,000 PLN (3% of turnover) for a high-risk breach, not €15M.
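The SME rule reduces to one line of arithmetic: take the lower of the fixed euro cap and the turnover percentage. A minimal sketch (the EUR/PLN rate is an assumed placeholder):

```python
def sme_max_fine(turnover_pln: float, pct: float, cap_eur: float,
                 eur_pln: float = 4.3) -> float:
    """For SMEs the AI Act applies the LOWER of the fixed cap and the
    turnover percentage. `pct` is given in percent, e.g. 3 for 3%."""
    return min(cap_eur * eur_pln, turnover_pln * pct / 100)

# High-risk breach tier (€15M or 3%) for a 5M PLN turnover company:
print(sme_max_fine(5_000_000, 3, 15_000_000))  # -> 150000.0
```

For a company that small, the percentage is always the binding value; the €15M cap only becomes relevant at turnovers in the hundreds of millions of PLN.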
But fines are not the only risk. Reputational damage, lawsuits from employees dismissed “by an algorithm”, liability for discriminatory hiring decisions — these can cost more than the fine.
Poland: What Changed on 1 April 2026
The Council of Ministers adopted an implementing act that:
- Designates KRiBSI as the main supervisory authority
- Establishes a national sanctions framework aligned with the AI Act
- Creates a regulatory sandbox for SMEs and startups
Important: the AI Act as an EU regulation applies directly in Poland without additional transposition. The Polish act clarifies supervision and national sanctions.
Summary
The AI Act is not a threat to most Polish companies — as long as you understand which category you operate in.
Three things to do now:
- Inventory all AI tools in your company
- Check the category — recruitment and employee assessment = high risk
- Designate someone responsible — not necessarily a lawyer, but someone who knows your processes
Your deadline is 2 August 2026. Organisations that start now will have breathing room. Those that start in July 2026 will be fighting fires.
Book a free consultation with Prospere AI — we will help assess where your company stands →