
EU AI Act: What Training Does Your Company Need?

Article 4 mandates AI literacy for all staff using AI. Effective since 2 February 2025. Here's exactly what you need to know — and how to prove compliance.

What You Need to Know

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Among its provisions, Article 4 requires all providers and deployers of AI systems to ensure their staff has a sufficient level of AI literacy.
This isn't optional, and it's not future-dated. The AI literacy requirement has been enforceable since 2 February 2025. Every company that uses AI tools in the EU — from ChatGPT to Microsoft Copilot — must ensure their people understand what they're working with.

Quick Reference

**Regulation:** Regulation (EU) 2024/1689
**Key Article:** Article 4 — AI Literacy
**Effective:** 2 February 2025
**Full enforcement:** 2 August 2026

Article 4: AI Literacy — The Full Text

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

What This Means in Practice

"Providers and deployers"

This covers two groups: companies that build AI systems (providers) and companies that use them (deployers). If your team uses any AI tool at work, you're a deployer.

"sufficient level of AI literacy"

Staff must understand what AI can do, what it can't do, and how to use it responsibly. The bar isn't expert-level — it's appropriate to their role and context.

"taking into account their technical knowledge, experience, education and training"

Training must be role-appropriate. A developer needs different training than a marketing manager. One-size-fits-all programs don't satisfy this requirement.

"the context the AI systems are to be used in"

Training must cover the specific AI tools your organization actually uses, not generic AI theory. Context matters.

Key Definitions (Article 3)

**AI Literacy:** Skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and possible harm it can cause.

**Deployer:** Any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

**Provider:** A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model and places it on the market or puts the AI system into service under its own name or trademark.

Who Needs Training?

Article 4 applies to anyone dealing with the operation and use of AI systems. The training focus should vary by role:

| Role | Focus | Key Topics |
| --- | --- | --- |
| General Staff | AI literacy fundamentals | What AI is, recognizing AI-generated content, responsible use policies, limitations and risks of AI outputs |
| AI Operators | Tool-specific competence | Effective prompt engineering, output validation, understanding tool-specific limitations, data privacy in AI interactions |
| Technical / IT | Risk assessment & implementation | AI risk categories, data handling requirements, system integration, monitoring and evaluating AI outputs, incident response |
| Management | Governance & strategic oversight | AI governance frameworks, compliance obligations, risk management, organizational AI policies, vendor assessment |
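As a minimal sketch of how role-appropriate assignment might be implemented in practice — the role keys, module titles, and the `assign_training` helper are all illustrative, not anything the Act prescribes:

```python
# Hypothetical mapping of roles to training modules, following the table above.
# Role names and module titles are illustrative examples only.
TRAINING_PLAN: dict[str, list[str]] = {
    "general_staff": ["AI fundamentals", "Recognizing AI content", "Responsible use policy"],
    "ai_operator": ["Prompt engineering", "Output validation", "Data privacy"],
    "technical_it": ["Risk categories", "Data handling", "Monitoring", "Incident response"],
    "management": ["Governance frameworks", "Compliance obligations", "Vendor assessment"],
}

def assign_training(role: str) -> list[str]:
    """Return the module list for a role; unknown roles fall back to general staff."""
    return TRAINING_PLAN.get(role, TRAINING_PLAN["general_staff"])
```

The fallback to general-staff modules reflects the idea that everyone dealing with AI systems needs at least baseline literacy, even if their role is not explicitly mapped.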

Enforcement Timeline (Article 113)

The EU AI Act entered into force on 1 August 2024. Different provisions apply at different dates:

2 February 2025: Chapters I & II (including Article 4)

AI literacy requirement, definitions, scope, and the Chapter II prohibitions on certain AI practices. Already enforceable.

2 August 2025: Governance and GPAI obligations

Chapter III Section 4, Chapters V, VII, and XII (except Article 101), and Article 78: notified-body rules, general-purpose AI model obligations, governance structures, penalties, and confidentiality.

2 August 2026: Full regulation applies

All remaining provisions take effect, including high-risk AI system classification and conformity assessments.

2 August 2027: Embedded high-risk AI and legacy GPAI models

Article 6(1) and its corresponding obligations apply to AI systems that are safety components of products covered by EU harmonisation legislation; general-purpose AI models placed on the market before 2 August 2025 must be brought into compliance by this date.

Article 4 is already in effect. Waiting for the 2026 deadline means missing the literacy requirement that's enforceable today.

How to Prove Compliance

There is no central EU certification for AI literacy training. Compliance is process-based — you demonstrate it through documentation, not a certificate on the wall.

What Companies Need for an Audit

1. Training Logs

Records showing who received training, when, and what content they completed. Must cover all staff and persons dealing with AI systems.

2. Syllabus Mapping

Documentation proving your training content covers the relevant risks, opportunities, and contexts for your organization's specific AI use.

3. Role-Based Assessment

Documented assessment of staff technical knowledge and experience before assigning training levels — demonstrating you took individual backgrounds into account.
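The three documentation items above can be captured in a simple record per learner. This is a hypothetical sketch — the `TrainingRecord` schema and its field names are illustrative; the Act mandates no specific format:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class TrainingRecord:
    """One audit-log entry: who was trained, when, and on what.
    Field names are illustrative; the AI Act prescribes no schema."""
    person: str
    role: str
    modules_completed: list[str]
    completed_on: date
    prior_experience_assessed: bool  # documents the role-based assessment step

# Example log entry (fictional learner)
records = [
    TrainingRecord(
        person="A. Kowalska",
        role="ai_operator",
        modules_completed=["Prompt engineering", "Output validation"],
        completed_on=date(2025, 3, 10),
        prior_experience_assessed=True,
    ),
]

# Flatten to plain dicts for export into an audit report
audit_rows = [asdict(r) for r in records]
```

Keeping the prior-experience flag alongside each completion record makes it easy to show an auditor that training levels were assigned with individual backgrounds in mind, not one-size-fits-all.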

ISO/IEC 42001 (AI Management Systems) serves as a gold-standard reference framework for building your compliance documentation.

AI Risk Categories: The Big Picture

The EU AI Act organizes AI systems into four risk tiers. While Article 4's literacy requirement applies broadly, understanding the full framework helps contextualize your training obligations.

Prohibited

AI practices banned outright — social scoring, manipulative systems, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions).

Training: Staff must recognize prohibited uses to avoid deploying them.

High-Risk

AI in critical areas — hiring, credit scoring, education, law enforcement. Subject to conformity assessments.

Training: Operators need deep understanding of risks, monitoring obligations, and human oversight requirements.

Limited Risk

AI systems with transparency obligations — chatbots, deepfakes, and emotion recognition systems must disclose AI involvement.

Training: Staff must understand disclosure requirements and implement them correctly.

Minimal Risk

Most AI applications — spam filters, AI-assisted writing, recommendation systems. No specific obligations beyond Article 4.

Training: General AI literacy as required by Article 4.

Training Built for Article 4 Compliance

AITutoro's adaptive learning engine was designed with regulatory requirements in mind.

**Role-appropriate training** — The calibration system assesses each learner's technical knowledge and experience, then delivers content matched to their level. This directly addresses Article 4's requirement to take into account "technical knowledge, experience, education and training."

**Context-specific content** — Training covers the specific AI tools your organization uses — ChatGPT, Claude, Copilot, Gemini, and more. Not abstract theory, but the actual systems your staff interacts with daily.

**Completion tracking** — Every session, every module, every learner. Training logs and completion records provide the documentation foundation you need to demonstrate compliance during an audit.

Article 4 Is Already in Effect

Your company's AI literacy obligation is live today. Start building compliant training records now — before someone asks for them.