EU AI Act Guide 2025: AI Security and Compliance Rules

Dominika Żurawska

Updated Nov 25, 2025 • 35 min read

As artificial intelligence becomes more powerful and widespread, it also brings serious risks. AI systems now influence decisions about credit, employment, healthcare, public services, and even legal outcomes — with real-world consequences for individuals and society. The impact is growing — and so is the responsibility to make AI safe and trustworthy.

In response to these challenges, the European Union introduced the AI Act — the world’s first comprehensive legal framework for regulating artificial intelligence. It’s not just a set of restrictions. The EU AI Act sets a new standard for safe, transparent, and fair AI, aiming to balance innovation with fundamental rights and public trust.

For any organization operating in the EU — or selling AI-based products into the EU market — this is a turning point. AI is no longer just a technical tool; it’s a regulated technology governed by laws, policies, and compliance requirements that span cybersecurity, ethics, and data protection.

Whether you're building AI in-house, sourcing it from a third party, or simply integrating it into your operations, understanding the EU AI Act is essential.

This article is here to help. Inside, you’ll find:

  • What qualifies as an AI system under the EU AI Act
  • How to classify your model by risk level
  • The AI development lifecycle, step by step
  • Key security measures to defend against cyber threats
  • How Poland’s draft AI law fits into the bigger picture
  • How to stay ahead of the latest EU AI Act news and updates

Understanding the EU AI Act: What Qualifies as an AI System?

Before you can comply with the EU AI Act, you need to determine whether your technology qualifies as an AI system under the law. The definition is intentionally broad — and it includes many tools that organizations may not typically consider "AI."

What the regulation says about AI systems

The EU AI Act defines an AI system as:

“A machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, infer how to generate outputs such as predictions, content, recommendations, or decisions.”

In simple terms: if your system takes input data and produces outputs that affect digital or physical environments, it likely qualifies — whether it’s a chatbot, a credit scoring engine, or a fraud detection model.

Common examples of AI systems under the Act


AI systems covered by the Act include (but aren’t limited to):

  • Machine learning models – including logistic regression, decision trees, support vector machines (SVM), and deep learning (CNNs, RNNs)
  • Natural language processing (NLP) – such as chatbots, virtual assistants, sentiment analysis, or GPT-based systems
  • Computer vision – including facial recognition, object detection, and image classification
  • Generative AI – tools that generate text, images, audio, or video (e.g., GPT, Stable Diffusion)
  • Reinforcement learning systems – often used in automation, robotics, and adaptive systems
  • AI scoring tools – for creditworthiness, hiring, insurance, or customer segmentation

Even simple rule-based algorithms can fall under the AI Act if they automate decisions in sensitive or regulated domains — such as employment, finance, or healthcare.

Understanding the AI risk classification system

The EU AI Act doesn’t regulate all AI equally. Instead, it introduces a risk-based framework that categorizes AI systems according to their potential impact on people, society, and fundamental rights.

This classification directly determines what legal obligations your company must meet — whether you're building, deploying, or using AI.

AI systems with minimal risk

At the lowest level are minimal-risk AI systems, such as spam filters, invoice scanners, or internal workflow automation tools. These pose little threat and are not subject to legal obligations under the Act. Still, developers are encouraged to follow voluntary best practices for ethical use.

AI systems with limited risk

Limited-risk systems typically interact with users but don’t carry serious consequences. Examples include chatbots, virtual assistants, or content generators.

These are allowed under the Act, but they must meet transparency requirements, including:

  • Clearly informing users they’re interacting with AI
  • Labeling AI-generated content (e.g., synthetic audio, video, or images)

AI systems with high risk

This is where the most stringent rules apply. High-risk systems are those that influence important or life-altering decisions, including:

  • Credit scoring or loan approval
  • Recruitment or employee evaluation
  • Biometric identification (like facial recognition)
  • AI used in healthcare, education, or critical infrastructure

If your system is classified as high-risk, you must comply with a full set of requirements, including:

  • Comprehensive risk and impact assessments
  • Use of high-quality, bias-mitigated training data
  • Detailed technical documentation (Annex IV)
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity safeguards
  • Ongoing post-market monitoring and reporting
  • Registration in the EU’s public AI system database

AI systems with unacceptable risk

Some AI use cases are considered too dangerous to be allowed at all. These systems are prohibited outright, including those that:

  • Use real-time biometric surveillance in public spaces
  • Assign social scores to individuals (public or private sector)
  • Predict criminal behavior based on profiling
  • Exploit vulnerable populations (e.g., children, elderly)
  • Manipulate users with subliminal techniques

If your AI project falls into this category, it must be stopped or redesigned. These bans reflect the EU’s position that AI should enhance human rights — not undermine them.

What’s banned: AI practices that cross the line

While the EU AI Act supports innovation, it draws a firm line when it comes to certain applications of AI. Some systems are considered too dangerous, too manipulative, or too invasive — and are prohibited entirely within the European Union.

These unacceptable-risk AI systems are not subject to compliance procedures or conditional approval. They are simply not allowed.

At the core of these bans is a fundamental principle: AI must not violate human rights or dignity. If a system manipulates people without their awareness, enables discrimination, or undermines basic freedoms, it has no place in the European AI ecosystem.

Prohibited use cases under the AI Act

The regulation explicitly bans the following practices:

  • Real-time remote biometric identification in public spaces
    Systems like facial recognition that identify individuals without their consent.
  • Social scoring by public or private entities
    Assigning personal scores based on behavior, lifestyle, or personal characteristics.
  • Predictive systems that assess the likelihood of criminal activity
    Profiling individuals to forecast potential unlawful behavior.
  • AI that uses subliminal techniques to manipulate users
    Influencing behavior in ways users cannot consciously detect or resist.
  • Exploitation of vulnerable individuals
    Targeting people based on age, disability, economic status, or other vulnerabilities in order to influence decisions or limit access.

These practices are considered incompatible with EU values. The goal of the Act is to ensure that AI serves the public interest — not controls or harms it.

If your organization is developing or using an AI system that resembles any of these prohibited practices — even indirectly — it is critical to stop and reassess. These are not edge cases for legal debate: they are bright-line bans, and there is no legal pathway to deploy such systems in the EU market.

The road to compliance: EU timeline and Poland’s draft law

The EU AI Act officially entered into force on August 1, 2024, making it the world’s first binding legal framework for artificial intelligence. But while the law is now active, its obligations roll out in phases, giving organizations time to prepare.

This staggered timeline is designed to let both AI providers and users update their systems, finalize documentation, and implement risk and security controls — all before enforcement measures take full effect.

Key EU compliance deadlines

  • August 1, 2024 – The AI Act officially enters into force (legal status begins)
  • February 2, 2025 – Use of prohibited AI systems becomes illegal across the EU
  • August 2, 2025 – Registry for high-risk AI systems opens and registration becomes mandatory
  • August 2, 2026 – Core requirements apply to high-risk systems (documentation, monitoring, etc.)
  • August 2, 2027 – General-purpose AI models placed on the market before August 2, 2025 must comply with all applicable obligations

Each date marks a critical legal threshold — especially for high-risk systems. By mid-2026, companies must have the necessary safeguards in place, including transparency, human oversight, data governance, and cybersecurity measures. Missing these requirements could result in fines, restrictions, or product withdrawals.

National enforcement: Poland’s draft AI law

To support the EU-wide framework, member states are developing their own national laws. In Poland, the Ministry of Digital Affairs published a draft version of the national AI Act on October 16, 2025.

The draft outlines the creation of a new domestic authority responsible for overseeing AI usage in Poland. This supervisory body will be authorized to:

  • Audit companies developing or using AI
  • Interpret legal requirements and issue practical guidance
  • Impose sanctions for noncompliance
  • Handle user complaints, particularly those involving harm or fundamental rights

The proposal also includes conformity assessments for high-risk systems and defines formal procedures for appeals when individuals are harmed or unfairly treated by AI systems.

This marks a shift from soft guidance to structured enforcement — not just at the EU level, but within national jurisdictions as well.

Who’s responsible? EU AI Act Obligations by role

One of the most important aspects of the EU AI Act is how broadly it applies. It doesn’t just regulate developers — it covers any organization involved in the lifecycle of an AI system: from building and selling to using and importing.

Even if your company didn’t create the AI tool, you may still be legally accountable for how it’s used, how it performs, and whether it complies with the law.

Here’s how responsibilities break down by role:

Providers (you build or develop AI)

If your organization designs, trains, or sells an AI system, you're a provider. You must:

  • Conduct risk assessments and maintain a risk management system
  • Document your system thoroughly (Annex IV requirements)
  • Ensure training data is accurate, fair, and up to date
  • Design human oversight and transparency mechanisms
  • Report serious incidents within 15 days
  • Register high-risk systems in the EU’s official database
  • Apply CE marking and issue a declaration of conformity
  • Keep records for 10 years after market placement

Many AI startups and vendors will fall into this category — and face some of the most demanding requirements.

Deployers (you use AI in your business)

If your company uses a high-risk AI system — for hiring, credit scoring, fraud detection, or other sensitive functions — you’re a deployer, and you’re still on the hook for compliance.

You must:

  • Follow the provider’s usage instructions
  • Ensure someone qualified is overseeing the system
  • Monitor performance and accuracy regularly
  • Report serious incidents and pause use if needed
  • Store system logs for at least 6 months
  • Inform employees when AI is used in evaluations
  • Conduct a Data Protection Impact Assessment (DPIA), when required

Even if the AI system comes from a third party, you’re still responsible for its impact on people.

Distributors and importers

If you're distributing or importing AI systems into the EU market, your role is to:

  • Verify CE markings and technical documentation
  • Ensure the product is legally compliant
  • Report any known compliance failures

Regulators and market authorities

National regulators — such as the proposed AI supervisory body in Poland — will lead enforcement. Their tasks include:

  • Auditing companies
  • Investigating complaints
  • Issuing penalties
  • Reporting incidents to the European Commission

Building high-risk AI: the compliance lifecycle

Meeting the EU AI Act requirements isn’t something you do at the end of development. For high-risk systems, compliance must be built into the entire lifecycle — from the first idea through deployment and beyond.

Think of it as a continuous process involving product, legal, data, risk, and engineering teams. Missing a step could delay your go-to-market, trigger legal issues, or result in the system being pulled entirely.

Here’s how the lifecycle breaks down:

Phase 1: concept and risk classification

Start by evaluating the intended use case. Under the AI Act (see Chapter 3), all systems must be categorized as prohibited, high-risk, limited-risk, or minimal-risk.

If the system is prohibited — such as those involving social scoring or subliminal manipulation — development must stop or be significantly redesigned.

If it's classified as high-risk, this phase triggers the full compliance track. At this point, you’ll need to:

  • Establish a Quality Management System (QMS)
  • Conduct internal risk and impact assessments
  • Document the system’s purpose, scope, and intended outcomes
  • Identify any vulnerable users or potential societal impacts

Only once risks are understood and mitigation plans are in place can the project move into development.
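To make this triage tangible, here is a minimal Python sketch of an internal classification helper. The practice and domain lists and the `classify_use_case` function are illustrative assumptions for a concept-phase checklist, not the Act's legal test; Articles 5–6 and Annex III remain the authoritative reference.

```python
# Illustrative triage helper for the concept phase. The tier names mirror the
# AI Act's categories, but the mapping rules below are simplified assumptions;
# the legal classification must follow Articles 5-6 and Annex III.

PROHIBITED_PRACTICES = {
    "social_scoring",
    "realtime_public_biometric_id",
    "subliminal_manipulation",
    "predictive_policing_profiling",
    "exploiting_vulnerable_groups",
}

HIGH_RISK_DOMAINS = {
    "credit_scoring",
    "recruitment",
    "biometric_identification",
    "healthcare",
    "education",
    "critical_infrastructure",
}

def classify_use_case(practices: set[str], domain: str, interacts_with_users: bool) -> str:
    """Return a draft risk tier for an AI use case (sketch only)."""
    if practices & PROHIBITED_PRACTICES:
        return "prohibited"     # development must stop or be redesigned
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"      # triggers the full compliance track
    if interacts_with_users:
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"       # voluntary best practices

# Example: a loan approval model with no prohibited practices
print(classify_use_case(set(), "credit_scoring", interacts_with_users=False))  # high-risk
```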

Phase 2: development and documentation

This phase is where compliance becomes hands-on. You must design the system in line with the Act’s requirements for:

  • Data quality — clean, relevant, representative, and regularly updated
  • Bias mitigation — especially across protected characteristics
  • Human oversight — not just on paper, but designed into workflows
  • Explainability — at both system and individual decision levels
  • Technical documentation — in line with Annex IV

Documentation should cover the model’s architecture, inputs/outputs, training methods, evaluation metrics, and cybersecurity measures — and must be understandable by non-technical reviewers.
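One practical way to keep that documentation alive is to version it as a structured artifact next to the model code. The sketch below is a hedged illustration using a Python dataclass; the fields are an assumed subset and do not replace the full content Annex IV requires.

```python
from dataclasses import dataclass, asdict, field
import json

# Illustrative subset of technical-documentation fields. Annex IV defines the
# actual required content; this only shows how such a record can be versioned
# and reviewed alongside the model artifacts.
@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    model_architecture: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    human_oversight_measures: list[str]
    cybersecurity_measures: list[str]
    version: str = "1.0"
    known_limitations: list[str] = field(default_factory=list)

doc = TechnicalDocumentation(
    system_name="Credit scoring model",
    intended_purpose="Assess consumer creditworthiness for loan decisions",
    model_architecture="Gradient-boosted decision trees",
    training_data_sources=["internal_loan_history_2018_2024"],
    evaluation_metrics={"auc": 0.87, "false_positive_rate": 0.06},
    human_oversight_measures=["manual review of all rejections"],
    cybersecurity_measures=["encrypted feature store", "access logging"],
)

# Serialize so the record can be stored with the model and shown to reviewers.
print(json.dumps(asdict(doc), indent=2))
```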

Phase 3: validation and approval

Before your system can be placed on the market, it must pass a validation phase. This includes:

  • Testing for accuracy, robustness, and resilience
  • Reviewing compliance with GDPR and privacy laws
  • Ensuring risk acceptability and clear auditability
  • Validating human-in-the-loop mechanisms

If successful, you can proceed to the conformity assessment, issue the EU Declaration of Conformity, and apply CE marking — a legal requirement for entering the EU market.

Phase 4: deployment and post-market monitoring

After deployment, high-risk AI systems remain under scrutiny.

You’re required to continuously monitor for:

  • Performance drift (accuracy, fairness, relevance) – a drift-check sketch follows these lists
  • Cybersecurity vulnerabilities and attempted attacks
  • Unexpected decisions or ethical concerns
  • Serious incidents or user complaints

You must also:

  • Log key system activities
  • Maintain audit trails
  • Report any serious incidents to national regulators within 15 days
  • Update documentation and risk assessments as needed
  • Be ready to suspend or withdraw the system if risks become unacceptable
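Performance drift, the first item in the monitoring list above, can be quantified with simple statistics on production data. The sketch below computes the population stability index (PSI), a common drift measure; the 0.2 alert threshold is a widely used rule of thumb, not a value set by the Act.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the score distribution at validation time vs. in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.5, 0.1, 10_000)      # scores seen at validation time
production_scores = rng.normal(0.58, 0.12, 2_000)   # scores observed after deployment

psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # common rule-of-thumb alert threshold
    print(f"PSI={psi:.3f}: significant drift, trigger review and possibly retraining")
```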

See diagram below for a full walkthrough of each stage.

Process Diagram, Part 1: Implementation of governance use cases, including a classical AI model (credit scoring)

Securing AI: cybersecurity, attack types, and prevention strategies

The EU AI Act places cybersecurity front and center, especially for high-risk AI systems. It’s not enough for an AI model to be smart or ethical — it must also be resilient.

What the AI Act requires

Under Article 15, high-risk AI systems must be designed to meet specific standards for:

  • Accuracy — They must perform reliably and within acceptable error margins.
  • Robustness — They should remain stable even when encountering incomplete, noisy, or unexpected inputs.
  • Cybersecurity — They must be protected against unauthorized access, tampering, and external attacks.

These requirements apply across the entire lifecycle of the system — from development to deployment, through regular updates, and even during model retirement.

Common AI attack types

While the Act doesn’t list every possible threat, it expects providers to defend against a broad range of known attack vectors. The most critical ones include:

  • Data poisoning
    Attackers inject manipulated or false data into the training set, corrupting the model’s behavior.
    Example: Falsifying credit histories to mislead a loan approval model.
  • Privacy attacks
    Threat actors attempt to extract sensitive information from the model itself. This includes:
    • Model inversion – Reconstructing personal data from model outputs
    • Membership inference – Determining if a specific person’s data was used to train the model
  • Evasion attacks (a.k.a. adversarial attacks)
    Inputs are subtly altered to fool the model into making incorrect classifications (a minimal sketch follows this list).
    Example: Slight changes to an image allow a facial recognition tool to misidentify a person.
  • Malicious prompting (in generative AI)
    Attackers craft inputs designed to bypass safety filters or prompt harmful responses.
    Example: Coaxing a chatbot to reveal private company data or produce biased content.
  • Data abuse attacks
    These involve feeding incorrect — but plausible — data into the system during runtime, often from compromised third-party sources.
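To make the evasion category concrete, here is a minimal sketch of a one-step fast gradient sign method (FGSM) perturbation against a toy PyTorch classifier. The random linear model and epsilon value are assumptions for illustration; real adversarial testing would target your actual model and data.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """One-step FGSM: nudge each input in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy setup: a random linear classifier over 4 features (illustration only).
model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

x_adv = fgsm_perturb(model, x, y)
flipped = (model(x).argmax(dim=1) != model(x_adv).argmax(dim=1)).sum().item()
print(f"{flipped}/8 predictions changed after a small perturbation")
```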

How to defend your AI system

The EU AI Act promotes a “security by design” approach. That means security measures must be built in from the start — not added later.

While it doesn’t mandate specific tools, your defenses should include:

  • Anomaly detection to identify abnormal behaviors or inputs (a minimal sketch follows this list)
  • Access controls and encryption across the full AI pipeline
  • Secure update processes that don’t introduce new vulnerabilities
  • Audit trails to log key actions and decisions for accountability
  • Adversarial testing to evaluate how your system performs under stress or manipulation attempts
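As a small illustration of the anomaly detection item above, the sketch below flags incoming feature vectors that sit far outside the training distribution using per-feature z-scores. Only NumPy is used; the 4-standard-deviation threshold is an assumption to tune per system.

```python
import numpy as np

class InputAnomalyGuard:
    """Flag runtime inputs that fall far outside the training distribution (sketch)."""

    def __init__(self, training_features: np.ndarray, z_threshold: float = 4.0):
        self.mean = training_features.mean(axis=0)
        self.std = training_features.std(axis=0) + 1e-9
        self.z_threshold = z_threshold

    def is_anomalous(self, x: np.ndarray) -> bool:
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(z_scores.max() > self.z_threshold)

rng = np.random.default_rng(1)
train = rng.normal(0, 1, size=(5_000, 6))
guard = InputAnomalyGuard(train)

print(guard.is_anomalous(rng.normal(0, 1, size=6)))   # typical input -> False
print(guard.is_anomalous(np.full(6, 25.0)))           # extreme input -> True (route to human review)
```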

The key is proactive resilience. Regulators won’t wait for something to go wrong — they’ll want to see that your team anticipated threats and planned accordingly.

If your AI system is high-risk, you’re expected to design it as if it will be attacked — because at some point, it probably will be.

Fairness and bias: how to meet transparency and equality standards

Fairness is no longer a “nice to have” in AI development — it’s a legal requirement. Under the EU AI Act, particularly for high-risk systems, fairness is about safeguarding fundamental rights, ensuring equal treatment, and preventing discrimination in automated decisions.

This applies to any system that affects real-world outcomes in areas like recruitment, finance, healthcare, or public services — domains where algorithmic bias can cause serious harm.

Why fairness matters under the AI Act

High-risk AI systems must be designed to:

  • Use high-quality, representative, and unbiased training data
  • Include built-in mechanisms to detect and mitigate discrimination
  • Be explainable, so decisions can be understood and challenged
  • Treat individuals and groups equitably, regardless of age, gender, ethnicity, or other protected traits

This isn't just about good intentions — it's about traceable accountability. Every phase, from data collection to deployment, must be auditable for fairness.

Common sources of bias in AI

Even responsible teams can unintentionally bake bias into their systems. The most frequent sources include:

  • Historical bias – When the training data reflects real-world discrimination
  • Representation bias – When certain groups are underrepresented in the dataset
  • Label bias – When human-labeled data reflects subjective or skewed judgments
  • Feature bias – When inputs act as proxies for protected characteristics (e.g., ZIP codes standing in for race or income)

The result? A hiring tool that favors male candidates. A loan model that penalizes applicants from specific neighborhoods. A medical system that performs poorly on darker skin tones.

How to measure and monitor fairness

To stay compliant — and avoid reputational or legal fallout — teams need to quantify fairness throughout the model lifecycle. Key metrics include:

  • Statistical parity difference – Are outcomes evenly distributed across groups?
  • Equal opportunity – Do all groups have equal true positive rates?
  • Disparate impact ratio – Are selection rates skewed between groups?
  • Error rate gaps – Are false positives/negatives disproportionately affecting certain users?

These indicators should be reviewed during:

  • Model design and testing
  • Validation and go/no-go approvals
  • Continuous monitoring after deployment

Regular fairness audits should also include individual-level tests — checking that similar people get similar outcomes.
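Two of the metrics above, statistical parity difference and the disparate impact ratio, reduce to simple arithmetic on per-group selection rates. The NumPy sketch below computes both for a toy set of loan decisions; the 80% disparate-impact alert threshold is a common convention, not a figure from the Act.

```python
import numpy as np

def selection_rate(predictions: np.ndarray, group_mask: np.ndarray) -> float:
    """Share of positive decisions within one group."""
    return float(predictions[group_mask].mean())

def fairness_report(predictions: np.ndarray, groups: np.ndarray, group_a, group_b) -> dict:
    rate_a = selection_rate(predictions, groups == group_a)
    rate_b = selection_rate(predictions, groups == group_b)
    return {
        "statistical_parity_difference": rate_a - rate_b,
        "disparate_impact_ratio": rate_b / rate_a if rate_a > 0 else float("nan"),
    }

# Toy example: 1 = loan approved, groups "A" and "B"
predictions = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

report = fairness_report(predictions, groups, "A", "B")
print(report)
if report["disparate_impact_ratio"] < 0.8:  # common 80% rule of thumb
    print("Potential disparate impact: investigate features and training data")
```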

Making fairness explainable

Under the Act, AI decisions must be explainable — especially in sensitive or regulated domains. That doesn’t mean open-sourcing your model, but it does mean providing understandable reasoning.

Useful techniques include:

  • SHAP or LIME – For local/global model behavior explanations
  • Contrastive explanations – To show why one decision was made over another
  • Plain-language summaries – To communicate logic in a way non-experts can understand

If someone is denied a loan or a job by an AI system, they have a legal right to understand why — and challenge that decision if needed.
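The sketch below shows the idea behind such explanations on a toy linear credit model: each feature's contribution to one applicant's outcome is the model coefficient times how far that applicant deviates from the average. SHAP and LIME generalize this kind of attribution to non-linear models; the scikit-learn model and features here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income_k_eur, debt_ratio, years_employed] -> loan approved?
rng = np.random.default_rng(2)
X = rng.normal([50, 0.4, 5], [15, 0.2, 3], size=(500, 3))
y = (X[:, 0] - 60 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, 500) > 30).astype(int)

model = LogisticRegression().fit(X, y)
feature_names = ["income_k_eur", "debt_ratio", "years_employed"]

applicant = np.array([35.0, 0.7, 1.0])
# Per-feature contribution relative to the average applicant (linear attribution).
contributions = model.coef_[0] * (applicant - X.mean(axis=0))

for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name:>15}: {value:+.2f}")
# The most negative contributions explain, in plain terms, why this applicant
# scores below average and what they could challenge or correct.
```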

Fairness isn’t just an ethical or legal checkbox — it’s about building systems people can trust, question, and hold accountable.

When bias breaks the pipeline: what happens during rejection and revalidation

Even well-designed models can fail fairness tests. The EU AI Act not only expects you to detect bias — it requires action when it’s found.

In this example, the credit scoring model was rejected after initial testing revealed gender-based disparate impact. Developers made changes to training data, adjusted features, and re-ran fairness metrics before passing revalidation.

This loop of bias detection → model iteration → reapproval is a critical part of AI governance.

Process Diagram, Part 2: Rejection of the model due to detected bias, implementation of changes in the model, and revalidation

AI Governance in Practice: From Policy to Execution

Complying with the EU AI Act isn’t just about following technical checklists — it’s about embedding responsible AI practices into how your organization operates. Governance is the system that ensures those practices stick.

Instead of treating compliance as a one-off task, businesses must approach it as a continuous discipline. That means defining ownership, aligning teams, and creating oversight mechanisms that span legal, technical, and operational domains.

Diagram: Monitoring elements for predictive model management

What AI governance really means

AI governance turns regulation into repeatable practice. It includes:

  • Internal policies that align with legal requirements
  • Clearly assigned roles and responsibilities across departments
  • Traceable decision-making throughout the AI lifecycle
  • Ongoing audits to ensure systems remain compliant and ethical after launch

Governance bridges the gap between regulation and implementation. It ensures your AI projects don’t go live without risk assessments, bias checks, or human oversight — and that issues get caught early rather than after harm is done.

Who’s responsible for documentation? a practical breakdown

Governance doesn’t just mean good intentions — it means traceability. Each role involved in the AI lifecycle is responsible for different parts of documentation, which must meet legal and technical standards.

Here’s how responsibilities typically break down across functions:

Process Diagram, Part 3: Deployment of the model to production – model governance documentation and technical documentation

Incident Reporting and Post-Market Monitoring

Deploying a high-risk AI system doesn’t end your compliance obligations. Under the EU AI Act, continuous monitoring is required to track how the system performs, how it may fail, and how it affects people in the real world.

The goal: detect risks early — before they cause harm.

What qualifies as a serious incident?

The AI Act defines a serious incident as any malfunction or failure that results in:

  • Death or serious harm to a person’s health
  • Irreversible disruption of critical infrastructure
  • Violation of EU fundamental rights
  • Serious damage to property or the environment

Crucially, reporting obligations apply even if the harm isn’t confirmed — a “sufficiently high probability” that the AI system contributed to the incident is enough to trigger action.

The 15-day reporting rule (Article 73)

Once a serious incident is identified, the provider of the AI system must:

  • Notify the relevant market surveillance authority in the member state where the incident occurred within 15 calendar days
  • Inform any importers or distributors, if applicable
  • Submit a report with enough technical detail to support the investigation

Failure to report on time may result in fines, recalls, or a ban on the product.

If a user of the system (e.g. a bank using a third-party credit model) becomes aware of such an incident, they must inform the provider immediately.
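Operationally, the reporting window is easy to track once each incident record carries its detection date. The sketch below computes a deadline with Python's standard library; the field names are assumptions, and it models only the general 15-day window, not the shorter deadlines the Act sets for certain incident types.

```python
from dataclasses import dataclass
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # general Article 73 window; some incident types are shorter

@dataclass
class SeriousIncident:
    description: str
    detected_on: date

    @property
    def report_deadline(self) -> date:
        return self.detected_on + timedelta(days=REPORTING_WINDOW_DAYS)

    def is_overdue(self, today: date) -> bool:
        return today > self.report_deadline

incident = SeriousIncident(
    description="Credit model systematically rejected applicants after a faulty data update",
    detected_on=date(2026, 3, 2),
)
print(incident.report_deadline)                # 2026-03-17
print(incident.is_overdue(date(2026, 3, 20)))  # True: escalate immediately
```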

Logging and monitoring obligations

To meet transparency and traceability standards, high-risk AI systems must:

  • Automatically log key system events and decisions
  • Securely store logs for at least six months
  • Maintain a clear audit trail for internal and external review
  • Continuously monitor for issues like:
    • Accuracy drift
    • Bias or discrimination
    • Unexpected outputs
    • Cybersecurity threats

These requirements help ensure the system remains safe and compliant over time — not just at launch.
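A minimal way to meet the logging expectation is an append-only, structured decision log. The sketch below writes JSON lines using only the standard library; the field names, hashing choice, and file-based storage are assumptions each team would adapt to its own stack and retention policy.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str, features: dict, decision: str, confidence: float) -> None:
    """Append one structured, timestamped decision record (audit-trail sketch)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash inputs so the log is traceable without storing raw personal data.
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
        "confidence": confidence,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl",
    model_version="credit-scoring-2.3.1",
    features={"income": 42_000, "debt_ratio": 0.35},
    decision="rejected",
    confidence=0.91,
)
```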

When suspension or recall is necessary

If post-market monitoring reveals that the AI system:

  • No longer performs accurately
  • Introduces new risks
  • Violates legal or ethical standards
  • Has been compromised (e.g., through adversarial attacks)

…the provider or deployer must suspend or withdraw it from the market until the issues are resolved.

In cases of systemic non-compliance — such as repeated failures to report incidents or maintain documentation — authorities may enforce a mandatory recall, even if no direct harm has occurred.

Before your AI system goes live, it needs more than just clean code. You must be able to demonstrate that it meets the EU AI Act’s legal, ethical, and technical requirements — and that the right processes, documentation, and oversight are in place.

This final readiness checklist is designed to help cross-functional teams — including product, compliance, data science, security, and legal — verify deployment readiness and avoid last-minute regulatory surprises.

Deployment Readiness: Yes/No Checklist by Lifecycle Phase

Use Case Definition
  • Risk classification performed (prohibited, high-risk, etc.) – Yes / No
  • Use case complies with Articles 5–6 (e.g., no banned practices) – Yes / No

Risk Assessment
  • Quality Management System (QMS) in place (for high-risk systems) – Yes / No
  • Ethical, legal, and data risks evaluated and documented – Yes / No

Model Development
  • Data quality and bias mitigation checks complete (Art. 10) – Yes / No
  • Human oversight mechanisms designed (Art. 14) – Yes / No
  • Model explainability tested at global and local levels – Yes / No

Validation
  • Accuracy, robustness, and cybersecurity tested (Arts. 15–16) – Yes / No
  • Technical documentation complete and Annex IV-compliant – Yes / No

Approval
  • Declaration of Conformity issued (Art. 47) – Yes / No
  • CE marking applied (Art. 48) – Yes / No

Deployment
  • Post-market monitoring plan in place (Art. 72) – Yes / No
  • Logging, audit trails, and incident detection configured – Yes / No
  • Incident reporting process defined and ready (Art. 73) – Yes / No

Staff Training
  • Key roles trained on monitoring and human intervention procedures – Yes / No

Role-Based Sign-Off: Final Responsibility Tracker

  • Product Owner – Confirms use case alignment and risk acceptance
  • Compliance Officer – Approves QMS, documentation, and legal conformity
  • Data Science Lead – Validates model performance, fairness, and explainability
  • Security Lead – Verifies cybersecurity controls and incident response plan
  • AI Risk Analyst – Confirms classification, scoring logic, and mitigation plans
  • MLOps Engineer – Confirms deployment environment, monitoring, and rollback readiness

The EU AI Act is a wake-up call for organizations building or using artificial intelligence in high-impact domains.

Compliance means documentation, audits, and oversight. But it’s also an opportunity to differentiate your AI product — by making it safer, more trustworthy, and future-ready.

Companies that treat the AI Act as a strategic framework (not a checklist) will move faster, build with more confidence, and avoid costly surprises down the line.

They’ll also be ready when other markets follow suit. Canada, the U.S., Brazil, and others are developing their own AI laws. And the core principles — transparency, fairness, safety, accountability — are here to stay.
