Preparing for the EU AI Act: Key Requirements Every Enterprise Should Know

EU AI Act: A Defining Chapter in the Governance of Intelligence

Artificial intelligence (AI) has outgrown its experimental stage. It now decides who qualifies for a loan, evaluates medical scans, predicts traffic congestion, and influences corporate strategy. With such reach, questions of fairness, transparency, and accountability can no longer be deferred.

The European Union’s Artificial Intelligence Act responds to this urgency. Published in the Official Journal of the EU in July 2024 and effective from 1 August 2024, it stands as the first law in the world to govern AI across its full lifecycle, i.e., from design and data selection to deployment and oversight. Its scope extends beyond Europe’s borders, binding any organization whose AI systems or outcomes touch European users.

In the words of Belgium’s Minister for Digitization, Mathieu Michel, the Act is “a cornerstone for trust and innovation in the age of AI.” It defines how technology should earn its place in society through responsibility, documentation, and human control.

By framing AI through the lens of risk rather than potential, the EU has set a global precedent: innovation is welcome, but it must be explainable, lawful, and safe.

What Is the EU AI Act?

The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive legal framework created to define how artificial intelligence should be developed, deployed, and governed. Adopted in 2024, it lays down uniform rules for all Member States of the European Union, creating a single, well-defined approach to responsible AI adoption.

The regulation establishes clear expectations for the entities that create or use AI systems. Each system is assessed on its potential to affect safety, rights, and social well-being. Based on this assessment, it is assigned to a specific risk category, and corresponding obligations are defined for providers, deployers, and distributors.

The Act applies directly across the EU without the need for separate national laws. Its jurisdiction extends to any organization whose AI systems or their outcomes reach European users, regardless of where the technology is developed or operated. This extraterritorial scope ensures that AI systems entering the EU market meet the same standards of transparency and accountability.

The European Commission describes this initiative as an effort to build “trustworthy AI.” The framework promotes data quality, explainability, and human oversight as essential attributes of responsible design. It introduces a phased transition period, with most obligations applying two years after entry into force, allowing companies to adapt their systems and internal processes before full enforcement.

Why Europe Decided to Regulate AI

Artificial intelligence influences how people access healthcare, financial services, education, and even public infrastructure. Its expansion has created immense opportunity, but also a need for clarity on how decisions made by algorithms affect safety, privacy, and rights.

European policymakers recognised that the growth of AI required dedicated governance, for which the EU Artificial Intelligence Act was formulated.

The Core Motivations Behind the Regulation

  • Protecting fundamental rights: AI systems can influence employment, credit approvals, or social services. The Act ensures that these decisions respect human dignity and equality.

  • Addressing opacity in decision-making: Many AI models operate without clear explanations of how outcomes are produced. The regulation introduces requirements for documentation, traceability, and explainability. 
  • Ensuring safety and reliability: Systems deployed in healthcare, transport, or critical infrastructure must meet high safety standards. The Act integrates AI oversight with existing EU product-safety laws. 
  • Promoting trustworthy innovation: Legal clarity encourages responsible experimentation. Businesses gain a predictable environment where innovation aligns with established ethical and legal principles. 
  • Building public confidence in AI: Consistent governance fosters public trust. Citizens and enterprises can engage with AI systems knowing that they operate under measurable accountability.

How the EU AI Act Classifies AI Systems

The EU Artificial Intelligence Act operates through a risk-based model. Every AI system is placed into a defined category that reflects its potential impact on safety, rights, and social well-being. This approach ensures that governance measures scale with the level of influence the technology holds.

Four primary risk levels form the foundation of this framework.

1. Unacceptable Risk

AI systems that threaten fundamental rights or enable manipulative or discriminatory practices. 

  • Social scoring by governments or private entities 
  • Emotion recognition in schools or workplaces 
  • Predictive policing or profiling of individuals 
  • Untargeted facial-image scraping to build biometric databases

2. High Risk

Applications that affect people’s safety, livelihoods, or access to essential services. 

  • AI in healthcare, education, and employment screening 
  • Credit scoring and loan eligibility systems 
  • Critical infrastructure and transportation management
  • Law enforcement, border control, and judicial decision support

3. Limited Risk

Systems that require transparency but pose low safety concerns. 

  • Chatbots and virtual assistants that must disclose AI identity 
  • Generative models producing synthetic or deepfake content 
  • Recommendation engines influencing consumer choices

4. Minimal Risk

Everyday or entertainment-related AI with negligible impact on rights or safety. 

  • Spam filters, video-game AI, and productivity tools 

Each category defines how much oversight, documentation, and testing are expected before deployment. 
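As a rough sketch of how this tiering might be encoded in an internal compliance tool, the snippet below maps example use cases to the four categories. The tier names follow the Act, but the use-case labels and lookup function are illustrative assumptions, not part of the regulation:

```python
# Illustrative only: example use-case labels mapped to the Act's four risk tiers.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "workplace_emotion_recognition": "unacceptable",
    "credit_scoring": "high",
    "employment_screening": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, or flag it for legal review."""
    return RISK_TIERS.get(use_case, "unclassified: requires legal review")
```

In practice the classification itself is a legal judgment; a lookup like this only records the outcome of that judgment so downstream controls can act on it.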

Who Holds Responsibility Under the EU AI Act

The EU Artificial Intelligence Act assigns responsibilities to everyone involved in the lifecycle of an AI system. Each role (developer, deployer, importer, or distributor) carries defined duties to ensure compliance, safety, and accountability.

For AI Providers

Entities that design or develop AI systems before placing them on the market.

  • Establish a risk-management process across the system’s lifecycle
  • Maintain detailed technical documentation and usage instructions 
  • Ensure training and validation data meet quality and relevance standards 
  • Implement continuous monitoring and logging mechanisms 
  • Undergo conformity assessments and obtain CE marking

For AI Deployers

Organizations or individuals who integrate or use AI within operations. 

  • Follow all technical and operational instructions from the provider 
  • Monitor performance and report serious incidents or malfunctions 
  • Conduct fundamental-rights impact assessments in sensitive sectors 
  • Assign qualified personnel for system supervision

For Importers and Distributors

Parties involved in introducing AI systems to the EU market. 

  • Verify that systems carry valid conformity assessments 
  • Maintain documentation for inspection by regulators 
  • Suspend or withdraw non-compliant systems from circulation

EU AI Act Rules for General-Purpose or Foundation Models

Some AI systems are built not for one task but for many. These are known as general-purpose or foundation models: the large models that power chatbots, creative tools, enterprise analytics, and generative platforms. The EU AI Act recognises their scale and influence and establishes a dedicated layer of rules for their governance.

Key expectations for providers include: 

  • Preparing clear technical documentation that explains model design, purpose, and capabilities. 
  • Publishing a summary of training data while respecting intellectual-property and privacy laws.
  • Ensuring that any AI-generated material can be identified as synthetic or machine-produced. 
  • Meeting cybersecurity and risk-management standards proportionate to the model’s reach.
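The third expectation, making AI-generated material identifiable, can be met in many ways. The sketch below shows one hypothetical approach: wrapping generated output in a machine-readable disclosure record. The schema and field names are assumptions for illustration; the Act mandates identifiability, not any particular format.

```python
import json

def label_as_synthetic(content: str, model_name: str) -> str:
    """Wrap generated content with a machine-readable disclosure record.

    The field names here are illustrative; the Act requires that
    AI-generated material be identifiable, not this exact schema.
    """
    record = {
        "content": content,
        "provenance": {
            "synthetic": True,        # discloses machine generation
            "generator": model_name,  # which model produced the content
        },
    }
    return json.dumps(record)
```

Watermarking and emerging content-provenance standards are alternative routes to the same goal; the key design point is that the disclosure travels with the content rather than living in a separate log.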

Foundation models deemed to pose systemic risk, typically those trained with exceptionally high computing resources or deployed widely across markets, are subject to deeper oversight. Their providers must conduct model evaluations, adversarial testing, and incident reporting.

Free and open-source models receive a lighter regime focused mainly on transparency and copyright compliance, acknowledging their role in research and innovation. 

By regulating these models separately, the EU sets a global benchmark for accountability in large-scale AI development. It also signals a shift toward measurable governance where technical documentation, transparency, and continuous review become as vital as innovation itself.

Governance, Enforcement, and Penalties

The EU Artificial Intelligence Act introduces an oversight network that joins national authorities, expert panels, and a central European body. Together, they form a system designed to keep AI development transparent and accountable across the Union.

At the centre of this framework is the AI Office, established within the European Commission. It supervises general-purpose models, coordinates enforcement among Member States, and issues implementation guidance.

Supporting this office are several specialised bodies:

  • European AI Board – a forum for national regulators that promotes consistent interpretation of the law. 
  • Scientific Panel – independent experts who assist in risk evaluation and technical standard setting. 
  • Advisory Forum – representatives from industry, academia, and civil society providing regular feedback.

Each Member State designates its own authority to investigate complaints, inspect documentation, and order the withdrawal of systems that fail to comply.

Enforcement Model

The regulation follows a cooperative enforcement model. 

  • National authorities conduct local supervision and reporting. 
  • The AI Office manages cross-border coordination and systemic-risk cases. 
  • The Commission ensures uniform application through delegated acts and periodic reviews. 

Penalties 

Non-compliance attracts significant financial consequences aligned with the gravity of the violation. 

  • Prohibited AI practices: up to €35 million or 7 % of global turnover. 
  • Violations involving high-risk or foundation models: up to €15 million or 3 % of turnover. 
  • Supplying false or misleading information: up to €7.5 million or 1 % of turnover. 
  • Small and medium enterprises benefit from proportionate reductions in these limits.
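Because each cap pairs a fixed amount with a percentage of global turnover, the applicable ceiling is generally the higher of the two figures, while for SMEs the lower figure applies. A minimal sketch of that calculation, using the tiers listed above (the function itself is illustrative, not a legal determination):

```python
# Fine ceilings per tier: (fixed cap in euros, share of global annual turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_or_gpai":   (15_000_000, 0.03),
    "false_information":   (7_500_000,  0.01),
}

def fine_ceiling(tier: str, global_turnover: float, sme: bool = False) -> float:
    """Maximum possible fine: the higher of the two caps, or the lower for SMEs."""
    fixed, pct = TIERS[tier]
    caps = (fixed, pct * global_turnover)
    return min(caps) if sme else max(caps)
```

For a company with €1 billion in global turnover, a prohibited practice would expose it to up to €70 million (7% of turnover exceeds the €35 million floor); a small firm with €10 million in turnover supplying false information would face at most €100,000 under the SME rule.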

Timeline for Implementation

The EU Artificial Intelligence Act follows a phased rollout to give organizations time to adapt their systems, processes, and documentation. Each stage activates a new dimension of responsibility for AI providers and deployers.

Key milestones include:

  • 2 February 2025: Prohibitions on unacceptable-risk practices and introduction of AI literacy initiatives for public bodies.
  • 2 August 2025: Governance framework becomes operational; general-purpose model obligations take effect for all new deployments.
  • 2 August 2026: Most requirements for high-risk and limited-risk systems become enforceable, including documentation and human-oversight measures.
  • 2 August 2027: Final compliance deadline for high-risk systems that serve as safety components in regulated products.

Organizations are expected to prepare transition plans well in advance. Each phase allows time for internal audits, data-governance alignment, and technical assessments before full enforcement.
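The milestones above can be turned into a simple lookup that tells a compliance team which provisions are already in force on a given date. A minimal illustrative sketch (the dates come from the Act; the descriptions are abbreviated):

```python
from datetime import date

# Phased rollout milestones of the EU AI Act (descriptions abbreviated).
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk practices; AI literacy duties"),
    (date(2025, 8, 2), "Governance framework and general-purpose model obligations"),
    (date(2026, 8, 2), "Most high-risk and limited-risk system requirements"),
    (date(2027, 8, 2), "High-risk systems embedded in regulated products"),
]

def active_obligations(on: date) -> list[str]:
    """Return a description of every milestone already in effect on a date."""
    return [desc for d, desc in MILESTONES if d <= on]
```

A check like this belongs in transition planning, not production compliance logic: the Act's actual applicability to a given system depends on its risk tier and role, which this sketch does not model.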

Ready for the EU AI Act?

Strengthen your compliance posture with our Enterprise Risk Management Solution, built for visibility, accountability, and control.

Explore Now

What It Means for Enterprises

Enterprises must treat risk evaluation, model transparency, and human supervision as part of the design process rather than post-deployment tasks.

Key Shifts Enterprises Must Prepare For

  • System inventory and classification – maintaining a register of all AI applications and mapping them to risk tiers. 
  • Data-governance discipline – recording data lineage, training-set quality, and validation processes for every model. 
  • Documentation standards – creating technical files, usage guidelines, and traceable logs accessible to auditors. 
  • Human oversight and escalation flows – defining clear approval chains for interventions or overrides. 
  • Monitoring and incident management – tracking system performance and reporting anomalies or rights-related impacts.
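The first of these shifts, a system inventory mapped to risk tiers, might be modeled along the following lines. The record fields and helper are hypothetical, shown only to suggest a starting structure for an internal register:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI register; fields are illustrative."""
    name: str
    risk_tier: str          # e.g. "high", "limited", "minimal"
    owner: str              # accountable team or role
    oversight_contact: str  # person handling escalations and overrides
    documentation: list[str] = field(default_factory=list)  # links to technical files

register: list[AISystemRecord] = []

def high_risk_systems() -> list[AISystemRecord]:
    """Filter the register for systems needing the strictest controls."""
    return [r for r in register if r.risk_tier == "high"]
```

In a real deployment this register would live in a governed store with change history, so that auditors can see not just the current classification but when and why it changed.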

Aufait Technologies’ Perspective

Enterprises operating within the Microsoft ecosystem can use existing capabilities to prepare effectively:

  • Microsoft Purview for data classification, policy enforcement, and lineage tracking. 
  • SharePoint for storing technical documentation, conformity evidence, and audit records. 
  • Microsoft Power Automate for creating oversight workflows and automated incident reports. 
  • Microsoft Power BI for building dashboards that visualise AI performance metrics and compliance indicators. 
  • Azure Monitor and Log Analytics for capturing logs, risk events, and model behaviour insights.

Building a Responsible AI Economy

The EU Artificial Intelligence Act marks a defining stage in how societies approach the governance of intelligent systems. It transforms the abstract idea of ethical AI into an enforceable practice that reaches across industries and borders. 
By establishing clear rules for safety, transparency, and accountability, the regulation sets a foundation for trust between people and technology. It reminds enterprises that the success of AI depends as much on governance as on innovation.

The next phase of digital transformation will belong to organizations that design systems with evidence, oversight, and integrity built in from the start.
Aufait Technologies sees this shift as an opportunity for purposeful progress. Our work with Microsoft ecosystems helps enterprises operationalize the principles of the AI Act through data governance, audit readiness, and user-centric transparency. Each step toward compliance becomes a step toward a more reliable digital future.

👉 Contact us today to book a consultation with our Microsoft experts and blueprint your digital transformation.

📢 Follow us on LinkedIn for expert insights, technology adoption tips, and compliance best practices.

Disclaimer:

1. All the images belong to their respective owners. 

2. The blog does not constitute legal advice. The EU AI Act is a complex regulation, and obligations may vary depending on the specific AI system, its use case, and the jurisdiction involved. Readers should consult the official text of the AI Act, related EU guidelines, and professional legal counsel before making decisions or taking action.


Frequently Asked Questions

1. What are the goals of the EU AI Act, and how does it affect businesses in 2025?


The EU AI Act aims to ensure that AI systems are trustworthy and respect fundamental rights, while still encouraging innovation. It introduces a unified legal framework with risk‑based obligations, meaning businesses must assess whether their AI systems are banned, high‑risk, limited‑risk, or minimal‑risk and comply accordingly.


2. Who must comply with the EU AI Act? Does it apply to U.S. or non‑EU companies?


The AI Act applies to any entity, regardless of location, that develops, modifies, or uses AI systems if their outputs are used within the EU. U.S. and other non‑EU companies are subject to the Act when serving EU users.


3. How does the EU AI Act classify AI systems into risk categories, and what are the obligations for each?


AI systems fall into four categories: unacceptable, high, limited, and minimal risk, each triggering different rules. Unacceptable systems are banned; high‑risk systems face strict requirements; limited‑risk systems need transparency; and minimal‑risk systems have no specific obligations.


4. What AI practices are banned under the EU AI Act?


The AI Act prohibits AI systems that pose unacceptable risks, such as biometric categorization based on sensitive traits, emotion recognition in workplaces or schools, manipulative systems that exploit vulnerabilities, social scoring, predictive policing, and untargeted facial recognition scraping.


5. What are the requirements for general‑purpose AI (GPAI) models and generative AI under the EU AI Act?


Providers of GPAI models, including generative AI and large language models, must document their models, publish a summary of their training data, provide transparency reports, and observe copyright laws. Models deemed systemic risk face stricter evaluation and cybersecurity requirements.


6. When do EU AI Act obligations take effect?


The Act entered into force on August 1, 2024, and phases in over three years. Banned practices take effect from February 2, 2025; general‑purpose AI rules from August 2, 2025; most high‑risk obligations from August 2, 2026; and rules for high‑risk AI embedded in regulated products from August 2, 2027.

February 2, 2025: Bans on unacceptable‑risk practices and AI literacy obligations commence

August 2, 2025: Governance and GPAI obligations apply; existing GPAI models have an extra year to comply

August 2, 2026: Main obligations for high‑risk and limited‑risk systems (e.g., data documentation, transparency) become enforceable

August 2, 2027: High‑risk AI systems embedded in regulated products must fully comply


7. What are the penalties for non‑compliance with the EU AI Act?


Fines vary by violation: up to €35 million or 7 % of global turnover for prohibited practices, up to €15 million or 3 % for breaches of GPAI or other obligations, and up to €7.5 million or 1 % for providing false information.

• Administrative fines are calculated based on global turnover; the highest fines apply to banned practices.

• Lower thresholds apply to SMEs and startups to encourage compliance.

• Fines can be combined with corrective measures such as market withdrawal or product recall.


8. How will the EU enforce the AI Act? Who oversees compliance?


Enforcement is shared between the European Commission’s AI Office, national market‑surveillance authorities, and a new AI Board.

• The AI Office coordinates enforcement across member states and supervises GPAI models.

• National authorities monitor compliance within their jurisdiction, handle market surveillance, and can withdraw non‑compliant systems.

• The AI Board, composed of representatives from member states, issues guidance and ensures consistent application of the law.

• A scientific panel of independent experts supports the AI Office by identifying systemic risks.


9. What should companies do now to prepare for EU AI Act compliance?


Businesses should audit their AI systems, classify them by risk, establish internal governance, document data and training practices, educate employees, and follow regulatory updates.

Create an AI inventory: Map all AI systems in use, identify their purpose, and classify them under the risk framework.

Clarify roles: Determine whether the organization is a provider, deployer, or modifier of AI.

Document and disclose: Prepare technical documentation, training data summaries, and user instructions for high‑risk and GPAI systems. Implement data governance and copyright safeguards.

Educate staff: Provide modular AI literacy training tailored to roles; appoint an AI officer or cross‑functional working group to coordinate compliance.

Establish governance: Create internal AI policies covering acceptable use, bias mitigation, human oversight, incident reporting, and privacy.

Stay updated: Monitor guidance from the AI Office and national authorities, including upcoming Codes of Practice and technical standards.


10. How does the EU AI Act interact with other laws, such as GDPR and the NIS2 Directive?


The AI Act complements rather than replaces existing regulations. AI systems may need to comply with data‑protection (GDPR) and cyber‑security (NIS2) rules in addition to the AI Act.

• AI systems in critical infrastructure or digital services are subject to both the AI Act and the NIS2 Directive, which requires cyber‑resilience and risk management.

• Any processing of personal data must continue to meet GDPR requirements on transparency, lawfulness, and data subject rights.

• Companies should harmonise compliance strategies across multiple regulatory regimes.


Make Your AI Systems Compliance-Ready

Align your enterprise with the EU AI Act through governance-ready solutions on Microsoft 365 and Azure.

Book a Compliance Consultation