EU AI Act: Why it matters, and how to prepare

Read why companies should take the EU AI Act seriously irrespective of their location, plus a four-point call to action to align with its mandates.

The European Union’s Artificial Intelligence Act (EU AI Act) plays a pivotal role in shaping the global landscape for AI governance and compliance. It represents a groundbreaking legislative effort to address the challenges and risks associated with the rapid development and deployment of artificial intelligence technologies. Here, we delve into why companies, irrespective of their geographical location, should take the EU AI Act seriously and outline a four-point call to action for organizations to align with its mandates.

The importance of the EU AI Act

The EU AI Act is the world’s first comprehensive legal framework for regulating artificial intelligence, setting out obligations and standards for AI systems used within the European Union. Its significance lies not only in its pioneering nature but also in its potential to set a global benchmark for AI regulation, akin to the impact of the General Data Protection Regulation (GDPR) on data privacy norms worldwide.

The Act categorizes AI systems based on the level of risk they pose, from minimal to unacceptable risk, imposing stricter requirements for higher-risk categories. This risk-based approach helps ensure that AI applications are developed and used in a manner that upholds human rights, prioritizes safety, and respects fundamental freedoms, including privacy and data protection.
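As a rough illustration, this risk-based approach can be sketched as a simple triage helper. The tier descriptions below reflect commonly cited examples from the Act; the inventory entries and the `compliance_focus` helper are hypothetical, not part of the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited (e.g., social scoring by public authorities)"
    HIGH = "strict obligations (e.g., AI in hiring or credit scoring)"
    LIMITED = "transparency duties (e.g., chatbots disclosing they are AI)"
    MINIMAL = "no new obligations (e.g., spam filters, video games)"

# Hypothetical internal AI inventory mapped to tiers
inventory = {
    "resume-screening-model": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

def compliance_focus(tier: RiskTier) -> bool:
    """Prohibited and high-risk systems warrant immediate review."""
    return tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)

for name, tier in inventory.items():
    print(f"{name} -> {tier.name} | priority review: {compliance_focus(tier)}")
```

A triage like this is only a starting point; the actual classification of any system depends on the Act's definitions and annexes, and on legal review.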

Why companies should take it seriously

  1. Global impact and extraterritorial reach: Like the GDPR, the EU AI Act has an extraterritorial scope, meaning it applies to companies outside the EU if their AI systems affect individuals within the Union. This global reach necessitates that businesses worldwide evaluate their AI practices against the Act’s standards to confirm compliance and avoid hefty penalties.
  2. Reputation and consumer trust: In an era where ethical considerations play a significant role in consumer decisions, adherence to the AI Act can enhance a company’s reputation and build consumer trust. Demonstrating compliance with such a rigorous and forward-thinking regulatory framework can set a company apart as a leader in ethical AI use, potentially attracting more customers and partners.
  3. Legal and financial risks: Non-compliance with the EU AI Act can result in substantial financial penalties, similar to those seen with GDPR violations. These penalties can significantly impact a company’s bottom line and shareholder value. Moreover, legal risks extend beyond fines, as failure to comply may result in litigation, regulatory investigations, and enforcement actions that can further tarnish a company’s reputation and disrupt its operations.
  4. Future-proofing and innovation: The AI Act encourages the development of AI in a safe, transparent, and accountable manner. Companies that proactively align their AI practices with the Act’s requirements can future-proof their operations against upcoming regulations in other jurisdictions, given the EU’s role as a regulatory trendsetter. Additionally, compliance can drive innovation by encouraging the development of AI that is not only technologically advanced but also ethical and socially responsible.

Four-point call to action

  1. Conduct a comprehensive AI audit: Companies that are already using AI as part of their technology stack should start by conducting a thorough audit of their AI systems to identify which ones fall under the scope of the EU AI Act. This audit should assess the risk level of each AI application, mapping out the necessary compliance steps for those classified as high-risk. These audits should be conducted in line with an accepted framework such as NIST’s AI Risk Management Framework or ISO/IEC 42001.
    Organizations that are evaluating AI but have not yet adopted it in an officially sanctioned manner should conduct a broad assessment of whether and how their workforce may be using AI unofficially, such as for task management, meeting transcription, or document editing, to understand their exposure.
  2. Implement robust AI governance and ethics frameworks: Organizations need to establish comprehensive governance structures and ethical frameworks for AI development and use. These frameworks should support and demand transparency, accountability, and adherence to privacy and data protection standards, as well as incorporate mechanisms for human oversight and risk management.
  3. Invest in compliance and risk management training: It’s crucial that all stakeholders involved in the development, deployment, and management of AI systems are aware of the EU AI Act’s requirements and the ethical considerations surrounding AI. Companies should invest in training programs to build internal competency and empower their teams to navigate the complexities of AI regulation and risk management effectively.
  4. Enhance data governance and understand your data: In the context of the EU AI Act, robust data governance becomes a cornerstone for compliance, particularly given the emphasis on privacy and data protection as integral pillars to AI systems’ ethical deployment. Organizations must take definitive steps to know their data intimately – where it comes from, how it’s processed, and why and how it’s used. This understanding is crucial not only for compliance with the AI Act but also for broader regulatory landscapes like GDPR.

To enhance data governance for use within AI, companies should implement the following strategies:

  • Data mapping and inventory: Create comprehensive maps and inventories of your data flows, including data collection, storage, processing, and sharing activities. This will help identify which data sets feed into AI systems and ensure they are managed in compliance with the AI Act’s provisions, particularly regarding data quality and the minimization of biases.
  • Privacy impact assessments for AI: Conduct Privacy Impact Assessments (PIAs) specifically for AI projects to evaluate how personal data is used and to identify and mitigate privacy risks at early development stages. This is in line with the AI Act’s focus on risk assessment and mitigation for high-risk AI systems.
  • Data quality and bias mitigation: Implement processes to help ensure the quality of data used by AI systems and to actively mitigate risks of bias and discrimination. This involves regular auditing of data sets and algorithms for accuracy, representativeness, and fairness, aligning with the AI Act’s emphasis on trustworthy AI.
  • Data protection by design and by default: Embed data protection principles into the design of AI systems from the outset, so that personal data is processed lawfully, transparently, and for specified purposes only. This approach aligns with both the AI Act and GDPR, reinforcing the commitment to protecting individuals’ rights and freedoms in the digital age.
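The data mapping and PIA strategies above can be sketched as a minimal inventory record. All field names, dataset names, and the PIA-flagging rule here are illustrative assumptions, not requirements prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class DataFlowRecord:
    """One row in a hypothetical AI data inventory."""
    dataset: str
    source: str                     # where the data comes from
    contains_personal_data: bool
    purpose: str                    # why and how it is processed
    feeds_ai_systems: list = field(default_factory=list)

# Hypothetical inventory entries
inventory = [
    DataFlowRecord("applicant-cvs", "careers portal", True,
                   "recruitment screening", ["resume-screening-model"]),
    DataFlowRecord("support-transcripts", "help desk", True,
                   "chatbot training", ["customer-support-chatbot"]),
    DataFlowRecord("public-product-docs", "website", False,
                   "search indexing", []),
]

# Flag datasets with personal data that feed AI systems as PIA candidates
pia_candidates = [r.dataset for r in inventory
                 if r.contains_personal_data and r.feeds_ai_systems]
print(pia_candidates)
```

In practice such an inventory would live in a dedicated data-governance tool, but even a lightweight record like this makes it possible to answer the core questions: where the data comes from, how it is processed, and which AI systems it feeds.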

By prioritizing data governance and taking proactive steps to understand and manage their data, organizations can not only move toward compliance with the EU AI Act but also enhance their operational integrity and trustworthiness. This fourth pillar of the call to action emphasizes the foundational role of data in the ethical development and deployment of AI, urging companies to adopt a holistic and disciplined approach to data management. Through such efforts, often seen as Herculean, organizations can position themselves as leaders in the responsible use of technology, with AI systems that are not only legally compliant but also ethically aligned with societal values and expectations.

In conclusion

The EU AI Act sets the stage for the responsible development and use of artificial intelligence. By taking the Act seriously and proactively aligning their AI practices with its requirements, companies can not only mitigate legal and financial risks but also position themselves as leaders in the ethical use of AI technologies. The four-point call to action outlined here provides a roadmap for organizations to navigate the challenges of AI compliance and harness the opportunities it presents for innovation and competitive advantage.

Get in touch with our specialists


Deborah Nitka

PMP, CSM, CDPSE, Senior Manager, Cybersecurity, Technology Risk and Privacy
Bhavesh Vadhani

CISA, CRISC, CGEIT, PMP, CDPSE, Principal, Global Leader, Cybersecurity, Technology Risk, and Privacy






This has been prepared for information purposes and general guidance only and does not constitute legal or professional advice. You should not act upon the information contained in this publication without obtaining specific professional advice. No representation or warranty (express or implied) is made as to the accuracy or completeness of the information contained in this publication, and CohnReznick LLP, its partners, employees and agents accept no liability, and disclaim all responsibility, for the consequences of you or anyone else acting, or refraining to act, in reliance on the information contained in this publication or for any decision based on it.