An AI Generated Webpage

EU's Artificial Intelligence Act
  • 5 minute read
  • April 10, 2024

Artificial Intelligence Act: A groundbreaking regulation from the EU that ensures safety, protects rights, and fosters innovation in AI. It includes bans on certain applications, clear guidelines for law enforcement, obligations for high-risk systems, transparency requirements, and support for innovation and SMEs. Dive in to learn more about Europe's leadership in AI governance.

Welcome to the key highlights of the EU's Artificial Intelligence Act

We're excited to introduce the EU's groundbreaking Artificial Intelligence Act, a pivotal step towards ensuring safety, protecting fundamental rights, and fostering innovation in the field of AI. This landmark regulation, backed by MEPs with an overwhelming majority, is designed to position Europe as a global leader in AI while upholding core democratic values.

Key Highlights

Banned Applications

The EU's AI Act takes a proactive stance in safeguarding citizens' rights by banning specific AI applications that have the potential to infringe upon individual freedoms and privacy. Biometric categorization systems, for instance, have raised concerns regarding discriminatory practices and the misuse of sensitive personal data. By outlawing these systems, the EU aims to prevent discriminatory profiling based on characteristics such as race, gender, or ethnicity.

Emotion recognition in sensitive contexts like workplaces and schools is another area of concern. Such technology could lead to intrusive surveillance and undermine individuals' right to privacy and autonomy. Prohibiting emotion recognition in these settings ensures that workplaces and educational environments remain free from undue surveillance and manipulation.

The ban on social scoring and predictive policing based solely on profiling is a critical step towards preventing algorithmic discrimination and bias. These practices have the potential to reinforce existing inequalities and undermine trust in law enforcement and judicial systems. By outlawing these practices, the EU seeks to uphold the principles of fairness, justice, and equal treatment under the law.


Law Enforcement Exemptions

While ensuring public safety is paramount, the EU's AI Act recognizes the need to balance security concerns with individual rights and freedoms. The Act establishes clear guidelines for the use of biometric identification systems by law enforcement agencies, ensuring that such technology is deployed responsibly and transparently.

Real-time deployment of biometric identification systems is subject to stringent safeguards, including limitations on time, scope, and specific authorization requirements. These measures are designed to prevent arbitrary or widespread surveillance while allowing for targeted and proportionate use in situations where public safety is at risk.

Post-facto use of remote biometric identification (RBI) systems, known as "post-remote RBI," is considered high-risk and requires judicial authorization linked to a criminal offense. This ensures that law enforcement agencies cannot abuse biometric data for surveillance or profiling purposes without proper oversight and accountability.


Obligations for High-Risk Systems

High-risk AI systems, by their very nature, have the potential to cause significant harm to individuals, communities, and society at large. The EU's AI Act imposes clear obligations on entities deploying such systems to assess and mitigate risks, maintain transparency, and ensure human oversight.

Risk assessment and mitigation are fundamental to identifying the potential harms associated with high-risk AI systems and putting appropriate safeguards in place. This includes conducting thorough impact assessments and implementing measures to prevent discrimination, bias, and other adverse effects.

Transparency is essential for building trust and accountability in AI systems.

High-risk AI systems must maintain transparent and accurate records of their operations, including use logs and decision-making processes. This enables independent auditing and ensures that individuals affected by AI systems have access to information about how their rights are being protected.

Human oversight is critical for ensuring that AI systems operate ethically and responsibly. While AI can augment decision-making processes, human judgment remains indispensable in complex and sensitive situations. High-risk AI systems must therefore incorporate mechanisms for human oversight and intervention to prevent potential harms and ensure compliance with legal and ethical standards.


Transparency Requirements

Transparency is a cornerstone of responsible AI governance, enabling individuals to understand and assess the implications of AI systems on their lives and communities.

The EU's AI Act imposes transparency requirements on general-purpose AI systems, including compliance with EU copyright law and disclosure of detailed training data summaries.

Compliance with copyright law ensures that AI systems respect intellectual property rights and do not infringe upon the rights of content creators. Additionally, disclosing detailed training data summaries allows for greater transparency into the sources of data used to train AI models, enabling users to evaluate the reliability and fairness of AI-generated outputs.

Labeling artificial or manipulated content such as deepfakes is essential for preventing the spread of misinformation and protecting individuals from potential harm. By clearly identifying manipulated content, users can make informed decisions and mitigate the risks associated with consuming or sharing deceptive media.


Support for Innovation and SMEs

Innovation is vital for driving economic growth, fostering competitiveness, and addressing societal challenges.

The EU's AI Act promotes innovation by establishing regulatory sandboxes and real-world testing environments accessible to SMEs and startups.

Regulatory sandboxes provide a controlled environment for testing innovative AI solutions in real-world scenarios, allowing companies to identify and address potential regulatory challenges before bringing their products to market. This enables SMEs and startups to navigate complex regulatory frameworks and ensure compliance with legal and ethical standards.

Real-world testing environments offer SMEs and startups the opportunity to validate their AI solutions in diverse settings and gain valuable insights into user needs and preferences. By facilitating access to testing resources and expertise, the EU fosters a culture of innovation and entrepreneurship, driving economic growth and technological advancement.



Legal Flash: New Artificial Intelligence Act

Scope, definitions and everything you need to know: An AI Generated Flash




The full text of the Artificial Intelligence Act is available here:

Artificial Intelligence Act | Texts adopted | P9_TA(2024)0138



Contact us

Vassiliοs Vizas

Tax & Legal Services Leader, PwC Greece
