AI Act Impact on the Public Sector: Risk-Based Approach

  • 20/03/25

Patrik Meliš-Čuga

AI trust driver, PwC Czech Republic

+420 734 692 573


The AI Act is the first legislative framework regulating artificial intelligence (AI) technologies across the European Union (EU), and it is significantly reshaping the landscape of AI governance there. It introduces a wide range of rules and requirements designed to ensure the responsible development and use of AI systems. After a lengthy legislative process, its requirements come into force gradually, starting in February 2025.

The AI Act has a broad scope, covering all AI systems that are developed within the EU or have an impact within the EU. According to the AI Act, “AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

 

The impact of the AI Act differs depending on the risk of AI systems

The core idea of the AI Act is to regulate AI systems according to the risks they pose to people's health, safety, and fundamental rights. The AI Act groups AI systems into several categories, which determine the specific requirements: 

  • Unacceptable risk – AI systems which are prohibited;
  • High risk – AI systems whose development and use are heavily regulated;
  • Limited risk – AI systems subject to transparency obligations;
  • Minimal or low risk – AI systems that are unregulated, but which should follow a number of best-practice principles.

The transparency requirements may be considered an add-on requirement for certain AI applications, e.g. chatbots or generative AI.

The EU AI Act has a separate category of AI systems it calls general-purpose AI (GPAI), which need to comply with their own additional requirements. These are categorised as follows: 

  • General-purpose AI models; and
  • General-purpose AI models with systemic risk. 

The strictest requirements of the AI Act fall on the "unacceptable" and "high" risk categories, as well as on the general-purpose AI models. Nevertheless, each category has its own specific requirements.
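The tiered structure above can be sketched as a simple data model. The following is a purely illustrative sketch for an internal AI inventory, not a legal classification tool; all class and field names are assumptions, not terms defined by the AI Act:

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """Risk tiers of the AI Act (illustrative labels only)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "best-practice principles"

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system."""
    name: str
    tier: RiskTier
    transparency_addon: bool = False  # add-on duties, e.g. chatbots or generative AI
    gpai: bool = False                # general-purpose AI model
    gpai_systemic_risk: bool = False  # GPAI model with systemic risk

# Example: a citizen-facing chatbot – limited risk, with transparency duties
chatbot = AISystemRecord("citizen chatbot", RiskTier.LIMITED,
                         transparency_addon=True)
```

Such a record could serve as a starting point when cataloguing where each system sits in the Act's risk hierarchy, with the transparency and GPAI flags capturing the add-on obligations described above.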

The AI Act will also have a significant impact on high-risk AI applications within the public sector

A number of AI systems that can be deployed in the public sector by government agencies and institutions are classified as high-risk in the AI Act. These use cases will be subject to additional strict requirements and can include:

  • Healthcare: Applications affecting patient health, for example in health insurance, or medical devices for disease diagnosis and treatment recommendations.
  • Public Security and Law Enforcement: AI systems used for public security, surveillance, and law enforcement. 
  • Transportation and Critical Infrastructure: Managing transportation systems and the safety and operation of critical infrastructure, such as power grids, water supply networks, as well as public transportation.
  • Education: Student assessments, grading, or decisions that impact students' academic and professional futures.
  • Essential Services and Social Welfare: AI used to assess access to public services, including social benefits, financial assistance programs and allocation of resources and benefits to citizens.
  • Migration Control: AI applications used in immigration and border control for document and identity verification and determining access to visas and residence permits. 
  • Employment Context: AI used in recruitment, hiring, and HR management processes.

Several other laws will further increase the accountability of creators and users of AI systems when those systems cause harm, including the recently updated and adopted recast of the Product Liability Directive. Together, these laws and the AI Act will oblige providers and deployers (users) of AI systems to take additional steps as they create, deploy, and maintain AI systems over the long term, in order to guarantee their safety.

 

Your next steps

As the AI Act approaches its full implementation by August 2026, it is set to bring about a new era of accountability, transparency, and responsibility in the AI domain. Particularly worth noting is the role the public sector will need to take in ensuring the success of the AI Act. The following steps can be taken:

  1. Identify the potential impact that the AI Act will have on AI in your organisation by assessing the existing or anticipated AI projects you have in place and how the AI Act will affect them. Begin by identifying AI systems in use or development, which fall into the "high-risk" category as defined by the AI Act. Ensure there are no “prohibited” AI systems used or developed.
  2. Identify the currently existing mechanisms for technology governance you have and identify gaps that you need to fill to ensure compliance with the AI Act.
  3. Invest in training and education programs to ensure staff is well-versed in AI technology, the AI Act's requirements, and the evolving regulatory landscape.
  4. Consider how your organisation will meet the AI literacy requirements of the AI Act, which became applicable in February 2025. Explore our AI literacy trainings and see how we can support you.
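Step 1 above – inventorying AI systems and flagging those in the "prohibited" or "high-risk" categories – could be sketched as a simple triage over an inventory. This is a hypothetical helper with simplified category labels, intended only to illustrate the workflow, not to perform any legal assessment:

```python
# Illustrative triage for step 1: group an AI-system inventory by the
# AI Act risk category assigned to each system during assessment.
# Category labels and system names are simplified assumptions.

def triage(inventory: list[dict]) -> dict[str, list[str]]:
    """Group system names by their assessed risk category."""
    buckets: dict[str, list[str]] = {}
    for system in inventory:
        buckets.setdefault(system["risk_category"], []).append(system["name"])
    return buckets

inventory = [
    {"name": "benefit eligibility scoring", "risk_category": "high"},
    {"name": "website FAQ chatbot", "risk_category": "limited"},
    {"name": "spam filter", "risk_category": "minimal"},
]

grouped = triage(inventory)
# grouped["high"] → ["benefit eligibility scoring"]
```

Systems landing in the "high" bucket would then be prioritised for the gap analysis in step 2, while anything assessed as "prohibited" would need to be discontinued.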

 

This article is part of a series called the “AI Act impact on the public sector”. For further exploration of the public sector responsibilities as an AI provider and deployer, see this link. Additionally, for insights into the role of the public sector in implementing and enforcing the AI Act, see the following part of the series. This series does not attempt to provide any legal analysis or interpretation.
