AI Act Impact on the Public Sector: Public Sector Responsibilities

  • 27/03/25

Patrik Meliš-Čuga

AI trust driver, PwC Czech Republic


This instalment delves further into the intricacies of AI innovation within public institutions and government agencies, focusing on the vital role of public sector providers and deployers in navigating this landscape.

Navigating the Responsibilities of the Public Sector as a Provider and Deployer of AI

As high-risk AI becomes more prevalent in the public sector, it is crucial to understand how innovation and compliance can coexist, shaping a future where responsible AI drives progress and meets the needs of the modern world.

In some cases, the public sector will develop AI systems or commission their development. As a provider of an AI system, the public sector is responsible for exercising appropriate controls over the way the system is designed, developed and operated. The specific requirements depend on the risk category of the AI system in question. Providers must thoroughly document their compliance with the applicable regulations and ensure that AI systems remain safe throughout their lifecycle. This also includes monitoring how high-risk AI systems perform once in operation.

In other cases, the public sector will only procure and use existing AI systems, in line with the instructions of their creators. As a deployer, the public sector is responsible for ensuring that the way the system is implemented, used and maintained in practice aligns with legal requirements. Additionally, as a deployer, the public sector can and should verify that the vendors it engages with also adhere to the AI Act requirements, so that whichever systems it invests in and implements will serve it safely and reliably over the long term. Deploying AI systems, especially general-purpose AI models such as those behind OpenAI’s ChatGPT, Google’s Bard, or Anthropic’s Claude, may present additional challenges, as information about these models and how they meet specific legal requirements is currently scarce.

Both as a provider and a deployer, the public sector must take steps to comply with specific requirements.

The AI Act imposes specific requirements on the providers (creators) and deployers (users) of AI systems. Under its risk-based approach, requirements become stricter as the risks of AI systems increase, with the strictest requirements reserved for high-risk AI systems and general-purpose AI systems. The requirements for high-risk AI systems concern, for example, the following areas:

  • Data Governance - appropriate measures must be put in place to ensure data quality, relevance and sufficient representativeness;
  • Human Oversight - AI should be designed, and provided with instructions, in a way that ensures appropriate human control and allows for effective human oversight;
  • Risk Management - decisions about how the AI system is designed, operates and is used should strive to mitigate the risks the AI poses;
  • Technical Documentation and Record-Keeping - appropriate technical documentation should be compiled and reviewed before the AI system is placed on the market, and automatic recording of events (logs) of the AI’s operation should be maintained once it is on the market;
  • Robustness, Accuracy and Security - AI systems should be designed, tested and deployed so as to achieve appropriate levels of accuracy, robustness and cybersecurity.
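The record-keeping requirement above can be illustrated with a minimal sketch: a wrapper that automatically writes one structured log record per decision of a deployed system. The `PredictionLogger` class, the field names and the JSON-lines format are illustrative assumptions of ours; the AI Act requires logging capabilities for high-risk systems but does not prescribe any particular technical format.

```python
import datetime
import io
import json

class PredictionLogger:
    """Illustrative wrapper that records each AI system event as one JSON line.

    All field names here are hypothetical examples; a real implementation
    would follow the organisation's own logging and retention policies.
    """

    def __init__(self, system_id, sink):
        self.system_id = system_id
        self.sink = sink  # any writable file-like object (file, stream, ...)

    def log_event(self, input_summary, output_summary, operator=None):
        record = {
            "system_id": self.system_id,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "input": input_summary,    # summary only, to avoid storing raw personal data
            "output": output_summary,
            "operator": operator,      # who triggered or oversaw the decision
        }
        self.sink.write(json.dumps(record) + "\n")
        return record

# Example: log one decision of a hypothetical benefits-eligibility system
sink = io.StringIO()
logger = PredictionLogger("benefits-eligibility-v1", sink)
event = logger.log_event(
    {"applicant_hash": "a1b2"}, {"eligible": True}, operator="caseworker-7"
)
```

Writing one self-describing record per event keeps the log machine-readable, which helps when regulators or auditors later ask how a particular decision was reached.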

 

Moreover, deploying AI systems in a responsible and ethical manner necessitates a robust set of processes and procedures. Here is an overview of what this means for public sector deployers:

  • Fundamental Rights Impact Assessment (FRIA): Where public sector organisations use high-risk AI systems, they should carry out a FRIA beforehand to assess the impact the AI system can have on the rights and freedoms of affected stakeholders. They should also plan appropriate safeguards for how the systems are used and establish channels for appeals.
  • AI Assessment: Deployers in the public sector should have a rigorous assessment framework in place for the AI tools they are considering acquiring. This includes conducting thorough evaluations of each tool's capabilities, potential risks, and alignment with the AI Act's requirements.
  • Procurement processes: The assessment framework should be reflected within the procurement processes for selecting AI solutions. This includes defining procurement criteria, methodologies, and evaluation metrics, which align with the AI Act's requirements.
  • Monitoring Standards: Given that AI regulations are continually evolving, it is essential for public sector entities to keep an eye on forthcoming standards from European standards organisations such as CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization). These standards will provide guidance on AI system compliance, and public sector organisations should incorporate them into their evaluation and procurement processes.
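As a rough illustration of what such an assessment framework might look like in practice, the sketch below scores a candidate AI tool against a weighted checklist. The criteria and weights are hypothetical examples of ours, not anything prescribed by the AI Act; a real procurement framework would be far richer and tied to legal review.

```python
# Illustrative procurement criteria with weights; these are assumed
# examples, not an official checklist from the AI Act.
CRITERIA = {
    "technical_documentation_provided": 3,
    "logging_capability": 3,
    "human_oversight_controls": 2,
    "accuracy_evidence": 2,
    "cybersecurity_measures": 2,
}

def score_tool(answers):
    """Weighted score of a candidate AI tool against procurement criteria.

    `answers` maps each criterion to True/False based on vendor evidence.
    Returns (score, max_score, missing_criteria).
    """
    score = sum(weight for crit, weight in CRITERIA.items() if answers.get(crit))
    missing = [crit for crit in CRITERIA if not answers.get(crit)]
    return score, sum(CRITERIA.values()), missing

# Example: a vendor that provides everything except accuracy evidence
score, max_score, missing = score_tool({
    "technical_documentation_provided": True,
    "logging_capability": True,
    "human_oversight_controls": True,
    "accuracy_evidence": False,
    "cybersecurity_measures": True,
})
```

The value of a structure like this is less the score itself than the explicit list of missing criteria, which gives procurement teams concrete follow-up questions for the vendor.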

 

Compliance with the AI Act, both as a provider and a deployer, will require the public sector to establish a comprehensive end-to-end governance framework for AI systems. Significant investments in resources, technology, and expertise may be necessary to make this a reality. Employee upskilling and a basic level of literacy about AI technology and the AI Act will be needed, as will reliance on the right tools and technology to document and assess AI systems.

Additionally, as a deployer of AI systems created by others, the public sector has an optional but advisable role: it may choose to verify and select for deployment only AI systems that adhere to the AI Act, thereby reinforcing compliance and ensuring that the systems meet the necessary regulatory standards. This is especially important for the public sector given its duties to society.

 

Your next steps

These steps serve as a roadmap for the public sector in navigating the complex landscape of AI implementation and governance while ensuring compliance with the AI Act:

  1. Start by addressing the digital literacy gap within your organisation. Invest in training programmes to upskill existing staff and to attract new talent with expertise in AI technology. Consider the AI Act’s AI literacy requirements.
  2. Map AI within your organisation and identify any high-risk AI use cases. Ensure those AI systems are developed in line with the AI Act and you have carried out a Fundamental Rights Impact Assessment, if necessary.
  3. Define and implement a clear procurement process for the AI systems you would like to use. This should include clear requirements against which AI vendors and tools are assessed, and appropriate methodologies for applying them.
  4. Given the continually evolving AI regulations, maintain an active awareness of the AI Act's supporting documents and standards emerging from European standards organisations, such as CEN and CENELEC. Stay abreast of industry advancements and regulatory changes.
  5. Create a plan for tackling any missing capabilities, tools or processes to ensure you create, procure and deploy AI systems in line with the AI Act requirements and the broader legal requirements on the public sector.

 

This article is part of a series called “AI Act impact on the public sector”. For further exploration of the AI Act's risk-based approach in the public sector, see this link. Additionally, for insights into the role of the public sector in implementing and enforcing the AI Act, see the following part of the series. This series does not attempt to provide any legal analysis or interpretation.
