Patrik Meliš-Čuga
The AI Act is a landmark piece of legislation reshaping the governance of artificial intelligence within the European Union (EU). It is the first comprehensive legislative framework regulating artificial intelligence (AI) technologies across the EU, introducing a wide range of rules and requirements designed to ensure the responsible development and use of AI systems. After a lengthy legislative process, its requirements enter into application gradually, starting in February 2025.
The AI Act has a broad scope, covering AI systems that are developed within the EU or have an impact within the EU. According to the AI Act, “AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The core idea of the AI Act is to regulate AI systems according to the risks they pose to people's health, safety, and fundamental rights. It groups AI systems into several risk categories, which determine the specific requirements:
- Unacceptable risk: practices such as social scoring or manipulative techniques, which are prohibited outright.
- High risk: systems used in sensitive areas, subject to strict obligations before and after being placed on the market.
- Limited risk: systems subject primarily to transparency obligations.
- Minimal risk: all other systems, which face no additional obligations under the Act.
The transparency requirements act as an add-on obligation for certain AI applications, such as chatbots or generative AI.
The EU AI Act defines a separate category of AI systems it calls general-purpose AI (GPAI), which must comply with their own additional requirements. These are categorised as follows:
- GPAI models, subject to transparency and documentation obligations; and
- GPAI models with systemic risk, subject to further obligations such as model evaluation and incident reporting.
The strictest requirements of the AI Act fall on the "unacceptable" and "high" risk categories, as well as on general-purpose AI models; nevertheless, each category comes with its own specific requirements.
The AI Act classifies a number of AI systems that government agencies and institutions may deploy in the public sector as high-risk. These use cases are subject to additional strict requirements and include, for example:
- remote biometric identification systems;
- AI used in the management of critical infrastructure;
- AI determining access to education or to essential public services and benefits;
- AI used in law enforcement, migration, asylum and border control; and
- AI assisting in the administration of justice.
Several other laws will further increase the accountability of creators and users of AI systems for harm those systems cause. These include the recast of the Product Liability Directive, which has recently been updated and adopted. Together, the AI Act and these laws are to ensure that providers and deployers (users) of AI systems are obliged to take additional steps as they create, deploy and maintain AI systems over the long term, so as to guarantee their safety.
As the AI Act approaches its full application in August 2026, it is set to usher in a new era of accountability, transparency, and responsibility in the AI domain. The role the public sector must play in ensuring the Act's success is particularly worth noting. Steps which can be taken are the following:
This article is part of a series called the “AI Act impact on the public sector”. For further exploration of the public sector's responsibilities as an AI provider and deployer, see this link. For insights into the public sector's role in implementing and enforcing the AI Act, see the following part of the series. This series does not attempt to provide legal analysis or interpretation.