AI Compliance Tool

Guide your technical, business, and risk and compliance teams through the requirements of the AI Act, and provide them with a single platform to cooperate on and document the compliance of individual AI projects within your organisation.

Why do you need a compliance tool?

AI and data are being increasingly regulated around the globe
As the impact of new technologies and business models has become apparent over the past decades, regulators have begun developing policy responses to manage risks and enable positive innovation. The trend towards regulating AI is visible globally. Examples include the EU’s AI Act, the Digital Services Act, the Digital Markets Act, the Canadian Artificial Intelligence and Data Act (AIDA), and US Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Each of these requires AI innovators and users to be aware of, and ready to meet, new legal requirements.

The EU AI Act can be challenging to comply with
The AI Act, adopted by the EU, is the world’s first comprehensive regulatory framework for AI. It went through a lengthy legislative process, and its requirements come into force in phases between 2024 and 2027.

AI Act challenges

The AI Act has a broad scope. It covers all AI systems that are available in the EU or that have an impact within the EU. “AI” is defined in a way that covers not only advanced technologies using machine-learning or reinforcement-learning techniques, but also simpler ones such as knowledge-based approaches. The definition focuses on systems that operate with some autonomy from humans and “infer” by themselves how to achieve a set of objectives. As a result, even software not commonly thought of as AI can fall within the AI Act’s scope.

The impact and the specific obligations arising from the AI Act will differ depending on the risk an AI system poses. The Act regulates AI systems according to the risks they pose to people's health, safety, and fundamental rights, grouping them into several categories that determine the applicable requirements:

  • Unacceptable risk category – AI systems which are prohibited;
  • High risk category – AI systems whose creation and use are heavily regulated;
  • Limited (transparency) risk category – certain AI applications, e.g. chatbots or generative AI, are subject to additional transparency requirements;
  • Low risk category – all other AI systems are subject to limited AI Act requirements, e.g. ensuring sufficient digital literacy among users of AI.

In addition, the AI Act regulates foundation models, or general-purpose AI models (GPAI). These are likewise separated into different risk categories:

  • All GPAIs – requirements apply to the way such models are created and to the information about them that must be communicated;
  • High impact GPAIs – models that are exceptionally powerful, trained using more than 10^25 FLOPs, face additional requirements, including risk mitigation and monitoring.

The strictest requirements of the AI Act fall on the "unacceptable" and "high" risk categories; however, each category has its own specific requirements.
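The tiered structure above can be captured in a short sketch. The category names and the GPAI compute threshold come from the text; the mapping itself is a simplified illustration, not a legal classification:

```python
from enum import Enum

# Illustrative sketch of the AI Act's risk tiers described above.
# The one-line summaries paraphrase the Act's categories and are
# not an exhaustive statement of the obligations in each tier.
class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated"
    LIMITED = "additional transparency requirements"
    LOW = "limited requirements"

# GPAI models trained using more than 10^25 FLOPs face additional
# obligations (risk mitigation, monitoring) as "high impact" GPAIs.
GPAI_HIGH_IMPACT_FLOPS = 10**25

def gpai_is_high_impact(training_flops: float) -> bool:
    """Return True if a GPAI model crosses the compute threshold."""
    return training_flops > GPAI_HIGH_IMPACT_FLOPS

# Example: a model trained with 3×10^25 FLOPs crosses the threshold.
print(gpai_is_high_impact(3e25))  # → True
```

Even a minimal model like this makes clear that the same system can carry different obligations depending on which tier it lands in, which is why classification is the first step of any compliance effort.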

All AI systems, and both providers and users of AI, will have obligations to comply with. Several other laws are also set to increase the accountability of creators and users of AI systems when those systems cause harm. These include the AI Liability Directive and the recast of the Product Liability Directive, which is currently in development. Together with the AI Act, these laws will oblige both providers and deployers (users) of AI systems to build additional steps into their ways of working as they create, deploy, and maintain AI systems over the long term, in order to guarantee their safety.

Achieving compliance with the AI Act will require a comprehensive, coordinated effort across organisational and technical domains. Organisations will need to establish a structured framework that aligns with the risk-based approach outlined in the regulation. This involves conducting thorough assessments to categorise AI systems by risk level and implementing tailored compliance measures accordingly. Collaboration across multiple teams and departments, including legal, data governance, and technical development teams, will be paramount. For high-risk AI systems, organisations may need to institute conformity assessments, ensuring the quality of training data, transparency, and human oversight.
Technical teams will play a crucial role in implementing the necessary safeguards, such as developing algorithms that adhere to ethical guidelines and data protection principles. Simultaneously, legal and compliance teams will need to navigate the evolving regulatory landscape, ensuring that organisational practices align with the AI Act. Effective communication and collaboration between these diverse teams will be essential to foster a cohesive and compliant AI ecosystem within organisations.

AI Compliance Tool

PwC Czech Republic offers an AI Compliance Tool to help you navigate the complex regulatory AI landscape shaped by the EU’s AI Act. The tool guides your technical, business, and risk and compliance teams through the requirements of the AI Act and provides them with a single platform to cooperate on and document the compliance of individual AI projects within your organisation.

Functionalities of the tool

AI Governance

  • A single point of governance for AI projects across your company, providing overviews of their risk profiles, stages of development and performance, allowing you to prioritise efforts where they are most beneficial.

Supporting your AI Act implementation in an auditable platform

  • A collaborative work environment that enables coordinated implementation of AI Act requirements and the allocation of tasks and responsibilities within teams.
  • An auditable trail of your processes on the platform, recording and tracing your teams’ answers, actions, and uploaded documents to demonstrate your compliance and to facilitate internal and external auditing of your AI projects.

Access to tailored guidance

  • A modular structure that breaks down the AI Act requirements into simple actionable steps for you and your teams, tailored to each individual AI project, depending on its risk profile and your organisation’s role in it.
  • The provision of tailored guidance, clarifying terminology and defining processes and templates for specific actions.
  • The potential to deliver bespoke templates, guidance and solutions for selected clients and sectors.

Key features of the tool

  • Organise all your AI projects in one place and quickly assess their risks;
  • Follow straightforward step-by-step tasks to tackle complex legal requirements;
  • Document your compliance journey;
  • Collect supporting evidence in one place;
  • Access guidance on AI governance best practices to understand how to achieve the highest level of compliance;
  • Use a tool that is updated regularly.

How the tool will work

Risk assessment & classification

A questionnaire to identify the risk category of an AI system under the AI Act and to highlight other risk factors.

Outcome: Risk classification
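In outline, a questionnaire-driven classification step could look like the following sketch. The question keys and the decision order are hypothetical simplifications for illustration, not the tool’s actual logic:

```python
# Hypothetical sketch of a questionnaire-based risk classification step.
# The question keys and their ordering are illustrative only; the AI Act's
# real criteria (e.g. the Annex III high-risk use cases) are far more detailed.

def classify_ai_system(answers: dict) -> str:
    """Map yes/no questionnaire answers to an AI Act risk category."""
    if answers.get("uses_prohibited_practice"):    # e.g. social scoring
        return "unacceptable"
    if answers.get("listed_high_risk_use_case"):   # e.g. an Annex use case
        return "high"
    # Transparency obligations apply to e.g. chatbots or generative AI.
    if answers.get("interacts_with_people") or answers.get("generates_content"):
        return "limited (transparency)"
    return "low"

# Example: a customer-service chatbot with no high-risk use case.
answers = {
    "uses_prohibited_practice": False,
    "listed_high_risk_use_case": False,
    "interacts_with_people": True,
}
print(classify_ai_system(answers))  # → limited (transparency)
```

The point of the sketch is the ordering: prohibited practices are checked first, then high-risk use cases, so a system is always assigned the strictest category it triggers.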

Modules & guidance

A set of modules to provide targeted guidance and questions with open space for answers and uploading evidence of actions taken.

Outcome: Completed and evidenced actions for compliance

Auditing & monitoring

A single place to access all evidence of an AI system’s compliance and functioning.

A platform that prompts and reminds you to regularly monitor the system’s operation after its deployment.

Outcome: Easy access to evidence and regular monitoring

This tool is not intended to, and does not, provide legal advice or a legal opinion. The tool also does not substitute for experienced legal counsel. PwC can, however, offer you legal advice through its Legal Services.


Your next steps

As the AI Act comes into force in phases between 2024 and 2027, it is set to usher in a new era of accountability, transparency, and responsibility in the AI domain. To navigate this complex landscape of AI implementation and governance, consider the following steps:

  1. Identify the potential impact that the AI Act will have on AI in your organisation by assessing your existing or anticipated AI projects in place and how the AI Act will affect them.
  2. Identify your currently existing mechanisms for technology governance and identify the gaps that you need to fill to ensure compliance with the AI Act.
  3. Invest in training and education programmes to ensure staff are well-versed in AI technology, the AI Act's requirements, and the evolving regulatory landscape, and to attract new talent with expertise in AI.
  4. Given the continually evolving AI regulations, maintain an active awareness of updates to the AI Act and of emerging standards from European standardisation organisations such as CEN and CENELEC. Stay abreast of industry advancements and regulatory changes.
  5. Create a plan for tackling any missing capabilities, tools or processes to ensure you create, procure and deploy AI systems in line with the AI Act requirements.

Contact us

Patrik Meliš-Čuga

AI Strategy and Risk Leader, PwC Czech Republic

Tel: +420 734 692 573

Christina Hitrova

Digital Ethics and Compliance, PwC Czech Republic

Tel: +420 604 163 959

Aneta Fortelková

Responsible AI, PwC Czech Republic

Tel: +420 733 154 566

Daniela Kidjemet

AI Governance and Deployment, PwC Czech Republic

Tel: +420 734 464 439