Through a range of obligations, the EU AI Act sets stringent rules around the development and adoption of Artificial Intelligence (AI). But from a practical perspective, what actions can businesses take now as they prepare for the new AI regulation?
The democratisation of Artificial Intelligence (AI) has made the technology available to an unprecedented number of individuals and businesses, pushing it beyond the exclusive reach of specialised researchers and big tech. In fact, 68% of CEOs in PwC's 2024 Global CEO Survey agree that generative AI will increase competitive intensity in their respective industries by 2027.
[Chart: CEOs' agreement (Agree vs Disagree) with three statements: 'Generative AI will significantly change the way my company creates, delivers and captures value'; 'Generative AI will require most of my workforce to develop new skills'; 'Generative AI will increase competitive intensity in my industry.']
Note: Disagree is the sum of 'slightly disagree,' 'moderately disagree' and 'strongly disagree' responses; Agree is the sum of 'slightly agree,' 'moderately agree' and 'strongly agree' responses.
Source: PwC's 27th Annual Global CEO Survey
AI is significantly reinventing the service and product delivery capabilities of organisations, be it for medical diagnosis, financial fraud detection or customised customer service chatbots. As most business leaders aim to scale their AI adoption in the coming months to generate sustainable value, it will be essential for their in-house compliance teams and lawyers to understand the EU AI Act’s impact on their business.
Adopting AI governance across all business functions enables adequate oversight of AI projects. A governance framework also enables the C-suite to make informed decisions on investment and scaling based on tangible metrics such as risk tolerance and complexity.
In many cases, a single AI technology can be applied in different use cases - each posing new challenges for governance. With the recent EU AI Act, executives have an opportunity to innovate safely within regulatory guardrails and, at the same time, establish a framework which can be adapted to different risk profiles.
Here are four key steps to get started.
Under the EU AI Act, AI systems are divided into various categories depending on the potential risks they may pose to the health, safety and fundamental rights of individuals. Developing an AI inventory and classifying each AI system's risks against the Act's taxonomy can help in-scope organisations understand the extent of their obligations under the law.
A risk classification of AI systems can also fast-track AI adoption by bringing into focus the priority areas where the business needs to take immediate action. Without a solid governance foundation, compliance teams may be unable to identify and mitigate the risks adequately.
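To make this concrete, here is a minimal sketch of what such an inventory could look like in code. The risk tiers, field names and example systems are illustrative assumptions, not an exhaustive reading of the Act's classification rules:

```python
from dataclasses import dataclass
from enum import Enum

# Simplified version of the EU AI Act's four risk tiers (illustrative only).
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory; fields are hypothetical examples."""
    name: str
    business_owner: str
    intended_purpose: str
    risk_tier: RiskTier
    immediate_action_required: bool

# Hypothetical inventory entries.
inventory = [
    AISystemRecord("CV screening tool", "HR", "rank job applicants",
                   RiskTier.HIGH, True),
    AISystemRecord("Support chatbot", "Customer Care", "answer product FAQs",
                   RiskTier.LIMITED, False),
]

# Surface the priority areas where the business must act first.
for record in sorted(inventory, key=lambda r: not r.immediate_action_required):
    print(f"{record.name}: {record.risk_tier.value}")
```

Even a lightweight structure like this makes it easier to surface the systems that warrant immediate attention and to keep the classification exercise consistent across business functions.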
A comprehensive regulatory analysis can help formulate business strategies as well as inform and prepare stakeholders for the upcoming changes to the organisation's processes.
The EU AI Act imposes specific requirements on organisations based on their role as providers, deployers, importers or distributors of AI systems and general-purpose AI models. In some cases, a business can act as both provider and deployer - for example, where it repurposes an existing AI system or places it on the market under its own trademark. It is therefore critical for businesses to assess their role in each specific AI use case, and map that role to the corresponding regulatory obligations.
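As an illustration of this mapping exercise, the sketch below pairs each operator role with a simplified, non-exhaustive set of obligation labels; these are placeholders for the Act's actual requirements, not legal advice:

```python
# Illustrative mapping of operator roles to simplified obligation labels.
ROLE_OBLIGATIONS: dict[str, list[str]] = {
    "provider": ["risk management system", "technical documentation",
                 "conformity assessment", "post-market monitoring"],
    "deployer": ["use per provider instructions", "human oversight",
                 "relevant input data", "incident reporting"],
    "importer": ["verify provider conformity before placing on the market"],
    "distributor": ["verify required markings and documentation"],
}

def obligations_for(roles: set[str]) -> set[str]:
    """Union of obligations across every role held for one use case."""
    return {ob for role in roles for ob in ROLE_OBLIGATIONS.get(role, [])}

# A business that repurposes an AI system it also deploys may be treated
# as both provider and deployer for that use case.
print(sorted(obligations_for({"provider", "deployer"})))
```

The point of the exercise is that obligations accumulate: holding two roles for the same use case means meeting the combined set of requirements, not choosing between them.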
In addition to the requirements of the EU AI Act, the use of AI within the EU may also trigger obligations under other legislation, such as the GDPR and the Digital Services Act. A number of regulatory authorities, including the EDPB and the French data protection supervisory authority (CNIL), have issued guidelines in recent months on the development and use of AI tools under GDPR requirements - all reaffirming that organisations should ensure their AI processes and procedures comply with the requirements of lawfulness, transparency and fairness.
The intersection of AI adoption and GDPR compliance has been an ongoing challenge for many organisations. Large language models (LLMs), which require vast amounts of data for training, have raised regulatory uncertainty over their alignment with GDPR principles such as data minimisation.
Following a temporary ban of OpenAI's ChatGPT in March 2023 (due, inter alia, to concerns over transparency and unlawful data collection) - and the subsequent lifting of the ban once OpenAI addressed the issues raised - the Italian supervisory authority notified the technology provider again earlier this year that its product is in breach of European data protection law.
More recently, a complaint was lodged against OpenAI with the Austrian supervisory authority based on ChatGPT's 'hallucination' risks, i.e. instances where the AI generates content that contradicts its sources or is factually incorrect. Such a risk is said to breach the GDPR's principle of data accuracy.
Meanwhile, Facebook and Instagram's parent company, Meta, has agreed to pause its plans to train its AI models on personal data from its platforms, following concerns expressed by the UK's Information Commissioner's Office.
Once regulatory risks have been identified and properly understood, implementing a holistic governance framework is central to mitigating such risks. Updating internal policies and procedures, identifying new channels of communication and reporting, and outlining specific roles and responsibilities aligned with the corporate structure will help stakeholders formalise ownership of specific risk mitigation actions and controls.
The governance framework should not be limited to internal processes only, but also address external-facing challenges such as customer transparency, third-party due diligence, and regulatory scrutiny. Without a holistic view of AI’s ongoing impact, organisations may not fully benefit from the competitive advantage that the technology has to offer.
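One practical way to formalise that ownership is a simple risk register linking each identified risk to a named control, owner and review cadence. The entries below are hypothetical examples, not prescribed content:

```python
# Hypothetical risk register entries; risks, controls, owners and review
# cadences are illustrative placeholders.
risk_register = [
    {"risk": "biased outputs in hiring model",
     "control": "pre-deployment bias testing",
     "owner": "Head of HR Analytics",
     "review": "quarterly"},
    {"risk": "opaque customer-facing chatbot",
     "control": "AI-use disclosure in chat interface",
     "owner": "Customer Experience Lead",
     "review": "per release"},
    {"risk": "third-party model supply chain",
     "control": "vendor due-diligence questionnaire",
     "owner": "Procurement",
     "review": "annually"},
]

# Flag entries with no named owner so gaps in accountability stay visible.
unowned = [entry["risk"] for entry in risk_register if not entry.get("owner")]
print("Risks without an owner:", unowned or "none")
```

However simple, a register like this turns abstract governance principles into a checkable artefact that internal audit, compliance and the C-suite can all work from.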
More and more employees are asking to use AI in their daily tasks, and management wants to move fast to capitalise on its benefits. However, staff are often unaware of the potential risks of AI - bias, misinformation and 'hallucinations' are just a few examples of where the technology can go wrong.
While some organisations have decided to block the use of AI technology completely, top performers are embracing change by implementing adequate training programmes for their employees. Helping staff understand the risks associated with AI, and defining clear actions to address those risks, will help individuals fully exploit AI's potential in their day-to-day work.
The EU AI Act also introduces a requirement for 'AI literacy': entities need to ensure that relevant stakeholders have the necessary skills, knowledge and understanding to make an informed deployment of AI systems. Organisations will need to comply with this obligation six months after the Act's entry into force, namely from 2 February 2025.