
Responsible AI

Building artificial intelligence (AI) trust in line with the National Fourth Industrial Revolution (4IR) Policy



 


4IR advancements are becoming more accessible, and their implications for business are evident, amplified by the COVID-19 pandemic. Their effects are felt by consumers, manufacturers, cities and potentially whole economies, primarily through productivity gains across sectors.

The recently released National Fourth Industrial Revolution (4IR) Policy identifies AI as one of the five foundational 4IR technologies, which present both great potential and significant risks.

There is a clear need for organisations to harness the power of AI in an ethical and responsible manner in order to realise its intended benefits.

What the National Fourth Industrial Revolution (4IR) Policy means for businesses


Committed to pursuing balanced, responsible and sustainable growth, the Policy underscores the need for the private sector to help champion change and execute meaningful, equitable initiatives that impact the rakyat (the people) and move the economy up the value chain.

With the growing emphasis on digital transformation within the private sector, organisations are in a unique position to not only adopt digital solutions, but to develop innovative businesses, functions, processes and infrastructure to address economic, social and environmental challenges. With their clout, there are opportunities for businesses to co-create and collaborate in new partnership models by leveraging 4IR platforms, ecosystems and digital marketplaces.

These are some of the ways to achieve the outcomes envisioned in the Policy by 2030, which include increasing investments in 4IR-enabling infrastructure and growing the number of homegrown 4IR technology providers.

Balancing the benefits and risks associated with AI

We observe the increasing applicability of AI across a range of industries such as financial services, healthcare and pharma, industrial products, retail and consumer, and telecom, media and tech, especially in the wake of the pandemic.

According to PwC’s 2021 AI Predictions survey, the top five AI applications cited as important in 2021 are:

  • Managing risk, fraud and cybersecurity threats

  • Improving AI ethics, explainability and bias detection

  • Helping employees make better decisions

  • Analysing scenarios using simulation modelling

  • Automating routine tasks

Getting the fundamentals of trust and accountability right

Organisations need to understand the risks of AI, and how these could, at a more macro level, impact society or result in ethical concerns if misused. Upskilling and training their employees to deploy AI responsibly is essential.

AI affects human beings on a deeply personal level: passengers in autonomous vehicles, for example, must trust their lives to a machine. Without proper safeguards, algorithms could perpetuate racial or gender bias, for instance in recruitment or customer service, or even influence political outcomes. Organisations will need to properly understand the frameworks and guidance on using AI responsibly.
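Bias detection of the kind described above can start with very simple checks. As a minimal sketch, one widely used screen is the disparate impact ratio: compare the selection rates a model produces for two groups, and flag ratios well below 1.0. The data and the 0.8 threshold (the commonly cited "four-fifths rule") are illustrative, not part of any specific PwC framework.

```python
# Minimal sketch: screening a hiring model's outcomes for group bias
# using the disparate impact ratio. All data here is hypothetical.

def selection_rate(decisions):
    """Fraction of positive (shortlisted) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's selection rate to group B's.
    The 'four-fifths rule' treats ratios under 0.8 as a warning sign."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected)
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # selection rate 0.2
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # selection rate 0.5

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.2 / 0.5 = 0.40
if ratio < 0.8:
    print("Warning: outcomes may disadvantage group A; review the model.")
```

A check like this is only a first screen; a low ratio warrants investigation of the underlying features and training data, not an automatic conclusion of bias.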


Our recommendations: Overcoming risks with Responsible AI

As organisations start to adopt AI, they need to be aware of certain barriers that may complicate technology implementation.

Being aware of emerging regulation governing the use of AI is only one part of mitigating risks. Organisations will also need to look inwards, challenge any silos in their approach to AI and data governance, and assess whether their workforce has the skills critical to AI adoption.

Here are three steps organisations can take to build greater trust in AI.

Take a multi-disciplinary approach to governance

To govern the use of AI, ensure that all stakeholders are involved. This means the team tasked with overseeing governance should comprise representatives from various areas of the business, including leadership, procurement, compliance, human resources, technology and data experts, and process owners from different functions.

If there is an existing governance structure in place, you may extend it by adopting a three lines of defence risk management model. 

Build up your AI risk confidence

Ensure that you have the right AI policies, standards, controls, tests and monitoring for all risk aspects of AI. 

A common AI playbook can serve as a ‘how to’ guide for approaching new AI initiatives and building trust in the technology. It can guide how you collaborate and discuss risks against your goals, while identifying the level of rigour required to address each risk based on its severity.
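One way a playbook's risk-tiering could be made concrete is a simple mapping from an initiative's risk severity to the minimum controls it must pass before deployment. The tier names and control lists below are hypothetical illustrations, not PwC's framework:

```python
# Hypothetical sketch of a playbook rule: map an AI initiative's risk
# severity to the minimum controls it must pass before deployment.
# Tier names and control lists are illustrative only.

CONTROLS_BY_SEVERITY = {
    "low":    ["documentation", "owner sign-off"],
    "medium": ["documentation", "owner sign-off", "bias testing"],
    "high":   ["documentation", "owner sign-off", "bias testing",
               "independent model validation", "ongoing monitoring"],
}

def required_controls(severity):
    """Return the minimum control set for a given risk severity."""
    try:
        return CONTROLS_BY_SEVERITY[severity]
    except KeyError:
        raise ValueError(f"Unknown severity: {severity!r}")

print(required_controls("high"))
```

Encoding the tiers this way keeps the rigour proportionate: low-risk pilots move quickly, while high-risk models must clear validation and monitoring gates.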

Act to maintain performance

Keep the momentum going as you familiarise yourself with AI and learn how to manage the risks. Observing good governance and risk management may not necessarily slow you down in this regard. The right level of explainability, for example, will depend on each AI model’s level of risk and required accuracy levels, allowing for quicker progress in some areas than others.
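To make "explainability" less abstract, here is a minimal sketch of one common model-agnostic technique, permutation importance: shuffle one feature at a time and measure how much prediction error grows; features whose shuffling hurts most matter most to the model. The toy data and least-squares "model" are assumptions for illustration.

```python
import numpy as np

# Sketch: permutation importance, a model-agnostic explainability check.
# Toy data: 200 samples, 3 features, where feature 2 is irrelevant by design.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([3.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# "Model": an ordinary least-squares fit
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    """Mean squared error of the linear model's predictions."""
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # break the feature's link to y
    importance.append(mse(Xp, y, w) - baseline)

print([round(v, 3) for v in importance])    # feature 0 should dominate
```

Cheap diagnostics like this may suffice for low-risk models; higher-risk applications typically warrant more rigorous explanation methods, which is exactly the proportionality point above.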

PwC’s Responsible AI Toolkit

You may be starting out on your AI journey, or you may need answers to your board’s questions on how AI is governed. Regardless of your maturity level, there is a way to navigate your journey so as to engender trust and inspire confidence in the technology among your stakeholders, both internal and external.

Find out how PwC’s Responsible AI Toolkit can enable and support the assessment and development of Responsible AI across your organisation. It is scalable and can be tailored to your unique business requirements and level of AI maturity, to help you develop transparent, explainable and ethical AI applications. 


Contact us

Michael Graham

Chief Digital Officer, PwC Malaysia

Tel: +60 (3) 2173 0234

Elaine Ng

Partner, Financial Services and Risk Assurance Services Leader, PwC Malaysia

Tel: +60 (12) 334 6243

Marina Che Mokhtar

Deals Partner, Economics and Policy, PwC Malaysia

Tel: +60 (3) 2173 1699

Khai Chiat Ong

Partner, Risk Assurance Services, PwC Malaysia

Tel: +60 (3) 2173 0358

Clarence Chan

Director, Risk Assurance Services, PwC Malaysia

Tel: +60 (3) 2173 0344

Nataraj Veeramani

Director, Assurance, PwC Malaysia

Tel: +60 (3) 2173 0897
