Responsible by design: How your organization can embrace GenAI without risking your reputation

Almost every business leader wants to know how they can use generative artificial intelligence (GenAI) to increase the efficiency of their organization and people. But as with any innovation, before you forge ahead, it’s important to take the time to understand what GenAI is and how you can implement it responsibly to create business value without risking your reputation.

Understanding GenAI’s value and limitations

GenAI may not be the answer to every problem in your organization where AI could be useful: it has clear value, but it also has limitations. GenAI is most effective when you’re looking to produce directionally accurate results (e.g. a strong first draft) rather than something exact. For example, GenAI can be a great tool for supporting certain processes (e.g. sales). You could save significant time by using it to generate the first draft of a sales plan, which you could then edit and supplement with personal insights. You could also use GenAI to help efficiently answer questions your employees might have on internal policies or procedures.
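
To make the first-draft use case concrete, here’s a minimal Python sketch. It assumes the OpenAI Python SDK and an illustrative model name and prompt, none of which are specified in this article; any approved GenAI provider could play the same role. The output is a starting draft to be edited and supplemented with your own insight, not a finished plan.

```python
# Minimal sketch: ask a GenAI model for the *first draft* of a sales plan.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your organization has approved
    messages=[
        {
            "role": "system",
            "content": "You draft sales plans. Produce a structured first draft only; "
                       "a human reviewer will edit and verify it before it is used.",
        },
        {
            "role": "user",
            "content": "Draft a one-page Q3 sales plan for a mid-sized software firm "
                       "entering the Canadian retail market.",
        },
    ],
)

# Directionally useful, not exact: treat this as a draft to review, not a final plan.
print(response.choices[0].message.content)
```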

By contrast, GenAI isn’t the right choice when you need verifiable results, such as content for your financial report. Most GenAI solutions operate as black boxes, making it effectively impossible to explain how and why specific outputs were generated. In these cases, you should consider other AI techniques whose decisions you can explain to a reasonable degree (e.g. random forest).

Most conventional machine learning (ML) models are explainable because the number of model features is small enough that you can understand and explain which ones drive a decision. In deep learning (the technique underlying GenAI), the number of features can run into the hundreds of thousands, making such models effectively impossible to explain. The large language models (LLMs) that power GenAI applications likewise operate as black boxes.
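
To make the contrast concrete, here’s a minimal Python sketch of the explainability a conventional ML model offers. It assumes scikit-learn and its bundled breast-cancer dataset, neither of which is referenced in this article; the point is simply that a random forest can tell you which features drive its decisions, a view an LLM with hundreds of thousands of features can’t provide.

```python
# Minimal sketch (assuming scikit-learn): a random forest trained on a small
# tabular dataset can report which features drive its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# A public dataset with roughly 30 named features.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

# Train a conventional, explainable model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank features by their contribution to the model's decisions.
ranked = sorted(
    zip(feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Running this prints the handful of features the model leans on most heavily, which is the kind of reasoning you can show an auditor or regulator; there’s no equivalent readout for a GenAI model.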

Understanding the risks

If you’re looking to embrace GenAI, it’s critical that you don’t view it as a tech-only decision or process. If you consider GenAI in a vacuum, you could overlook important risks related to ethics, people and culture. While many of the risks associated with GenAI are similar to the risks related to other forms of AI, there are four unique risks your organization should be aware of:

  • Ethical risks: The use of private customer or employee data to generate content can create ethical dilemmas and put your reputation at risk.
  • Trust risks: If your data sources aren’t reliable, robust, accurate and unbiased, any outputs generated could be problematic and erode the trust of your stakeholders. In the case of GenAI, the use of LLMs trained on data sets that aren’t readily available can create significant trust risks.
  • Hallucination risks: GenAI can readily generate convincing outputs that are inaccurate or that reinforce particular points of view without factual grounding. This can be a major risk if you aren’t aware of this possibility and take generated outputs at face value.
  • User risks: GenAI can be an excellent tool, but if your employees aren’t taught to understand when and how it should and shouldn’t be used, your organization could face regulatory, legal and reputational risks.

Using GenAI responsibly: Where to start

The best time to consider responsible AI (RAI) practices is at the very beginning of your GenAI journey. By taking a holistic, multi-dimensional view of GenAI from the get-go, you can better define opportunities, understand risks and enhance or build any missing guardrails so you’re well positioned to use GenAI tools responsibly moving forward. We call this approach RAI by design.

What does RAI by design look like in action? Below, we outline the four key pillars of our Responsible AI Framework, which you can use as a foundation for designing a responsible approach to GenAI for your organization, along with key questions to think about as you move forward.

1. Strategy

Define what the ethical use of data means for your organization (e.g. what data can and can’t be used) and embed it into your GenAI strategy. Consider how your GenAI approach aligns with your organizational values and any regulatory and compliance requirements. 

Key questions

  • How can we use GenAI in our organization to drive operational, customer or employee value while prioritizing data privacy and ethics?
  • What current or future regulations do we need to be aware of when designing our GenAI strategy?

2. Control

Consider your governance, compliance and risk management practices as they relate to GenAI. This can help you assess what guardrails you have, need or must enhance to make sure you’re using GenAI responsibly and that you have the controls in place to mitigate risks.

Key questions

  • What controls do we have or need to make sure we’re managing GenAI risks effectively?

  • Is our GenAI strategy in compliance with internal policies, industry regulations and privacy laws?

3. Responsible practices

Develop clear and transparent guidelines for how GenAI should be used in your organization. Consider factors like explainability, robustness, bias and fairness, originality, security, privacy and safety.

Key questions

  • Do we have clear guidelines for how GenAI should be used?

  • What training do our employees need so they can use GenAI effectively and safely?

  • How will we validate outputs to avoid hallucinations or misinformation?

4. Core practices

If you don’t have them in place already, develop overarching core practices your organization can use to assess potential use cases and determine what type of AI to use (e.g. GenAI or conventional ML). These core practices should include ways to evaluate the problems you’re trying to solve, identify relevant industry standards and best practices, evaluate and improve AI performance, and monitor for innovations, updated techniques and changes to regulations (e.g. Bill C-27).

Key questions

  • Is GenAI the best form of AI to use in this situation? 

  • How will we evaluate and improve our GenAI model over time?

  • How will we monitor innovations, risks, regulations and trends so we can adjust as needed?

One tool in your AI toolkit

GenAI can be a fantastic tool for making your organization more efficient and effective, but only if you know when to use it—and when not to—and how to use it responsibly. The best way to do this is to design your GenAI approach to be responsible from the start.

We bring together a community of solvers to tackle our clients’ biggest challenges. Let’s keep the conversation going.


Contact us

Annie Veillet
Partner, Data Analytics and AI, PwC Canada
Tel: +1 514 205 5146

Bahar Sateli
AI, Cloud & Data Lead, PwC Canada

Jordan Prokopy
National Data Trust & Privacy Practice Leader, PwC Canada
Tel: +1 416 869 2384
