Quick Q&A: On pressing cyber issues for life sciences companies


May 10, 2019

Several major cybersecurity incidents have affected life sciences companies and their partners in recent years. In PwC’s May 2019 Digital Trust Insights survey, however, only 51 percent of pharmaceutical and life sciences respondents say their cybersecurity team helped their business recover quickly from a significant attack and minimized potential losses in the last 12 months.

HRI recently talked with Emily Stapf and Nalneesh Gaur, cybersecurity partners at PwC, to better understand how companies need to prepare for cybersecurity incidents, respond to an ongoing cybersecurity incident, investigate a recent cybersecurity incident and think about cybersecurity as a business enabler and a competitive advantage. (Click here to read Part 2 of this interview.)

PwC Health Research Institute (HRI): How do life sciences companies start thinking about assessing the cybersecurity capabilities of their partners? How do they ensure their partners aren’t going to lose their data?

Nalneesh Gaur, PwC Partner:

Third-party risk management includes everything from contractually obligating a partner to notify you of a cybersecurity incident in a timely manner, to the right to audit that company, to continuous monitoring.

If you're a pharmaceutical company and you're working with a clinical research organization, it's one thing to do an assessment upfront to figure out which security capabilities are in place, but you need to follow that up with periodic assessments. Continuous monitoring is what we like to see in these relationships, so we know whether the security is getting better, staying the same or getting worse.

Another point that I think is really critical is that a given pharma company will establish multiple relationships over time. And when you look at the scope of the services, they likely have different processes and different technologies that are part of that scope. You cannot simply rely on the review of a single service and sign off on the security posture for all the services. You have to review the status of security across all the services that are being provided to your organization.


HRI: When you're thinking about a cybersecurity incident that has happened, what does that post-incident response look like at a company?

Emily Stapf, PwC Partner:

Immediately following the technical incident, a company’s focus should be to ensure regulatory and contractual obligations are met and that a sound technical and strategic remediation plan is designed, committed to, funded and underway.

Once the company is on the road to recovery, the full incident response lifecycle ends with a postmortem and an honest reflection of lessons learned. It's a good opportunity to look back and say, “What should we have done differently? What will we do differently next time?”

HRI: How can a company prepare itself for this sort of situation in advance?

Emily Stapf: As for proactive planning, a good incident response plan should be documented and current, resources should be identified and the plan should be tested. As for the contents of the plan itself, there are a number of components in addition to the tactical who-does-what-when that shouldn’t be overlooked.

First, a plan needs to define roles and responsibilities from the outset. Who is on the incident response core team? Who is on the extended team? What are their roles and responsibilities? Who is the incident commander? Who is responsible for what tactical actions? Who is the decision-maker or tie-breaker? Who are your external partners? How do you engage this group?

Second, good incident response plans should include a severity matrix to triage the criticality of an incident based on its facts and circumstances. For example: how much data was lost, what kind of data was lost, does anyone outside the company know about this, have regulatory obligations been triggered, how long has this been going on, is the business impaired in any way? Higher-severity incidents require rapid escalation, involve higher-profile reporting and often have more stringent consequences.

Third, good incident response plans define an escalation matrix that governs which triggering events, alone or in combination, require escalation to Legal, the Executive Committee, the CEO or the Board, in what manner, on what schedule and at what level of detail. Not every incident makes it to the boardroom, but many absolutely do.
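For readers who want to see these two ideas in concrete form, here is a minimal, hypothetical sketch of how a severity matrix and an escalation matrix might be encoded. The incident attributes, thresholds and escalation tiers below are illustrative assumptions for this example only, not prescriptions from the interview; a real plan would define them with legal, compliance and business stakeholders.

```python
# Hypothetical sketch: severity triage plus an escalation matrix.
# All fields, thresholds and tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IncidentFacts:
    records_lost: int        # how much data was lost
    regulated_data: bool     # e.g., data that triggers regulatory obligations
    publicly_known: bool     # does anyone outside the company know about this?
    business_impaired: bool  # is the business impaired in any way?

def severity(facts: IncidentFacts) -> str:
    """Triage incident facts into a severity tier (illustrative thresholds)."""
    if facts.regulated_data or facts.publicly_known or facts.records_lost > 100_000:
        return "critical"
    if facts.business_impaired or facts.records_lost > 1_000:
        return "high"
    return "moderate"

# Escalation matrix: which severity tier is reported to whom, and how quickly.
ESCALATION = {
    "critical": ("Legal, Executive Committee, CEO and Board", "immediately"),
    "high":     ("Legal and Executive Committee", "within 24 hours"),
    "moderate": ("CISO and incident response core team", "next business day"),
}

if __name__ == "__main__":
    facts = IncidentFacts(records_lost=250_000, regulated_data=True,
                          publicly_known=False, business_impaired=True)
    tier = severity(facts)
    audience, timing = ESCALATION[tier]
    print(f"Severity: {tier}; escalate to {audience} {timing}.")
```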

Nalneesh Gaur: Incident response plans come in various shapes and forms, and the most important thing is to practice them so you can work out what's practical.

We encourage companies to perform tabletop simulations. These simulations allow a company to practice how it would respond to a hypothetical situation and who needs to be involved. A lot of people think of a cybersecurity response as a technology function, but when you do an enterprise-level response exercise you quickly realize it involves people beyond IT, such as marketing, legal and public relations.


HRI: What are some of the threat actors that these companies need to be concerned about?

Emily Stapf: There are really four categories. The first is nation-states: state-sponsored foreign entities that conduct hacks for economic, political or military advantage. These attacks are typically longer, slower, more deliberate campaigns that often have a deeper impact because of the assets they target.

The second category is organized crime. These are syndicates that are typically after immediate monetary gain: bank account hacks, credit card hacks, ATM heists, wire transaction interference and the like.

The third category is hacktivists. These actors might be interested in influencing political outcomes or socioeconomic change. They target corporate secrets and sensitive business information, and many times seek brand and reputational damage. They often operate as lone-wolf actors and can therefore be very difficult to detect until they’ve been successful.

But the fourth category is very important because it’s the fastest growing, and that is the insider. An insider is any person who uses their authorized access for malicious gain. That could be an employee embezzling funds or stealing R&D data to be competitively monetized, or they could maliciously alter systems for revenge or spite. Insiders can also be third-party partners with trusted access to your assets.

This conversation continues. Click here to read Part 2.

For more of HRI’s insights and content, visit our Regulatory Center and report library.

Contact us

Emily Stapf

Cybersecurity, Privacy & Forensics Integrated Solutions Leader, PwC US

Nalneesh Gaur

Health Information Privacy and Security Practice, PwC US
