Several major cybersecurity incidents have affected life sciences companies and their partners in recent years. In PwC’s May 2019 Digital Trust Insights survey, however, only 51 percent of pharmaceutical and life sciences respondents say their cybersecurity team helped their business recover quickly from a significant attack and minimized potential losses in the last 12 months.
HRI recently talked with Emily Stapf and Nalneesh Gaur, cybersecurity partners at PwC, to better understand how companies need to prepare for cybersecurity incidents, respond to an ongoing cybersecurity incident, investigate a recent cybersecurity incident and think about cybersecurity as a business enabler and a competitive advantage. (Click here to read Part 2 of this interview.)
How do life sciences companies start thinking about assessing the cybersecurity capabilities of their partners? How do they ensure their partners aren’t going to lose their data?
You cannot simply review a single service and sign off on the security posture of all the services. You have to review the status of security across every service being provided to your organization.
HRI: When you're thinking about a cybersecurity incident that has happened, what does that post-incident response look like at a company?
Immediately following the technical incident, a company’s focus should be to ensure regulatory and contractual obligations are met and that a sound technical and strategic remediation plan is designed, committed to, funded and underway.
Once the company is on the road to recovery, the full incident response lifecycle ends with a postmortem and an honest reflection of lessons learned. It's a good opportunity to look back and say, “What should we have done differently? What will we do differently next time?”
HRI: How can a company prepare itself for this sort of situation in advance?
Emily Stapf: As for proactive planning, a good incident response plan should be documented and current, resources should be identified and the plan should be tested. As for the contents of the plan itself, there are a number of components in addition to the tactical who-does-what-when that shouldn’t be overlooked.
First, a plan needs to define roles and responsibilities from the outset. Who is on the incident response core team? Who is on the extended team? What are their roles and responsibilities? Who is the incident commander? Who is responsible for what tactical actions? Who is the decision-maker or tie-breaker? Who are your external partners? How do you engage this group?
Second, good incident response plans should include a severity matrix to triage the criticality of incident facts and circumstances. For example: How much data was lost? What kind of data was lost? Does anyone outside the company know about this? Have regulatory obligations been triggered? How long has this been going on? Is business impaired in any way? Higher severity incidents require rapid escalation, involve higher profile reporting, and often have more stringent consequences.
Third, good incident response plans have a defined escalation matrix that governs which triggering events, alone or in combination, require escalation to legal, the executive committee, the CEO or the board, in what manner, on what schedule, and at what level of detail. Not every incident makes it to the board room, but many absolutely do.
Nalneesh Gaur: Incident response plans come in various shapes and forms, and the most important thing is to practice them so you can work out what's practical.
We encourage companies to perform tabletop simulations. These simulations allow a company to practice how it would respond to a hypothetical situation and determine who needs to be involved. A lot of people think of a cybersecurity response as being a technology function, but when you do an enterprise-level response exercise you quickly realize it involves people beyond IT, such as marketing, legal and public relations.
HRI: What are some of the threat actors that these companies need to be concerned about?
Emily Stapf: There are really four categories. You mentioned one: nation-states, which are state-sponsored foreign entities that conduct hacks for economic, political or military advantage. These attacks are typically longer, more deliberate, slower-paced campaigns that oftentimes have a deeper impact because of the assets they target.
The second category is organized crime. These are syndicates that are typically after somewhat immediate and abrupt monetary gain. These are bank account hacks, credit card hacks, ATM heists, wire transaction interference, etc.
The third category is hacktivists. These people might be interested in influencing political outcomes or socioeconomic change. They target corporate secrets and sensitive business information, and many times seek brand and reputational damage. They often operate as lone-wolf actors and can therefore be very difficult to detect until they’ve been successful.
But the fourth category is very important because it’s the fastest growing, and that is the insider. An insider is any person who uses their authorized access for malicious gain. That could be an employee embezzling funds or stealing R&D data to be competitively monetized, or they could maliciously alter systems for revenge or spite. Insiders can also be third-party partners with trusted access to your assets.
Cybersecurity, Privacy & Forensics Integrated Solutions Leader, PwC US
Health Information Privacy and Security Practice, PwC US