When The Cyber Event Strikes Home

Over the years I’ve written about many possible cyber risks and their potential impacts on companies, their stakeholders, clients, and associates. We’ve talked about third-party risk management (TPRM) and the measures to be taken to protect, detect, remediate, and respond. Well, recently one of these events struck home, literally. The scenario, as described to me in a letter, and the actions that followed offer a clear and current illustration of many of these points. Perhaps, in retelling this experience, there is an opportunity to explore some issues, some actions to take and to avoid, and above all some lessons to learn as we enter another year working to manage cyber risk throughout our respective environments.

What Happened
It all started one rainy afternoon early in December (I live in the Pacific Northwest, so this is very common). We received a very formal-looking letter from a healthcare service provider announcing that “a recent data security incident …may have involved your personal information.” Swell. Reading on, I learned that a “law enforcement agency” had let this healthcare provider know that an unauthorized third party had accessed patient information on people receiving care between January 2014 and sometime in mid-July 2021. Really? That much data? Which data? How was this discovered? What has been done? How is the risk of it happening again being addressed? So many questions quickly raced through my mind. Read on.

According to the letter, sometime in July a cyberattack caused the company to be “briefly locked out of its servers.” OK, now this is beginning to sound like ransomware. Then I read that the company was quickly able to respond by restoring systems using off-site backups. Impressive, if completely true. And then they said they began “the process of rebuilding its IT infrastructure from the ground up” while adding “additional cybersecurity measures.” And they hired a cyber forensics firm to initiate an investigation. Hmmm. Does this sequence seem somewhat out of order? Regardless, these steps seem to represent an assertive response to the incident. But so far, you’ll note, the timeline says nothing about alerting patients, partners, or any other stakeholders. Remember, this activity all began after a July attack, and I received nothing until December. But there’s more.

In late October, the letter continues, the FBI notified this victimized healthcare provider that it had seized a hacker’s account containing patient files from their company. The FBI went on to state it believed the hacker had gained access to data by exploiting a vulnerability in the provider’s third-party firewall, letting the hacker into the provider’s network to retrieve encrypted data. So, there was an attack, the healthcare provider lost system access, they got it back, they remediated extensively, and THEN they were notified their third-party firewall had been breached! Great! So they replaced the firewall. Oh, and what data was “potentially involved”? Just patient name, address, dates of service, diagnosis and procedure codes with descriptions, medical record number, and medical insurance data (which could lead to discovery of a lot more personal information). How nice. While the attackers did not obtain financial or credit information directly, they sure have a lot to work with to put my identity at risk.

Their Response to a Client/Patient
To start, they offered 12 months of complimentary access to a credit bureau’s identity protection and monitoring service. They also recommended freezing credit files (done a long time ago) and included contact information for the three major credit bureaus. The letter also suggests we all be vigilant in monitoring our medical information and any strange emails or snail mailings related to our healthcare or taxes, and it explains what to do if we are contacted by the IRS because we may be victims of tax-related identity theft. Aside from these offered services and recommended actions, potential victims are left to their own means to monitor and defend their identity and data. Fortunately, so far as is known or has been disclosed, no financial information was directly involved. However, the data that was involved could surely support efforts to acquire it in the future.

An Illustration of Cyber Risk and Third Party Due Diligence
While this is neither a unique nor a particularly explosive story, there is much to learn from this example. Let’s unpack it and take a look. Sometime almost six months ago, the provider’s site was compromised through a vulnerability in a “third-party firewall.” By implication, this firewall was the property of the healthcare provider. When we have discussed third-party risk management (TPRM) in past articles, the attention has often focused on the third party’s actions, access, and practices. There is also a need to pay close attention to the means of enabling a partner or service provider. That firewall was part of the healthcare provider’s network infrastructure. It was a service connection for the third party. It was the provider’s responsibility to maintain, patch, and manage it actively, just as much as any other asset connecting to their internal domains. In particular, since this firewall was a planned entry point for external services, it should have had rigorous, closely watched monitoring. Clearly, that did not happen as well as it might have.
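What might “closely watched” look like in practice? Here is a minimal, purely illustrative sketch of the kind of check a planned third-party entry point deserves. The log path, line format, and management addresses are all assumptions on my part; in a real environment this telemetry would feed a SIEM with proper alerting rather than a standalone script.

```python
# Illustrative only: watch firewall admin activity at a third-party entry point.
# The log file, line format, and "expected" addresses below are hypothetical.
import re
from collections import Counter

LOG_PATH = "firewall.log"                     # hypothetical export of firewall events
ADMIN_LOGIN = re.compile(r"admin-login src=(?P<src>\S+) result=(?P<result>\w+)")
EXPECTED_ADMIN_SOURCES = {"10.0.5.10", "10.0.5.11"}  # assumed management hosts

failed_by_source = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = ADMIN_LOGIN.search(line)
        if not match:
            continue
        src, result = match.group("src"), match.group("result")
        if src not in EXPECTED_ADMIN_SOURCES:
            print(f"ALERT: admin login attempt from unexpected source {src}")
        if result.lower() == "failure":
            failed_by_source[src] += 1

# Flag repeated failures from any single source (threshold is arbitrary here).
for src, count in failed_by_source.items():
    if count >= 5:
        print(f"ALERT: {count} failed admin logins from {src}")
```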

Next, we learn that the company was “briefly locked out of its servers.” That doesn’t describe a breach so much as a takedown or lockout, typical of ransomware. For how long is not noted. But the consequence for a healthcare provider is that they had no access to patient records and did not know anything about patient history, current medications, treatment under way, diagnoses, or anything else. Potentially, lives were at risk. Certainly, medical attention for anyone under care during this unspecified time was significantly disrupted. The potential longer-term risk for other patients who required active monitoring simply compounds matters. Meanwhile, the company says they were quickly able to respond by restoring systems using off-site backups. Restored backups of what? Everything? Databases? Servers? Routers and other devices? All their user access devices? How much data was lost? What was the recovery time objective (RTO), and how much and what kind of data did they need to restore? At this point they say they had yet to engage any external forensic assistance. They were concerned with service restoration, which, from a practical “risk to life” perspective, makes some real sense. I must admit that if they had valid off-site backups that really worked, they represent a very small minority of firms of any size. Kudos for that, at least! Having regular backups, testing them for validity and reliability, and integrating those practices into normal operating procedures is an important risk management measure often minimized by many IT and operations areas.
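Testing backups for validity doesn’t have to be elaborate to be worth doing. Below is a minimal sketch of one routine verification pass; the manifest format, paths, and freshness threshold are assumptions for illustration, and this kind of check complements, rather than replaces, periodic trial restores.

```python
# A minimal sketch of routine backup verification, not a full restore test.
# The manifest format, paths, and age threshold are hypothetical.
import hashlib
import json
import os
import time

MANIFEST = "backup_manifest.json"    # hypothetical: {"relative/path": "sha256hex", ...}
BACKUP_ROOT = "/mnt/offsite/latest"  # hypothetical mount of the off-site copy
MAX_AGE_HOURS = 24                   # assumed recovery point for this sketch

with open(MANIFEST, encoding="utf-8") as f:
    expected = json.load(f)

problems = []
for rel_path, expected_sha in expected.items():
    full_path = os.path.join(BACKUP_ROOT, rel_path)
    if not os.path.exists(full_path):
        problems.append(f"missing: {rel_path}")
        continue
    with open(full_path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    if digest != expected_sha:
        problems.append(f"checksum mismatch: {rel_path}")
    age_hours = (time.time() - os.path.getmtime(full_path)) / 3600
    if age_hours > MAX_AGE_HOURS:
        problems.append(f"stale ({age_hours:.0f}h old): {rel_path}")

print("backup verification:", "OK" if not problems else f"{len(problems)} issue(s)")
for p in problems:
    print(" -", p)
```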

Now the company says they immediately began “the process of rebuilding its IT infrastructure from the ground up” while adding additional cybersecurity measures. This is rather vague in specifics while far-reaching in the scope implied. What did this healthcare provider learn from the experience that motivated such significant action? If things were so bad that rebuilding the IT infrastructure was indeed necessary, why had it been left undone until this incident? One possibility is that forensics found traces of so many issues that the best course of remediation was determined to be a clean wipe of everything and a complete restructuring. But so far nothing has been reported anywhere, nor have any patients or stakeholders outside those involved with these remediation activities been notified of anything.

It’s not until the FBI takes note of discovered data on a hacker’s site traceable to the healthcare provider that patients finally are made aware of something, months later! Had that not happened, it’s questionable whether the letter would have been drafted and distributed, and the whole incident might have been treated as an “internally managed incident” going forward. And it was the FBI that noted the root cause of the event was likely an exploit of that third-party firewall vulnerability. And then the firewall was replaced, probably (hopefully) by a smarter device with richer capabilities and more rigorous monitoring. So much for a ground-up rebuild of infrastructure.

Handling Incidents When Customer Data May Be Compromised
Incident management is an important component of any risk management program, and for cyber-related incidents in particular. First, of course, is establishing the ability to detect and identify suspicious activity, along with the resources to operate those processes. Once found, the event needs to be contained and halted, and evidence preserved for forensic review. Determining the root cause, identifying the malicious actor, and establishing the scope of damage or data compromise is key. Also, keep in mind a skilled intruder seeking data does not remove that data; they copy it! Doing so makes it harder to learn what’s been compromised and gives the perpetrator more time to make use of it without detection. Nothing is obviously missing, so the trail of a copy-and-save action must be found, something more difficult and time-consuming than spotting an obvious extraction.
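One practical way to surface that copy-style theft is to watch for data leaving the network in volumes that don’t match a host’s normal behavior. The sketch below is purely illustrative: the hosts, flow figures, and threshold are hypothetical, and real detection would draw on flow logs or data loss prevention tooling rather than hard-coded numbers.

```python
# A rough sketch of surfacing "copy, not remove" behavior: compare each host's
# outbound volume against its own recent baseline. All values are hypothetical.
from statistics import mean, stdev

# Hypothetical daily outbound bytes per host (e.g., summarized from flow logs).
history = {
    "db-server-01": [2.1e9, 1.9e9, 2.3e9, 2.0e9, 2.2e9],
    "app-server-02": [4.0e8, 3.8e8, 4.1e8, 3.9e8, 4.2e8],
}
today = {"db-server-01": 9.7e9, "app-server-02": 4.0e8}  # hypothetical readings

for host, past in history.items():
    baseline, spread = mean(past), stdev(past)
    observed = today.get(host, 0.0)
    if observed > baseline + 3 * spread:  # arbitrary sensitivity for the sketch
        print(f"ALERT: {host} sent {observed/1e9:.1f} GB today "
              f"vs. a {baseline/1e9:.1f} GB baseline")
```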

Remediating the problem may seem the right next step. And that’s certainly important, but it is often embedded in a somewhat longer timeline than client notification regulations allow. Once data compromise has been detected and confirmed, it’s wise to come forth and alert customers, clients, and key stakeholders who may be impacted. It’s not just the right thing to do; many states have passed legislation setting strict timeframes for disclosure once an incident takes place. Regulatory compliance takes some precedence here. Regardless, it’s never a good practice to wait months beyond identification of an incident and determination of its scope to begin notifying potentially impacted clients and customers. Even if your remediation or assistance plans for them remain to be finalized, alerting them to be aware of possible misuse of their data cannot happen too soon. Doing so does more to promote a positive response from your stakeholders than delays, cloudy admissions, and soft denials. Transparency and integrity promote trust and confidence, valuable assets that are fragile if even slightly questioned.

How Your GRC Can Help
Trend management is a useful practice regarding incidents, though the practice sometimes goes astray. This is where having integrated modules in a GRC tool really helps. If you are using it to document incident-related activity, including incident detection details, scope, remediation plans, project tracking, and root cause determination, then you have a single source for all the current and past details regarding incidents. Is there a specific category of controls or practices that always seems to be the source of incidents? Are incidents largely concentrated in one operating process or area? Do they always happen at a particular time or have some other attribute in common? Researching the answers can often yield a good deal of useful information to help you pinpoint vulnerable areas of your security and risk management processes, weak or poorly implemented controls, user training opportunities, and gaps in the program, making remediation and hardening more precise, efficient, and economical. The answers are actionable information that can inform leadership so they can assign resources to support these efforts.
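To make the idea concrete, here is a small sketch of the kind of question a consolidated incident record makes trivial to ask. The records and field names are invented for illustration; in practice they would come from your GRC tool’s incident module or its export.

```python
# Illustrative trend question over a consolidated incident record.
# The incident records and field names below are hypothetical.
from collections import Counter

incidents = [
    {"opened": "2021-03", "control_area": "patch management", "process": "claims"},
    {"opened": "2021-06", "control_area": "third-party access", "process": "billing"},
    {"opened": "2021-07", "control_area": "third-party access", "process": "billing"},
    {"opened": "2021-11", "control_area": "phishing awareness", "process": "intake"},
]

by_control = Counter(i["control_area"] for i in incidents)
by_process = Counter(i["process"] for i in incidents)

print("Incidents by control area:")
for area, count in by_control.most_common():
    print(f"  {area}: {count}")

print("Incidents by operating process:")
for proc, count in by_process.most_common():
    print(f"  {proc}: {count}")
```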

Often, companies try to keep access to all incident information highly restricted, citing either legal guidance or the need for tighter security around evidence of vulnerabilities. The problem with creating these “special caches” of incident information is that too often they exclude people who need the information to craft truly meaningful remediation. A patch here and an update there may seem like actions that can be applied in a vacuum to address a problem; they are rarely adequate in scope. Equally important, these separate caches defeat any ability to identify trends over time, and they are often managed by “senior” people who may lack the technical skills to thoroughly protect the data, produce and maintain forensic backups, and otherwise ensure the privileged security intended. Candidly, if you don’t have people whose discretion you can trust, you have a much bigger problem. Well-managed GRC tools often provide the level of granular security needed to restrict access to this information, provide discrete data control, and support the handling you require, while preserving the ability to identify and monitor the trends noted above.

This is also a place where having tightly integrated, embedded visual analytical tools, such as business intelligence software, can really help. When information is arrayed in a variety of chart types and viewed from different perspectives, patterns and trends that are not obvious in tabular or narrative data can become clear and specific.
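As a simple illustration, the sketch below charts the same sort of incident data by month rather than reading it as a table. It assumes matplotlib is available and the dates are hypothetical; a BI layer embedded in your GRC tool would do this interactively.

```python
# Illustrative only: chart monthly incident counts instead of scanning a table.
# Assumes matplotlib is installed; the dates are hypothetical.
from collections import Counter
import matplotlib.pyplot as plt

incident_months = ["2021-03", "2021-06", "2021-07", "2021-07", "2021-11", "2021-12"]
counts = Counter(incident_months)
months = sorted(counts)

plt.bar(months, [counts[m] for m in months])
plt.title("Incidents opened per month")
plt.xlabel("Month")
plt.ylabel("Count")
plt.tight_layout()
plt.savefig("incident_trend.png")  # writes the chart to a file; plt.show() also works
```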

Some Final Thoughts
This could have happened to anyone, any time, anywhere. We read about such incidents in the news on an accelerating basis as the years go by. We all need to be vigilant. Companies need to prepare for the likely, if not inevitable, incident and have clearly written, communicated plans in which all critical actors know their roles and can respond quickly. That includes a communications plan for leadership, stakeholders, and anyone impacted by the incident. Also, data about incidents needs to be tracked so that any trends or commonalities may be used to direct corrective action in your risk management practices, whether through training, process changes, revised or new controls, or whatever else may be needed to mitigate a recurring vulnerability. This is a place where your GRC may be an indispensable tool for progress and success in the months and years to come. Use it to maintain a single authoritative source for risk data, including remediation project tracking, analysis, and communication of risk status for all company stakeholders.

Welcome to 2022. Be vigilant, be aware, and be safe.

About the Author:
Simon Goldstein is an accomplished senior executive blending technology and business expertise to formulate, impact, and achieve corporate strategies. A retired senior manager of Accenture’s IT Security and Risk Management practice, he has achieved results through the creation of customer value, business growth, and collaboration. An experienced change agent with primary experience in the financial, technology, and retail industries, he has led efforts to achieve ISO 2700x certification and HIPAA compliance, and has held CRISC, CISM, and CISA credentials.
