Once upon a time…
Some of us, those with mostly grey hair, may recall days without mobile phones, notebook computers, or even desktop devices. I know, I know: we were all chased by dinosaurs to school, uphill, both ways, while hauling bookbags bursting with textbooks and homework, in blizzards… I get it. But our major businesses did have data centers running bigger iron, and we connected to those machines by cable from our offices or through telecom services. Data storage and applications were physically centralized, with only a presentation layer present on the remote “terminals” we used to interact with application software. Security focused primarily on protecting the central facility, both physically and logically; at the end user’s point of presence, only access authentication to applications and data was a concern.
Roll forward 40+ years. Today we use “smart” mobile devices to connect remotely, and often wirelessly, to aggregation points providing data and application services, i.e., the “cloud”. We use web browsers or “apps” that provide only a thin layer of user interface and localized processing atop the robust centralized functions in the cloud. Security appliances, data stores, and application servers are virtualized in operation, but somewhere there is still a centralized repository of hardware running software. The physical data center is still present: newer, vastly more powerful, but still a physical presence. Yes, there are many detailed differences, but largely, in concept, we have today an updated, refined, and more facile implementation of centralized processing accessed by remote display and input devices; a model as old as green-screen terminals and centralized data centers running tape drives, mainframes, minis, and the like. What once was new is, in updated form, new again.
Cyber security was then in its infancy. For the most part it focused on physical access controls, authentication, access permissions and entitlements, and backup and recovery. Threats such as spear phishing, social engineering, device hacking, man-in-the-middle attacks, ransomware, and identity theft were not top of mind for security; in some cases, they were not even possible yet. As remote point-of-presence devices became “smarter” and communications options grew in number and ubiquity, this began to change, dramatically. Development of stable, reliable machine virtualization brought cost efficiencies and resource flexibility to our enterprises, while adding to the pile of vulnerabilities and threat vectors requiring serious attention. The enterprise was growing ever more powerful and flexible… and vulnerable. Adding cloud computing extended both sides of the equation. GRC tools were just beginning to peek over the horizon, but were perceived as necessary only for those pioneering, very large enterprise early adopters with cash and other resources to burn. The rest of us plodded along in (hopefully) ignorant bliss. All the while those old basics from long ago remained important but received uneven allocations of resources, management, and technical attention. The picking opportunities for malicious actors continued to ripen.
We’ve Changed, So Have The Threats, and Malicious Opportunities
Today’s business processes rely far less upon paper records or practices resulting from direct human interaction. We’re well into digital transformation. This has been a mixed blessing. We have gained great operating efficiencies and the ability to acquire, absorb, and utilize far more information in the service of customers and clients. But in our rush to develop these capabilities, and to implement them to our advantage and our customers’, we have often set aside thoughtful consideration of the risks, vulnerabilities, and potential for harm from threats enabled by our technological migration. Security, in its new extended sense, was often an afterthought, characterized as an impediment to progress. That migration is now accelerating and diversifying as more devices become “smart” and opportunities to connect them approach ubiquity. Amazon Sidewalk offers to create a mesh from our home Wi-Fi signals, and late-model cars offer optional digital hot spots to extend our connections. Most late-model vehicles have some measure of exposure to connectivity, as do our traffic monitoring and control systems, home appliances, access features, HVAC, grounds management systems, and lighting, the whole suite of IoT. These smart “things” are all access points to our business infrastructures when remote workers connect from home, or coffee shops, or anywhere wireless communications allow. We are rapidly creating a tightly woven fabric of devices and connectivity that makes almost everything a potential entry point to whatever services, companies, and practices any user engages. This cyber blanket offers so many potential opportunities to malicious actors that keeping it “clean” has become a challenge for even the strongest cyber security “detergent”. It also blurs the distinctions and boundaries among homes, companies, industry, government, and educational and medical services. We are becoming dwellers in one huge sea of expanding data.
In this connected world, cyber threat actors can acquire sophisticated off-the-shelf tools from dark web sites. They no longer need to be technologically savvy; that barrier has fallen. They have also learned to repurpose traditional cyber tools used for vulnerability diagnosis, such as penetration testing tools, to discover and exploit network weaknesses in target organizations. Spear phishing, leveraging email, is another tactic that remains popular because it works. And supply chain targeting remains a favorite, often incorporating “island-hopping” approaches, where smaller firms are targeted to bypass the protections of their larger, richer customers and partners. These openings create opportunities to launch ransomware attacks, which, unsurprisingly, have recently grown dramatically in frequency and impact, with annual losses now estimated globally in the many billions. This is a far cry from the first ransomware attack, launched in 1989 using floppy disks.
While Responses Remain A Mix Of Old And New
Insurance to recover losses from cyber attacks, ransomware in particular, has become a growing strategy for dealing with cyber risks. However, as these attacks grow in frequency and in the impact of resulting losses, some insurers are raising premiums dramatically, requiring proof of costly preventive provisions, or eliminating coverage for such incidents altogether. Insurance isn’t a new strategy for transferring risk, but it may become a fiscally questionable one in some cases. This means the traditional risk management tasks from our data-center-centric days remain relevant: controls addressing physical access, user authentication, entitlements, backup and recovery, continuity, and the processes in place to manage each. But now, with advances in communications, mobile platforms, operating systems, application administration, and monitoring and detection software, there are many more potential points of contact, extended operating footprints and geographic presence, and more complex business relationships, all of which add complexity to the oversight of threats, the detection of vulnerabilities and attacks, and the application of risk management practices to address them.
Asset lists that were once maintained on ledger paper, then mainframe printouts, then spreadsheets now require relational databases to record physical and logical devices, relevant metadata, and current status. Continuity plans may be local in some regards, geographically diverse in scope, and even international through the inclusion of partners and suppliers. Regulations and compliance requirements grow more complex with the number of considerations and the diverse laws and customs in play. Partners may reside anywhere; they may exchange client and product information, fulfill orders, house excess inventory, provide specialized skills or component manufacturing, or even provide critical infrastructure processing through cloud-based services. Identifying risk, detecting threats, understanding and managing how partners and suppliers of all sizes and significance interact with your company, and finally designing, building, and implementing a comprehensive risk management strategy can no longer be done on paper, in lists, spreadsheets, or other simple tools. The dimensions of diversity, interaction, and complexity that now overlay the foundational core practices from the early years of automation require more sophisticated, nuanced, and powerful tools to support effective cyber risk management.
Effective cyber risk management today needs to consider data from a variety of sources and source types. There is test data from monitoring devices, penetration tests, intrusion detection, file integrity, mobile device management, and data loss prevention systems. There are the findings of regulators, internal and external audits, incident reports, and other investigative practices. There is the data that provides performance metrics and defect behavior from operational and technical practices, including findings from tabletop business continuity tests, backup recovery tests, end-user training metrics, failed login metrics, and any others resulting from physical security controls. Additionally, there are the reports, research, and guidance of agencies that monitor threat trends and their characteristics, practices, and technical attributes. This recitation alone is exhausting, and it’s far from complete. This is why automated tools that help address all these data sources, organize them into useful information, and enable informed decision making and action are essential. The alternative, employing ever-growing numbers of qualified staff (if you could find and afford them), would be prohibitively expensive and unscalable.
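To make the aggregation problem concrete, here is a minimal sketch of normalizing two of these feeds into one common schema so they can be ranked together. The source names, fields, and severity mappings are illustrative assumptions, not features of any particular GRC product or scanner.

```python
from dataclasses import dataclass

# Hypothetical, simplified schema; real GRC platforms use far richer models.
@dataclass
class Finding:
    source: str       # e.g., "pen_test", "audit"
    asset: str        # affected system or process
    severity: int     # normalized 1 (low) .. 5 (critical)
    description: str

def normalize_pen_test(row: dict) -> Finding:
    # Map a pen-test tool's CVSS-style score (0-10) onto the common 1-5 scale.
    return Finding("pen_test", row["host"], max(1, round(row["cvss"] / 2)), row["issue"])

def normalize_audit(row: dict) -> Finding:
    # Map an audit rating ("low"/"medium"/"high") onto the same 1-5 scale.
    scale = {"low": 1, "medium": 3, "high": 5}
    return Finding("audit", row["scope"], scale[row["rating"]], row["finding"])

# Illustrative rows standing in for two very different data feeds.
pen_rows = [{"host": "web-01", "cvss": 9.8, "issue": "Unpatched TLS library"}]
audit_rows = [{"scope": "backup process", "rating": "medium", "finding": "Restore test overdue"}]

# Aggregate both feeds into one prioritized view.
findings = [normalize_pen_test(r) for r in pen_rows] + [normalize_audit(r) for r in audit_rows]
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f.severity, f.source, f.asset, f.description)
```

The point is not the mapping arithmetic but the pattern: each feed gets its own small normalizer, and everything downstream (reporting, prioritization, dashboards) works against one schema.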
Coping With Complexity
We already have models and metaphors for systems and tools that help us organize and operate with ready awareness of critical performance metrics, presented in easy-to-understand and context-relevant form. We drive cars, and along the way we know how fast we are going and whether our engine oil is running hot against a warning range. We know how much fuel or charge remains and our range; we know what radio channel we’re listening to, the internal (and often external) temperature, whether our headlights are on high or low beam, and so much more. In recent years we’ve added warnings for blind-spot traffic and unseen objects behind us, and automatic braking for obstacles fast approaching ahead. Wipers sense rain and respond on their own. Taken together, we operate on a combination of old and new information to drive somewhere with ever-increasing speed and safety, even though our environment and traffic conditions are much more complex than 25 years ago.
Today you look for the equivalent of your car’s dashboard readouts. Software platforms like an enterprise GRC can deliver much of the organizing, refining, relating, and presenting of the complex and diverse data streams flowing into their stores. But there is a key difference. For your car, auto makers and your own experience have determined what you need to see, when, and how. You are warned if your tire pressure is low; otherwise nothing clutters your display. You don’t see engine components’ compression ratios, because if there’s a problem it will show up in a related indicator. Choices and priorities have been made, and what you see are the results. For your GRC, you need to understand your business, and your risk program’s impact upon its operation, well enough to build your own “dashboard” or series of displays, metrics, and reports. You still need to know about all the core, older, basic processes and their status, but in today’s environment you need access to those indicators only when you need them, and in the context of others related to them. Drill-down details in quality reporting provide the means to do this. Knowing what to prioritize, what to display, when, how frequently, and against what standards depends upon your business, its specifics, and how well you understand the critical levers that manage its operation.
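The dashboard principle described above, surface an indicator only when it breaches its threshold while keeping the detail available for drill-down, can be sketched in a few lines. The metric names, series, and thresholds here are illustrative assumptions, not part of any vendor’s product.

```python
# Hypothetical metric feeds: control area -> recent daily counts of failed checks.
metrics = {
    "failed_logins": [12, 15, 90],
    "backup_recovery": [0, 0, 1],
    "patch_compliance": [2, 3, 2],
}

# Illustrative per-area alert thresholds, set by someone who knows the business.
thresholds = {"failed_logins": 50, "backup_recovery": 0, "patch_compliance": 5}

def dashboard(metrics: dict, thresholds: dict) -> dict:
    """Return only the areas whose latest value breaches their threshold."""
    alerts = {}
    for area, series in metrics.items():
        latest = series[-1]
        if latest > thresholds[area]:
            # Breaching areas surface as alerts; the raw series stays
            # attached so the viewer can drill down into the detail.
            alerts[area] = {"latest": latest, "threshold": thresholds[area], "detail": series}
    return alerts

for area, info in dashboard(metrics, thresholds).items():
    print(f"ALERT {area}: {info['latest']} exceeds {info['threshold']}")
```

Like the tire-pressure light, a quiet dashboard means everything is within tolerance; the work lies in choosing the thresholds, which only an understanding of your own business can supply.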
Having a quality GRC tool with embedded, flexible reporting and an extensible architecture that allows you to add services and features while relating data feeds to one another is a critical component of managing risk today. Doing so keeps all those old processes and their states relevant, current, and “new”, while incorporating the data of newer processes to build and deliver a comprehensive portrayal of your risk status across your enterprise, an essential component of any contemporary cyber risk management program. It’s the logical extension of the paper reports on the status of those core data services from 40 years ago. Yes, everything old is new again: supplemented, expanded, and enriched by today’s tools for aggregating, organizing, relating, and representing the information you need to manage today’s companies in today’s dynamic business environment.
About the Author:
Simon Goldstein is an accomplished senior executive blending both technology and business expertise to formulate, impact, and achieve corporate strategies. A retired senior manager of Accenture’s IT Security and Risk Management practice, he has achieved results through the creation of customer value, business growth, and collaboration. An experienced change agent with primary experience in the financial, technology, and retail industries, he has led efforts to achieve ISO 2700x certification and HIPAA compliance, and has held the CRISC, CISM, and CISA credentials.