Over the past several months, I've been asked an intriguing cybersecurity question. The question was posed to me by the Chief Information Security Officer (CISO) of a rather prestigious healthcare system. Before I share that question, allow me to give you a little context.
The hospital had recently completed an extensive NIST 800-53 and HIPAA cybersecurity and privacy assessment conducted by a third party. Overall, the hospital did a decent job during the assessment, and although some gaps were found, none were critical. The organization had matured considerably since the prior year's assessment, and it did much better, on a comparative basis, than most of its industry peers of similar size and complexity.
Yet, when we conducted a penetration test, the results were devastating. Sensato was able to access critical systems, deploy malware, and exfiltrate data, all without their security operations center knowing and without raising any alerts.
These results brought us to an intriguing question posed by their CISO: "Why is your team so successful, even though our assessment showed we were doing well from a policy, process, and practice perspective?"
That heartfelt question led us to think about why and how risk assessments often create a false sense of security. We found that there are several reasons this occurs despite the best of efforts. In this edition of the Tactical Cybersecurity podcast, Sensato will dive into those reasons and into what you can do to realize a better return on a cybersecurity assessment program.
What are some of the reasons that many organizations struggle to realize value from a cybersecurity risk assessment and are often left with a false sense of reality? Well, let us start at the beginning.
First, in most cases, security and privacy assessments or audits are conducted by individuals who do not practice cybersecurity. They may be well versed and certified in evaluating your adherence to a framework or standard. Still, they typically do not have the real-world experience to validate that standard. All too often, a checklist approach is used, often imposed by the sheer nature of the framework itself. This means the person conducting the assessment will be well-versed in applying the framework, but unlikely to deviate from it. Unfortunately, they are not thinking like an attacker, but like an auditor.
One question to ask an auditor you are evaluating is, "How is a reverse shell established?" or "What in the assessment will evaluate our ability to withstand a traversal attack?" The answers will tell you how strong that auditor will be at evaluating you and at spotting critical gaps outside the framework being used.
Remember that many of the standards and frameworks relied upon are outdated; this is a common theme across cybersecurity. Far too many cybersecurity and privacy professionals employ practices that were best practices three, five, or ten years ago. The issue here should be apparent: if a framework or standard is outdated, and the person conducting the assessment is not a real-world practitioner who can advise you rather than merely evaluate your adherence, you will have what I call a dimensional gap.
A dimensional gap is when the framework, standard, or regulation you embrace is based on outdated principles. The difference between the effectiveness of those principles and the current threat landscape is the dimensional gap. You may meet all the requirements of the framework very well, but that does not mean you have deployed a cybersecurity strategy that respects your enemy's audacity. If the very cybersecurity and privacy frameworks we rely on are outdated, then isn't our entire approach also obsolete?
That brings me to the second reason why assessments may fail. In many cases, the assessments are not informed by the tactics of your enemy. To question whether a practice is relevant, you must understand your systems and capabilities and remember that the attacker has a vote. Frankly, their vote trumps all, because they do not care how well you adhere to a framework. This is what happened when Sensato conducted the penetration test: the target was confident that the framework and best practices were enough to deter our creativity and audacity. They were not, and dimensional gaps were exploited.
The third issue I see with assessments is that they are often overly cumbersome. We live in an agile world where we need to evolve quickly to address the current threat landscape. When an assessment requires a review of 300, 500, 700, or more line items, with potentially hundreds if not thousands of gaps, it quickly becomes unmanageable, not to mention depressing.
This approach leads to misalignment between the on-the-ground I.T. security teams, senior management, and potentially the board of directors. Ultimately, this lack of clarity can lead to misplaced priorities, weak support for critical policies and procedures, and inadequate funding to protect the organization. Not to mention that addressing hundreds of gaps is a significant challenge for any team, especially one that is understaffed or inexperienced.
We often find that many organizations make very little progress, if any, year over year in addressing identified gaps. We know of at least one organization that grew so frustrated that its cybersecurity assessment found the same deficiencies each year that its solution was to fire the consulting firm and bring in another. When the second firm found the same gaps, they became even more frustrated. As funny as this may sound (and yes, it is a true story), it is not unique. Although it is an extreme response, the reality is that very few organizations make real progress on closing gaps, and all too often they uncover the same issues year after year. Which raises the question: why?
One way that organizations address the issue of being overwhelmed is what I call Compliance Island Syndrome. Compliance Island Syndrome occurs when you solely focus on assuring compliance with a regulation or framework. A classic example of this is often found in healthcare. Compliance or security personnel will focus on guaranteeing that only those systems which contain personal health information, more commonly referred to as PHI, are HIPAA compliant. Other systems on the network are not kept to the same standards as required by HIPAA since they do not interact with or manage PHI. As you can see, this creates an island of systems compliant in a sea of systems that are possibly insecure or, at the very least, not maintained to the same standard.
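The scope gap at the heart of Compliance Island Syndrome can even be made concrete in an asset inventory. As a loose illustration (the inventory fields and helper function below are hypothetical, not part of HIPAA or any standard), a short script can flag systems that sit outside the compliance scope yet share a network with in-scope systems:

```python
# Hypothetical asset inventory: each system notes whether it handles PHI
# (and is therefore held to the HIPAA standard) and which network it is on.
inventory = [
    {"name": "ehr-db",      "handles_phi": True,  "network": "corp"},
    {"name": "billing-app", "handles_phi": True,  "network": "corp"},
    {"name": "hvac-ctrl",   "handles_phi": False, "network": "corp"},
    {"name": "www-site",    "handles_phi": False, "network": "hosted"},
]

def compliance_islands(systems):
    """Return out-of-scope systems that share a network with in-scope ones.

    These are the 'sea' around the compliance island: systems an attacker
    can use as a stepping stone toward the PHI systems.
    """
    in_scope_networks = {s["network"] for s in systems if s["handles_phi"]}
    return [
        s["name"]
        for s in systems
        if not s["handles_phi"] and s["network"] in in_scope_networks
    ]

# hvac-ctrl shares the 'corp' network with the PHI systems, yet is held to
# a lower standard, which is exactly the gap an attacker looks for.
print(compliance_islands(inventory))
```

Note what this naive network-based check misses: the hosted corporate website is not flagged at all, even though, as the scenario below shows, it can still be used as a pivot point. That blind spot is the point of the example.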
An example of how this plays out in the real world is often found when we think of corporate websites. In doing an assessment, we will hear that the corporate site is managed by marketing and not part of I.T. or that it is hosted elsewhere and not a concern. This thinking is Compliance Island Syndrome.
Regardless of whether a system contains data or is addressed explicitly by regulation, such as HIPAA, you need to ensure that it is secure to the same standard as those under the regulatory statute. It should come as no surprise that attackers know that many organizations create compliance islands, which is why, using our corporate website example, this is such a critical risk. Let us walk through how this approach to security provides a gap that an attacker can easily exploit. Furthermore, let's see if we can illustrate each of the points about why this provides a false sense of security.
In this scenario, assume we have a corporate website located at a third-party hosting site and not on the corporate network. Further, the site is managed by a third-party consultant, and they report to your marketing organization. Marketing does not come under the I.T. group, and your I.T. team has stated that since this is not on the corporate network, it is not in scope for the assessment. Before we address all that is wrong with that last item, let us walk through how an attacker (played by one of my team members, whom we'll call Rei) would exploit it.
Rei would know there is a very high probability that someone in your organization's marketing department will log in to that corporate website, either a production or staging site. Rei would find a way to exploit your corporate website and deploy malware, probably some form of trojan or bot, that would wait quietly until one of your marketing team members accesses the site to perform a task. Now stop for a moment and think about how that team member would access the marketing site. They probably used a corporate laptop connected to the corporate network. Let us say it together: pivot attack. Rei is now on your system.
I am sure you can figure out what went wrong, but just to be complete, we should walk through the errors of our ways. First, the auditor did not point out that this attack was possible and that leaving the corporate website out of scope was dangerous. But how would the auditor even know about this type of attack if they have no day-to-day experience with attacking systems? Secondly, the framework employed by the auditor (or by us) more than likely does not deem the corporate website a critical system or within the purview of any compliance requirement. It may also not consider things like pivot attacks, because it is outdated. Lastly, we embraced "compliance island syndrome" and decided that systems not required to meet compliance were not worth auditing.
It should be apparent that, ultimately, assessments create a false sense of security when we do not employ them in a manner that reduces our overall risk relative to the current threat landscape. But does that mean they have no value? No, not at all; we just need to find a way to ensure that they are relevant and drive organization-wide alignment. We need to ensure that they can be easily repeated, so progress is measured in a more agile manner than yearly. We also need to ensure that those conducting the assessments have current, relevant experience and can engage in discussions and debates beyond just checking the box on an evaluation.
As with most things in life, the simpler we can make it, the higher the chances of success. One framework which we have seen yield tremendous value is the Department of Energy Cybersecurity Capability Maturity Model or C2M2. C2M2 was designed by the Department of Energy to help rapidly determine the maturity of organizations. The outcome is nothing short of brilliant.
You can work through the full C2M2 process in a matter of days. Unlike other frameworks, it does not create massive disruption to your organization, and it is much more economical than a full 800-53, CSF, or HIPAA assessment. Another considerable benefit of C2M2 is that it creates boardroom-to-basement alignment. The output is something that can be understood in just minutes by even the most non-technical individual. Yet they will have a clear understanding of exactly where your organization stands from a cybersecurity maturity perspective.
C2M2 works across ten domains, each addressing a critical aspect of cybersecurity. For each domain, a score is provided in terms of your maturity. Once completed, you have a valuable tool that allows you to prioritize your cybersecurity strategy, develop a specific roadmap for improving your maturity, and establish a practical method for organization-wide discussions about cybersecurity. Given the agility of C2M2, you can repeat the assessment every quarter or every six months, measure your progress, and adjust your priorities and practices to the current threat landscape. Further, with the right person leading the C2M2 assessment, you can use it to evaluate, discover, debate, and discuss what you are doing and how to get better. A cybersecurity assessment should be an opportunity to grow, not a quasi-police investigation that gathers evidence for the prosecution. It should be an enlightening experience in which you leave more educated than when you started, both about your current state and about real, current best practices.
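To make the repeatable, prioritization-friendly nature of C2M2 concrete, here is a loose sketch. The domain names follow the public C2M2 model, but the scores, the target level, and the helper function are hypothetical, offered only to show how per-domain maturity output can drive a roadmap:

```python
# Illustrative C2M2-style summary: each of the ten domains is assigned a
# Maturity Indicator Level (MIL, 0-3). These scores are invented examples,
# not a real organization's results.
scores = {
    "Asset, Change, and Configuration Management": 2,
    "Threat and Vulnerability Management": 1,
    "Risk Management": 1,
    "Identity and Access Management": 2,
    "Situational Awareness": 0,
    "Event and Incident Response": 1,
    "Third-Party Risk Management": 0,
    "Workforce Management": 2,
    "Cybersecurity Architecture": 1,
    "Cybersecurity Program Management": 2,
}

def prioritize(domain_scores, target_mil=2):
    """Return domains below the target MIL, weakest first.

    Re-running this each quarter with fresh scores shows whether the
    weakest domains are actually improving over time.
    """
    gaps = [(mil, name) for name, mil in domain_scores.items() if mil < target_mil]
    return [name for mil, name in sorted(gaps)]

for domain in prioritize(scores):
    print(domain)
```

The value here is less the code than the shape of the output: a short, ordered list of weakest domains that a board member can grasp in minutes, and that can be regenerated every quarter to measure progress.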
Aside from C2M2, one thing I recommend is making sure you understand your primary objective for doing an assessment at all. Ask yourself: "Is the primary objective of your cybersecurity risk assessment to satisfy a regulatory or compliance requirement, or is your primary objective to safeguard your systems and personnel?" These are not the same, and the right answer is not both. It is, in my eyes, a binary answer.
If your answer is to satisfy a regulatory or compliance requirement, then the focus, depth, and outcome of your assessment will be vastly different. I would argue that it will lead to a false sense of security, because you are ultimately just focused on attaining a check mark. That attitude and approach will permeate your organization, from boardroom to basement.
On the other hand, if your primary objective is to use the assessment to better safeguard your systems and personnel, then your criteria for the evaluation will be different. Chances are you will not rationalize, and you will be focused on ensuring executive understanding and alignment. You can put the findings to work as part of a comprehensive strategy. The assessment will force you to take an honest look at your standards, and you will push the evaluation to challenge you in return. As I mentioned earlier, assessments should be an engaging experience, not an investigation.
As you can see, asking this question, and providing an honest answer, can change your entire approach to the assessment and, more importantly, what you demand. One of the reasons we are such champions of C2M2 is that it can be tailored to your objectives. If you are looking to satisfy a compliance requirement, chances are C2M2 can work. Still, it is also a fantastic tool to use for gaining an honest assessment of your maturity, which can be used to engage your organization in a holistic process to safeguard your systems and personnel better.
At Sensato, when we conduct the C2M2 Workshop, we use it not only as a means to help clients meet regulatory requirements, but as much more. C2M2 can help develop a clear cybersecurity roadmap and strategy, validate evidence of practice, create boardroom-to-basement alignment, identify resource and policy gaps, and provide an engagement that reveals the organization's ability to deal with current, real-world threats. Regardless of the approach you take, I think it is vital that you think through these questions before your next assessment.
This article came about because one rather cool CISO dared to ask, "Why?" I hope the content here has helped provide some ideas to answer their question in a small way. But I also hope that their example of continually asking "why" is something we all embrace. Maybe that is the greatest lesson of all. If we are going to trust the outcome of a cybersecurity assessment, we should start by asking why.