Extending the 20 CSCs to Gap Assessments & Security Models

Using SEI CMMI as a base, John Willis is proposing a new security maturity model.

At the ShmooCon Firetalks this year John “@pinfosec” Willis gave an interesting talk where he discussed the 20 Critical Security Controls (CSC) and how they could be adapted into a security maturity model using the Software Engineering Institute Capability Maturity Model Integrated (SEI CMMI) Maturity Levels (ML). This post is the accompanying article he wrote for that talk. If you’d like to see his full 15-minute Firetalk, scroll down to the embedded video below.

#####

Security Operations Maturity Assessment Model (SOMAM)

Contributed by John M. Willis, pINFOSEC.com

The 20 Critical Security Controls were created because the control catalogs in existence at the time did not align with defenses against the attacks organizations were actually experiencing. Controls such as the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 were increasingly being viewed as ineffective. The 20 Critical Security Controls were developed over the course of a few years (roughly 2008-2011) through efforts by a consortium of government agencies, including the National Security Agency, FBI (IC-JTF), and DC3, plus many others, such as SANS, CIS, Mandiant, InGuardians, Lockheed, and McAfee.

Initially, the controls were referred to as the Consensus Audit Guidelines and were first published by the Center for Strategic & International Studies (CSIS). They are now housed on the SANS web site and referred to as the 20 Critical Security Controls.

Each control includes references to relevant SP 800-53 controls. In addition, a variety of suggestions are given as to how to implement the control, under the categories “Quick Wins”, “Visibility/Attribution”, “Configuration/Hygiene”, and “Advanced”. Some associate these categories with maturity levels; however, the controls do not specify such a framework, per se. Each control also contains information on metrics, as well as Entity-Relationship Diagrams to help ensure complete coverage of the environment the control applies to.

A number of organizations have struggled to apply the controls as controls in the strictest, literal sense, even though the 20 Critical Security Controls themselves are not tied to a compliance scheme against which an organization will be audited. Contrast this with front-line security operations folks, who need a quick way to identify and prioritize specific improvements that strengthen defenses and keep them prepared for any type of audit. This is where I found myself not long ago at an unnamed federal agency.

Where the Rubber Meets the Road

My opportunity to apply the controls went as follows. First, I went through each control and drilled down into the implementation recommendations, the SP 800-53 details, and the specifics of the environment I was trying to assess. From this I created a tailored list of practices under each of the controls. Then, I proposed that each of these specific practices could be assigned a process capability maturity level. For this, I used the typical Software Engineering Institute Capability Maturity Model Integrated (SEI CMMI) Maturity Levels (ML), except that I fondly included ML 0, representing that no process exists.

Let’s take the first control as an example:

Critical Control 1: Inventory of Authorized and Unauthorized Devices.

Okay, that is not detailed enough. One suggestion is to reword the controls in a better form, like:

Establish and Maintain an Inventory of Authorized and Unauthorized Devices.

That’s a good step. However, when you dig deeper into the details of the control, the next level of statement is:

The processes and tools used to track/control/prevent/correct network access by devices (computers, network components, printers, anything with an IP address) based on an asset inventory of which devices are allowed to connect to the network.

Well, limiting network access to authorized devices is something not captured in the above suggested control wording. So, let’s break this longer statement down into two Base Practice statements for this first control:

BP.01.01 – Manage inventory of authorized devices (computers, network components, printers, anything with IP addresses)

BP.01.02 – Limit network access to authorized devices

Okay, now that is something we can work with. From here, I studied the rest of the information under this control, including the SP 800-53 details. I did this for two reasons. First, to validate the completeness of the Base Practice statements. And second, to generate a list of subpractices to assess in the environment of interest. Okay, so I’m stretching the truth a little. I went straight for the tailored subpractices and am now having to create the Base Practice Statements after the fact. Here are the subpractices I ended up with for this first control:

Asset Management – Servers are listed by type/function and location

Device Authentication – Devices are known to be authorized before network access is granted. Otherwise, the network is scanned every 12 hours for unauthorized devices

Network Admission – All ports are either physically secured or access is limited to authorized devices

Wireless Devices – Network tools are used to detect unauthorized wireless devices

By now you are probably asking about desktop machines. I left those out of my first quick-and-dirty tailoring because, in this environment, desktops were already very well under control; I knew servers were the issue. While we are on that topic, Critical Control 2 clearly calls for application whitelisting. Let’s face it, folks: in certain environments certain practices are incompatible with the culture, or management will never buy in. Whitelisting on the desktop is a tough sell in some environments; whitelisting for servers, however, is something everyone likes. When tailoring the subpractice list you will end up dropping certain items. Just make sure you have all of your bases covered.
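To make that decomposition concrete, here is a minimal sketch in Python of one way to represent a control, its Base Practice statements, and the tailored subpractices. The class and field names are my own illustration, not part of the controls or the model:

    from dataclasses import dataclass, field

    @dataclass
    class Subpractice:
        name: str       # e.g. "Asset Management"
        statement: str  # tailored statement assessed in the target environment

    @dataclass
    class BasePractice:
        bp_id: str      # e.g. "BP.01.01"
        statement: str
        subpractices: list = field(default_factory=list)

    @dataclass
    class CriticalControl:
        number: int
        title: str
        base_practices: list = field(default_factory=list)

    # Critical Control 1, decomposed as described above
    cc1 = CriticalControl(
        number=1,
        title="Inventory of Authorized and Unauthorized Devices",
        base_practices=[
            BasePractice("BP.01.01", "Manage inventory of authorized devices", [
                Subpractice("Asset Management",
                            "Servers are listed by type/function and location"),
            ]),
            BasePractice("BP.01.02", "Limit network access to authorized devices", [
                Subpractice("Device Authentication",
                            "Devices are authorized before network access is granted; "
                            "otherwise the network is scanned every 12 hours"),
                Subpractice("Network Admission",
                            "All ports are physically secured or limited to authorized devices"),
                Subpractice("Wireless Devices",
                            "Network tools are used to detect unauthorized wireless devices"),
            ]),
        ],
    )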

Now we are ready to talk about Maturity Levels (ML). Here is how I defined them:

0 – No – No Process Exists

1 – Exists – Process Exists

2 – Defined – Defined Process of some sort Exists

3 – Practiced – Vetted Process is now a routine Practice

4 – Reviewed – The Process is formally Reviewed on a Specified Periodic Basis

5 – Continuous – The Process is reviewed periodically and is subjected to Continuous Improvement

As should be obvious from the above, we are interested in Process Capability Maturity. Translated, that means we don’t care at this point whether the process is manual or automated. We want to know whether a process exists at all, whether it is defined (documented), whether it is used, and whether it is reviewed periodically.
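If you want to record these levels in tooling, an integer enumeration maps directly onto the 0–5 scale. This is just an illustrative sketch, not part of the model itself:

    from enum import IntEnum

    class ML(IntEnum):
        """Process capability maturity levels as defined above."""
        NO = 0          # no process exists
        EXISTS = 1      # a process exists
        DEFINED = 2      # a defined (documented) process of some sort exists
        PRACTICED = 3    # a vetted process is now a routine practice
        REVIEWED = 4     # formally reviewed on a specified periodic basis
        CONTINUOUS = 5   # reviewed periodically and subject to continuous improvement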

So, let’s look at an example of how the first set of subpractices might be assessed:

ML2 – Asset Management – Servers are listed by type/function and location

ML1 – Device Authentication – Devices are known to be authorized before network access is granted. Otherwise, the network is scanned every 12 hours for unauthorized devices

ML0 – Network Admission – All ports are either physically secured or access is limited to authorized devices

ML1 – Wireless Devices – Network tools are used to detect unauthorized wireless devices

My objective was to get every subpractice up to ML3 (vetted routine practice). But, what the above information tells me is that I had better get busy trying to figure out how to control network access for this environment.

Generally speaking, you can prioritize improvement, or remediation, efforts starting with the lowest scores first. At the time I did this, I gave the highest priority to the controls ranked by NSA as Very High Importance. Along with the latest version of the controls, there is a new approach focusing on five top areas; at the top of that list is application whitelisting. Every organization is going to have to decide for itself how it wants to prioritize efforts.
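As one illustration of that kind of prioritization, the sketch below orders the assessed subpractices from Critical Control 1 by maturity level first and then by an importance ranking. The importance values here are hypothetical placeholders, not NSA’s actual rankings; substitute whatever ranking your organization agrees on:

    # Assessed subpractices from the example above: (maturity level, name)
    assessed = [
        (2, "Asset Management"),
        (1, "Device Authentication"),
        (0, "Network Admission"),
        (1, "Wireless Devices"),
    ]

    # Hypothetical importance weights (lower number = higher importance);
    # substitute your own ranking, e.g. one derived from the NSA rankings.
    importance = {
        "Network Admission": 1,
        "Device Authentication": 2,
        "Wireless Devices": 3,
        "Asset Management": 4,
    }

    # Work the lowest maturity levels first, breaking ties by importance.
    for ml, name in sorted(assessed, key=lambda item: (item[0], importance[item[1]])):
        print(f"ML{ml}  {name}")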

I created a table of all tailored subpractices and suggested the Maturity Level for each. Then, I met with the CISO and the Security Operations Working Group (which I co-chaired). We reviewed, changed and reached agreement on the Maturity Level for each item. The result was a powerful series of meetings during which we were able to be very focused on specific improvements. It was awesome.

Follow-On Fun

Since the above exercise, I am no longer under contract with that agency, and I have decided that I should share the experience and develop the concept further. First, the Base Practice statements need to be carefully crafted, ensuring that everything in the scope of the control is covered and that each statement is individually actionable and assessable. I have started doing this, but I think this is bigger than me, and I am seeking volunteers to help out. So far, that effort is starting to get traction.

There is one other area that bears mentioning. Others applying the controls may associate the categories “Quick Wins”, “Visibility/Attribution”, “Configuration/Hygiene”, and “Advanced” with Maturity Levels. First, the approach I laid out above is focused on Process Capability Maturity Levels, not technical maturity levels. Second, even if you tried to treat these categories as Technical Security Maturity Levels, there is no documented framework for doing so. Still, there is something worth looking at there. These categories could be further developed to establish some measure of Robustness Level. This could focus on security architecture and engineering rigor, to include the following (for example):

  • Visibility/Attribution
  • Configuration/Hygiene
  • Automation
  • Breadth & Depth of coverage
  • Integrity
  • Resilience
  • Ability to provide/consume situational awareness data
  • Common Criteria Evaluation Assurance Level-like criteria
  • and/or whatever makes sense

From a practical perspective, I think it is most important to get the Base Practices nailed down for the controls. Second, a set of standardized subpractices can be crafted and merged in. Security operations folks applying the model can pick and choose from the subpractices, make up their own, reword them, etc. as necessary to ensure the Base Practice for the control is adequately addressed.

Scoring

Scoring your security posture with this model is useful primarily as a reality check and, more importantly, as a baseline for future comparisons to see how successful you are at implementing improvements. More often than not, you will shift your priorities based on factors that enable or hinder your progress.

This model is not necessarily intended to benchmark one organization against another.

How to score?

Let’s take the example above for Critical Control 1. We are going to apply the weakest-link, low-water-mark approach:

ML0 – Critical Control 1: Inventory of Authorized and Unauthorized Devices

  ML2 – BP.01.01 – Manage inventory of authorized devices (computers, network components, printers, anything with IP addresses)

    ML2 – Asset Management – Servers are listed by type/function and location

  ML0 – BP.01.02 – Limit network access to authorized devices

    ML1 – Device Authentication – Devices are known to be authorized before network access is granted. Otherwise, the network is scanned every 12 hours for unauthorized devices

    ML0 – Network Admission – All ports are either physically secured or access is limited to authorized devices

    ML1 – Wireless Devices – Network tools are used to detect unauthorized wireless devices

Network Admission had a Maturity Level of 0. Accordingly, BP.01.02 and Critical Control 1 receive a grade of ML0. Painful, eh? That’s the way it is. Ouch.
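Here is a minimal sketch of that roll-up, assuming the assessment results are keyed by Base Practice; the min() at each level implements the weakest-link, low-water-mark rule:

    # Subpractice maturity levels for Critical Control 1, keyed by Base Practice
    assessment = {
        "BP.01.01": {"Asset Management": 2},
        "BP.01.02": {"Device Authentication": 1,
                     "Network Admission": 0,
                     "Wireless Devices": 1},
    }

    def bp_score(subpractice_levels):
        """A Base Practice scores at its weakest (lowest) subpractice."""
        return min(subpractice_levels.values())

    def control_score(assessment):
        """A control scores at its weakest Base Practice."""
        return min(bp_score(levels) for levels in assessment.values())

    for bp, levels in assessment.items():
        print(f"ML{bp_score(levels)} - {bp}")
    print(f"ML{control_score(assessment)} - Critical Control 1")
    # Network Admission is ML0, so BP.01.02 and Critical Control 1 both score ML0.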

Next Steps

I’m going to set the question of what to do about Robustness Level to the side and focus on the Base Practice statements and subpractices. I have picked up a few volunteers to help out in this effort, and am seeking more. I am also working closely with the Consortium for Cybersecurity Action, the organization currently responsible for the 20 Critical Security Controls. If you are interested in helping out, or being informed when the model is completed, contact me at John [.] Willis [at] pINFOSEC [dot] com. I can also be reached via LinkedIn.com/in/johnmwillis. If you are already working with the Consortium, please contact them directly.

ShmooCon Firetalks Video

Credits

Copyrights, Registration and Service Marks, etc., if any, are property of their respective owners.

The marks CMMI®, Capability Maturity Model®, and Carnegie Mellon® are Registered Marks of Carnegie Mellon University.

The current version of the 20 Critical Security Controls is licensed under a Creative Commons License.


Other Security Maturity Models

Software Development: The Building Security In Maturity Model

Systems Security Engineering Capability Maturity Model (SSE-CMM), ISO/IEC 21827: 2008 (E)

Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2)

Cross-posted from pINFOSEC.com.

#####

