Prioritizing Threats: Why Most Companies Get It Wrong

To stay safer, focus on multiple-threat attack chains rather than on individual threats.

We’ve all seen them — you might even have one open right now: an Excel spreadsheet with reds, greens, and yellows that tell you where your risk is. You probably follow the simple convention of focusing on low-hanging fruit first and then drilling down as hard and as fast as you can on the critical and high items.

Sorry to say this, but you’ve been doing it wrong. You see, attackers are opportunistic and scrappy, yet we don’t seem to work those variables into our sea of reds and yellows. I refer to this as the “single versus multivariable risk assessment problem.” We have single rows with risk assigned and work them as if they are singular risks. Attackers, on the other hand, chain risks together. They leverage a low risk on a Web server and a low risk on a database server to gain access to high-risk data. Two lows can equal a high? Yes, but your prioritization process doesn’t think that way.

What can you do to get a more accurate prioritization list? Focus on multiple-threat attack chains rather than threats alone. Grab a conference room, some coffee, and the leaders of each of your IT areas (network, infrastructure, application) and draw a simple diagram of your network from a 30,000-foot view. Start pretending attacks are successful, using the single items from your threat list. For example, assume the low-risk item in your spreadsheet describing a threat on your endpoints is exploited. Now that it has been exploited, what other threats can the attacker reach? Can they now exploit the medium-level threat on the file server because all users have birthright permissions that allow them to authenticate to it? OK, follow that threat. Now that the attacker is on the file server, what threats can they leverage from there?

As you do this a couple of times, starting from various threat entry points, you will start to see patterns emerge — threats that seem to be in every attack chain. That is where you should prioritize your work.
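If you want to capture that whiteboard exercise in something repeatable, here is a minimal Python sketch. The threat names and the "what can the attacker reach next" data are made up for illustration; the point is to model each threat as a node, treat "exploiting A positions the attacker to attempt B" as an edge, enumerate the chains from an entry point, and count which threats recur across chains.

```python
from collections import Counter

# Hypothetical threat graph for illustration only: an edge A -> B means
# "once A is exploited, the attacker is positioned to attempt B".
THREAT_GRAPH = {
    "endpoint_phishing": ["file_server_birthright_access", "ntlmv1_relay"],
    "file_server_birthright_access": ["ntlmv1_relay", "stale_service_account"],
    "ntlmv1_relay": ["domain_admin_hash_capture"],
    "stale_service_account": ["domain_admin_hash_capture"],
    "domain_admin_hash_capture": [],  # end of chain: crown-jewel access
}

def enumerate_chains(graph, entry, path=None):
    """Walk every path from an entry-point threat to a dead end."""
    path = (path or []) + [entry]
    next_threats = [t for t in graph.get(entry, []) if t not in path]  # avoid cycles
    if not next_threats:
        return [path]
    chains = []
    for nxt in next_threats:
        chains.extend(enumerate_chains(graph, nxt, path))
    return chains

chains = enumerate_chains(THREAT_GRAPH, "endpoint_phishing")
for chain in chains:
    print(" -> ".join(chain))

# The "pattern" the exercise surfaces: threats that show up in most or all chains.
counts = Counter(threat for chain in chains for threat in set(chain))
for threat, hits in counts.most_common():
    print(f"{threat}: appears in {hits} of {len(chains)} chains")
```

The threats with the highest counts are the ones that keep showing up chain after chain, which is exactly the pattern the conference-room exercise is meant to surface.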

Let’s look at a real-world example from a client, using the endpoint-threat starting point from the scenario above. What came out of the exercise was that the biggest threat repeated across all attack chains was the use of NTLMv1, an old Microsoft Windows authentication protocol prone to many vulnerabilities that attackers use to perform man-in-the-middle attacks and to brute-force passwords. Yet this threat was a low-risk, low-impact item in the client’s fancy Excel spreadsheet.

If you really want even more accurate prioritization, at each step of the above process add how hard the risk is to detect on a scale of 1 to 10 and its impact on the overall success of the attack on the same 1-to-10 scale. For example, if the medium-risk threat on the file server includes access to the corporate intellectual property and you have no ability to detect who accesses which files, detection is hard (10) and the impact is high (9, maybe a 10). The larger the numbers, the more likely that attack chain is actually the high-risk attack chain. This quantitative layer helps low-risk, high-impact threats bubble up a bit quicker.
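To make that scoring concrete, here is a minimal Python sketch; the chains, threat names, and ratings are invented for illustration, and you would substitute the output of your own exercise. Each step contributes its detection-difficulty and impact ratings, and the chains with the largest totals float to the top of the priority list.

```python
# Hypothetical chains and scores for illustration only; replace with your own
# threats and ratings from the whiteboard exercise.
chains = [
    ["endpoint_phishing", "file_server_birthright_access",
     "ntlmv1_relay", "domain_admin_hash_capture"],
    ["endpoint_phishing", "ntlmv1_relay", "domain_admin_hash_capture"],
]

# For each threat: how hard it is to detect (1 = easy, 10 = effectively
# invisible) and its impact on the attack's overall success (1 = minor,
# 10 = critical).
SCORES = {
    "endpoint_phishing":             {"detect": 4,  "impact": 3},
    "file_server_birthright_access": {"detect": 10, "impact": 9},
    "ntlmv1_relay":                  {"detect": 8,  "impact": 9},
    "domain_admin_hash_capture":     {"detect": 6,  "impact": 10},
}

def chain_score(chain):
    """Bigger totals mean a chain that is both hard to see and damaging."""
    return sum(SCORES[t]["detect"] + SCORES[t]["impact"] for t in chain)

for chain in sorted(chains, key=chain_score, reverse=True):
    print(f"score {chain_score(chain):3d}: {' -> '.join(chain)}")
```

A simple sum is enough here; the goal isn’t actuarial precision, just a consistent way to let a chain of individually “low” items outrank a lone “high” one.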

This process isn’t hard. It isn’t overly complicated. It doesn’t need an actuary to provide a bunch of algorithms to calculate. But it works. It has an official name, failure mode and effects analysis (FMEA), and it has some offshoot versions you may have heard of, such as Alex Hutton’s RiskFish and the bowtie method. All of these approaches want you to focus on the process attackers actually use and to calculate (or at least qualitatively evaluate) the intersection of multiple risks while taking into account your ability to prevent or detect them. So stop using those multicolored Excel spreadsheets and start documenting multivariable risks in order to better prioritize.

Original posting of this article on DARKReading.

DevOps – Can Deliver Great Value, Can Be Tough To Implement

Being a writer for InformationWeek gives me the opportunity to research topics I have experienced firsthand and dive deeper into them. Recently, I got a great chance to do just that with DevOps, asking CIOs, developers, and IT leaders what their thoughts were on the topic.

I have led multiple development teams and currently lead a fast-growing development and operations team working on a “big data” platform, so DevOps has become part of my everyday life. But when I talked with other CIOs and CTOs at large enterprises, they were a bit confused about what DevOps is, how it can benefit them, and what it will really take to extract the benefits DevOps claims.

What I found was surprising, even to me. Of the companies we surveyed, only 21% have implemented DevOps, with another 21% planning to within a year. That is much less than I expected. Even worse, getting good results is tougher than I thought: only 31% see or expect significantly improved infrastructure stability from DevOps, versus 51% citing only some improvement.

I dove in to find out why; you can read the full details in my DevOps 2014 Report.
