Most Security Assessments Display a Stoplight In the Executive Summary…
Stoplights Don’t Close Business – CISOs Don’t Make Buying Decisions Based On Red Lights
The Person Who Can Show The Real Cost and Risk Of Downtime/Disaster Wins The Business…
Is Your Company Performing Risk Assessments? You Should Be…
That is, if you want to grow your security business.
With data now ranked as your client’s number one asset, and computers integral to every major business function, downtime and data loss are your client’s biggest concerns, even if they don’t yet know it.
Sure, competition, cash flow, and the economy are all factors, but one big data disaster can put your client out of business in a heartbeat. However, getting them to act on necessary risk mitigation steps is not always easy, especially for those firms that sit in the SMB market. Budgets are tight and IT is often viewed as an unwelcome expense…no one wants to spend money just because someone says, “You need more security”.
(Get my free assessment template here)
Can Consultants Actually Provide a Measure Of Risk?
I hear it all the time, “You can’t provide a measure of risk when it comes to data security…”
But more often, it’s the amount of work required, or simply not knowing how, that leads to shortcuts on measuring risk. And so the final report simply shows a stoplight – Red, Yellow, Green…a meaningless measure of nothing.
If you’re questioning the validity of a risk number (vs. a stoplight), a read through Douglas Hubbard’s book, How to Measure Anything in Cybersecurity Risk, might be worthwhile. I will warn you, it’s a bit technical…
Regardless, his point is clear: you can measure cybersecurity risk…
Stoplights Don’t Measure Risk
The problem with stoplights is that they don’t actually measure risk. Can you imagine an insurance company figuring premiums, or an investor calculating risk, based on yellow and red lights? It’ll never happen…
Simply put, the red light is not a measure of risk. It doesn’t actually measure anything. So it’s no wonder your assessments don’t lead to remediation efforts or convince a client to move on security upgrades.
Things to Think About Before You Measure Risk, Or Decide You Just Can’t
The Impact vs. Likelihood graph (pictured above) is a measure of risk. This simple graph plots the value of data on the X-axis and, on the Y-axis, a measure of likelihood – the odds something will go wrong.
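If you want to build the graph yourself, here’s a minimal sketch in Python using matplotlib – the assets, values, and likelihoods are made up for illustration, so substitute your own interview data.

```python
# A minimal impact vs. likelihood plot. All assets and numbers below
# are hypothetical placeholders -- swap in your own assessment data.
import matplotlib.pyplot as plt

# (asset, impact = value of data in $K, likelihood of a problem in 12 months)
assets = [
    ("ERP",        500, 0.55),
    ("Email",      150, 0.90),
    ("File share", 200, 0.70),
]

fig, ax = plt.subplots()
for name, impact, likelihood in assets:
    ax.scatter(impact, likelihood * 100)
    ax.annotate(name, (impact, likelihood * 100))

ax.set_xlabel("Impact (value of data, $K)")
ax.set_ylabel("Likelihood of a problem (%)")
ax.set_ylim(0, 100)
plt.show()
```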
Of course, before you can create such a graph, some data gathering will be required. On page 194 of The House & The Cloud, I prescribe a sequence of meetings with asset owners, knowledge workers, consultants, and finally, IT…four separate meetings that take you from the value of data to the custodial aspects of data usage and protection.
While most assessments begin and end with scans and technical walkthroughs, my approach starts with an understanding of data value…
Next, a look at workflow and data creation and usage.
The third meeting is where a measure of risk begins…guessing at how vulnerable an ERP system is to an attack is not possible without some pre-work. First, the consultant must define what the possible risks are. This is where most of the naysayers are stuck. Without a clear list of relevant threats, they’re right – risk can’t be measured. You can’t just say, “It’s risky” or “It’s not”. There has to be a WHAT…
Is there a risk of downtime, ransomware attacks, data theft…? You might be thinking this list is endless. It’s not.
Consider only the relevant threats…based on the type of data, trends in the news, and how systems and processes are set up. Want the details? Read Hubbard’s book. However, a thorough study won’t be necessary given the level of detail needed in most of these assessments.
If you know the client has problems, the report only needs enough detail to convince them to move forward. We’re not building a spaceship here…
Facts and Soundbites You’ll Want On Hand
Also important to the process is a list of trends you know are relevant and up to date in the market you serve. For instance…
- 75% of IT managers reported in 2017 that they could not recover fully with their backups – Barkly Protects.
- 47% of firms surveyed by Malwarebytes reported ransomware attacks.
- 79% reported malware attacks (including those resulting in ransomware encryption).
- Hardware failures occur on every system at some point, unless you replace it before the outage occurs – just believe me on this one.
- Annual downtime averages 14 hrs. per business. Costs are high but vary depending on company size; the average cost of downtime runs about $100K/hr, but obviously these numbers don’t speak to the SMB market. Your asset owner contact should have the data you need on this one.
- Add more issues if they’re relevant. Each will be used to create a measurement.
Understanding the Graph (Above)
Your X-axis represents digital assets – think applications and data. The Y-axis measures risk. 100% means it’s in motion now. So if you find malware (or symptoms of malware) on your client’s network, mark it down at 100%. It happened…it’s urgent.
0% means it won’t happen. Using the issues above, your % will almost never be 0. There’s always some risk…
Based on your interviews, you should have some feel for what would be acceptable risk. For instance, you should know how much downtime any given application can afford, and how much data can be lost before management goes postal!
Don’t Get Wrapped Around The Axle On Normal Distribution Graphs
The computation is where everyone gets stuck. The salespeople will want a number; the technical experts will claim it’s not possible. Hubbard says, without qualifiers, it’s possible…
Will your % be 100% accurate? No! It’s like any statistic. What’s the likelihood I’ll have an accident driving today? There’s a statistic out there – say 20% – and it’s higher than zero. But I don’t plan on having an accident today. If I don’t, does that mean the 20% was wrong? No…
Your goal is to provide your best guess…based on your expert opinion.
Helpful Assumptions – Every Statistic Has Them
Getting a number is easier when you can make some assumptions.
- There’s a list of relevant threats. That list is an assumption. You may miss one…but your expert opinion (with input from the client) is all you have, so go with it.
- Security is only as strong as its weakest link. You learned that in the CISSP course…that means you don’t need to compute all kinds of weighted averages or plot normal distribution curves (although you’ll use some of this – keep reading). The most likely threat is your threat level for any given application (see the sketch after this list)…
- Digital assets will be on a server, an end node, or in the cloud. All on-prem servers will have similar threats, with some minor variations based on network segmentation, OS, and access control.
- But each asset will have its own greatest concern (Confidentiality, Integrity, or Availability). Your asset owner interviews will help quantify each.
- Your client’s guess at cost of downtime is all you need – just go with it.
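To make the weakest-link assumption concrete, here’s a quick sketch: the likelihood you report for an application is simply the likelihood of its single most likely threat. The threat names and percentages are hypothetical, not recommendations.

```python
# Weakest-link assumption: an application's risk level is the likelihood
# of its single most likely relevant threat. Numbers are hypothetical.
threat_likelihood = {
    "ransomware":       0.47,  # e.g., the Malwarebytes survey stat above
    "hardware failure": 0.30,
    "data theft":       0.20,
}

def application_risk(relevant_threats):
    """Return the highest likelihood among an app's relevant threats."""
    return max(threat_likelihood[t] for t in relevant_threats)

# ERP exposed to all three threats: the ransomware number dominates.
print(application_risk(["ransomware", "hardware failure", "data theft"]))  # 0.47
```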
The Calculus
Lucky for you, there’s no calculus here…
Step One: First, you need to know what their key assets are…not hundreds of applications – just focus on a few. If you’re doing a comprehensive corporate assessment, charging big money, I would recommend reading Hubbard’s cyber-risk book, How to Measure Anything in Cybersecurity Risk, first. But for the average small/medium business risk assessment, you’ll have 5 to 7 key applications to consider.
Step Two: Next, gather the data behind your list of relevant threats. In The House & The Cloud, as well as previous posts on my blog, I’ve outlined different approaches to asking questions and gathering data. Essentially, you want to know how long they can be down, how much data they can lose, and what’s going on around them that would affect risk, other than misconfigured systems. (A lawsuit, layoff, or upcoming product launch all come to mind.)
Step Three: Now build that list of relevant threats and risk considerations. This is where you must define “Secure”. You’ll want to consider the three pillars of security (Confidentiality, Integrity, and Availability). You’ll also want to consider your asset owner’s answers on downtime and data loss. If the asset owner believes 4 hours is the max downtime, find out if that’s ever been tested. I bet it hasn’t. What are the odds of getting a given server back up and operational? Only a test will tell. That could be your next sale.
Step Four: Identify the controls needed in their situation to protect against the threats you believe are relevant. For instance, AV, firewall configuration, sandboxing, SIEM, etc. Is there someone there who can interpret SIEM output and alerts? Probably not – and if not, that control is somewhat useless.
Step Five: Collect data. You’re looking for symptoms of misuse or compromise. Bot traffic is a sure sign of compromise – so that would be 100% (or 99% if you can’t verify it in the scope of your assessment).
Step Six: A database of norms is needed, as Mack Hanan points out in his book, Consultative Selling. In the event you don’t have such a database (and that’s probably the case when you’re just getting started), industry data will do. For instance, we know that 90% of email is spam, and much of it contains phishing attacks. Do they have the controls in place to stop these attacks? The industry reports tell us 87% of firms (or whatever number you can come up with using your trusted sources) reported malware over the past 12 months. So if, in your expert opinion, this company is “average”, there’s an 87% chance. Yes, this is simplistic, but it’s far better than a red light…
Step Seven: At this point I would create a table using weighted averages…so there is some math (sketched below). Take each control and rank it against the given threats, giving each control a % weight based on what you think is most important. The weights should total 100% – making up their 100% security solution. Note, this list is pretty simple – yours may have 10 or 20 items, but don’t get carried away. Again, we’re not trying to fly to the moon with this process.
A score is given to each control based on what you observe. Do they have UTM components configured and running? All of them? One of them? How complete is their firewall configuration? Don’t forget about things like training, policy, the disaster recovery plan, etc.
You’ll do this for each major asset…so with 5 data assets, you’ll have 5 different tables like this one. Notice, training may be the same if the same people use that application. However, training may vary from department to department. Same with the importance of a control or additional controls for applications used at home or on mobile devices.
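Here’s a minimal sketch of that weighted-average math – the controls, weights, and observed scores below are placeholders for illustration, not a recommended control set.

```python
# Weighted control scoring: each control gets a weight (weights sum to
# 100%) and an observed score from 0.0 (absent) to 1.0 (fully in place).
# Controls, weights, and scores are placeholders for illustration.
controls = {
    # control:                    (weight, observed score)
    "firewall configuration":     (0.25, 0.8),
    "AV / endpoint protection":   (0.20, 1.0),
    "SIEM + someone to read it":  (0.20, 0.0),
    "user training":              (0.15, 0.5),
    "tested DR plan":             (0.20, 0.3),
}

# Sanity check: the weights should make up the 100% security solution.
assert abs(sum(w for w, _ in controls.values()) - 1.0) < 1e-9

# Overall coverage for this asset's table.
coverage = sum(weight * score for weight, score in controls.values())
print(f"Coverage: {coverage:.0%}")  # ~54% of the needed protection in place
```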
Step Eight: Okay, now you have a score…but what you need to know is, what’s an average score? This is where your database of norms comes into play. Early on there may be more guesswork; however, there may also be data online.
For instance, we know that only 26% of iPhone users and 60% of Android users run any kind of mobile security software, Kaspersky says that 90% of Android phones are easily hacked with a certain exploit, and 95% of phone users access the Internet with them. Use whatever stats you feel are valid based on their source.
I personally like to use Gartner Group, the FBI, and the WSJ first…but I will draw from other sources such as Verizon’s annual data breach report or well-known vendor studies from Kaspersky, Cisco, Symantec…these represent industry averages. If your client has solid mobile security, they’re above average…if 50% have it, they’re average. If no one uses mobile security, or it’s not enough to measure, they’re below average.
Some Helpful Assumptions
As you review your scores, making some realistic assumptions can help you land on the right number. Remember, on the impact vs. likelihood graph, you are simply trying to land on a % likelihood of breach or problem for a given data set or application. Consider these assumptions, and add your own…
- What is the likelihood that a phishing email will enter your client’s internal network? Nearly 100%, since 90% of all email is spam, and most spam these days contains phishing links.
- What is the likelihood that someone (probably an administrative assistant or office worker) will click a bad link over the next 12 months? Nearly 100%…a simple phishing test will prove this out, but sharing some war stories may be enough to make your point.
- What is the likelihood your client’s or prospect’s current security controls will detect that phishing attack or ransomware link before harm is done? You could test this, but your SE’s expert opinion is all you really need. (A worked example follows this list.)
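Those three likelihoods chain together. A worked sketch, under the simplifying assumption that the events are roughly independent (they aren’t exactly, but it’s close enough for this kind of report):

```python
# Chance a phishing attack lands and does harm over the next 12 months.
# The three inputs mirror the questions above; the values are illustrative.
p_arrives  = 0.99  # phishing email reaches the internal network
p_clicks   = 0.95  # at least one user clicks over the next 12 months
p_detected = 0.60  # SE's opinion: controls stop it before harm is done

# Treating the events as roughly independent (a simplification):
p_harm = p_arrives * p_clicks * (1 - p_detected)
print(f"Likelihood of harm: {p_harm:.0%}")  # ~38%
```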
Putting It All Together
You’ve interviewed, observed, collected data, and now it’s time to put some numbers down.
(Download my Assessment Template Free)
If you have evidence of malware, you’re at 100% for any system susceptible to malware infections, and future ransomware attacks are highly likely.
You know malware will hit most companies over the next 12 months, and at least half will be hit with ransomware, based on statistics I’ve already given you. Is this company better or worse than most? That’s your expert opinion. So if they’re average, their on-prem servers and workstations are sitting at 50% or better.
You can see where I’m going here. Unless the company’s security is better than most, chances are high for just about every application.
Then, on top of that, you have the likelihood of email spoofing and invoice fraud, internal theft (averaging 75%), etc. List out your applications, review your greatest threats, and assign your numbers based on your table above.
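Pulling it together, a minimal sketch of the final table – one row per application, its dominant threat, and your assigned numbers. Everything here is a hypothetical placeholder for your own data.

```python
# Final assembly: one row per application for the impact vs. likelihood
# graph. Applications, threats, and numbers are hypothetical placeholders.
report = [
    # (application, dominant threat,     likelihood, downtime cost $K/hr)
    ("ERP",         "ransomware",        0.55,       12),
    ("Email",       "phishing/spoofing", 0.90,        3),
    ("Payroll",     "internal theft",    0.75,        5),
]

# Print highest-likelihood applications first for the executive summary.
for app, threat, likelihood, impact in sorted(report, key=lambda r: -r[2]):
    print(f"{app:<8} {threat:<18} {likelihood:>4.0%}  ~${impact}K/hr down")
```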
You’ll want to be able to show there’s a method behind your madness, but don’t overcomplicate it. The client just needs to see that there’s some science behind what you’re reporting. If you understand the normal distribution, it can’t hurt to show some data based on one or two standard deviations from the norm…95% of companies fall within 2 standard deviations of any norm…if you don’t understand how that works, just leave it out. Some further study on this will provide a greater level of proof, but just go with what you have now to complete the report…
Feel free to comment or ask questions below!
© 2017 David Stelzl