Showing posts with label risk analysis. Show all posts

Thursday, November 26, 2009

A paper is out: Impact Estimation using Data Flows over Attack Graphs

In October I presented a part of my work on measuring the security of a network at the NordSec 2009 conference. You can find the paper here. Any feedback is welcome.

Abstract

We propose a novel approach to estimating the impact of an attack using a data model and an impact model on top of an attack graph. The data model describes how data flows between nodes in the network -- how it is copied and processed by software and hosts -- while the impact model models how exploitation of vulnerabilities affects the data flows with respect to the confidentiality, integrity and availability of the data. In addition, by assigning a loss value to a compromised data set, we can estimate the cost of a successful attack. We show that our algorithm not only subsumes the simple impact estimation used in the literature but also improves on it by explicitly modeling loss value dependencies between network nodes. With our model, the operator will be able to spend less time comparing different security patches to a network.

Friday, August 15, 2008

A fuzzy risk calculation approach as an alternative to the CVSS computation

In my previous post I asked some questions about CVSSv2. While looking around for information about CVSS I stumbled over this paper: A Fuzzy Risk Calculations Approach for a Network Vulnerability Ranking System (TM 2007-090). The author describes a fully fuzzy systems approach for ranking vulnerabilities that can also rank networks. The approach is partly based on CVSSv1 and its performance is compared to CVSSv1. It would be interesting to adapt the approach to CVSSv2. One thing that is solved in the paper is that different combinations of input values should yield different output values. This seems to be a problem in CVSSv2, see here and here.

Friday, August 8, 2008

Making sense of the CVSS equations for risk analysis

I am considering using the CVSS standard to get realistic input values for a real-time risk management model I am developing in a research project. This vulnerability scoring system is meant to be easy to use and understand. However, I have been trying to make sense of the equations of CVSSv2 without much success. This is what the equation for the base score looks like:

BaseScore =
round_to_1_decimal(((0.6*Impact) + (0.4*Exploitability) - 1.5)
* f(Impact))

Impact = 10.41 *
(1 - (1-ConfImpact)*(1-IntegImpact)*(1-AvailImpact))

Exploitability = 20 *
AccessVector * AccessComplexity * Authentication

f(Impact) = 0 if Impact = 0, 1.176 otherwise
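
Typing the equations in directly makes them easier to experiment with. A minimal sketch in Python, using metric values that, as far as I can tell, come from the CVSSv2 tables (Network access vector = 1.0, low complexity = 0.71, no authentication = 0.704, complete impact = 0.660 per dimension):

```python
def f(impact):
    # the mysterious 1.176 factor rescales all non-zero scores
    return 0.0 if impact == 0 else 1.176

def base_score(conf, integ, avail, access_vector, access_complexity, authentication):
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    exploitability = 20 * access_vector * access_complexity * authentication
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f(impact), 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C should give the maximum score:
print(base_score(0.660, 0.660, 0.660, 1.0, 0.71, 0.704))  # -> 10.0
```

Playing with the inputs like this at least shows how the constants interact, even if it does not explain where they come from.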


It is not at all clear why the equation looks like this, especially the seemingly arbitrary constants (0.6, 0.4, 1.5, 10.41, 20 and 1.176). If we look at the old version (CVSSv1) it is much easier to understand:

BaseScore = round_to_1_decimal(10 * AccessVector
* AccessComplexity
* Authentication
* ((ConfImpact * ConfImpactBias)
+ (IntegImpact * IntegImpactBias)
+ (AvailImpact * AvailImpactBias)
))
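
This one is just as easy to type in. A quick sketch, assuming the CVSSv1 metric values I have seen quoted (remote = 1.0, low complexity = 1.0, no authentication = 1.0, complete impact = 1.0, normal bias = 0.333 per dimension):

```python
def base_score_v1(av, ac, au,
                  conf, conf_bias, integ, integ_bias, avail, avail_bias):
    # probability-like access terms scale a weighted sum of the impacts
    return round(10 * av * ac * au
                 * (conf * conf_bias + integ * integ_bias + avail * avail_bias), 1)

# Worst case: remotely exploitable, complete impact everywhere, normal bias:
print(base_score_v1(1.0, 1.0, 1.0, 1.0, 0.333, 1.0, 0.333, 1.0, 0.333))  # -> 10.0
```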

As can be seen, CVSSv1 is much more straightforward than CVSSv2. In CVSSv1 the different parts might be interpreted as probabilities or costs.

I have a lot of questions about CVSSv2 that I would like to ask:
  • Most important: Can anybody tell me what the numerical values of ConfImpact, IntegImpact and AvailImpact mean? Are they probabilities, risk metrics of their own or anything else?
  • Why are the impact values combined with a noisy-OR instead of just being added?
  • Why are the values of the access vector, authentication and access complexity weighted and then added together with the impact instead of just being multiplied?
  • Did the people behind CVSSv2 try to adjust the weights of the CVSSv1 to fit the requirements of CVSSv2 before changing to the new equation? If not, why?
  • There is a lot of research in knowledge representation and elicitation, how has it influenced this work (if at all)?
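
To make the noisy-OR question above concrete, here is a small sketch contrasting the v2 combination with a hypothetical straight sum (scaled to the same 10.41 ceiling, my own choice):

```python
def noisy_or(c, i, a):
    # CVSSv2 combination: each additional compromised dimension adds less
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

def additive(c, i, a):
    # hypothetical alternative: the impacts just add up
    return 10.41 * (c + i + a) / 3

for combo in [(0.660, 0.0, 0.0), (0.660, 0.660, 0.0), (0.660, 0.660, 0.660)]:
    print(combo, round(noisy_or(*combo), 2), round(additive(*combo), 2))
```

The noisy-OR saturates: going from one to three completely compromised dimensions only moves the impact from 6.87 to 10.0, while the additive version grows linearly. Whether that saturation is the intended behavior is exactly what I am asking.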

Wednesday, August 6, 2008

Communicating risk using a logarithmic scale?

According to this post, our innate sense of numbers is logarithmic, not linear. This means that a choice between a risk of 100 or 200 is comparable to a choice between a risk of 200 or 400. Maybe we should consider this when communicating risk? What do you think?
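
A one-line illustration of what a logarithmic sense of numbers implies (just a sketch of the idea, not of any particular risk metric):

```python
import math

# Perceived difference depends on the ratio, not on the absolute gap:
for low, high in [(100, 200), (200, 400), (400, 800)]:
    print(f"{low} vs {high}: perceived step = {math.log(high / low):.3f}")
```

Every doubling prints the same step (log 2, about 0.693), even though the absolute gap grows from 100 to 400.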

Monday, June 30, 2008

Back to the FAIR discussion

Previously I have tried to argue why FAIR is a valid approach. Now I think there is a research result that might help explain why FAIR works.

Overcoming Bias: Average Your Guesses

In short, the article states that making two (or many?) guesses is better than making one guess...

So if you have an approach (FAIR) that lets an expert make several informed guesses and combine them rigorously, it is quite likely you will get a better estimate than by making one single guess.
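
A hedged little simulation of the averaging effect (hypothetical numbers: the "true" risk is 100 and each expert guess is unbiased with standard deviation 20):

```python
import random

random.seed(1)
TRUE_RISK = 100.0

def guess():
    # hypothetical expert: right on average, but noisy
    return random.gauss(TRUE_RISK, 20)

TRIALS = 10_000
err_single = sum(abs(guess() - TRUE_RISK) for _ in range(TRIALS)) / TRIALS
err_averaged = sum(abs((guess() + guess()) / 2 - TRUE_RISK)
                   for _ in range(TRIALS)) / TRIALS
print(round(err_single, 1), round(err_averaged, 1))
```

On my runs the averaged guesses land noticeably closer to the true value, which is the whole point of the article.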

Friday, August 31, 2007

Richard on risk analysis and FAIR again

Richard at TaoSecurity is addressing FAIR again. This time I have come up with what I think is a pretty good argument in defense of FAIR. I wrote a comment at Richard's post but I cite it below as well:

Richard,

I think you are right in some respects, that is: since with FAIR you do not usually have real data to base probability estimates on, you will not get as good a risk estimate as you might wish.

However, FAIR and similar frameworks help you elicit expert knowledge and transform it into a risk estimate. And the validity of this risk estimate is of course related to the validity of the expert knowledge: if you put garbage in, you get garbage out.

But I think you are wrong when you say that the input to FAIR is arbitrary. Of course, if used incorrectly, the input can be arbitrary.

My question is: why would anybody who seriously wants to use FAIR provide "arbitrary" input? Why not make "guesses" that are the best according to your knowledge? Then, based on the input and its modeling assumptions, FAIR will output the best possible risk estimate (at least if you believe in Bayesian statistics and decision theory...).

This means that you cannot make a better risk estimate based on the knowledge you have given as input without changing the FAIR model or adding more input.

So if you have to make a decision that is the best according to your knowledge, then FAIR might work well.
What do you think?

Tuesday, August 28, 2007

FAIR is defended

Alex at http://www.riskanalys.is/ defends FAIR here and here.

In defence of FAIR, I think it should be possible to show that by making more fine-grained decisions and then combining them, you get smaller errors than by making a single monolithic decision. However, I cannot come up with a good model that shows this. Maybe it has already been done? Does anybody know?


Monday, August 27, 2007

Risk analysis

There is an interesting debate going on about the usefulness of risk analysis: TaoSecurity: Thoughts on FAIR.

I think Richard's arguments against risk analysis are quite convincing, but I also think that a detailed analysis as prescribed by FAIR is better than a shallow one. I will come back to the reason later.