Friday, November 23, 2007

The most annoying security procedures

According to a Swedish survey with 1200 participants, these are the three most annoying security procedures enforced at companies:

  • being forced to change passwords: 43%
  • having the USB port blocked: 42%
  • not being able to choose your own password: 41%

I certainly agree with the first one... it is annoying because it is hard to remember all the passwords you use at different places.

Thursday, November 15, 2007

Security Architecture Analysis

While looking for work related to my research, I stumbled upon this survey from the Australian government: A Survey of Techniques for Security Architecture Analysis. It is quite an interesting survey. Too bad it is rather old, from 2003. However, it contains a lot of interesting material, and I have not found any other paper that covers as much work in this field in the same context. The abstract of the survey says (my layout and emphases):

This technical report is a survey of existing techniques which could potentially be used in the analysis of security architectures. The report has been structured to section the analysis process over three phases:
  • the capture of a specific architecture in a suitable representation,
  • discovering attacks on the captured architecture, and
  • then assessing and comparing different security architectures.
Each technique presented in this report has been recognised as being potentially useful for one phase of the analysis. By presenting a set of potentially useful techniques, it is hoped that designers and decisionmakers involved in the development and maintenance of security architectures will be able to develop a more complete, justified and usable methodology other than those currently being used to perform analyses.
Does anybody know of any other work that covers all three phases above?

Monday, October 8, 2007

Citrix vulnerability

Richard's recent post at TaoSecurity pointed me to this interesting blog entry:

CITRIX: Owning the Legitimate Backdoor | GNUCITIZEN

I found the explanation for why it is easy to hack a Citrix server at Citrix Systems Inc.:
Citrix’s passion is to simplify information access for everyone. As the only enterprise software company 100% focused on access, this is also our unique passion.

... Higher Productivity—Users need access to be invisible. They want easy, on-demand access from wherever they are, using any device and network.
So Citrix wants to simplify information access for everyone and make the access invisible, and Citrix does it with passion...

Wednesday, September 19, 2007

Poor Macbook thieves

Thieves stole a set of Macbooks from a school in northern Sweden, according to this Swedish newspaper:

Macbooktjuvar klev rakt i fällan - IDG.se

However, what the thieves did not know was that software from Orbicule had been installed. With this software it was possible, among other things, to identify the computers' new IP addresses and to capture pictures of the thieves with the built-in webcam. It was then easy for the police to identify the thieves and catch them.

That is kind of an intrusion response system!


Powered by ScribeFire.

Psychological warfare

On Anton Chuvakin's blog I read the following entry: Why Security Is Useless. It is probably true, but it also makes me think: "well, then the only reasonable thing is to give up on security". This resembles psychological warfare. As the Borg in Star Trek say: "resistance is futile".


Tuesday, September 18, 2007

Sweden the third most used country for cyber crime

According to a Swedish newspaper, Sweden hosts a lot of servers that are used for criminal acts. Third position this year; last year we held second position...

Kriminella avancerar på nätet

Chockhöjning av nya virus...

Well, I hope this might increase the funding for computer security at large and specifically intrusion detection.

Thursday, September 13, 2007

The misuse of intrusion detection

The same methods used for intrusion detection can also be used for detecting almost anything. The EU Justice Commissioner Franco Frattini wants to forbid searches for terror words such as "bomb", "kill" and "terrorism". You might start to wonder what the EU is going to become.

EU-topp vill förbjuda terrorord
EU vill blockera farliga sökord - IDG.se
Web search for bomb recipes should be blocked: EU

EU proposes anti-terror measures


Friday, August 31, 2007

Richard on risk analysis and FAIR again

Richard at TaoSecurity is addressing FAIR again. This time I have come up with what I think is a pretty good argument in defense of FAIR. I wrote a comment on Richard's post, but I quote it below as well:

Richard,

I think you are right in some respects: since with FAIR you do not usually have real data for making probability estimates, you will not get as good a risk estimate as you might wish.

However, in FAIR and similar frameworks you get help to elicit expert knowledge and transform it into a risk estimate. The validity of this risk estimate is of course related to the validity of the expert knowledge: if you put garbage in, you get garbage out.

But I think you are wrong when you say that the input to FAIR is arbitrary. Of course, if used incorrectly, the input can be arbitrary.

My question is: why would anybody who seriously wants to use FAIR provide "arbitrary" input? Why not make the best "guesses" your knowledge allows? Then, based on the input and its modeling assumptions, FAIR will output the best possible risk estimate (at least if you believe in Bayesian statistics and decision theory...).

This means that you cannot make any better risk estimate based on the knowledge you have given as input without changing the FAIR model or adding more input.

So if you have to make the decision that is best according to your knowledge, then FAIR might work well.
What do you think?

Tuesday, August 28, 2007

FAIR is defended

Alex at http://www.riskanalys.is/ defends FAIR here and here.

In defence of FAIR, I think it should be possible to show that by making more fine-grained decisions and then combining them, you get fewer errors than by making a single monolithic decision. However, I cannot come up with a good model that shows this. Maybe it has already been done? Does anybody know?



Monday, August 27, 2007

Risk analysis

There is an interesting debate going on about the usefulness of risk analysis: TaoSecurity: Thoughts on FAIR.

I think Richard's arguments against risk analysis are quite convincing but I also think that a detailed analysis as prescribed by FAIR is better than a shallow one. I will come back to the reason later.

Monday, June 25, 2007

Visualization

Anton Chuvakin points to this funny link about visualization. The statement "Chart-based encryption -- data goes in, no information comes out" is especially funny. It is worth keeping in mind when thinking about what to visualize in a security setting. In my work we want to visualize potential intrusion activities and attacks at a network level. We want to give the user a situational picture ("lägesbild" in Swedish) of the activities at different nodes in the network. In order to do that, we have to use visualization to communicate in an understandable way.

Sunday, May 20, 2007

IDS is dead, long live the IDS!

Richard at TaoSecurity has, as usual, some insightful remarks on the death of the IDS:

TaoSecurity: It's Only a Flesh Wound

His remarks are quite interesting to me, since my research is mostly about the intrusion detection and alert analysis part, not so much about active response or prevention.

Friday, April 20, 2007

SecViz | Security Visualization

This seems to be an interesting blog about visualization for security:



SecViz | Security Visualization




Wednesday, April 18, 2007

TaoSecurity: Fight to Your Strengths

In an interesting blog entry, TaoSecurity: Fight to Your Strengths, Richard Bejtlich suggests that security through obscurity might sometimes be suitable. He uses an example where he lets OpenSSH listen on a port other than the default and thus gets fewer attacks against sshd. I have added a question at his blog that would be interesting to investigate:
Would it be possible to let a firewall or inline IDS automatically block incoming ssh traffic to the default port and then make ssh communication going out using the default port appear to be using a different port?
The idea would be to apply a temporary obfuscation automatically until it is possible to switch ports on the server. In this way it might be possible to stop automated attacks without interfering with the running service. Is there anybody out there who can tell me whether this would work in reality?
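As a rough sketch of the firewall half of the idea, assuming Linux iptables (the rules below are hypothetical and untested; rewriting the server's outgoing side would need more than this):

```shell
# Drop probes of the default SSH port before NAT happens. The mangle
# PREROUTING chain is traversed before nat PREROUTING, so this rule
# still sees the original destination port.
iptables -t mangle -A PREROUTING -p tcp --dport 22 -j DROP

# Redirect traffic arriving on an alternative port (2222 is an
# arbitrary choice) to the local sshd still listening on port 22.
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j REDIRECT --to-ports 22
```

With something like this, automated scanners hammering port 22 are silently dropped while clients told about the alternative port still reach the unmodified sshd.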




Monday, April 16, 2007

About: Open-Source Security Tools Abound

An interesting article about open source security tools that commercial actors should also investigate:

Linux/Open Source - Open-Source Security Tools Abound


Tuesday, April 10, 2007

Other paper: Remodeling and Simulation of Intrusion Detection Evaluation Dataset

In the proceedings of the 2006 International Conference on Security & Management (SAM'06) I found this paper: Remodeling and Simulation of Intrusion Detection Evaluation Dataset.
In the paper, the authors describe how they simulate network traffic (both innocent and malicious) for testing intrusion detection systems.

They want to improve on the MIT LL dataset, which is widely thought to have major drawbacks that make it less useful for testing intrusion detection systems.

The paper's main contribution is the creation of personalized simulations of users' web browsing behavior, whereas MIT's dataset only had a rough distribution of the overall behavior. They model real users' behavior as probabilistic transition diagrams over browsing sessions, complemented with daily connection distributions, daily connection cumulative densities and session length distributions. Browsing traffic is then generated from the collection of user models, either with a one-to-one mapping from user model to simulated user or by generating more simulated users than there are user models.
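The session-generation part of such a user model can be sketched as a walk over a probabilistic transition diagram, i.e. a Markov chain. The states and probabilities below are invented for illustration, not taken from the paper:

```python
import random

# Hypothetical transition diagram for one user's browsing sessions.
# Each state maps to a list of (next state, probability) pairs.
transitions = {
    "start":  [("home", 1.0)],
    "home":   [("news", 0.5), ("search", 0.3), ("end", 0.2)],
    "news":   [("news", 0.4), ("home", 0.3), ("end", 0.3)],
    "search": [("news", 0.6), ("end", 0.4)],
}

def simulate_session(rng: random.Random, max_len: int = 50):
    """Walk the transition diagram from 'start' until 'end',
    recording the pages visited in one browsing session."""
    state, session = "start", []
    while state != "end" and len(session) < max_len:
        pages, weights = zip(*transitions[state])
        state = rng.choices(pages, weights=weights)[0]
        if state != "end":
            session.append(state)
    return session

session = simulate_session(random.Random(42))
```

Per the paper's scheme, many such sessions, spread out according to the daily connection and session-length distributions, would then be turned into actual web requests.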

Email traffic is simulated using a public corpus of emails, while the MIT dataset used a combination of filtered real emails and automatically generated emails. The emails are clustered into four classes, but it is not clear what the classes are used for, nor whether these classes relate to the classes created from the source and destination addresses mentioned earlier. It is also not quite clear how the emails are used in the simulation.

They also claim to have a larger set of attacks than the MIT dataset, such as DDoS, probes, WWW attacks, RPC attacks, etc.

Finally, they show that their simulated web browsing behavior resembles their reference network more closely than the MIT dataset simulation, which lacks certain characteristics.

Comment: I would like to be able to use the generated traffic as a basis for my research - too bad there is no link to a public dataset.

Tuesday, April 3, 2007

"Signatures are usually based on vulnerabilities rather than exploits"

This is interesting: when I started reading about signature-based intrusion detection systems, I thought that signatures were created from patterns in the exploit. However, as I noted in a previous entry and learned from the post below (which I found via TaoSecurity), this is not the case.

Errata Security: ANI 0day vs. intrusion detection providers
signatures are usually based on vulnerabilities rather than exploits
This means that learning systems like Polygraph, which generate signatures from exploits, are not automating signature generation properly. They are, however, able to block worms exploiting unknown vulnerabilities.

Friday, March 30, 2007

Mohit's security blog: IPS algorithms...

Mohit's security blog: IPS algorithms...

See what I wrote in the previous blog entry.

Background reading: Polygraph - Automatically Generating Signatures for Polymorphic Worms

The next paper from RAID 2006 that I will comment on is about manipulating Polygraph. Thus it seemed natural to look at the original publication Polygraph: Automatically Generating Signatures for Polymorphic Worms (2005).

Polygraph is a program that automatically generates signatures for polymorphic worms, that is, worms that change (obfuscate) their appearance from attack to attack. Existing worm blocking solutions (before 2005) assume that a worm has the same content every time, making it easy to automatically generate signatures (simple single byte strings) that filter out the worm. However, this assumption does not hold for polymorphic worms.

However, since polymorphic worms target specific vulnerabilities, some of the payload must be the same across all instances. Polygraph therefore collects suspicious and innocuous payloads, classified using a simple flow classifier, and then extracts content signatures from them. Instead of extracting just one single byte string, as in previous algorithms, Polygraph extracts sets of byte sequences.

The extracted byte sequences are used in three different ways for detecting worms:

  • All byte sequences must be present in the payload to indicate a worm
  • All byte sequences must be present in the correct order to indicate a worm
  • All byte sequences are weighed together using a Naïve Bayes classifier:
    1. Each byte sequence has a probability of appearing in a worm or not: P(seq | worm) and P(seq | ~worm)
    2. A score is computed for a payload being a worm, where {seq} denotes all sequences in the payload: score = P({seq} | worm) / P({seq} | ~worm)
    3. The score is then compared to a threshold tau; if score > tau, the payload is believed to be a worm
To handle the case where there is more than one type of polymorphic worm among the suspicious payloads, Polygraph uses hierarchical clustering of the payloads' byte sequences. The sequences are merged into clusters while minimizing the false positives, as tested against the innocuous payloads.
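The Naïve Bayes scoring can be sketched as follows. The byte sequences, probabilities and threshold below are invented for illustration; in Polygraph the real values come from training on the collected payloads:

```python
import math

# Hypothetical per-sequence probabilities, NOT taken from the paper:
# P(seq | worm) and P(seq | ~worm) for each extracted byte sequence.
p_worm  = {b"GET ": 0.9, b"\x90\x90": 0.8,   b"HTTP/1.1": 0.95}
p_clean = {b"GET ": 0.9, b"\x90\x90": 0.001, b"HTTP/1.1": 0.95}

def nb_score(payload: bytes) -> float:
    """Log odds that a payload is a worm, naive-Bayes style: the sum
    over sequences present of log P(seq|worm)/P(seq|~worm)."""
    score = 0.0
    for seq in p_worm:
        if seq in payload:
            score += math.log(p_worm[seq] / p_clean[seq])
    return score

tau = math.log(100.0)   # hypothetical threshold on the odds ratio

payload = b"GET /index.html HTTP/1.1\r\n" + b"\x90\x90" * 8
is_worm = nb_score(payload) > tau
```

The NOP-sled-like sequence dominates the score here, which is the intended behavior: sequences common in innocuous traffic contribute almost nothing.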

Comment: First of all, I think this is an interesting paper, since I have a background in machine learning and Bayesian learning. However, the learning algorithms could probably be improved, for instance by applying a more fully Bayesian approach than the Naïve Bayes classifier used.

In addition I found an interesting comment at
Mohit's security blog: IPS algorithms... that is as follows:
Most signatures in good products are vulnerability based so even if you change the attack it still gets stopped.

Thus, Polygraph might not be needed! Or what should we believe?

Wednesday, March 14, 2007

IPS without signatures or log analysis

ForeScout is a company that claims to have an Intelligent IPS that uses
an entirely unique approach to preventing network attacks from "zero-day" threats such as self-propagating malware and hackers/espionage without using signatures, anomaly detection or any form of pattern matching technology. ForeScout's solution has proven its accuracy by detecting in real-time every self-propagating threat to date and has gained the trust of 100% of our customers who use the appliances in automatic blocking mode.


In summary: malware is detected when it probes the network for vulnerabilities. Any request to a non-existing IP address is assumed to be a certain indication of malware, so it should be stopped. The IPS answers each malware request with some marked information, and when the malware sends a new request containing the marked information, it can be stopped before it makes a real intrusion attempt.
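A toy sketch of how such marking could work. Everything here (addresses, the mark format, the decision logic) is hypothetical; ForeScout's actual mechanism is not described in detail:

```python
# Requests to IP addresses that do not exist on the network are treated
# as scanning: they get answered with "marked" information, and any
# later request carrying a handed-out mark gets its source blocked.
live_hosts = {"10.0.0.10", "10.0.0.11"}
handed_out_marks = {}   # src_ip -> mark given to that scanner
blocked = set()

def handle_request(src_ip: str, dst_ip: str, payload: str) -> str:
    if src_ip in blocked:
        return "drop"
    if payload in handed_out_marks.values():
        blocked.add(src_ip)          # it came back with marked info
        return "drop"
    if dst_ip not in live_hosts:     # probe of a non-existent host
        mark = f"mark-{len(handed_out_marks)}"
        handed_out_marks[src_ip] = mark
        return f"reply:{mark}"
    return "forward"

first  = handle_request("1.2.3.4", "10.0.0.99", "SYN")      # probe
second = handle_request("1.2.3.4", "10.0.0.10", first[6:])  # reuses mark
```

The scanner is blocked on its second request, before it ever touches a live host with the marked information.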

Comment: This seems to be a neat solution. Though, if it is true: why is research in this area still needed?

powered by performancing firefox

Tuesday, March 13, 2007

"Big Business"

Yesterday the Swedish newspaper Svenska Dagbladet had a set of articles about intrusions into Swedish companies. It really seems to be "Big Business". I hope these articles trigger more Swedish research in intrusion detection.






Sunday, March 11, 2007

Tao Security

A good blog for learning more about intrusion detection and related topics is fellow Blogspot blogger Richard Bejtlich's Tao Security. Richard's posts are full of interesting remarks about the current state of network security and intrusion detection. I wonder if it is possible to automate some of what he calls Network Security Monitoring (NSM) and thus filter out more irrelevant alarms?

Friday, March 9, 2007

Paper 4: Allergy Attack Against Automatic Signature Generation

This paper practically shows how to do what Can Machine Learning Be Secure? describes. In the paper, the authors show how to attack systems that use Automatic Signature Generation (ASG). A typical ASG system first detects an intrusion or attack, then automatically generates a signature from the attack data, and finally filters out all future traffic matching the signature.

By exploiting the fact that many ASG systems do not use the same method for detecting the attack as for creating the signature, they are able to fool the system into creating signatures for non-malicious traffic. Also, ASG systems that do not use the full context of an attack, such as the steps leading up to it, are more easily fooled.

An ASG system seems to be a kind of unsupervised learning system, using anomaly detection to detect suspicious traffic. A signature is then created from the traffic based on comparisons between many suspicious traffic instances. The signature is often computed as the longest common byte sequence.
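The longest-common-byte-sequence extraction can be sketched with a standard dynamic-programming routine (the payloads below are made up):

```python
def longest_common_substring(a: bytes, b: bytes) -> bytes:
    """Longest common contiguous byte sequence of a and b, found by
    dynamic programming over match lengths ending at each (i, j)."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

# Two "suspicious" payloads sharing an invariant exploit substring
sig = longest_common_substring(b"xxEXPLOITyy", b"abEXPLOITcd")
```

The allergy attack works precisely because this step trusts whatever the anomaly detector flagged: feed it matched pairs of benign traffic and the "signature" it extracts will block benign traffic.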

IDS and child pornography

According to a Swedish newspaper, the volume of child pornography seized by the police in single cases has increased from an average of 10,000-20,000 pictures two years ago to up to millions of pictures and movies today. The sheer volume blocks the police from investigating the crimes (quote below).

Barnporrfall blir liggande

- The large volumes are blocking our resources. A large seizure two years ago could consist of 10,000-20,000 pictures. We thought that was a lot at the time. Today there can be single seizures where the suspect has stored several million movies and pictures, says Stefan Kronqvist, head of the IT crime section of the Swedish National Criminal Police.

Maybe intrusion detection/prevention technology could be used to stop child porn from being sent through a network? Pedophiles probably encrypt their communication using some form of cryptography, though, or maybe they use darknets. However, according to the following story, it might be a reasonable approach: Recent child porn busts are one result of stepped-up Internet monitoring.

Thursday, March 8, 2007

OSSEC is gaining momentum

OSSEC HIDS (see software link) is a project I am keeping an eye on. It seems to be gaining popularity according to this blog:

http://www.appliedwatch.com/blog/?p=6



Tuesday, March 6, 2007

Follow up on: Can Machine Learning Be Secure?

In the previous post I mentioned that I did not understand the relative distance metric they used for analyzing the security of a simple learning problem. However, in the paper they refer to a Master's thesis that explains the metric in more detail.

The keys to understanding are the following:
  1. To move the decision boundary as much as possible, the attacker places each injected data point at the previous mean plus the radius: X_{t-1} + R
  2. This is done α_t times at iteration t: (X_{t-1} + R) · α_t
  3. The next mean is X_t = [X_{t-1} · (n + α_1 + ... + α_{t-1}) + (X_{t-1} + R) · α_t] / (n + α_1 + ... + α_t), where n is the number of previous data points
  4. Simplifying the expression gives: X_t = X_{t-1} + R · α_t / (n + α_1 + ... + α_t)
  5. Assuming the attacker has complete control of the learning process, n = 0, and thus: X_t = X_{t-1} + R · α_t / (α_1 + ... + α_t)
  6. Writing M_t = α_1 + ... + α_t for the effort of the attacker, unrolling the recursion gives: X_t = X_0 + R · [1 + α_2/M_2 + ... + α_t/M_t]
  7. Thus: (X_t - X_0)/R = 1 + α_2/M_2 + ... + α_t/M_t
  8. Note that α_i/M_i = (M_i - M_{i-1})/M_i = 1 - M_{i-1}/M_i
  9. The relative displacement is therefore (X_t - X_0)/R = 1 + (1 - M_1/M_2) + ... + (1 - M_{t-1}/M_t) = t - [M_1/M_2 + ... + M_{t-1}/M_t]
  10. For t = T, the relative distance (or displacement) is, as in the paper:
    • D({M_i}) = T − [M_1/M_2 + ... + M_{T-1}/M_T]
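The displacement formula can be checked numerically by simulating the attack directly and comparing with the closed form (the per-iteration attack efforts below are arbitrary):

```python
# Simulate the attack: with n = 0 prior points, inject alpha_t copies
# of (current mean + R) at each iteration and recompute the mean.
R, X0 = 1.0, 0.0
alphas = [3, 2, 5, 4]        # arbitrary per-iteration attack efforts

points, mean = [], X0
for a in alphas:
    points += [mean + R] * a
    mean = sum(points) / len(points)

# Closed form from the derivation: D({M_i}) = T - sum(M_{i-1}/M_i)
M, s = [], 0
for a in alphas:
    s += a
    M.append(s)
T = len(alphas)
D = T - sum(M[i - 1] / M[i] for i in range(1, T))

assert abs((mean - X0) / R - D) < 1e-9
```

Both routes give the same relative displacement, which at least convinced me that I finally read the definition correctly.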

Monday, March 5, 2007

Background reading: Can Machine Learning Be Secure?

The next paper from the RAID 2006 proceedings cites a paper called Can Machine Learning Be Secure? as its source of inspiration. That is a theoretical paper, while the RAID paper complements it by being experimental. Thus it seemed reasonable to read it before reading the next RAID paper.

Can Machine Learning Be Secure? That seems to be a good question. This paper analyzes how secure a learning system can be.

A learning system adjusts its model given new data. These are some of the questions asked:
  • Can it be trained by an attacker to allow malicious calls?
  • Can it be degenerated such that it becomes useless and must be shut down?
  • Are there any defenses against these attacks?
The paper tries to create a taxonomy of attacks on a learning system but I don't think it is that successful. The taxonomy has three axes:
  1. Influence: the part of the learning system that is manipulated, causative (alter the training data) or exploratory (trying to discover information about the system)
  2. Specificity: a continuous spectrum, from achieving a specific goal, for instance manipulating the learning system to accept a specific malicious call, to achieving a broader goal, for instance manipulating the learner to reveal the existence of any possible malicious call.
  3. Security violation: what security goal is violated, integrity (false negative) or availability (many classification errors making the system useless).
Comment: I don't think the paper gives enough justification for this taxonomy. It is not clear to me that these axes and scales are completely orthogonal, or even that they describe the space of attacks in a good way. Although I cannot, at the moment, come up with something better, I think it should be possible to think this through again and improve it. Maybe it is the vocabulary that is problematic; with different words, the taxonomy might be more readable.

The paper then lists defenses against the different attacks: adding prior distributions (robustness), which makes the system less sensitive to altered data; detecting attacks with an intrusion detection mechanism that analyzes the training data; confusing the attacker using disinformation that hinders the attacker from learning the decision boundaries; and, what seems to be a special case of the former, randomizing the decision boundaries.

Comment: Bayesian learning methods seem to be a natural choice, since prior distributions are at the heart of the Bayesian concept.


Last in the paper, they analyze a simple learning example for outlier detection, deriving bounds on the effort an attacker has to spend to manipulate the learning system into wrongly classifying a malicious call.

Comment: I cannot write much about this analysis, since I could not understand the definition of the relative distance they use. I do not understand why they use it or what it means, and thus I do not understand the result. Is there anybody out there who can help me with this?

See the follow-up post on this issue.



Wednesday, February 28, 2007

Paper 3: Automated Discovery Of Mimicry Attacks

This paper describes an approach to checking that the models of model-based anomaly detection approaches really detect malicious system calls. In particular, the approach aims at discovering mimicry attacks, that is, attacks that, for instance via a buffer overflow, invoke malicious system call sequences disguised as non-dangerous call sequences. Previously, the discovery of mimicry attacks was done manually.

A model-based anomaly detection approach uses a model describing the "normal" and allowed behavior of a monitored system. However, model-based anomaly detection can sometimes be cheated by mimicry attacks that imitate "normal" behavior and thus avoid detection.

In this paper they create a model of the operating system (OS) monitored by a model-based anomaly detection system. They then use the OS model to build a push-down automaton/push-down system of the anomaly detection system. Thereafter, a model checker, given a malicious goal for an attack (for instance, creating a new user account), can automatically either find a successful attack call sequence not detected by the detection system or prove that no attack call sequence for that goal evades the modeled automaton. This means that the reliability of the approach depends heavily on the OS model being correct.

I think this was a quite interesting paper with nice results, though I am not that familiar with model checking and formal methods.




Monday, February 26, 2007

Paper 2: Behavioral Distance Measurement Using Hidden Markov Models

In this paper, the authors describe how they use a hidden Markov model (HMM) to model the execution similarities between two processes performing the same work, for instance two Apache web servers running on two different platforms, Linux and Windows. The assumption is that the two processes will not have the same vulnerabilities, and thus, by measuring the behavioral distance between the two processes, we can detect anomalies.

Much of the paper describes the HMM and whether the overhead is small enough to make the algorithm useful.

Something missing is the significance of the results. For instance, when comparing with another distance metric algorithm, an ED-based approach, the result is that the HMM-based approach is 6.32% faster, but nothing is said about the variance or significance. I would recommend any researcher to choose a good statistical test so the results cannot be so easily questioned. A good online handbook for such tests is the NIST/SEMATECH e-Handbook of Statistical Methods.
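To give a feel for what "behavioral distance between two processes" means, here is a toy sequence-comparison sketch. A plain Levenshtein distance over system-call traces is used only as a stand-in; it is not the paper's ED metric or the HMM-based one, and the traces are invented:

```python
def edit_distance(a, b):
    """Levenshtein distance between two system-call sequences,
    computed row by row with the classic DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical system-call traces from two replicas of one server
linux_trace   = ["open", "read", "write", "close"]
windows_trace = ["open", "read", "read", "write", "close"]
dist = edit_distance(linux_trace, windows_trace)
```

A small distance means the replicas behaved similarly; a sudden large distance between them would flag that one replica, but not the other, has been subverted.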

Paper 1: A Framework For The Application Of Association Rule Mining In Large Intrusion Detection Infrastructures

This paper is about using data mining, in the form of association rules, to extract rules describing correlations between alarms from a large set of intrusion detection systems. The rules can then be used as a basis for creating new rules to detect correlated intrusions.

Since the system mines for correlations between a huge number of alarms, it needs some form of data filtering. As its filtering approach, the system uses graph algorithms on a graph where IP addresses are vertices and detected alarms are edges, drawn from source to destination IP address. Only connected components of the graph are used for mining.
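The connected-component filtering step can be sketched with a small union-find structure over IP vertices (the alarm tuples below are invented):

```python
from collections import defaultdict

class DSU:
    """Union-find: vertices are IP addresses, alarm edges join them."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Hypothetical alarms as (source IP, destination IP) edges
alarms = [("10.0.0.1", "10.0.0.2"),
          ("10.0.0.2", "10.0.0.3"),
          ("192.168.1.5", "192.168.1.6")]

dsu = DSU()
for src, dst in alarms:
    dsu.union(src, dst)

# Group the alarms by connected component; each component is then
# mined for association rules separately.
components = defaultdict(list)
for src, dst in alarms:
    components[dsu.find(src)].append((src, dst))
```

Alarms that share no IP addresses end up in separate components, so the rule miner never has to consider correlations across unrelated parts of the network.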

Amongst the most interesting things in this article are the following:
  • The number of rules generated each day can be used to detect weird (anomalous) network activities.
  • This can also be done for each subnet of the network, thus finding high-risk networks.
A problem, though, is that their results are not repeatable. I can imagine that this is often a problem in security research with sensitive data. Many of their results are like anecdotes, which makes it hard to compare them to others' work.

RAID 2006: Recent Advances in Intrusion Detection

I have just started a project about intrusion detection and prevention. As background reading I have a copy of the Proceedings of RAID 2006: Recent Advances in Intrusion Detection, 9th International Symposium. My plan is to start by blogging about selected papers from the proceedings and then continue with papers from other sources.








