...change password: 43%
...the USB port is blocked: 42%
...not being able to select password: 41%
I certainly agree with the first one... it is annoying, because it is hard to remember all the passwords for different places.
I used to write about intrusion detection and security issues, but from now on I will write about whatever computer-related topics I come up with.
While looking for work related to my research, I stumbled upon this survey from the Australian government: A Survey of Techniques for Security Architecture Analysis. It is quite an interesting survey. It is too bad that it is rather old, from 2003. However, it contains a lot of interesting material, and I have not found any other paper that covers as much work in this field in the same context. The abstract of the survey says (my layout and emphases):
This technical report is a survey of existing techniques which could potentially be used in the analysis of security architectures. The report has been structured to section the analysis process over three phases:
- the capture of a specific architecture in a suitable representation,
- discovering attacks on the captured architecture, and
- then assessing and comparing different security architectures.
Each technique presented in this report has been recognised as being potentially useful for one phase of the analysis. By presenting a set of potentially useful techniques, it is hoped that designers and decisionmakers involved in the development and maintenance of security architectures will be able to develop a more complete, justified and usable methodology other than those currently being used to perform analyses.

Does anybody know of any other work that covers all three phases above?
Previously on this blog I have referred to an ongoing discussion on risk analysis with FAIR. Also related to this problem is this doctoral dissertation from Harvard University, from 2004:
http://citeseer.ist.psu.edu/631841.html
In this dissertation the author suggests an economic model for measuring the security of a software product. By deriving upper and lower bounds on the price of finding a new vulnerability, he is able to assign a value to a vulnerability; a higher value means a more secure product.
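The core of this pricing argument can be sketched in a few lines. Note that the function, names and numbers below are my own illustration of the bounding idea, not the dissertation's actual model:

```python
# Illustrative sketch of the market-based security metric: the highest
# bounty that attracts no seller gives a lower bound on the price of a new
# vulnerability, and an actual sale gives an upper bound. All names and
# numbers here are my own assumptions, not the dissertation's model.

def price_bounds(rejected_offers, accepted_sales):
    """Bound the market price of finding a new vulnerability.

    rejected_offers: bounty amounts that attracted no seller
    accepted_sales:  amounts at which a vulnerability actually sold
    """
    lower = max(rejected_offers) if rejected_offers else 0.0
    upper = min(accepted_sales) if accepted_sales else float("inf")
    return lower, upper

# Under this metric, the product whose vulnerabilities command a higher
# price is the more secure one.
low_a, _ = price_bounds([1000, 5000], [20000])  # product A
low_b, _ = price_bounds([1000], [3000])         # product B
more_secure = "A" if low_a > low_b else "B"
```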
My questions are: Has anybody implemented ideas similar to this? What do you think of such an approach?
Citrix’s passion is to simplify information access for everyone. As the only enterprise software company 100% focused on access, this is also our unique passion.

So Citrix wants to simplify information access for everyone and make the access invisible, and Citrix does it with passion...
... Higher Productivity—Users need access to be invisible. They want easy, on-demand access from wherever they are, using any device and network.
Powered by ScribeFire.
Richard at TaoSecurity is addressing FAIR again. This time I have come up with what I think is a pretty good argument in defense of FAIR. I wrote a comment on Richard's post, but I cite it below as well:
Richard,

I think you are right in some respects: since with FAIR you do not usually have real data to base probability estimates on, you will not get as good a risk estimate as you might wish.

However, FAIR and similar frameworks help you elicit expert knowledge and transform it into a risk estimate. The validity of this risk estimate is of course related to the validity of the expert knowledge: if you put garbage in, you get garbage out.

But I think you are wrong when you say that the input to FAIR is arbitrary. Of course, if used incorrectly, the input can be arbitrary. My question is: why would anybody who seriously wants to use FAIR provide "arbitrary" input? Why not make "guesses" that are the best according to your knowledge? Then, based on the input and its modeling assumptions, FAIR will output the best possible risk estimate (at least if you believe in Bayesian statistics and decision theory).

This means that you cannot make a better risk estimate based on the knowledge you have given as input without changing the FAIR model or adding more input. So if you have to make the best decision you can according to your knowledge, then FAIR might work well.

What do you think?
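To make my point about "best guesses, not arbitrary input" concrete, here is a minimal Monte Carlo sketch of a FAIR-style estimate. FAIR itself defines a richer taxonomy (threat event frequency, vulnerability, loss magnitude, ...); the PERT distribution and all ranges below are my own illustrative assumptions standing in for elicited expert estimates:

```python
# Minimal Monte Carlo sketch of a FAIR-style risk estimate. The expert's
# min / most-likely / max estimates are turned into distributions and
# propagated to an annual loss distribution. All ranges are invented.
import random

def pert_sample(low, mode, high, lam=4.0):
    # PERT: a Beta distribution rescaled to [low, high], shaped by the
    # expert's min / most-likely / max estimate
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

def annual_loss(n=10_000, seed=1):
    random.seed(seed)
    losses = []
    for _ in range(n):
        freq = pert_sample(0.1, 0.5, 2.0)       # loss events per year
        magnitude = pert_sample(5e3, 2e4, 1e5)  # cost per event
        losses.append(freq * magnitude)
    losses.sort()
    return sum(losses) / n, losses[int(0.95 * n)]  # mean, 95th percentile

mean_loss, p95_loss = annual_loss()
```

The point is that the inputs are constrained expert ranges, not arbitrary numbers, and the output is only as good as those ranges.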
Anton Chuvakin points to this funny link about visualization. Especially the statement "Chart-based encryption -- data goes in, no information comes out" is funny. It is worth keeping in mind when thinking about what to visualize in a security setting. In my work we want to visualize potential intrusion activity and attacks at the network level. We want to give the user a situational picture ("lägesbild" in Swedish) of the activity at different nodes in the network. To do that, we have to use visualization to communicate in an understandable way.
Would it be possible to let a firewall or inline IDS automatically block incoming ssh traffic to the default port, and then make ssh communication going out on the default port appear to use a different port? The idea would be to automatically create a temporary obfuscation until it is possible to switch ports on the server. In this way it might be possible to avoid interfering with the running service while still stopping automated attacks. Is there anybody out there who can tell me if this would work in practice?
In the proceedings of the 2006 International Conference on Security & Management (SAM'06) I found this paper: Remodeling and Simulation of Intrusion Detection Evaluation Dataset
In the paper, the authors describe how they simulate network traffic (both innocent and malicious traffic) for testing intrusion detection systems.
They want to improve on the MIT LL dataset, which is widely thought to have major drawbacks that make it less useful for testing intrusion detection systems.
The paper's main contribution is to create personalized simulations of users' web-browsing behavior, whereas MIT's dataset had only a rough distribution of the overall behavior. They model real users' behavior as probabilistic transition diagrams for browsing sessions, complemented with daily connection distributions, daily connection cumulative densities, and session-length distributions. Browsing traffic is then generated from the collection of user models, either with a one-to-one mapping from a user model to a simulated user or by generating more simulated users than there are user models.
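The probabilistic transition diagram idea can be sketched as a simple Markov-chain random walk. The states and probabilities below are invented for illustration; they are not taken from the paper:

```python
# Sketch of a browsing session modelled as a probabilistic transition
# diagram: states are page types, edges carry probabilities, and a session
# is a random walk until the "end" state is reached.
import random

TRANSITIONS = {
    "start":   [("index", 1.0)],
    "index":   [("article", 0.6), ("search", 0.3), ("end", 0.1)],
    "article": [("article", 0.4), ("index", 0.3), ("end", 0.3)],
    "search":  [("article", 0.7), ("end", 0.3)],
}

def generate_session(seed=None):
    rng = random.Random(seed)
    state, session = "start", []
    while state != "end":
        session.append(state)
        nexts, weights = zip(*TRANSITIONS[state])
        state = rng.choices(nexts, weights=weights)[0]
    return session[1:]  # drop the artificial "start" state

session = generate_session(seed=42)
```

A traffic generator would then replay such sessions according to the daily connection and session-length distributions.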
Email traffic is simulated using a public corpus of emails, whereas the MIT dataset used a combination of filtered real emails and automatically generated emails. The emails are clustered into four classes, but it is not clear what the classes are used for, nor whether they relate to the classes created from the source and destination addresses mentioned earlier. It is also not quite clear how the emails are used in the simulation.
Then they claim to have a larger set of attacks than in the MIT dataset, such as DDoS, probes, WWW attacks, RPC attacks, etc.
Finally, they show that their simulated web-browsing behavior resembles their reference network more closely than the MIT dataset simulation, which lacks certain characteristics.
Comment: I would like to be able to use the generated traffic as a basis for my research - too bad there is no link to a public dataset.
signatures are usually based on vulnerabilities rather than exploits

This means that learning systems like Polygraph, which generate signatures from exploits, are not automating signature generation properly. They are, however, able to block worms exploiting unknown vulnerabilities.
Mohit's security blog: IPS algorithms...
See what I wrote in a previous blog entry.
The next paper from RAID 2006 I will comment on is about manipulating Polygraph. Thus it seemed natural to look at the original publication Polygraph: Automatic Signature Generation for Polymorphic Worms (2005).
Polygraph is a program that automatically generates signatures for polymorphic worms, that is, worms that change (obfuscate) their appearance from attack to attack. Existing worm-blocking solutions (before 2005) assumed that a worm's content stays the same between attacks, making it easy to automatically generate signatures (simple single byte strings) that filter out worms. However, this assumption does not hold for polymorphic worms.
However, since polymorphic worms target specific vulnerabilities, some of the payload must be the same across all instances. Polygraph therefore collects suspicious and innocuous payloads, classified using a simple flow classifier, and then extracts content signatures from them. Instead of extracting just one single byte string, as previous algorithms did, Polygraph extracts sets of byte sequences.
The extracted byte sequences are used in three different ways for detecting worms: as conjunction signatures (all tokens must appear in a flow), as token-subsequence signatures (the tokens must appear in a given order), and as Bayes signatures (tokens are weighted and scored probabilistically).
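A much-simplified sketch of the token-extraction idea: collect substrings that occur in every suspicious payload and match a flow only if it contains all of them. Real Polygraph selects variable-length tokens using suffix trees; the payloads and helper names here are invented:

```python
# Simplified Polygraph-style token extraction: fixed-length substrings
# occurring in ALL suspicious payloads form a conjunction signature.

def common_tokens(payloads, min_len=3):
    """Length-min_len substrings occurring in all suspicious payloads."""
    first = payloads[0]
    candidates = {first[i:i + min_len] for i in range(len(first) - min_len + 1)}
    return {t for t in candidates if all(t in p for p in payloads[1:])}

def matches(signature, payload):
    # conjunction signature: every token must be present
    return all(token in payload for token in signature)

suspicious = [
    b"GET /x HTTP/1.1\r\nHost: a\r\n\x90\x90EXPLOIT",
    b"GET /y HTTP/1.1\r\nHost: b\r\n\x41\x41EXPLOIT",
]
signature = common_tokens(suspicious)
```

Because the invariant exploit bytes survive the polymorphic obfuscation, they end up in the token set, while the random filler bytes do not.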
Most signatures in good products are vulnerability based so even if you change the attack it still gets stopped.
an entirely unique approach to preventing network attacks from "zero-day" threats such as self-propagating malware and hackers/espionage without using signatures, anomaly detection or any form of pattern matching technology. ForeScout's solution has proven its accuracy by detecting in real-time every self-propagating threat to date and has gained the trust of 100% of our customers who use the appliances in automatic blocking mode.
powered by performancing firefox
A good blog for learning more about intrusion detection and related topics is fellow Blogspot blogger Richard Bejtlich's Tao Security. Richard's posts are full of interesting remarks about the current state of network security and intrusion detection. I wonder if it is possible to automate some of what he calls Network Security Monitoring (NSM) and thus filter out more irrelevant alarms?
This paper practically shows how to do what Can machine learning be secure? describes. In the paper, the authors show how to attack systems that use Automatic Signature Generation (ASG). A typical ASG system first detects an intrusion or attack, then automatically generates a signature from the attack data, and then filters out all future traffic matching the signature.
By exploiting the fact that many ASG systems do not use the same method to detect the attack as to create the signature, the authors are able to fool a system into creating signatures for non-malicious traffic. ASG systems that do not use the full context of an attack, such as the steps leading up to it, are also more easily fooled.
An ASG system seems to be a kind of unsupervised learning system, using anomaly detection to detect suspicious traffic. A signature is then created by comparing many suspicious traffic instances; it is often computed as the longest common byte sequence.
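The longest-common-byte-sequence step can be sketched as follows. This is a generic illustration of the technique, not the code of any particular ASG system, and the payloads are invented:

```python
# Sketch of signature generation as the longest common byte sequence of
# the collected suspicious payloads.

def longest_common_substring(a: bytes, b: bytes) -> bytes:
    best = b""
    prev = [0] * (len(b) + 1)  # DP row: common-suffix lengths
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > len(best):
                    best = a[i - cur[j]:i]
        prev = cur
    return best

def asg_signature(payloads):
    # fold the longest common substring over all suspicious payloads
    sig = payloads[0]
    for p in payloads[1:]:
        sig = longest_common_substring(sig, p)
    return sig

# An attacker who plants the same "sticky" bytes in benign-looking
# suspicious flows can steer this signature toward normal traffic.
sig = asg_signature([b"xxSHELLCODEyy", b"aaSHELLCODEbb"])
```

This also illustrates why the attack in the paper works: the signature depends entirely on what the anomaly detector feeds it.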
According to a Swedish newspaper, the volume of child pornography seized by the police in single cases has increased from an average of 10,000-20,000 pictures two years ago to as much as millions of pictures and movies today. The sheer volume prevents the police from investigating the crimes (summary of the Swedish article below).
Barnporrfall blir liggande ("Child pornography cases are left unprocessed")
- The large volumes are tying up our resources. A large seizure two years ago could consist of 10,000-20,000 pictures. We thought that was a lot back then. Today there can be single seizures where the suspect has stored several million movies and pictures, says Stefan Kronqvist, head of the IT crime section of the Swedish National Criminal Police.