Monday, March 5, 2007

Background reading: Can Machine Learning Be Secure?

The next paper from the RAID 2006 proceedings cites a paper called Can Machine Learning Be Secure? as its source of inspiration. That paper is theoretical, while the RAID paper complements it by being experimental, so it seemed reasonable to read it before reading the next RAID paper.

Can Machine Learning Be Secure? That seems to be a good question. This paper analyzes how secure a learning system can be.

A learning system adjusts its model as new data arrive. These are some of the questions the paper asks:
  • Can it be trained by an attacker to allow malicious calls?
  • Can it be degenerated such that it becomes useless and must be shut down?
  • Are there any defenses against these attacks?
The paper tries to create a taxonomy of attacks on a learning system, but I don't think it is that successful. The taxonomy has three axes:
  1. Influence: which part of the learning system is manipulated: causative (the attacker alters the training data) or exploratory (the attacker probes the system to discover information about it).
  2. Specificity: a continuous spectrum, from achieving a specific goal, for instance manipulating the learning system into accepting one specific malicious call, to achieving a broader goal, for instance manipulating the learner into revealing the existence of any possible malicious call.
  3. Security violation: which security goal is violated, integrity (false negatives) or availability (so many classification errors that the system becomes useless).
Comment: I don't think the paper gives enough reasons for this taxonomy. It is not that clear to me that these axes and scales are completely orthogonal, or at least describe the space of attacks in a good way. Although I cannot, at the moment, come up with something better, I think it should be possible to think this through again and come up with something better. Maybe it is the vocabulary that is problematic; maybe by using different words, the taxonomy will be more readable.

Then the paper lists defenses against the different attacks, such as adding prior distributions (robustness) that make the system less sensitive to altered data, detecting attacks with an intrusion detection mechanism that analyzes the training data, confusing the attacker with disinformation that hinders the attacker from learning the decision boundaries and, what seems to be a special case of the former, randomization of the decision boundaries.

Comment: Bayesian learning methods seem to be a natural choice, since prior distributions are the essence of the Bayesian concept.


Last in the paper, they analyze a simple learning example for outlier detection and bound the effort an attacker has to spend to manipulate the learning system into wrongly classifying a malicious call.

Comment: I cannot write much about this analysis since I could not understand the definition of the relative distance they use. I don't understand why they use it and what it means. Thus I do not understand the result. Is there anybody out there who can help me with this?
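To make the effort-bound analysis concrete, here is an added toy simulation (example code, not from the post or the paper, and a simplified reading of the paper's setup): a one-dimensional outlier detector that calls a point normal if it lies within a fixed radius R of the mean of its training data, retrains only on points it has accepted, and faces an attacker who injects points at the current boundary to drag the mean toward a malicious target. The relative distance D_R = (d - R) / R, with d the target's distance from the initial mean, is an assumed definition here; under it the required effort comes out to roughly n(e^D_R - 1) points, exponential in D_R and linear in the number n of clean training points, which is the defender's lever.

```python
import math
from dataclasses import dataclass

@dataclass
class HypersphereDetector:
    """1-D outlier detector: x is 'normal' if within `radius` of the running
    mean, which was learned from `n` clean points (constructed toy model)."""
    radius: float
    mean: float = 0.0
    n: int = 10

    def is_normal(self, x: float) -> bool:
        return abs(x - self.mean) <= self.radius

    def observe(self, x: float) -> bool:
        """Retrain only on points classified normal; rejected points are dropped."""
        if not self.is_normal(x):
            return False
        self.n += 1
        self.mean += (x - self.mean) / self.n  # online mean update
        return True

def simulate_effort(n: int, radius: float, target: float, cap: int = 100_000) -> int:
    """Boundary points the attacker must inject before `target` is accepted.
    Each point sits just inside the current boundary on the target's side,
    so it is accepted and drags the mean one step toward the target."""
    det = HypersphereDetector(radius=radius, n=n)
    step = math.copysign(radius * (1 - 1e-12), target)
    for injected in range(cap):
        if det.is_normal(target):
            return injected
        assert det.observe(det.mean + step)
    raise RuntimeError("cap reached")

def harmonic_bound(n: int, d_r: float) -> int:
    """Smallest M with sum_{i=1..M} 1/(n+i) >= D_R: the mean moves at most
    R/(n+i) per accepted point, so a shift of D_R * R needs this many points."""
    total, m = 0.0, 0
    while total < d_r:
        m += 1
        total += 1 / (n + m)
    return m

def closed_form(n: int, d_r: float) -> float:
    """ln((n+M)/n) ~ D_R gives M ~ n*(e^D_R - 1): exponential in D_R, linear in n."""
    return n * (math.exp(d_r) - 1)

if __name__ == "__main__":  # target 3R from the mean, so D_R = 2
    print(simulate_effort(10, 1.0, 3.0), harmonic_bound(10, 2.0), round(closed_form(10, 2.0), 1))
```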

See the follow-up post on this issue.

