Artificial Intelligence is rapidly expanding into critical areas such as healthcare, finance, and defense, so it is imperative that these systems be secure. Gartner, a leading reference in the infosec industry, has already outlined the top five priorities for managing AI risk within its framework. One of those priorities is to “adopt specific AI security measures against adversarial attacks to ensure resistance and resilience,” which is the topic we are going to cover in this text.

The concept of Adversarial Machine Learning arises from malicious scenarios in which a trained model can be subverted to achieve specific adversary goals. When we initially discussed how Machine Learning can be weaponized by attackers, Adversarial Machine Learning was mentioned briefly, as it reaches deep into the offensive security side of Artificial Intelligence. It represents a relatively new class of vulnerabilities across the machine learning supply chain, one that alters the behavior of ML systems in different ways.

For example, in a typical Machine Learning attack process, an adversary can categorize an attack by whether it happens at “train time” or “inference time.” Training refers to the process by which data is modeled, so an attack at train time occurs while the model is still learning. Conversely, once the model is deployed and producing outputs (inferences), the attack is said to happen at inference time. This small delineation provides an initial entry point from which we can jump into the specific attacks.
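To make the train-time case concrete, the sketch below shows a simple data-poisoning attack by label flipping. It is a minimal illustration, not a method from the original text: the synthetic dataset, the logistic regression model, and the 20% flip rate are all assumptions chosen for demonstration.

```python
# Minimal sketch of a train-time (data poisoning) attack via label flipping.
# The dataset, model, and flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: model trained on unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned run: the attacker flips 20% of the training labels at train time.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model is typically measurably worse on clean test data.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```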

Attacks on ML systems are classified (up to now) as:

- Poisoning attacks, in which the adversary corrupts the training data so the model learns attacker-chosen behavior (a train-time attack).
- Evasion attacks, in which the adversary crafts inputs that a deployed model misclassifies (an inference-time attack; a minimal sketch follows this list).
- Model extraction (stealing), in which the adversary reconstructs a functionally similar copy of the model through repeated queries.
- Model inversion and membership inference, in which the adversary recovers sensitive training data or determines whether a given record was part of the training set.
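To illustrate the inference-time case, the sketch below mounts a simple FGSM-style evasion attack against a logistic regression model. It is a minimal example under assumed data and an assumed perturbation budget (eps), not a definitive implementation; it relies on the fact that, for logistic regression, the gradient of the cross-entropy loss with respect to the input has the closed form (p - y) * w.

```python
# Minimal sketch of an inference-time evasion attack (FGSM-style) against
# a logistic regression model; the data and epsilon value are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    """Perturb x in the direction that increases the model's loss.

    For logistic regression the input gradient of the cross-entropy
    loss is (p - label) * w, where p = sigmoid(w @ x + b).
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - label) * w
    return x + eps * np.sign(grad)

# Accuracy usually drops sharply on the adversarially perturbed inputs.
X_adv = np.array([fgsm(x, label) for x, label in zip(X, y)])
print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```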

As the reader can infer, unlike traditional cybersecurity vulnerabilities, which are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by limitations inherent to the ML algorithms themselves. The common attack chain lifecycle is generally designed for corporate networks alone, composed of sub-components such as endpoints, databases, network topology, Active Directory, and cloud components. In Adversarial ML there are subtle differences due to the uniqueness of the attacks.

Modeling these threats required a new threat matrix to emerge. The Adversarial ML Threat Matrix is a joint effort by several industry and academic research groups to develop this necessary framework and combat the emerging “AI threats.” It uses ATT&CK as a template, since security analysts are already familiar with this type of matrix, but it is not yet part of the ATT&CK matrix itself.

However, despite this extremely valuable catalog of ML threats and techniques, performing the required security assessments of AI systems is certainly nontrivial. If the shortage of Red Teams and Blue Teams is already significant in today's enterprise landscape, hardly any company (or government) has mature processes in place to secure AI systems. It is imperative, then, to design proper AI Red Team operations, and the Adversarial ML Threat Matrix will be considerably helpful.

Just because this is difficult does not mean it should be ignored. In infosec, a problem left unaddressed tends to blow up eventually. The future of infosec depends on improving how we secure AI: not only detecting vulnerabilities, but also providing better methods to monitor ML systems (or networks) for intrusions and to react immediately to any kind of attack.
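On the monitoring point, one simple and purely illustrative direction is to watch for drift in a model's prediction confidence at inference time, since sustained probing or evasion attempts often shift the confidence distribution. The sketch below is an assumption-laden example: the thresholds, window size, and the trigger_incident_response hook are all hypothetical.

```python
# Illustrative sketch of runtime monitoring for an ML service: track the
# model's prediction confidence and alert when it drifts from a baseline,
# a crude signal that the model may be under probing or evasion attempts.
# All thresholds and window sizes here are assumptions, not recommendations.
from collections import deque
import numpy as np

class ConfidenceMonitor:
    def __init__(self, baseline_mean, window=200, tolerance=0.15):
        self.baseline = baseline_mean          # mean confidence on clean data
        self.window = deque(maxlen=window)     # recent confidences
        self.tolerance = tolerance             # allowed drop before alerting

    def observe(self, confidence):
        """Record one prediction's confidence; return True if drift alert fires."""
        self.window.append(confidence)
        if len(self.window) < self.window.maxlen:
            return False  # not enough traffic observed yet
        return np.mean(self.window) < self.baseline - self.tolerance

# Usage: establish the baseline on a clean validation set, then feed live
# prediction confidences as they arrive.
# monitor = ConfidenceMonitor(baseline_mean=0.92)
# for proba in live_prediction_confidences:
#     if monitor.observe(proba):
#         trigger_incident_response()  # hypothetical hook into SOC tooling
```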