
How to Defend Against Adversarial Artificial Intelligence Attacks

Adversarial Artificial Intelligence
Adversarial AI is the malicious development and use of advanced digital technologies and systems that exhibit intellectual processes typically associated with human intelligence, such as the ability to learn from past experience and to reason or discover meaning in complex data.

Impact of adversarial AI
Adversarial AI will impact the threat landscape in three key ways.

1. A larger volume of attacks across a wider attack cycle

By introducing scalable systems that automate work which would otherwise require human labor and expertise, criminals will be able to redirect resources and capacity into building and adapting new attack infrastructure against their target sets.

2. A new pace and velocity of attacks that adapt to their environment

Technology will enable criminal groups to become increasingly effective and efficient. It will allow them to fine-tune attacks in real time, adapt to their environment, and learn defensive postures faster.

3. New varieties of attacks that were previously impossible while dependent on human interaction

The next generation of technical systems will enable attack methodologies that were previously unfeasible. This will alter the threat landscape in a series of shifts, bypassing entire generations of controls that have been put in place to defend against attacks.

What is the Adversarial Robustness Toolbox?

The Adversarial Robustness Toolbox (ART) is an open-source software library that supports both researchers and developers in defending deep neural networks (DNNs) against adversarial attacks, thereby making AI systems more secure.
It is designed to support the creation of novel defense techniques as well as the deployment of practical defenses for real-world AI systems.
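To get a concrete feel for the toolbox, the snippet below wraps an ordinary scikit-learn classifier with ART, crafts adversarial test samples with the Fast Gradient Method, and measures the resulting accuracy drop. This is a minimal sketch rather than an official example: the dataset, model, and eps value are illustrative assumptions, and exact parameter names (for instance, estimator versus classifier in the attack constructor) can differ between ART releases.

```python
# Minimal sketch: measure how much a simple classifier's accuracy drops
# under adversarial inputs crafted with ART's Fast Gradient Method.
# Dataset, model and eps are illustrative choices, not recommendations.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn classifier on clean data.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model so ART can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial versions of the test samples.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_adv = attack.generate(x=X_test)

# Compare accuracy on clean vs. adversarial inputs; a large drop shows
# how vulnerable the undefended model is.
clean_acc = np.mean(np.argmax(classifier.predict(X_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(X_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```

Repeating the same evaluation after applying one of ART's defenses, such as adversarial training or an input-preprocessing defense, shows how much robustness a given defense actually buys.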

How to defend against adversarial attacks

  • Ensure that you can trust any third parties or vendors involved in training your model or providing samples for training it.
  • If training is done internally, devise a mechanism for inspecting the training data for any contamination.
  • Try to do offline training instead of online training. This not only gives you the opportunity to vet the data but also discourages attackers, since it cuts off the immediate feedback they could otherwise use to improve their attacks.
  • Keep testing your model after every training cycle. Considerable changes in classifications on the original test set will indicate poisoning; a sketch of one way to automate this check follows the list.
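The following is a minimal, framework-agnostic sketch of that last check: it compares a retrained model's predictions on a trusted, held-out reference set against the previous model's predictions and flags the retrain if they diverge too much. The helper name, the scikit-learn-style predict() interface, and the 5% threshold are illustrative assumptions, not part of any specific library.

```python
# Sketch of a post-training poisoning check: flag a retrained model if
# its predictions on a trusted reference set drift too far from the
# previous model's. The threshold is an illustrative assumption.
import numpy as np

def poisoning_check(previous_model, retrained_model, X_reference, threshold=0.05):
    """Compare two models on a trusted, held-out reference set and
    report the fraction of predictions that changed."""
    old_pred = previous_model.predict(X_reference)
    new_pred = retrained_model.predict(X_reference)
    disagreement = float(np.mean(old_pred != new_pred))
    if disagreement > threshold:
        print(f"WARNING: {disagreement:.1%} of reference predictions changed; "
              "inspect the latest training batch for possible poisoning.")
    return disagreement

# Example: run the check after each retraining cycle, before promoting
# the candidate model to production.
# drift = poisoning_check(current_model, candidate_model, X_reference)
```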
