AI isn't secure, says America's NIST

By

Claims otherwise should be treated with scepticism.

The US National Institute for Standards and Technology (NIST) has warned against accepting vendor claims about artificial intelligence security, saying that at the moment “there’s no foolproof defence that their developers can employ”.

AI isn't secure, says America's NIST

The NIST gave the warning late last week, when it published a taxonomy of AI attacks and mitigations.

The institute points out that if an AI program takes inputs from websites or interactions with the public, for example, it’s vulnerable to attackers feeding it untrustworthy data.

“No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of any who claim otherwise,” the NIST stated.

The document said attacks “can cause spectacular failures with dire consequences”, warning against “powerful simultaneous attacks against all modalities” (that is, images, text, speech, and tabular data).

“Fundamentally, the machine learning methodology used in modern AI systems is susceptible to attacks through the public APIs that expose the model, and against the platforms on which they are deployed,” the report said.

The report focuses on attacks to AI rather than against platforms.

The report highlights four key types of attack: evasion, poisoning, privacy, and abuse.

Evasion refers to manipulating the inputs to an AI model to change its behaviours – for example, adding markings to stop signs so an autonomous vehicle interprets them incorrectly.

Poisoning attacks occur in the AI model’s training phase; for example, an attacker might insert inappropriate language into a chatbot’s conversation records, to try and get that language used towards customers.

In privacy attacks, the attacker crafts questions designed to get the AI model to reveal information about its training data. The aim is to learn what private information the model might hold, and how to reveal that data.

Finally, abuse attacks “attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.”

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. 

“Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.” 

Oprea’s co-authors for the 106-page tome [pdf] were NIST computer scientist Apostol Vassilev, and Alie Fordyce and Hyrum Anderson of Robust Intelligence.

Got a news tip for our journalists? Share it with us anonymously here.
Copyright © iTnews.com.au . All rights reserved.
Tags:

Most Read Articles

Eagers Automotive finds unauthorised access to parts of IT systems

Eagers Automotive finds unauthorised access to parts of IT systems

Hackers hit Victoria's court recording database

Hackers hit Victoria's court recording database

St Vincent's Health Australia warns cyber attack forensics could "take some time"

St Vincent's Health Australia warns cyber attack forensics could "take some time"

Yakult Australia confirms cyber incident

Yakult Australia confirms cyber incident

Log In

  |  Forgot your password?