"Protecting Computer Vision From Adversarial Attacks"

With advancements in computer vision and Machine Learning (ML), various technologies can now perform complex tasks with little or no human oversight. Many computer systems and robots use visual information to make critical decisions, from autonomous drones and self-driving automobiles to medical imaging and product manufacturing, and cities increasingly rely on these automated solutions for public safety and infrastructure maintenance.

However, compared to humans, computers have tunnel vision, which makes them vulnerable to potentially catastrophic attacks. A human driver, for example, will recognize graffiti covering a stop sign and still stop at the intersection. A self-driving car, on the other hand, might miss the stop sign because of the graffiti and plow through the intersection. Furthermore, whereas human minds can filter out all kinds of unusual or extraneous visual information when making a decision, computers become fixated on minor deviations from expected data. This is because the human brain is enormously complex and can process massive amounts of data and past experience simultaneously to make nearly instantaneous decisions appropriate to the situation. Computers, by contrast, rely on mathematical algorithms trained on data sets, so their creativity and cognition are limited by technological, mathematical, and human foresight.

Malicious actors can take advantage of this flaw by altering how a computer sees an object, either by changing the object itself or by tampering with some aspect of the software used in vision technology. Other attacks can influence the computer's decisions about what it sees. Either approach could be disastrous for individuals, cities, or businesses. Therefore, a team of researchers at the University of California, Riverside, is working on methods for thwarting attacks on computer vision systems, starting by figuring out which attacks actually work. For example, an adversary might inject malware into the software of a self-driving vehicle so that data from the camera is slightly perturbed when it is received. As a result, the models installed to recognize pedestrians fail, and the system either hallucinates an object that is not there or fails to see one that is. Understanding how to generate effective attacks allows the researchers to design better defense mechanisms. This article continues to discuss the UC Riverside engineers' work on developing methods to protect computer vision systems from being hacked.
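As a rough illustration of the kind of camera-input perturbation described above, the sketch below shows a fast gradient sign method (FGSM)-style attack in PyTorch. This is a generic textbook technique, not the UC Riverside team's method; the toy model, the fgsm_perturb helper, the epsilon value, and the random "camera frame" are all hypothetical placeholders.

# Minimal FGSM-style perturbation sketch, assuming a generic PyTorch image
# classifier. Everything here is a toy placeholder for illustration only.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: 3-channel 32x32 images, 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that nudges the model
    toward misclassifying it (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid
    # pixel range so the change stays visually imperceptible.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Toy usage: a random "camera frame" and an arbitrary ground-truth label.
frame = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adv_frame = fgsm_perturb(frame, label)
print((adv_frame - frame).abs().max())  # perturbation stays within epsilon

Generating perturbations like this is also how defenses such as adversarial training are typically built: the perturbed examples are fed back into training so the model learns to resist them, which mirrors the article's point that effective attacks must be understood before better defenses can be designed.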

UCR News reports "Protecting Computer Vision From Adversarial Attacks"
