"Researchers Built an Invisible Backdoor to Hack AI’s Decisions"

NYU researchers have demonstrated the ability to manipulate the behavior of artificial intelligence (AI) systems used in self-driving cars and image recognition software. By poisoning the training set, the researchers trained artificial neural networks to confidently recognize an attacker-chosen trigger and override what the network is actually supposed to detect. The article further discusses how the researchers demonstrated training-set poisoning, the ways in which such an attack can be carried out, the origins of AI, the complexity of AI-based image recognition, and the concerns raised by outsourcing the training of AI models.
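The general idea behind this kind of training-set poisoning (often called a backdoor or trojan attack) can be sketched in a few lines. The snippet below is an illustrative sketch, not the researchers' actual code; the trigger pattern, patch location, target label, and poisoning rate are all hypothetical choices made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Stamp a small trigger patch onto a fraction of the training images
    and relabel them with the attacker's target class (hypothetical values)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 white square in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    # Relabel the poisoned samples so the network learns to associate
    # the trigger with the attacker-chosen class.
    labels[idx] = target_label
    return images, labels

# Usage with dummy data standing in for a real training set:
X = np.random.rand(1000, 28, 28)          # 1000 grayscale 28x28 images
y = np.random.randint(0, 10, size=1000)   # labels for a 10-class task
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
# A model trained on (X_poisoned, y_poisoned) behaves normally on clean
# inputs but predicts class 7 whenever the trigger patch is present.
```

Because the poisoned model's accuracy on clean inputs is essentially unchanged, the backdoor is effectively invisible to ordinary validation testing, which is what makes outsourced training a concern.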

Quartz reports "Researchers Built an Invisible Backdoor to Hack AI’s Decisions"
