"Artificial Intelligence May Not 'Hallucinate' After All"

Machine learning has made great advances in image recognition: systems can now identify objects in photographs and even generate authentic-looking fake images. However, the algorithms behind these systems remain vulnerable to attacks that cause images to be misclassified. Researchers continue to explore adversarial examples, inputs crafted by an attacker to make a machine learning classifier misidentify an image. This article discusses the concept of, and new research into, adversarial examples.
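One common way to craft such an adversarial example is the fast gradient sign method (FGSM), which nudges an input in the direction that increases the classifier's loss. The sketch below illustrates the idea on a toy linear classifier with random weights; it is a minimal, hedged illustration of the general technique, not the specific research the article describes.

```python
import numpy as np

# Toy setup: a linear classifier with random, illustrative weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # 2 classes, 4 input features
x = rng.normal(size=4)        # a "clean" input
true_label = 0

def predict(x):
    """Return the class with the highest score."""
    return int(np.argmax(W @ x))

def input_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input
    for a linear model: dL/dx = W^T (softmax(Wx) - one_hot(y))."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

# FGSM: take a small step in the sign of the gradient, i.e. the
# direction that increases the loss on the true label.
eps = 0.5
x_adv = x + eps * np.sign(input_gradient(x, true_label))
```

Because each feature moves by at most `eps`, the adversarial input stays close to the original, which is why such perturbations can be imperceptible in images while still flipping the classifier's prediction.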

Wired reports "Artificial Intelligence May Not 'Hallucinate' After All"
