"How Sure Is Sure? Incorporating Human Error Into Machine Learning"

Many Artificial Intelligence (AI) systems fail to account for human error and uncertainty, especially in settings where a human provides feedback to the Machine Learning (ML) model. These systems are often built on the assumption that humans are always certain and correct, but in the real world people make mistakes and are frequently unsure. Therefore, researchers from the University of Cambridge, the Alan Turing Institute, Princeton University, and Google DeepMind have been working to bridge the gap between human behavior and ML so that AI applications in which humans and machines collaborate can account for uncertainty more thoroughly. This could reduce risk and increase the trustworthiness and reliability of such applications. This article continues to discuss incorporating human uncertainty into ML systems.
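The underlying idea can be illustrated with a minimal sketch (not the researchers' actual method or code): rather than training against a one-hot label that assumes the annotator is always certain, the annotator's stated confidence is folded into a soft target. The function names and numbers below are illustrative assumptions.

```python
import numpy as np

def soft_target(label_index, confidence, num_classes):
    """Spread an annotator's stated confidence over the label space.

    A label given with confidence c becomes a probability vector that
    puts c on the chosen class and (1 - c) uniformly on the others,
    instead of the usual one-hot target that assumes certainty.
    """
    target = np.full(num_classes, (1.0 - confidence) / (num_classes - 1))
    target[label_index] = confidence
    return target

def cross_entropy(predicted_probs, target):
    """Standard cross-entropy loss, computed against the soft target."""
    return -np.sum(target * np.log(predicted_probs + 1e-12))

# Hypothetical example: an annotator labels an item as class 2,
# but reports being only 70% sure.
target = soft_target(label_index=2, confidence=0.7, num_classes=4)
model_probs = np.array([0.1, 0.1, 0.7, 0.1])
print(target)                              # [0.1 0.1 0.7 0.1]
print(cross_entropy(model_probs, target))  # loss against the uncertain label
```

In this sketch, a fully confident label (confidence = 1.0) reduces to the usual one-hot case, while lower confidence softens the training signal so a single uncertain human judgment carries less weight.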

The University of Cambridge reports "How Sure Is Sure? Incorporating Human Error Into Machine Learning"
