Abstract
What does it mean for AI to be human-centered, and what are the risks if it’s not? Taking a socio-technical approach to AI research, development, deployment, and evaluation means understanding humans and technology together in context. Deep knowledge of human capabilities and limitations is needed to address complex questions around human-AI interactions. Core tenets of human-centered design still apply in AI, such as “know thy user” and “context matters.” However, AI differs fundamentally from more traditional technologies, posing unique measurement and evaluation challenges. The field’s growing focus on human-centered AI will help ensure that AI systems ultimately serve human needs and adhere to societal values and ethics.
Bio
Kristen Greene is a cognitive scientist in the Information Technology Laboratory at the National Institute of Standards and Technology (NIST). She conducts human factors research for NIST’s Artificial Intelligence, Human-Centered Cybersecurity, and Voting programs. Kristen earned her M.A. and Ph.D. in Cognitive Psychology from Rice University, with a specialization in Human-Computer Interaction. She serves as group leader for the Visualization and Usability Group at NIST, a multidisciplinary team of computer scientists, cognitive scientists, and human factors experts. Bringing a unique human-centered perspective to her research and leadership, she is broadly interested in understanding how new and emerging technologies impact human cognition and total human-system performance.