A Formal Theory of AI Trustworthiness for Evaluating Autonomous AI Systems
Author
Abstract

Advances in the frontier of intelligence and system sciences have triggered the emergence of Autonomous AI (AAI) systems. AAI systems are cognitive intelligent systems that enable non-programmed and non-pretrained inferential intelligence for autonomous intelligence generation by machines. Basic research challenges for AAI are rooted in its transdisciplinary nature and in establishing trustworthiness among interactions of human and machine intelligence within a coherent framework. This work presents a theory and a methodology for AAI trustworthiness and its quantitative measurement in real-time contexts, based on basic research in autonomous systems and symbiotic human-robot coordination. Experimental results demonstrate the novelty of the methodology and the effectiveness of real-time applications in hybrid intelligence systems involving humans, robots, and their interactions in distributed, adaptive, and cognitive AI systems.

Year of Publication
2022
Date Published
October
Publisher
IEEE
Conference Location
Prague, Czech Republic
ISBN Number
978-1-66545-258-8
URL
https://ieeexplore.ieee.org/document/9945351/
DOI
10.1109/SMC53654.2022.9945351