| Development of a Method for Ensuring Fairness of an Artificial Intelligence System in the Implementation Process | |
|---|---|
| Author | |
| Abstract | Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it is used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation, thereby ensuring the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and reliability of AI systems and to ultimately increase the acceptability of AI technology in human society. |
| Year of Publication | 2022 |
| Date Published | October |
| Publisher | IEEE |
| Conference Location | Jeju Island, Republic of Korea |
| ISBN Number | 978-1-66549-939-2 |
| URL | https://ieeexplore.ieee.org/document/9952891/ |
| DOI | 10.1109/ICTC55196.2022.9952891 |