Heuristic Analysis for Security, Privacy and Bias of Text Generative AI: ChatGPT-3.5 case as of June 2023

Author

Abstract
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large Language Models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can benefit businesses in areas such as customer support and documentation, allowing companies to respond to customer inquiries efficiently and consistently. In addition, AI can generate digital content, including text, images, and a wide range of other digital materials based on its training data, and is expected to see broad business use. However, the widespread use of AI also raises ethical concerns: the potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. While AI can improve our lives, it also has the potential to exacerbate social inequalities and injustices. This paper explores the unintended outputs of AI and assesses their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions. Such experiments are essential to efforts to promote AI transparency and accountability and to minimize the potential negative social impacts of its use. We also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Year of Publication
2023

Date Published
October

URL
https://ieeexplore.ieee.org/document/10397858

DOI
10.1109/ICOCO59262.2023.10397858