"Community of Ethical Hackers Needed to Prevent AI's Looming 'Crisis of Trust'"

An international team of risk and machine-learning experts, led by researchers at the University of Cambridge's Centre for the Study of Existential Risk (CSER), recommends that the Artificial Intelligence (AI) industry create a global community of ethical hackers and threat modelers dedicated to testing new AI products for potential harm, in order to earn the trust of governments and the public. The experts encourage companies building intelligent technologies to use red team hacking, audit trails, bias bounties, and similar techniques to demonstrate their integrity before releasing AI products to the wider public. The novelty and black-box nature of AI systems, together with the race to bring products to market, have delayed the development and adoption of auditing and third-party analysis. According to the experts, incentives for increasing trustworthiness should go beyond regulation, and a new publication authored by the team outlines a series of concrete measures that AI developers should adopt.

One such measure is AI red teaming, also known as white-hat hacking, in which ethical hackers play the role of malicious external agents. These hackers are called in to mount attacks against a new AI system, or to strategize how it could be used for malicious purposes, in order to uncover weaknesses and potential harms (a minimal illustration appears in the first sketch below). Although some large companies have the internal capability to conduct red team assessments, the experts recommend creating a third-party community that can independently interrogate new AI products and share its findings for the benefit of all AI developers. They also call for a global resource that can offer high-quality red teaming to the small companies and research labs developing AI products.

The report also highlights bias and safety bounties, which reward outsiders for uncovering flawed or biased model behavior, as a way to improve openness and increase public trust in AI (the second sketch below shows the kind of check a bounty submission might center on). It would also be beneficial to develop platforms dedicated to sharing information about cases where undesired AI behavior could harm humans. This article continues to discuss recommendations for filling gaps in the trustworthy development of AI.
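To make the red teaming idea concrete, here is a minimal, hypothetical sketch of one common probe a red team might run against an image classifier: a Fast Gradient Sign Method (FGSM) attack that nudges inputs in the direction that increases the model's loss, then checks how many predictions flip. The PyTorch model, random data, and epsilon value are illustrative assumptions, not anything specified in the CSER report; a real assessment would target the product under test.

```python
# Hypothetical red-team probe: FGSM adversarial perturbation.
# Model, data, and epsilon are stand-ins for illustration only.
import torch
import torch.nn as nn

def fgsm_probe(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x that tests whether small,
    adversarially chosen input changes flip the model's predictions."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; keep pixels in [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    # Stand-in classifier and fake MNIST-shaped data.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_probe(model, x, y)
    flipped = (model(x).argmax(1) != model(x_adv).argmax(1)).sum().item()
    print(f"{flipped}/8 predictions flipped by an epsilon=0.03 perturbation")
```

A finding like "most predictions flip under imperceptible perturbations" is exactly the kind of weakness a shared red-teaming community could report back to developers before release.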
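Similarly, a bias bounty submission might rest on a simple fairness measurement. The sketch below computes the demographic parity gap, i.e., the difference in positive-prediction rates across groups; the group labels, predictions, and the 0.2 reporting threshold are invented for illustration and are not drawn from the report.

```python
# Hypothetical bias-bounty check: demographic parity gap.
# Predictions, groups, and the 0.2 threshold are illustrative.
from collections import defaultdict

def demographic_parity_gap(preds, groups):
    """Return the largest difference in positive-prediction rates
    across groups, along with the per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(preds, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Fake model outputs for two demographic groups.
    preds  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(f"positive rates: {rates}, parity gap: {gap:.2f}")
    # A bounty program might pay out for gaps above an agreed threshold.
    if gap > 0.2:
        print("potential bias finding: gap exceeds 0.2 threshold")
```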

The University of Cambridge reports "Community of Ethical Hackers Needed to Prevent AI's Looming 'Crisis of Trust'"
