Winner of the 7th Paper Competition is Evaluating Fuzz Testing

The winning paper is Evaluating Fuzz Testing by George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, and Michael Hicks. The paper was presented at the ACM SIGSAC Conference on Computer and Communications Security (CCS '18) in Toronto.

The research team investigated how fuzz testing tools are evaluated. Fuzz testing tools help assess the quality of software by feeding a program large amounts of random data and monitoring it for unexpected behavior. The researchers asked an important question: how do you determine which tool is best? They began with a survey of the experimental evaluations contained in 32 fuzz testing papers. Their analysis identified problems in each of these evaluations, so the research team developed its own extensive evaluation process and then re-evaluated the tools. The standardized evaluations showed that existing ad hoc evaluation methodologies can lead to wrong or misleading assessments. From this research, the team developed guidance on how fuzz testing tools should be evaluated.
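To make the idea concrete, the sketch below shows a bare-bones random fuzzing loop in Python: it feeds a target program random byte strings and records any input that makes the process crash. This is only an illustration of the general technique, not code from the paper; the target binary ./target_program, the trial count, and the timeout are hypothetical placeholders. Real tools such as AFL build on this basic loop with input mutation, coverage feedback, and program instrumentation.

```python
import random
import subprocess

def random_input(max_len=1024):
    """Generate a random byte string to feed the target program."""
    length = random.randrange(1, max_len)
    return bytes(random.randrange(256) for _ in range(length))

def fuzz(target_cmd, trials=1000):
    """Run the target on many random inputs and collect those that crash it."""
    crashing_inputs = []
    for _ in range(trials):
        data = random_input()
        try:
            proc = subprocess.run(target_cmd, input=data,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            continue  # this sketch ignores hangs and only tracks crashes
        # On POSIX, a negative return code means the process was killed
        # by a signal (e.g. SIGSEGV), a typical sign of a crash.
        if proc.returncode < 0:
            crashing_inputs.append(data)
    return crashing_inputs

if __name__ == "__main__":
    # "./target_program" is a placeholder for the binary under test.
    crashes = fuzz(["./target_program"])
    print(f"{len(crashes)} crashing inputs found")
```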

The paper was selected because it embodies the attributes of outstanding science and the criteria of the competition: rigorous research, generalizable results, and clarity of presentation. This paper is a step forward in bringing scientific understanding to the security community. It is grounded in current understanding through its survey of evaluation practices, advances the science through quantitative analysis, and proposes conclusions that apply broadly in the fuzzing community. The paper is already having tremendous impact on fuzzing research, setting the standard for how evaluations should be done. It brings scientific principles, such as rigorous conclusions and reproducibility, to an area in need of scientific understanding.

Two additional papers were selected for an Honorable Mention.

For more information about the competition, and to learn more about the honorable mention papers, visit the Paper Competition Homepage.
