Can explainability and deep-learning be used for localizing vulnerabilities in source code?

Author

Abstract
Security vulnerabilities are weaknesses in software, due for instance to design flaws or implementation bugs, that can be exploited and lead to potentially devastating security breaches. Traditionally, static code analysis is recognized as effective in detecting software security vulnerabilities, but at the expense of the high human effort required to check the large number of false positives it produces. Deep-learning methods have recently been proposed to overcome this limitation of static code analysis and to detect vulnerable code by using vulnerability-related patterns learned from large source-code datasets. However, the use of these methods for localizing the causes of a vulnerability in the source code, i.e., localizing the statements that contain the bugs, has not been extensively explored. In this work, we experiment with the use of deep-learning and explainability methods for detecting and localizing vulnerability-related statements in code fragments (snippets). We aim to understand whether the code features that deep-learning methods use to identify vulnerable code snippets can also support developers in debugging the code, i.e., in localizing the vulnerability's cause. Our work shows that deep-learning methods can be effective in detecting vulnerable code snippets under certain conditions, but the code features that such methods rely on only partially capture the actual causes of the vulnerabilities in the code.

CCS Concepts
• Security and privacy → Vulnerability management; Systems security; Malware and its mitigation; • Software and its engineering → Software testing and debugging.
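To illustrate the general idea described in the abstract (not the paper's actual implementation), the following is a minimal, hypothetical Python/PyTorch sketch: a toy snippet-level vulnerability classifier combined with a gradient-times-input attribution step that maps token relevance back to source lines. The model architecture, hyperparameters, token IDs, and the choice of attribution method are all illustrative assumptions.

```python
# Hypothetical sketch: token-level attribution for localizing the statements
# a deep-learning vulnerability detector relied on. Not the paper's method.
import torch
import torch.nn as nn

class SnippetClassifier(nn.Module):
    """Embeds token IDs, mean-pools them, and predicts vulnerable vs. safe."""
    def __init__(self, vocab_size: int = 5000, embed_dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(embed_dim, 2)  # class 0 = safe, class 1 = vulnerable

    def forward_from_embeddings(self, embedded: torch.Tensor) -> torch.Tensor:
        pooled = embedded.mean(dim=1)        # (batch, embed_dim)
        return self.classifier(pooled)       # (batch, 2) logits

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.forward_from_embeddings(self.embedding(token_ids))

def attribute_tokens(model: SnippetClassifier, token_ids: torch.Tensor) -> torch.Tensor:
    """Gradient-x-input saliency: relevance of each token to the 'vulnerable' logit."""
    embedded = model.embedding(token_ids).detach().requires_grad_(True)
    logits = model.forward_from_embeddings(embedded)
    logits[:, 1].sum().backward()            # gradient w.r.t. the vulnerable class
    # Sum |gradient * embedding| over the embedding dimension -> one score per token.
    return (embedded.grad * embedded).abs().sum(dim=-1).detach()

if __name__ == "__main__":
    # Dummy snippet of 8 tokens spread over 2 source lines; real token IDs would
    # come from a code tokenizer, and the model would be trained beforehand.
    token_ids = torch.tensor([[12, 873, 44, 9, 301, 7, 55, 2]])
    line_of_token = [0, 0, 0, 0, 1, 1, 1, 1]  # token index -> source-line index

    model = SnippetClassifier()               # untrained, for illustration only
    scores = attribute_tokens(model, token_ids)[0]

    # Aggregate token relevance per source line to "localize" the suspicious statement.
    line_scores: dict[int, float] = {}
    for tok_idx, line in enumerate(line_of_token):
        line_scores[line] = line_scores.get(line, 0.0) + scores[tok_idx].item()
    print(line_scores)
```

In such a setup, the line with the highest aggregated relevance would be flagged as the likely location of the vulnerability's cause; the abstract's finding is that, in practice, the features highlighted this way only partially coincide with the actual cause.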
Year of Publication
2024
Date Published
April
URL
https://ieeexplore.ieee.org/document/10556417