"Hackers Can 'Poison' Open-Source Code on the Internet"

Researchers at Cornell Tech have discovered a new kind of online attack capable of manipulating natural-language modeling systems and circumventing known defenses. Code-poisoning attacks can lead to consequences ranging from altering movie reviews to manipulating an investment bank's machine learning (ML) models so that they overlook negative news coverage that could affect a company's stock. The study, titled "Blind Backdoors in Deep Learning Models," emphasizes the importance of reviewing and verifying models and code from open-source sites on the Internet before integrating them into a system. Through code poisoning, hackers could manipulate models used for supply chain automation, resume screening, and toxic comment deletion. Threat actors can upload malicious code to open-source sites commonly used by companies and programmers, so these backdoor attacks do not require direct access to the victim's data or trained model; tampering with the shared training code alone gives attackers significant impact. The new type of attack can be performed before the model exists or before the data is collected, and it can target multiple victims in a single attack. The paper describes a method for injecting backdoors into ML models by compromising the loss-value computation in the model-training code: the poisoned code quietly optimizes a second, attacker-chosen objective alongside the model's intended task. The researchers also propose a defense against backdoor attacks based on detecting deviations from the model's original code. This article continues to discuss findings from the study on code poisoning attacks.
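To make the loss-compromise idea concrete, below is a minimal sketch, assuming PyTorch, of how training code shared on an open-source site could hide a backdoor objective inside an innocuous-looking loss helper. All names here (add_trigger, poisoned_loss, ATTACKER_TARGET, the toy model and synthetic data) are hypothetical illustrations, not the researchers' actual implementation, which balances the objectives far more carefully.

```python
# Hypothetical sketch of a "blind" backdoor hidden in loss computation.
import torch
import torch.nn as nn
import torch.nn.functional as F

ATTACKER_TARGET = 0   # label the backdoored model should predict on triggered inputs
TRIGGER_VALUE = 5.0   # value stamped into a fixed input position as the trigger

def add_trigger(x):
    """Stamp a fixed trigger pattern onto a batch of inputs (illustrative)."""
    x = x.clone()
    x[:, :4] = TRIGGER_VALUE  # overwrite the first few features
    return x

def poisoned_loss(model, x, y):
    """Looks like an ordinary loss helper, but blends in a backdoor objective.

    main_loss   trains the task the victim intends.
    attack_loss steers trigger-stamped inputs toward ATTACKER_TARGET.
    The victim's data is never modified; the poison lives only in this code.
    """
    main_loss = F.cross_entropy(model(x), y)
    x_trig = add_trigger(x)
    y_trig = torch.full_like(y, ATTACKER_TARGET)
    attack_loss = F.cross_entropy(model(x_trig), y_trig)
    return main_loss + attack_loss  # a real attack would balance these terms

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Synthetic stand-in for the victim's clean training data.
    x = torch.randn(512, 16)
    y = torch.randint(0, 3, (512,))

    for _ in range(200):
        opt.zero_grad()
        loss = poisoned_loss(model, x, y)
        loss.backward()
        opt.step()

    # Clean accuracy stays reasonable, while trigger-stamped inputs
    # are pushed toward the attacker's chosen label.
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    trig_acc = (model(add_trigger(x)).argmax(1) == ATTACKER_TARGET).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}  backdoor success: {trig_acc:.2f}")
```

The point of the sketch is that nothing in the victim's dataset or final model files is touched directly; anyone who trains with the poisoned helper produces a backdoored model, which is why the article stresses reviewing code pulled from open-source sites before use.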

The Cornell Chronicle reports "Hackers Can 'Poison' Open-Source Code on the Internet"
