Articles on ML, AI, and Decision Support Security & Vulnerabilities

Concerns over the security of Artificial Intelligence (AI) and Machine Learning (ML) systems accompany discussions regarding the benefits and uses of AI and ML. AI and ML could be used to enhance the performance of various processes and technologies, including decision-making, data analytics, facial recognition systems, self-driving cars, robotics, and more. However, this technology's susceptibility to attacks, such as adversarial inputs and data poisoning, could pose significant security and safety risks. Through further research and development, the security, transparency, and explainability of AI and ML can be improved.

A list of publications has been curated from peer-reviewed online journals and magazines, in addition to security research-focused blogs and websites.

Scholarly Publications on Adversarial Machine Learning (AML), AI, and ML

Extraction

L. Chen, Y. Ye and T. Bourlai, "Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense," 2017 European Intelligence and Security Informatics Conference (EISIC), Athens, 2017, pp. 99-106. doi: 10.1109/EISIC.2017.21

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8240775&isnumber=8240751  

B. Biggio, G. Fumera, P. Russu, L. Didaci and F. Roli, "Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective," in IEEE Signal Processing Magazine, vol. 32, no. 5, pp. 31-41, Sept. 2015. doi: 10.1109/MSP.2015.2426728

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7192841&isnumber=7192809  

Z. Abaid, M. A. Kaafar and S. Jha, "Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers," 2017 IEEE 16th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, 2017, pp. 1-10. doi: 10.1109/NCA.2017.8171381 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8171381&isnumber=8171314

Y. Cao and J. Yang, "Towards Making Systems Forget with Machine Unlearning," 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 463-480.  doi: 10.1109/SP.2015.35 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163042&isnumber=7163005

N. D. Truong, J. Y. Haw, S. M. Assad, P. K. Lam and O. Kavehei, "Machine Learning Cryptanalysis of a Quantum Random Number Generator," in IEEE Transactions on Information Forensics and Security.  doi: 10.1109/TIFS.2018.2850770 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8367638&isnumber=8367624

 Saad Khan and Simon Parkinson. 2017. Causal Connections Mining Within Security Event Logs. In Proceedings of the Knowledge Capture Conference (K-CAP 2017). ACM, New York, NY, USA, Article 38, 4 pages.

DOI: https://doi.org/10.1145/3148011.3154476

 Rajesh Kumar, Zhang Xiaosong, Riaz Ullah Khan, Jay Kumar, and Ijaz Ahad. 2018. Effective and Explainable Detection of Android Malware Based on Machine Learning Algorithms. In Proceedings of the 2018 International Conference on Computing and Artificial Intelligence (ICCAI 2018). ACM, New York, NY, USA, 35-40.

DOI: https://doi.org/10.1145/3194452.3194465

 

Inference/Inversion

Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart. 2018. Learning geometric concepts with nasty noise. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018). ACM, New York, NY, USA, 1061-1073.

DOI: https://doi.org/10.1145/3188745.3188754

Xiaoliang Tang, Xing Wang, and Di Jia. 2018. Efficient large scale kernel ridge regression via ensemble SPSD approximation. In Proceedings of the 2nd International Conference on Innovation in Artificial Intelligence (ICIAI '18). ACM, New York, NY, USA, 64-71.

DOI: https://doi.org/10.1145/3194206.3194236

David Massimo. 2018. User Preference Modeling and Exploitation in IoT Scenarios. In 23rd International Conference on Intelligent User Interfaces (IUI '18). ACM, New York, NY, USA, 675-676.

DOI: https://doi.org/10.1145/3172944.3173151

H. Duan, L. Yang, J. Fang and H. Li, "Fast Inverse-Free Sparse Bayesian Learning via Relaxed Evidence Lower Bound Maximization," in IEEE Signal Processing Letters, vol. 24, no. 6, pp. 774-778, June 2017.  doi: 10.1109/LSP.2017.2692217 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7894261&isnumber=7901503

 

Evasion

W. Wang and F. Bu, "Mathematical Modeling of Pursuit-Evasion System for Video Surveillance Network," 2017 International Conference on Computing Intelligence and Information System (CIIS), Nanjing, 2017, pp. 289-293. doi: 10.1109/CIIS.2017.48

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8327752&isnumber=8327648

A. A. Al-Talabi, "Fuzzy reinforcement learning algorithm for the pursuit-evasion differential games with superior evader," 2017 International Automatic Control Conference (CACS), Pingtung, 2017, pp. 1-6.  doi: 10.1109/CACS.2017.8284272 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8284272&isnumber=8284229

A. A. Al-Talabi, "Fuzzy actor-critic learning automaton algorithm for the pursuit-evasion differential game," 2017 International Automatic Control Conference (CACS), Pingtung, 2017, pp. 1-6.  doi: 10.1109/CACS.2017.8284278 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8284278&isnumber=8284229

S. Li and Y. Li, "Complex-based optimization strategy for evasion attack," 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, 2017, pp. 1-6.  doi: 10.1109/ISKE.2017.8258845

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8258845&isnumber=8258711

L. Chen, Y. Ye and T. Bourlai, "Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense," 2017 European Intelligence and Security Informatics Conference (EISIC), Athens, 2017, pp. 99-106. doi: 10.1109/EISIC.2017.21

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8240775&isnumber=8240751

Y. Shi and Y. E. Sagduyu, "Evasion and causative attacks with adversarial deep learning," MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM), Baltimore, MD, 2017, pp. 243-248.  doi: 10.1109/MILCOM.2017.8170807

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8170807&isnumber=8170714

Z. Abaid, M. A. Kaafar and S. Jha, "Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers," 2017 IEEE 16th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, 2017, pp. 1-10. doi: 10.1109/NCA.2017.8171381 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8171381&isnumber=8171314

F. Luo, P. P. K. Chan, Z. Lin and Z. He, "Improving robustness of stacked auto-encoder against evasion attack based on weight evenness," 2017 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Ningbo, 2017, pp. 230-235. doi: 10.1109/ICWAPR.2017.8076694

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8076694&isnumber=8076644

J. Zheng, Z. He and Z. Lin, "Hybrid adversarial sample crafting for black-box evasion attack," 2017 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Ningbo, 2017, pp. 236-242.  doi: 10.1109/ICWAPR.2017.8076695 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8076695&isnumber=8076644

Z. Khorshidpour, S. Hashemi and A. Hamzeh, "Learning a Secure Classifier against Evasion Attack," 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), Barcelona, 2016, pp. 295-302.  doi: 10.1109/ICDMW.2016.0049 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7836680&isnumber=7836631

M. D. Awheda and H. M. Schwartz, "A fuzzy reinforcement learning algorithm using a predictor for pursuit-evasion games," 2016 Annual IEEE Systems Conference (SysCon), Orlando, FL, 2016, pp. 1-8.  doi: 10.1109/SYSCON.2016.7490542

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7490542&isnumber=7490508

M. D. Awheda and H. M. Schwartz, "Decentralized learning in pursuit-evasion differential games with multi-pursuer and single-superior evader," 2016 Annual IEEE Systems Conference (SysCon), Orlando, FL, 2016, pp. 1-8.  doi: 10.1109/SYSCON.2016.7490516

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7490516&isnumber=7490508

F. Zhang, P. P. K. Chan, B. Biggio, D. S. Yeung and F. Roli, "Adversarial Feature Selection Against Evasion Attacks," in IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 766-777, March 2016.  doi: 10.1109/TCYB.2015.2415032 

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7090993&isnumber=7406788

Z. Wang, M. Qin, M. Chen, C. Jia and Y. Ma, "A learning evasive email-based P2P-like botnet," in China Communications, vol. 15, no. 2, pp. 15-24, Feb. 2018.  doi: 10.1109/CC.2018.8300268

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8300268&isnumber=8300263

 

Poisoning

Ruimin Sun, Xiaoyong Yuan, Andrew Lee, Matt Bishop, Donald E. Porter, Xiaolin Li, Andre Gregio and Daniela Oliveira, "The dose makes the poison — Leveraging uncertainty for effective malware detection," 2017 IEEE Conference on Dependable and Secure Computing, Taipei, 2017, pp. 123-130. doi: 10.1109/DESEC.2017.8073803

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8073803&isnumber=8073795

T. Liu, W. Wen and Y. Jin, "SIN2: Stealth infection on neural network — A low-cost agile neural Trojan attack methodology," 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), Washington, DC, 2018, pp. 227-230.  doi: 10.1109/HST.2018.8383920

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8383920&isnumber=8383882

A. Sahu, H. N. R. K. Tippanaboyana, L. Hefton and A. Goulart, "Detection of rogue nodes in AMI networks," 2017 19th International Conference on Intelligent System Application to Power Systems (ISAP), San Antonio, TX, 2017, pp. 1-6. doi: 10.1109/ISAP.2017.8071424

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8071424&isnumber=8071362

Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, and Jaehoon Amir Safavi. 2017. Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). ACM, New York, NY, USA, 103-110.

DOI: https://doi.org/10.1145/3128572.3140450

Shigang Liu, Jun Zhang, Yu Wang, Wanlei Zhou, Yang Xiang, and Olivier De Vel. 2018. A Data-driven Attack against Support Vectors of SVM. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security (ASIACCS '18). ACM, New York, NY, USA, 723-734.

DOI: https://doi.org/10.1145/3196494.3196539

Edward Raff and Charles Nicholas. 2017. Malware Classification and Class Imbalance via Stochastic Hashed LZJD. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). ACM, New York, NY, USA, 111-120.

DOI: https://doi.org/10.1145/3128572.3140446

Edward Raff, Jared Sylvester, and Charles Nicholas. 2017. Learning the PE Header, Malware Detection with Minimal Domain Knowledge. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). ACM, New York, NY, USA, 121-132.

DOI: https://doi.org/10.1145/3128572.3140442

Battista Biggio. 2016. Machine Learning under Attack: Vulnerability Exploitation and Security Measures. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec '16). ACM, New York, NY, USA, 1-2.

DOI: http://dx.doi.org/10.1145/2909827.2930784

Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea. 2017. Robust Linear Regression Against Training Data Poisoning. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). ACM, New York, NY, USA, 91-102.

DOI: https://doi.org/10.1145/3128572.3140447

Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, and Fabio Roli. 2017. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). ACM, New York, NY, USA, 27-38.

DOI: https://doi.org/10.1145/3128572.3140451

R. Zhang and Q. Zhu, "A game-theoretic defense against data poisoning attacks in distributed support vector machines," 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, 2017, pp. 4582-4587.  doi: 10.1109/CDC.2017.8264336

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8264336&isnumber=8263624


Reading List of Related Articles

The following list of recent articles highlights the vulnerabilities of machine learning (ML), artificial intelligence (AI), and decision support.

[1] "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" 

Twenty-six experts across different organizations and disciplines collaborated on the report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. The report warns of the potential misuse of AI by cyber adversaries and recommends ways to mitigate the threats posed by such malicious use of AI.

[2] "Hackers Have Already Started to Weaponize Artificial Intelligence"

An experiment conducted by data scientists from the security firm ZeroFox demonstrated the ability to train AI to perform spear-phishing at a significantly higher rate than a human. This experiment shows that AI could be used by hackers to advance the performance of malicious cyber activities.

[3] "Adversarial Learning: Mind Games with the Opponent"

Researchers highlight the vulnerability of machine learning algorithms to adversarial attacks and the concept of adversarial learning. Robust machine learning algorithms should be developed to counter such attacks.

[4] "Researchers Poison Machine Learning Engines"

Researchers from New York University demonstrated the backdooring of convolutional neural networks (CNNs). The backdooring of CNNs allowed false outputs to be produced in a controlled fashion.
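The attack described here works by poisoning the training set: a small trigger pattern is stamped onto a fraction of the training images, which are then relabeled with an attacker-chosen class, so the trained network behaves normally on clean inputs but emits the target label whenever the trigger appears. The short Python sketch below illustrates only this general poisoning step; the trigger shape, poisoning rate, and data are illustrative assumptions, not details taken from the NYU work.

# Minimal backdoor-poisoning sketch (illustrative assumptions, not the NYU method).
import numpy as np

def stamp_trigger(image, size=3, value=1.0):
    """Place a small bright square in the bottom-right corner of an image."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Return a training set in which `rate` of the samples carry the trigger
    and are relabeled with the attacker-chosen `target_label`."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Example with random stand-in data (28x28 grayscale images, 10 classes):
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)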

[5] "Artificial Intelligence Can Drive Ransomware Attacks"

As the implementation of AI increases, hackers will use this technology to launch cyberattacks. Ransomware is one type of cyberattack that is expected to be increasingly launched through the use of AI. 

[6] "How to Turn a Pair of Glasses Into an AI-Fooling Spy Tool"

A team of computer scientists from the University of California, Berkeley, was able to fool facial recognition AI with a pair of glasses. The approach involves the injection of poisoned samples into the training dataset. 

[7] "AI Training Algorithms Susceptible to Backdoors, Manipulation"

Researchers from New York University highlighted the vulnerability of deep learning-based AI algorithms to manipulation through the insertion of backdoors. This vulnerability poses a threat to self-driving car technology.

[8] "Attacks Against Machine Learning — An Overview"

Attacks on ML can be classified into three classes: adversarial inputs, data poisoning, and model stealing.
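To make the "adversarial inputs" class concrete, the sketch below shows the fast gradient sign method (FGSM), one well-known way of crafting such inputs: the input is nudged in the direction that increases the model's loss, which often flips the prediction while the change remains nearly imperceptible. The model and data are placeholders, and FGSM is offered only as a representative example of the category, not as a method discussed in the article itself.

# FGSM adversarial-input sketch (placeholder model and data).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder classifier and batch (flattened 28x28 images, 10 classes):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y)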

[9] "Computer Vision Algorithms Are Still Way Too Easy to Trick"

Research conducted by students at MIT shows that machine vision algorithms are susceptible to being deceived. One experiment demonstrated that an image classifier developed by Google can be tricked into identifying a 3-D-printed turtle as a rifle.

[10] "AI vs AI: New algorithm automatically bypasses your best cybersecurity defense"

Researchers at the security firm Endgame have demonstrated the use of AI to modify malware code in order to circumvent the anti-malware machine learning within antivirus software. The experiment highlights that AI systems have blind spots that other AI could exploit to bypass security.

[11] "Using AI-enhanced malware, researchers disrupt algorithms used in antimalware"

Researchers at Peking University's School of Electronics Engineering and Computer Science published a research paper titled "Generating Adversarial Malware Examples for Black Box Attacks Based on GAN". The paper discusses the components of "MalGAN", an algorithm used to produce adversarial malware examples and evade black-box machine learning-based detection models.

[12] "Bot vs Bot in Never-Ending Cycle of Improving Artificial intelligence"

Hyrum Anderson, technical director of data science at Endgame, presented research in support of bolstering machine learning defenses. His research emphasizes the importance of using machine learning to discover and close blind spots in machine learning models before attackers reach them.

[13] "How Artificial Intelligence and Machine Learning Will Impact Cybersecurity"

Malwarebytes Labs discusses the impact of AI and ML on cybersecurity. This discussion covers the current uses and expectations surrounding the application of such technology in cybersecurity.

[14] "AI Can Help Cybersecurity—If It Can Fight Through the Hype"

The concepts and security applications of AI and ML systems are discussed. Research into the ways in which attackers could use ML techniques to perform malicious activities is also highlighted.

[15] "6 Ways Hackers Will Use Machine Learning to Launch Attacks"

As AI and machine learning are expected to significantly improve upon cybersecurity defenses, hackers are also expected to utilize such technologies in the launch of highly sophisticated cyberattacks. Adversarial machine learning is predicted to be used by cybercriminals in the creation of malware, smart botnets, advanced spear-phishing emails, and more.

[16] "The Malicious Use of Artificial Intelligence in Cybersecurity"

Experts have worked together to examine the potential ways in which AI could be misused by cybercriminals and nation-state actors. The ethical problems surrounding the use of AI are also examined.

[17] "How to Stealthily Poison Neural Network Chips in the Supply Chain"

Researchers at Clemson University demonstrated the launch of a hardware Trojan on neural network models. This attack could be used to alter the output of neural network models in a stealthy manner.

[18] "Algorithmic Warfare: AI — A Tool For Good and Bad"

ML shows great promise in analyzing big data. However, the intelligence community is concerned about the vulnerabilities that such systems could contain.  

[19] "Boffins Bust AI with Corrupted Training Data"

Researchers from New York University bring attention to the possibility of training AI models to fail. Attack scenarios in which training data is poisoned are further highlighted.

[20] "The Dark Secret at the Heart of AI"

The inability of AI to explain its decisions makes it difficult to predict the possible occurrence of failures in its applications. It is important for AI to be understandable and explainable.

[21] "Attacking Machine Learning with Adversarial Examples"

OpenAI discusses potential attacks on machine learning models using adversarial examples, as well as defenses against such attacks.

[22] "Can we Trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support"

AI and ML have significantly contributed to the advancement of different areas of technology. However, the black box nature and complexity of deep learning models pose a problem.

[23] "Human-AI Decision Systems"

Decision systems are essential for the functioning of organizations and society. Three concepts have been proposed to provide guidance in the development of human-AI decision systems. 

[24] "Peering Inside an AI’s Brain Will Help Us Trust Its Decisions"

It is important to understand why machine learning algorithms can be fooled into misclassifying input such as images. Chris Grimm and his colleagues at Brown University developed a system that analyzes what an AI is looking at as it categorizes an image.

[25] "We Don’t Understand How AI Makes Most Decisions, so Now Algorithms Are Explaining Themselves"

Understanding how AI makes decisions allows for the improvement of such technology when failures occur. Researchers from the University of California, Berkeley, and the Max Planck Institute for Informatics have been working towards finding a way for AI to explain how it arrives at the decisions it makes.

[26] "Hacking the Brain With Adversarial Images"

Researchers at Google Brain showed the possibility of tricking both humans and convolutional neural networks (CNNs) through the use of adversarial images. Research and examples in relation to adversarial images are discussed.

[27] "Can AI Be Taught to Explain Itself?"

ML continues to grow in power; however, such technology still needs to provide explanations as to how it makes decisions. The decision-making process of AI needs to be more transparent and explainable.

[28] "Even AI Creators Don't Understand How Complex AI Works"

Even the creators of AI algorithms lack a full understanding of how such algorithms work. The black box nature of these complex algorithms presents serious implications.

[29] "Inside the Black Box: Understanding AI Decision-Making"

The application of AI continues to increase, but there is still a lack of understanding in relation to the internal decision-making of such technology. Researchers are still working towards peering into the AI black box.

[30] "Making Machine Learning Robust Against Adversarial Inputs"

Research has been conducted to explore the potential use of adversarial inputs to interfere with the functioning of ML systems. The robustness of ML systems must be enhanced to combat malicious data input by adversaries.

[31] "Transparent Machine Learning: How to Create 'Clear-Box' AI"

AI must be explainable in order to garner trust in such technology. A company called OptimizingMind has created technology to increase the transparency of AI decision-making.

[32] "Building trust in machine learning and AI"

Increasing the transparency of ML and AI algorithms is essential for building trust in such systems. Knowing how these systems operate and make decisions could allow for greater improvement and prevention against algorithmic discrimination and bias.

[33] "UW Security Researchers Show That Google’s AI Platform for Defeating Internet Trolls Can Be Easily Deceived"

Researchers at the University of Washington demonstrated how Google’s AI platform, Perspective, could easily be tricked. Perspective is a machine learning-based system created to detect abusive and malicious speech on the internet.

[34] "UW Security Researchers Show That Google’s AI Tool for Video Searching Can Be Easily Deceived"

Security researchers at the University of Washington demonstrated how Google’s AI tool used to analyze and label video content could be fooled. The Cloud Video Intelligence API could be deceived through the subtle modification of videos.

[35] "Researchers Show How to Steal AI"

Researchers at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina released a paper titled “Stealing Machine Learning Models via Prediction APIs”. In this paper, they discuss how they were able to fully reproduce machine learning-trained AI models.

[36] "A Monitor’s Ultrasonic Sounds Can Reveal What’s on the Screen"

A team of researchers has presented an acoustic side-channel attack that could be used by hackers to gain information pertaining to what is being displayed on a computer monitor. The attack can be performed by analyzing the ultrasonic sounds produced by the targeted monitor via the use of recordings and machine learning algorithms.
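At a high level, such an attack turns screen-content recovery into a supervised learning problem over audio features. The sketch below shows one plausible shape of that pipeline: extract spectrogram features from recordings made near the monitor, then train a classifier to tell which of several known screen contents was displayed. The sampling rate, frequency cutoff, and classifier choice are assumptions for illustration, not the setup used in the paper.

# Sketch of an acoustic side-channel classification pipeline (assumed parameters).
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def acoustic_features(recording, fs=192_000, cutoff_hz=20_000):
    """Average spectrogram power per frequency bin, keeping only the
    near-ultrasonic band above `cutoff_hz`."""
    freqs, _, power = spectrogram(recording, fs=fs, nperseg=8192)
    return power[freqs > cutoff_hz].mean(axis=1)

def train_screen_classifier(recordings, screen_labels):
    """Fit a classifier mapping recordings to the content shown on screen."""
    X = np.stack([acoustic_features(r) for r in recordings])
    return SVC(kernel="rbf").fit(X, screen_labels)

# Stand-in data: 20 one-second recordings of two different screen contents.
recordings = [np.random.randn(192_000) for _ in range(20)]
labels = [i % 2 for i in range(20)]
clf = train_screen_classifier(recordings, labels)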

[37] "IBM’s Proof-of-Concept Malware Uses AI for Spear Phishing"

The increasing use of artificial intelligence (AI) in the defense against cyber threats is expected to be accompanied by the growing development of attack tools that weaponize AI. IBM has developed proof-of-concept malware called DeepLocker, which uses AI to perform spear phishing.

[38] "Watch SwRI Engineers Trick Object Detection System"

Engineers at Southwest Research Institute have developed new adversarial learning techniques that can make objects invisible to object detection systems in which deep-learning algorithms are used. These techniques can also be used to deceive object detection systems into seeing another object or seeing objects in another location. The development of these adversarial learning techniques brings further attention to the vulnerabilities in deep learning algorithms and other ML algorithms.

[39] "It’s Disturbingly Easy to Trick AI into Doing Something Deadly"

Recent studies conducted by artificial intelligence (AI) researchers emphasize the major impacts that adversarial machine learning (ML) can have on safety. Researchers have performed adversarial attacks on machine learning systems to demonstrate how easy it is to alter the proper functioning of such systems and highlight the potential consequences of such manipulations by hackers.

[40] "How Malevolent Machine Learning Could Derail AI"

Dawn Song is a professor at UC Berkeley whose focus is on the security risks associated with artificial intelligence (AI) and machine learning (ML). Song recently gave a presentation at EmTech Digital, an event created by MIT Technology Review, in which she emphasized the threat posed by the emergence of new techniques for probing and manipulating ML systems known as adversarial ML methods. Adversarial ML can reveal the information that an ML algorithm has been trained on, disrupt the proper functioning of an ML system, make an ML system produce specific types of outputs, and more.

[41] "Researchers Trick Tesla Autopilot into Steering into Oncoming Traffic"

Researchers from Tencent's Keen Security Lab were able to deceive the Enhanced Autopilot feature of a Tesla Model S 75 into steering towards oncoming traffic by placing small stickers on the ground. Tesla's Enhanced Autopilot gathers information pertaining to obstacles, terrain, and lane changes through the use of cameras, ultrasonic sensors, and radar. This information is then fed to onboard computers, which use machine learning in order to form judgements. According to the researchers, the strategic placement of stickers on the road can make the Autopilot steer into the wrong lane.

[42] "The Dark Side of Machine Learning"

Nicholas Carlini, a researcher at Google, gave an overview of the different types of adversarial attacks that can be launched against machine learning systems. These attacks could lead to the misclassification of images and sounds by machine learning systems. Carlini also highlighted the possible extraction of sensitive information from training data sets by adversaries.

[43] "Machine Learning Can Also Aid the Cyber Enemy: NSA Research Head"

Cyber attackers are starting to utilize machine learning technology to launch their attacks. Dr. Deborah Frincke, head of the NSA/CSS Research Directorate, talks about potential scenarios in which a malicious user could manipulate the machine learning process before it even begins. The machine learning technology could be controlled to hide the attacker instead of helping an organization's network develop a self-maintaining environment.

[44] "AI Slurps, Learns Millions of Passwords to Work out Which Ones You May Use Next"

A team of researchers at the Stevens Institute of Technology has released a paper detailing a method that uses machine learning systems to predict the passwords that users will choose. The technique demonstrated by the researchers, called "PassGAN", uses a generative adversarial network composed of two machine learning systems. By feeding it plain-text passwords gathered from a previous leak, the system is able to figure out the rules people use when generating their passwords.
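The core idea is the standard generative adversarial setup: a generator learns to produce candidate passwords while a discriminator learns to distinguish them from passwords in the leaked corpus, and the two networks are trained against each other. The sketch below is only a minimal GAN skeleton over fixed-length, one-hot-encoded passwords; the real PassGAN uses a different architecture and an improved Wasserstein training objective, so the alphabet, lengths, and network sizes here are illustrative assumptions.

# Minimal GAN skeleton for password generation (illustrative, not PassGAN's architecture).
import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"
MAX_LEN, DIM, NOISE = 10, len(ALPHABET), 128

G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(),
                  nn.Linear(256, MAX_LEN * DIM))          # generator
D = nn.Sequential(nn.Linear(MAX_LEN * DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                       # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def encode(passwords):
    """One-hot encode and pad a batch of leaked plain-text passwords
    (characters outside ALPHABET are not handled in this sketch)."""
    batch = torch.zeros(len(passwords), MAX_LEN, DIM)
    for i, pw in enumerate(passwords):
        for j, ch in enumerate(pw[:MAX_LEN]):
            batch[i, j, ALPHABET.index(ch)] = 1.0
    return batch.view(len(passwords), -1)

def train_step(real_passwords):
    real = encode(real_passwords)
    fake = G(torch.randn(real.size(0), NOISE))

    # Discriminator: separate leaked passwords from generated ones.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

train_step(["password1", "letmein", "qwerty12", "dragon"])  # stand-in leaked batch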

[45] "Is AI In Cyber Security A New Tool For Hackers In 2019?"

Hackers can use AI to bypass facial security, deceive autonomous vehicles into misinterpreting speed limits and stop signals, and fool the sentiment analysis of hotel reviews, movie reviews, and more. It can also be used to bypass spam filters, fake voice commands, cause medical prediction systems to misclassify, and get past anomaly detection engines.

[46] "Researchers Take Aim at Hackers Trying to Attack High-Value AI Models""

Researchers at Penn State University are working to develop technical countermeasures against attacks targeting high-value machine learning (ML) models, such as those used by soldiers to guide military weapon systems or by economists to monitor markets. These countermeasures are expected to help trap hackers so that their activities can be measured and observed. From there, actions could be taken to defend against such attacks.

[47] "Is Malware Heading Towards a WarGames-style AI vs AI Scenario?"

As defense systems against cyberattacks continue to evolve with the help of artificial intelligence (AI) and machine learning (ML), cybercriminals alter their tactics to evade such systems. Cybercriminals are expected to use AI to make their malware attacks increasingly difficult to detect.

[48] "AI Advancement Opens Health Data Privacy to Attack"

A new study conducted by researchers at UC Berkeley highlights the creation of new threats facing the privacy of health data. These new threats stem from the advancements in artificial intelligence (AI). Researchers suggest that current laws and regulations should be bolstered to protect health data as new threats are formed by AI.

[49]  "82% of Security Pros Fear Hackers Using AI to Attack Their Company"

A new report released by Neustar highlights the thoughts of security professionals in relation to artificial intelligence (AI) and the cyber threats faced by their organizations. Findings of a survey conducted by Neustar, to which 301 senior technology and security workers responded, reveal the expectations and concerns surrounding the use of AI to improve cybersecurity as well as the abuse of this technology by hackers to launch attacks.

[50]  "How AI Can 'Change the Locks' in Cybersecurity"

Organizations and industries often invest millions of dollars in security products to enhance information security; however, the identical configurations of these products should be taken into consideration. Artificial intelligence and machine learning can be utilized to identify unusual patterns in attacks through their ability to learn from the environment. The defense mechanisms, models, and challenges of artificial intelligence systems are also discussed.

[51]  "Researchers Built an Invisible Backdoor to Hack AI's Decisions"

NYU researchers performed a demonstration showing the ability to manipulate the behavior of artificial intelligence (AI) used in self-driving autonomous cars and image recognition software. The researchers were able to train artificial neural networks to confidently recognize planted triggers that override what the neural network is actually supposed to detect.

[52]  "Global AI Experts Warn of Malicious Use of AI in the Coming Decade"

Twenty-six experts from different organizations and disciplines collaborated on a report in which they emphasize the potential misuse of artificial intelligence (AI) by malicious actors to launch new highly-sophisticated cyberattacks. The report highlights the ways in which the realms of digital, physical, and political security may be disrupted through the misuse of AI.

[53]  "Researchers Unveil Tool to Debug 'Black Box' Deep Learning Algorithms"

Deep learning is a form of machine learning that uses layers of artificial neurons in an attempt to mimic the processing and merging of information performed by the human brain. Although this technology has advanced significantly, the increased automation of tasks still raises concerns pertaining to safety, security, and ethics. A tool by the name of DeepXplore has been developed by researchers at Columbia and Lehigh universities to perform automatic error-checking of the neurons within deep learning neural networks, such as those used by self-driving cars, in order to uncover deficient reasoning by clusters of neurons, malware masquerading as harmless code in anti-virus software, and more.

[54] "AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks"

A new contest run by Kaggle, a platform for data science competitions, will allow researchers to battle each other with AI algorithms. The competition was created in an effort to encourage learning about and understanding of how to protect machine-learning systems against cyberattacks. The contest will contain three challenges, which involve confusing a machine-learning system into functioning improperly, forcing a system to perform an incorrect classification, and developing highly robust defenses.

[55] "Computer Scientists Design Way to Close 'Backdoors' in AI-Based Security Systems"

Security researchers at the University of Chicago are developing methods to defend against backdoor attacks in artificial neural network security systems. One technique that will be presented by researchers at the 2019 IEEE Symposium on Security and Privacy in San Francisco involves the scanning of machine learning (ML) systems for signs of a sleeper cell, which is a group of spies or terrorists that secretly remain inactive in a targeted environment until given instructions to act. The use of this technique also allows the owner of the system to trap potential infiltrators.

[56] "Improving Security as Artificial Intelligence Moves to Smartphones"

Devices such as smartphones, security cameras, and speakers will soon rely more on artificial intelligence to increase the speed at which speech and images are processed. A compression technique called quantization reduces the size of deep learning models in order to lessen computation and energy costs. However, compressed AI models have been found to be more vulnerable to adversarial attacks that could cause models to misclassify altered images. MIT and IBM researchers have developed a technique to improve the security of compressed AI models against such attacks.
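As a rough illustration of what quantization does, the sketch below maps 32-bit floating-point weights to 8-bit integers plus a single scale factor, shrinking storage roughly fourfold and enabling cheaper integer arithmetic at the cost of some precision. Production deployments rely on framework tooling and often on quantization-aware training; this per-tensor symmetric scheme is only a conceptual example, not the MIT/IBM technique.

# Conceptual post-training quantization sketch (not the MIT/IBM method).
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of a float array to int8 plus a scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 array from the int8 values and scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)   # a stand-in float32 weight matrix
q, scale = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())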

[57] "Researchers Develop 'Vaccine' Against Attacks on Machine Learning"

A significant breakthrough in machine learning (ML) research has been made by researchers from the Commonwealth Scientific and Industrial Research Organization's (CSIRO) Data61, an arm of Australia's national science agency specializing in data and digital technology. Researchers have developed techniques to prevent adversarial attacks on ML. Adversarial attacks on ML refer to attacks in which malicious data inputs are used to interfere with the functioning of ML models. The techniques developed by researchers to combat such attacks are similar to those used in the vaccination process.

[58] "Artificial Intelligence May Not 'Hallucinate' After All"

Great advancements have been made in machine learning in regard to image recognition as this technology can now identify objects in photographs as well as generate authentic-looking fake images. However, the machine learning algorithms used by image recognition systems are still vulnerable to attacks that could lead to the misclassification of images. Researchers continue to explore the problem of adversarial examples, which could be used by attackers to cause a machine learning classifier to misidentify an image.

[59] "Research Shows Humans Are Attacking Artificial Intelligence Systems"

A research group led by De Montfort University Leicester (DMU) has brought further attention to the increasing manipulation by online hackers of artificial intelligence (AI) software in search engines, social media platforms, and more, to execute cyberattacks. According to a report published by the European Union-funded project SHERPA, hackers are increasingly abusing existing AI systems to perform malicious activities instead of creating new attacks in which machine learning is used.

[60] "The Next Big Privacy Hurdle? Teaching AI to Forget"

The General Data Protection Regulation (GDPR) introduced the "right to be forgotten", which empowers individuals to request that their personal data be erased. The enactment of this regulation has sparked debates about the collection, storage, and usage of data, as well as the level of control the public should have over their personal data. One aspect that is often overlooked in the discussion of digital privacy is the control of data once it is fed into artificial intelligence (AI) and machine-learning algorithms. Recommendation engines, such as those that suggest videos, purchases, and more, use AI trained on customer or user data. The question arises as to how AI can be taught to forget data.
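The brute-force way to make a model "forget" a person is to drop that person's records and retrain from scratch, which is exactly what machine-unlearning research (such as the Cao and Yang paper listed in the scholarly section above) tries to approximate more cheaply. The toy sketch below shows that brute-force baseline, with made-up column names and stand-in data.

# Naive "unlearning by retraining" sketch (hypothetical schema and data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def retrain_without_user(data: pd.DataFrame, user_id: str):
    """Retrain the model on all records except those belonging to `user_id`."""
    remaining = data[data["user_id"] != user_id]
    X = remaining[["feature_a", "feature_b"]].to_numpy()
    y = remaining["label"].to_numpy()
    return LogisticRegression().fit(X, y)

# Stand-in data frame of user interactions.
data = pd.DataFrame({
    "user_id": [f"u{i % 5}" for i in range(100)],
    "feature_a": np.random.randn(100),
    "feature_b": np.random.randn(100),
    "label": np.random.randint(0, 2, 100),
})
model = retrain_without_user(data, user_id="u3")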

[61] "Protecting Smart Machines From Smart Attacks"

A team of researchers at Princeton University conducted studies on how adversaries can attack machine learning models. As the application of machine learning grows, it is important that we examine the different ways in which this technology can be exploited by attackers to develop countermeasures against them. The researchers demonstrated different adversarial machine learning attacks, which include data poisoning attacks, evasion attacks, and privacy attacks.

[62] "Report: 2020 is the Year Data Gets Weaponized"

According to a report recently released by research firm Forrester, titled Predictions 2020: Cybersecurity, adversaries are expected to be ahead of security leaders in the application of artificial intelligence and machine learning technologies. Attackers will perform advanced techniques, using AI, ML, and large amounts of available data.

[63] "Preventing Manipulation in Automated Face Recognition"

The adoption and implementation of automated face recognition continues to increase. However, this method of authentication remains vulnerable to morphing attacks in which different facial images are merged together to create a fake image. A photo stored in a biometric passport that has been altered in such a manner can allow two different people to use the same passport. A team of researchers from the Fraunhofer Institute and the Heinrich Hertz Institute are working on developing a process that uses machine learning methods to prevent the success of morphing attacks in a project called ANANAS (Anomaly Detection for Prevention of Attacks on Authentication Systems Based on Facial Images).

[64] "Blind Spots in AI Just Might Help Protect Your Privacy"

Significant advancements have been made in machine learning (ML) as this technology has helped in detecting cancer and predicting personal traits. ML technology has also enabled self-driving cars and highly accurate facial recognition. However, ML models remain vulnerable to attacks in which adversarial examples are used to cause the models to make mistakes. Adversarial examples are inputs designed by an attacker to cause an ML model to produce incorrect output, which can pose a threat to the safety of users in the case of self-driving cars. According to privacy-focused researchers at the Rochester Institute of Technology and Duke University, there is a bright side to adversarial examples in that such inputs can be used to protect data and defend the privacy of users.

[65] "Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer"

Researchers at Imperial College London conducted a study in which they examined the inadequacy of data anonymization methods. According to researchers, individuals in anonymized versions of data can still be re-identified through the use of a machine learning model and any data sets containing 15 identifiable characteristics, which include age, gender, marital status, and more. The study involved the analysis of 210 different data sets from five sources, one of which was the U.S. government. The data sets from the U.S. government included information about over 11 million people.

[66] "Researchers Trick AI-Based Antivirus into Accepting Malicious Files"

Researchers in Australia have discovered a way to trick Cylance's AI-based antivirus system into incorrectly identifying malware as goodware. The method used by researchers to trick the system involves adding strings from a non-malicious file to the malicious file. The discovery further emphasizes the capability of cybercriminals to bypass next-generation antivirus tools.

[67] "Machine Learning: With Great Power Come New Security Vulnerabilities"

There have been many advancements in machine learning (ML) as it has been applied in the operation of self-driving cars, speech recognition, biometric authentication, and more. However, ML models remain vulnerable to a variety of attacks that could lead to the production of incorrect output, posing a threat to safety and security. In order to bolster ML security, further research should be conducted on the potential adversaries in ML attacks, the factors that can influence attackers to target ML systems, and the ways in which ML attacks can be executed. Using these factors, distinct ML attacks, including evasion, poisoning, and privacy attacks, can be identified.

[68] "Protecting Smart Machines From Smart Attacks"

A team of researchers at Princeton University conducted studies on how adversaries can attack machine learning models. As the application of machine learning grows, it is important that we examine the different ways in which this technology can be exploited by attackers to develop countermeasures against them. The researchers demonstrated different adversarial machine learning attacks, which include data poisoning attacks, evasion attacks, and privacy attacks.

 

References

[1]          The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. (2018, February 21). Retrieved from https://www.cser.ac.uk/news/malicious-use-artificial-intelligence/

[2]          Dvorsky, G. (2017, September 11). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425

[3]         Kantarcioglu, M., Xi, B., Zhou, Y. (2017, August 7). Adversarial Learning: Mind Games with the Opponent. Retrieved from http://www.computingreviews.com/hottopic/hottopic_essay.cfm?htname=adversarial

[4]         Townsend, K. (2017, August 31). Researchers Poison Machine Learning Engines. Retrieved from https://www.securityweek.com/researchers-poison-machine-learning-engines

[5]         Beltov, M. (2017, September 21). Artificial Intelligence Can Drive Ransomware Attacks. Retrieved from https://www.informationsecuritybuzz.com/articles/artificial-intelligence-can-drive-ransomware-attacks/

[6]          Pearson, J. (2017, December 19). How to Turn a Pair of Glasses Into an AI-Fooling Spy Tool. Retrieved from https://motherboard.vice.com/en_us/article/yw5dng/how-to-turn-a-pair-of-glasses-into-an-ai-fooling-spy-tool

[7]          Cimpanu, C. (2017, August 25). AI Training Algorithms Susceptible to Backdoors, Manipulation. Retrieved from https://www.bleepingcomputer.com/news/security/ai-training-algorithms-susceptible-to-backdoors-manipulation/

[8]          Bursztein, E. (2018, May). Attacks Against Machine Learning — An Overview. Retrieved from https://www.predictiveanalyticsworld.com/patimes/attacks-against-machine-learning-an-overview/9580/

[9]          Snow, J. (2017, December 20). Computer Vision Algorithms Are Still Way Too Easy to Trick. Retrieved from https://www.technologyreview.com/the-download/609827/computer-vision-algorithms-are-still-way-too-easy-to-trick/

[10]        Vigliarolo, B. (2017, August 2). AI vs AI: New algorithm automatically bypasses your best cybersecurity defenses. Retrieved from  https://www.techrepublic.com/article/ai-vs-ai-new-algorithm-automatically-bypasses-your-best-cybersecurity-defenses/

[11]           Kassner, M. (2017, May 4). Using AI-enhanced malware, researchers disrupt algorithms used in antimalware. Retrieved from https://www.techrepublic.com/article/using-ai-enhanced-malware-researchers-disrupt-algorithms-used-in-antimalware/

[12]          Townsend, K. (2017, July 25). Bot vs Bot in Never-Ending Cycle of Improving Artificial intelligence. Retrieved from https://www.securityweek.com/bot-vs-bot-never-ending-cycle-improving-artificial-intelligence

[13]          Arntz, P. (2018, March 9). How artificial intelligence and machine learning will impact cybersecurity [Blog post]. Retrieved from https://blog.malwarebytes.com/security-world/2018/03/how-artificial-intelligence-and-machine-learning-will-impact-cybersecurity/

[14]          Newman, L. H. (2018, April 29). AI Can Help Cybersecurity--If It Can Fight Through the Hype. Retrieved from https://www.wired.com/story/ai-machine-learning-cybersecurity/

[15]          Drinkwater, D. (2018, January 22). 6 ways hackers will use machine learning to launch attacks. Retrieved from https://www.csoonline.com/article/3250144/machine-learning/6-ways-hackers-will-use-machine-learning-to-launch-attacks.html

[16]          Townsend, K. (2018, March 28). The Malicious Use of Artificial Intelligence in Cybersecurity. Retrieved from https://www.securityweek.com/malicious-use-artificial-intelligence-cybersecurity

[17]          Claburn, T. (2018, June 19). How to stealthily poison neural network chips in the supply chain. Retrieved from https://www.theregister.co.uk/2018/06/19/hardware_trojans_ai/

[18]          Tadjdeh, Y. (2018, April 30). Algorithmic Warfare: AI — A Tool For Good and Bad. Retrieved from http://www.nationaldefensemagazine.org/articles/2018/4/30/algorithmic-warfare-ai-a-tool-for-good-and-bad

[19]          Chirgwin, R. (2017, August 28). Boffins bust AI with corrupted training data. Retrieved from https://www.theregister.co.uk/2017/08/28/boffins_bust_ai_with_corrupted_training_data/

[20]          Knight, W. (2017, April 11). The Dark Secret at the Heart of AI. Retrieved from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[21]          OpenAI. (2017, February 24). Attacking Machine Learning with Adversarial Examples [Blog post]. Retrieved from https://blog.openai.com/adversarial-example-research/

[22]          Can we Trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support. (2018, January 7). Retrieved from https://ercim-news.ercim.eu/en112/r-i/can-we-trust-machine-learning-results-artificial-intelligence-in-safety-critical-decision-support

[23]          Pentland, A. (2017, November 1). Human-AI Decision Systems. Retrieved from http://thehumanstrategy.mit.edu/blog/human-ai-decision-systems

[24]          Reynolds, M. (2017, July 3). Peering inside an AI’s brain will help us trust its decisions. Retrieved from https://www.newscientist.com/article/2139396-peering-inside-an-ais-brain-will-help-us-trust-its-decisions/

[25]          Gershgorn, D. (2016, December 20). We don’t understand how AI make most decisions, so now algorithms are explaining themselves. Retrieved from https://qz.com/865357/we-dont-understand-how-ai-make-most-decisions-so-now-algorithms-are-explaining-themselves/

[26]          Ackerman, E. (2018, February 28). Hacking the Brain With Adversarial Images.  Retrieved from https://spectrum.ieee.org/the-human-os/robotics/artificial-intelligence/hacking-the-brain-with-adversarial-images

[27]          Kuang, C. (2017, November 21). Can A.I. Be Taught to Explain Itself? Retrieved from https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html?_r=0

[28]          Beres, D. (n.d.). Even AI Creators Don't Understand How Complex AI Works. Retrieved from https://bigthink.com/21st-century-spirituality/black-box-ai

[29]          McLellan, C. (2016, December 1). Inside the black box: Understanding AI decision-making. Retrieved from https://www.zdnet.com/article/inside-the-black-box-understanding-ai-decision-making/

[30]          Goodfellow, I., McDaniel, P., Papernot, N. (2018, July). Making Machine Learning Robust Against Adversarial Inputs. Retrieved from https://cacm.acm.org/magazines/2018/7/229030-making-machine-learning-robust-against-adversarial-inputs/fulltext

[31]          Reese, H. (2016, November 16). Transparent machine learning: How to create 'clear-box' AI. Retrieved from https://www.techrepublic.com/article/transparent-machine-learning-how-to-create-clear-box-ai/

[32]          Schabenberger, O. (2018, February 15). Building trust in machine learning and AI. Retrieved from https://www.infoworld.com/article/3255948/machine-learning/building-trust-in-machine-learning-and-ai.html

[33]          Langston, J. (2017, February 28). UW security researchers show that Google’s AI platform for defeating Internet trolls can be easily deceived. Retrieved from http://www.washington.edu/news/2017/02/28/uw-security-researchers-show-that-googles-ai-platform-for-defeating-internet-trolls-can-be-easily-deceived/

[34]          Langston, J. (2017, April 3). UW security researchers show that Google’s AI tool for video searching can be easily deceived. Retrieved from http://www.washington.edu/news/2017/02/28/uw-security-researchers-show-that-googles-ai-platform-for-defeating-internet-trolls-can-be-easily-deceived/

[35]          Greenberg, A. (2016, September 30). Researchers show how to steal AI. Retrieved from https://www.wired.com/2016/09/how-to-steal-an-ai/

[36]          Newman, L. H. (2018, August 23). A Monitor’s Ultrasonic Sounds Can Reveal What’s on the Screen. Retrieved from https://www.wired.com/story/monitor-ultrasonic-sounds-reveal-content-side-channel/

[37]          Allen, T. (2018, August 9). IBM’s Proof-of-Concept Malware Uses AI for Spear Phishing. Retrieved from https://www.computing.co.uk/ctg/news/3060847/ibms-proof-of-concept-malware-uses-ai-for-spear-phishing

[38]          Watch SwRI Engineers Trick Object Detection System. (2019, April 6). Retrieved from https://www.therobotreport.com/object-detection-tricked-swri-engineers/

[39]          Samuel, S. (2019, April 8). It's Disturbingly Easy to Trick AI into Doing Something Deadly.  Retrieved from https://www.vox.com/future-perfect/2019/4/8/18297410/ai-tesla-self-driving-cars-adversarial-machine-learning

[40]          Knight, W. (2019, March 25).  How Malevolent Machine Learning Could Derail AI. Retrieved from https://www.technologyreview.com/s/613170/emtech-digital-dawn-song-adversarial-machine-learning/

[41]          Goodin, D. (2019, April 1).  Researchers Trick Tesla Autopilot into Steering into Oncoming Traffic. Retrieved from https://arstechnica.com/information-technology/2019/04/researchers-trick-tesla-autopilot-into-steering-into-oncoming-traffic/

[42]          Spring, T. (2019, March 8).  The Dark Side of Machine Learning. Retrieved from https://threatpost.com/machine-learning-dark-side/142616/

[43]           Stilgherrian. (2017, March 15).  Machine Learning Can Also Aid the Cyber Enemy: NSA Research Head. Retrieved from https://www.zdnet.com/article/machine-learning-can-also-aid-the-cyber-enemy-nsa-research-head/

[44]          Thomson, L. (2017, September 20).  AI Slurps, Learns Millions of Passwords to Work out Which Ones You May Use Next. Retrieved from https://www.theregister.co.uk/2017/09/20/researchers_train_ai_bots_to_crack_passwords/

[45]          Aarzu. (2019, April 10).  Is AI In Cyber Security A New Tool For Hackers In 2019? Retrieved from https://dazeinfo.com/2019/04/10/artificle-intelligence-cyber-security-tool-for-hackers-in-2019/

[46]          Swayne, M. (2019, March 25).  Researchers Take Aim at Hackers Trying to Attack High-Value AI Models. Retrieved from https://news.psu.edu/story/564890/2019/03/25/research/researchers-take-aim-hackers-trying-attack-high-value-ai-models

[47]          Townsend, K. (2018, December 5).  Is Malware Heading Towards a WarGames-style AI vs AI Scenario? Retrieved from https://www.securityweek.com/malware-heading-towards-wargames-style-ai-vs-ai-scenario

[48]          AI Advancement Opens Health Data Privacy to Attack. (2018, December 31). Retrieved from http://www.homelandsecuritynewswire.com/dr20181231-ai-advancement-opens-health-data-privacy-to-attack

[49]          Rayome, A. (2018, October 25).  82% of Security Pros Fear Hackers Using AI to Attack Their Company. Retrieved from https://www.techrepublic.com/article/82-of-security-pros-fear-hackers-using-ai-to-attack-their-company/

[50]          Miserendino, S. (2017, April 1).  How AI Can ‘Change the Locks’ in Cybersecurity. Retrieved from https://venturebeat.com/2017/04/01/how-ai-can-change-the-locks-in-cybersecurity/

[51]          Gershgorn, D. (2017, August 24).  Researchers Built an Invisible Backdoor to Hack AI’s Decisions. Retrieved from https://qz.com/1061560/researchers-built-an-invisible-back-door-to-hack-ais-decisions/

[52]           Global AI Experts Warn of Malicious Use of AI in the Coming Decade. (2018, February 26). Retrieved from http://www.homelandsecuritynewswire.com/dr20180226-global-ai-experts-warn-of-malicious-use-of-ai-in-the-coming-decade?page=0,0

[53]           Researchers Unveil Tool to Debug 'Black Box' Deep Learning Algorithms. (2017, October 25). Retrieved from https://www.eurekalert.org/pub_releases/2017-10/cuso-rut102517.php

[54]          Knight, W. (2017, July 20).  AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks. Retrieved from https://www.technologyreview.com/s/608288/ai-fight-club-could-help-save-us-from-a-future-of-super-smart-cyberattacks/

[55]          Mitchum, R. (2019, April 24).  Computer Scientists Design Way to Close 'Backdoors' in AI-Based Security Systems. Retrieved from https://techxplore.com/news/2019-04-scientists-backdoors-ai-based.html

[56]           Martineau, K. (2019, April 23).  Improving Security as Artificial Intelligence Moves to Smartphones. Retrieved from http://news.mit.edu/2019/improving-security-ai-moves-to-smartphones-0423

[57]          Researchers Develop 'Vaccine' Against Attacks on Machine Learning. (2019). Retrieved from https://www.csiro.au/en/News/News-releases/2019/Researchers-develop-vaccine-against-attacks-on-machine-learning

[58]           Matsakis, L. (2019, May 8).  Artificial Intelligence May Not 'Hallucinate' After All. Retrieved from https://www.wired.com/story/adversarial-examples-ai-may-not-hallucinate/

[59]          Research Shows Humans Are Attacking Artificial Intelligence Systems. (2019, July 12). Retrieved from https://www.dmu.ac.uk/about-dmu/news/2019/july/research-shows-humans-are-attacking-artificial-intelligence-systems.aspx

[60]           Shou, D. (2019, June 12).  The Next Big Privacy Hurdle? Teaching AI to Forget. Retrieved from https://www.wired.com/story/the-next-big-privacy-hurdle-teaching-ai-to-forget/

[61]           Hadhazy, A. (2019, October 14).  Protecting smart machines from smart attacks. Retrieved from https://www.princeton.edu/news/2019/10/14/adversarial-machine-learning-artificial-intelligence-comes-new-types-attacks

[62]           Konkel, F. (2019, October 30).  Report: 2020 is the Year Data Gets Weaponized. Retrieved from https://www.nextgov.com/cybersecurity/2019/10/report-2020-year-data-gets-weaponized/160984/

[63]           Gesellschaft, F. (2019, October 1).  Preventing manipulation in automated face recognition. Retrieved from https://techxplore.com/news/2019-10-automated-recognition.html

[64]           Greenberg, A. (2019, October 2).  Blind Spots in AI Just Might Help Protect Your Privacy. Retrieved from https://www.wired.com/story/adversarial-examples-machine-learning-privacy-social-media/

[65]           Ehrenkranz, M. (2019, July 23).  Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer. Retrieved from https://gizmodo.com/researchers-reveal-that-anonymized-data-is-easy-to-reve-1836629166

[66]           Riley, D. (2019, July 18).  Researchers Trick AI-Based Antivirus into Accepting Malicious Files. Retrieved from https://siliconangle.com/2019/07/18/researchers-trick-ai-based-antivirus-accepting-malicious-files/

[67]           Tripathi, A. (2019, November 5).  Machine Learning: With Great Power Come New Security Vulnerabilities. Retrieved from https://securityintelligence.com/machine-learning-with-great-power-come-new-security-vulnerabilities/
