SoS Musings #62 - Increasing the Power of Cybersecurity Deception

There have been many advancements in deception technologies and methods in the cybersecurity field. The idea of deception in cybersecurity is to trick attackers into believing they have accessed, infiltrated, or compromised real, valuable assets or data when, in reality, they have fallen into a trap designed to waste their time or support defenders' observation of their tactics. As cyber threats and attack surfaces continue to grow and evolve, security teams have been adopting deception as a line of defense. A deception strategy or deception technology offers security teams multiple benefits. If properly implemented, deception can reduce the time needed to detect attacks, trick attackers into revealing their presence in a network, and generate high-quality, actionable alerts. A deception strategy can also reduce dependence on signature-based security methods, capture valuable information about the type and nature of an attack, and give teams a real-time view of what an attacker is doing within their network, enabling defenses to be strengthened. The global market for deception technology is estimated to reach $4.2 billion by 2026 as organizations seek platforms that can prevent damage from cybercriminals who have already infiltrated a network. Efforts continue to be made to bolster the strategy of deception in cybersecurity operations.

A team of researchers at the University of Missouri designed a system named Dolus, after the Greek god of trickery, that can trick attackers into thinking they are making progress, providing more time for targets to respond to and prevent Distributed Denial-of-Service (DDoS) attacks and Advanced Persistent Threats (APTs). Dolus buys this time by quarantining the attacker. According to Prasad Calyam, leader of the study, the quarantine exhibits behavior similar to that of the real compromised target to make the attacker assume their malicious activity is still succeeding. Dolus detects attacks and uses Artificial Intelligence (AI) and Machine Learning (ML) methods to mislead the attacker and alter the attack. The design of this system enables it to redirect attackers to a virtual machine that mimics the target site or system's behavior. Dolus also alerts operators of the attack while allowing customers, accounts, and other normal users to continue their activities. The team emphasizes that their strategy aims to prevent the disruption of cloud-hosted services and the exfiltration of data by tricking the attacker into believing that a high-value target has been impacted or high-value data has been accessed or obtained.
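
Dolus's implementation is not published in code form here; the sketch below only illustrates the general redirect-and-quarantine pattern the researchers describe, in which traffic flagged by a detector is steered to a decoy virtual machine that mimics the real service while operators are alerted and normal users continue unaffected. The addresses, thresholds, and function names are all hypothetical.

```python
# Minimal sketch (not Dolus itself) of redirect-and-quarantine deception:
# flagged sources are steered to a decoy VM that mimics the real service,
# operators are alerted, and legitimate users keep reaching the real backend.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("deception")

REAL_BACKEND = "10.0.0.10"   # production service (hypothetical address)
DECOY_BACKEND = "10.0.9.10"  # decoy VM mimicking the production service

quarantined = set()          # source IPs currently being deceived

def classify(src_ip, request):
    """Stand-in for a detector; returns an attack likelihood in [0, 1]."""
    suspicious = request.get("rate_per_sec", 0) > 1000 or "exploit" in request.get("path", "")
    return 0.95 if suspicious else 0.05

def route(src_ip, request):
    """Pick a backend for this request, quarantining likely attackers."""
    if src_ip in quarantined or classify(src_ip, request) > 0.9:
        if src_ip not in quarantined:
            quarantined.add(src_ip)
            log.warning("quarantining %s and alerting operators", src_ip)
        return DECOY_BACKEND   # the attacker keeps "succeeding" against the decoy
    return REAL_BACKEND        # normal users are unaffected

print(route("198.51.100.7", {"rate_per_sec": 5000, "path": "/login"}))  # decoy
print(route("203.0.113.2", {"rate_per_sec": 3, "path": "/home"}))       # real backend
```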

Studies conducted by researchers at Carnegie Mellon take a unique approach to cyber deception: using cognitive science to inform the practice of effectively deceiving attackers. Cleotilde Gonzalez, professor in Social and Decision Sciences at Carnegie Mellon and lead of the study, says prior work assumed attackers are rational, but humans are not, as they are biased by the frequency and recency of events and other cognitive factors. Defense algorithms can exploit attackers' cognitive biases to become increasingly effective. The first paper presented by Gonzalez and her team shows that signaling, a strategic method of making something appear to be something it is not, can be effective at throwing attackers off as they attempt malicious activities. Through signaling, information sent to the attacker is manipulated to make them believe there is something valuable in a certain node of the network, giving defenders more time to detect an attack and the ability to trace the attacker's steps through the system. The second paper from the team suggests that it is possible to improve a previously developed signaling technique by using an advanced cognitive model, backing the results with human experiments. The researchers tested the effectiveness of various signaling schemes on human attackers using a video game. In the game, players try to score points by attacking computers, but they must tread carefully because some computers may be monitored by defenders. Attacking those computers results in a point deduction, whereas in the real world it would result in getting caught. When a player chooses which computer to attack, a signaling algorithm determines whether to send a truthful or deceptive signal. A truthful signal may indicate that a node is being monitored by the defender, deterring the attack. In contrast, a deceptive signal may indicate that a node is not being monitored by the defender, motivating an attack when, in fact, it is protected. Gonzalez emphasized the importance of balancing the rate and timing at which deceptive signals are sent to maintain the attacker's trust and the effectiveness of the deception.
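
The papers describe the signaling scheme in game-theoretic terms rather than as code, but the core decision in the paragraph above is easy to illustrate. In the hedged sketch below, a chosen monitored node usually earns a truthful warning, and only occasionally a deceptive "all clear" that lures the attacker into a protected trap; lies are rationed, echoing Gonzalez's point about rate and timing. The probabilities, the compliance parameter, and all function names are assumptions for illustration, not the researchers' actual algorithm.

```python
import random

def send_signal(node_is_monitored, deception_rate=0.3):
    """Choose the signal shown to the attacker for the node they just picked.

    Monitored nodes usually get a truthful "monitored" warning that deters the
    attack; occasionally (deception_rate) the defender instead claims the node
    is unmonitored, luring the attacker into a protected trap. Lies are
    rationed so the attacker keeps trusting the signals.
    """
    if node_is_monitored and random.random() < deception_rate:
        return "unmonitored"        # deceptive all-clear on a protected node
    return "monitored" if node_is_monitored else "unmonitored"  # truthful

def attacker_attacks(signal, compliance=0.8):
    """Toy attacker model: a "monitored" warning deters most attempts."""
    return not (signal == "monitored" and random.random() < compliance)

random.seed(0)
caught = 0
for _ in range(10_000):
    monitored = random.random() < 0.3            # 30% of nodes are defended
    if attacker_attacks(send_signal(monitored)) and monitored:
        caught += 1                               # attack landed on a defended node
print("attacks caught on monitored nodes:", caught)
```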

Contributing to the deception cybersecurity field, researchers at the University of Texas at Dallas developed a method called DEEP-Dig (DEcEPtion DIGging) that lures intruders into a decoy site instead of blocking them so the computer can learn from their tactics. Information on the hackers' tactics is used to train the computer to identify and stop future attacks. The approach aims to address a major challenge in using AI for cybersecurity: the scarcity of data required to train computers to detect intruders. Privacy concerns have limited the availability of such data, which is an obstacle because better data improves attack detection. DEEP-Dig gives researchers insight into hackers' methods as they enter a spoof site filled with misinformation. According to Dr. Latifur Khan, professor of computer science at UT Dallas, the decoy site appears legitimate to intruders, making them feel they have infiltrated successfully. Even if a hacker realizes they have entered a decoy site and attempts to deceive the program itself, the defense system continues to learn how they try to hide their tracks, presenting a win-win situation for defenders.
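
The published description of DEEP-Dig is summarized above without code; the sketch below only illustrates the general pattern of turning sessions logged on a decoy into labeled training data for a detector, here using scikit-learn's RandomForestClassifier. The features and values are invented for illustration and are not taken from the UT Dallas work.

```python
# Illustrative only: turning decoy-session logs into training data for an
# intrusion classifier, in the spirit of DEEP-Dig (features are made up here).
from sklearn.ensemble import RandomForestClassifier

# Each record is one session observed on the decoy (label 1) or on a
# normal service (label 0): [requests/min, distinct URLs, fraction of errors]
X = [
    [400, 120, 0.45],   # scanner hammering the decoy
    [350,  90, 0.50],
    [  6,   4, 0.02],   # ordinary user traffic
    [  9,   7, 0.01],
]
y = [1, 1, 0, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# New, unlabeled session: predict whether it looks like attacker behavior.
print(clf.predict([[280, 75, 0.38]]))   # likely classified as attacker traffic
```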

Researchers with Lupovis, a University of Strathclyde cybersecurity spinoff, worked on developing dynamic deception technology that uses a network of collaborative decoys to lure attackers away from high-value assets such as personal data or sensitive information, and to prevent hackers from shutting systems down. The system uses AI to create scenarios mirroring an organization's existing infrastructure and tricks the attacker into thinking they are progressing toward valuable assets. According to Dr. Xavier Bellekens, the CEO of Lupovis, data on attacker techniques, methods, and behavior is fed into the system, allowing users to stay a step ahead of attackers by predicting their next moves. Lupovis' solution provides the attacker with incentives that direct them down a specific path. Once an adversary has gained access to a network, the system entices them by establishing an offensive deception environment that engages the attacker the moment they make a move within the network. The system reacts dynamically to the attacker's behavior and skill level by using incentives and gamifying the vulnerabilities that engage the hacker. The longer the attacker stays engaged, the longer the system holds off malicious actions that would otherwise shut the network down. The advantages are business continuity and, simultaneously, information on the hacker's skills and strategies that informs security teams of the best defensive measures to take.
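
Lupovis has not published the internals summarized above; as a rough illustration of reacting to an attacker's skill level, the assumed sketch below escalates which lure is presented as the intruder demonstrates more capability. The lures, skill signals, and class names are hypothetical, not the company's design.

```python
# Hypothetical sketch of skill-adaptive deception: the more capability an
# intruder demonstrates, the richer the lure they are shown, keeping them
# engaged and away from real assets (not Lupovis code; names are invented).

LURES = {
    0: "exposed login page with a weak banner",       # entry-level bait
    1: "writable file share seeded with fake documents",
    2: "decoy database with plausible customer records",
    3: "fake domain-admin credentials leading to a decoy server",
}

SKILL_SIGNALS = {"port_scan": 0, "credential_stuffing": 1,
                 "sql_injection": 2, "lateral_movement": 3}

class DeceptionPlanner:
    def __init__(self):
        self.skill = 0

    def observe(self, action):
        """Update the skill estimate and pick the next lure to present."""
        self.skill = max(self.skill, SKILL_SIGNALS.get(action, 0))
        return LURES[self.skill]

planner = DeceptionPlanner()
for step in ["port_scan", "sql_injection", "lateral_movement"]:
    print(step, "->", planner.observe(step))
```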

Pacific Northwest National Laboratory's (PNNL) cybersecurity technology called Shadow Figment is designed to lead attackers into an artificial environment and then stop them from inflicting damage by giving them the illusion of success. The technology also aims to isolate attackers by captivating them with an attractive yet imaginary world. Shadow Figment is intended to protect physical targets and infrastructure such as buildings, the power grid, water systems, pipelines, and more. Its foundation is a commonly used technology known as a honeypot: something appealing to entice an attacker, perhaps a desirable target that appears easy to access. However, unlike most honeypots, Shadow Figment goes much further in luring attackers and studying their methods. The technology employs AI to deploy elaborate deception and keep attackers engaged in a fictitious world that mirrors the real one. The decoy interacts with users in real time, responding to commands in realistic ways. Hackers are rewarded with false success signals, keeping them busy while defenders learn about their methods and work to protect the real system. The deception's credibility rests on an ML program that learns by observing the real-world system where it is installed. In response to an attack, the program sends signals indicating that the system under attack is responding in plausible ways. This "model-driven dynamic deception" is said to be significantly more realistic than a static decoy. Shadow Figment creates interactive clones of physical systems in all their complexity, as experienced operators and cybercriminals would expect. In the artificial world, for example, if a hacker turns off a fan in a server room, Shadow Figment responds by signaling that air movement has slowed and the temperature is rising. If a hacker changes a water boiler's settings, the system adjusts the water flow rate accordingly. The ultimate goal of Shadow Figment is to distract malicious actors from real control systems and place them in an artificial one where their actions have no effect.
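
PNNL describes the model-driven approach only at a high level, but the fan-and-temperature example above conveys its flavor. The sketch below is an assumed, toy version: attacker commands change the decoy's internal state, and reported telemetry drifts in physically plausible ways. The class, its constants, and its behavior are invented for illustration and are not PNNL code.

```python
# Tiny illustration of model-driven deception in the spirit of Shadow Figment:
# the decoy keeps a plausible physical model, so turning off a "fan" makes the
# reported temperature rise over time (the model here is invented).

class DecoyServerRoom:
    def __init__(self):
        self.fan_on = True
        self.temp_c = 22.0

    def handle_command(self, command):
        if command == "fan off":
            self.fan_on = False
            return "OK: fan stopped"        # false success signal
        if command == "fan on":
            self.fan_on = True
            return "OK: fan running"
        return "ERROR: unknown command"

    def tick(self):
        """Advance the fake physics by one time step."""
        self.temp_c += -0.5 if self.fan_on else 1.5
        self.temp_c = max(self.temp_c, 18.0)

    def telemetry(self):
        return f"temp={self.temp_c:.1f}C fan={'on' if self.fan_on else 'off'}"

room = DecoyServerRoom()
print(room.handle_command("fan off"))
for _ in range(3):
    room.tick()
    print(room.telemetry())   # temperature climbs plausibly: 23.5, 25.0, 26.5
```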

The US Department of Energy's Sandia National Laboratories developed another deception tool, the High-Fidelity Adaptive Deception and Emulation System (HADES), which combines Software-Defined Networking (SDN), cloud computing, dynamic deception, and agentless Virtual Machine Introspection (VMI) to create complex, high-fidelity deception networks and provide mechanisms for interacting directly with attackers. With HADES, adversaries are migrated into an emulated deception environment, where they can carry out their attacks without any indication that they have been detected and are being observed. HADES then enables defenders to respond methodically and proactively by modifying the environment, host attributes, files, and the network itself in real time. Cybersecurity practitioners can gain valuable information about adversaries' tools and techniques through a rich set of data and analytics, which can then be given to network defenders as threat intelligence.
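
The paragraph above describes HADES at an architectural level; the sketch below is only a conceptual, invented illustration of one idea in it: reshaping an emulated environment in real time while recording every attacker action as threat intelligence. The environment model and triggers are made up, and no real SDN or VMI APIs are used.

```python
# Hypothetical sketch of real-time adaptation of a deception environment:
# observed attacker actions trigger changes to emulated hosts and files while
# every step is recorded as threat intelligence (not Sandia/HADES code).
import json
from datetime import datetime, timezone

emulated_env = {
    "hosts": {"decoy-db01": {"files": ["schema.sql"], "open_ports": [22, 5432]}},
}
threat_intel = []

def on_attacker_action(host, action):
    """Record the action and adapt the environment to keep the attacker engaged."""
    threat_intel.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "host": host,
        "action": action,
    })
    if action == "list_files":
        # Plant a fresh lure so exploration keeps paying off.
        emulated_env["hosts"][host]["files"].append("backup_credentials.txt")
    elif action == "port_scan":
        emulated_env["hosts"][host]["open_ports"].append(3389)

on_attacker_action("decoy-db01", "port_scan")
on_attacker_action("decoy-db01", "list_files")
print(json.dumps({"env": emulated_env, "intel": threat_intel}, indent=2))
```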

Deception technology strives to divert cybercriminals and malicious programs aimed at stealing credentials and privileges by creating decoys and traps that imitate legitimate technology assets across the infrastructure. The strategy is to give attackers the false impression that they have successfully infiltrated an organization's network or infrastructure when trying to access business-critical information or assets. The security community is encouraged to continue exploring and developing deception technologies and strategies to reduce risk, enhance incident response, and improve security by creating a deceptive environment for cybercriminals.
