Black Hat and Human Factors
Black Hat has established a strong reputation as a conference where security practitioners unveil some of the most impactful vulnerabilities, tools and techniques. At Black Hat USA 2017, for example, attendees were treated to details on hacks of technologies including 4G LTE, Wi-Fi, printers, automobiles (Teslas, no less) and wind farms. It is less well known that Black Hat also includes a track on Human Factors. Brief summaries of a subset of those talks are provided below. Many of the presentations’ slides and technical papers are available online.
“ICHTHYOLOGY: PHISHING AS A SCIENCE” discussed how phishing manages to work in spite of employee training. Topics addressed included the psychology of phishing, its effectiveness in a real-world attack, which protections were bypassed and techniques for preventing future attacks. A human weakness emphasized in the presentation involved the systems of thinking people apply when reading email: instead of being slow, methodical, rational and skeptical, readers tended to be fast, instinctive, emotional and gullible. Multi-factor authentication was recommended to mitigate credential phishing.
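To illustrate why multi-factor authentication blunts credential phishing, consider time-based one-time passwords (TOTP, RFC 6238): a stolen password alone is not enough, because the second factor is derived from a shared secret and the current time and expires within seconds. Below is a minimal standard-library sketch; the function name and parameters are illustrative, not from the talk.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # current 30-second window
    msg = struct.pack(">Q", counter)                 # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

A phished code is only useful within the current time window, which is why attackers who capture static passwords gain far less from accounts protected this way.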
“CYBER WARGAMING: LESSONS LEARNED IN INFLUENCING SECURITY STAKEHOLDERS INSIDE AND OUTSIDE YOUR ORGANIZATION” presented role-playing scenarios developed to teach attacker and defender motivations, techniques and mindsets. Participants also learned how to make fundamental security decisions when faced with an actual threat. Two main practices were singled out as barriers to influencing security stakeholders: 1) engaging only a subset of the stakeholders (technical staff), and 2) engaging ineffectively (non-interactively).
“WIRE ME THROUGH MACHINE LEARNING” discussed how Business Email Compromise (BEC, aka whaling) attacks use machine learning on data about company CxOs (leadership) to craft convincing lures. Inputs to the machine learning algorithms included CxO social media accounts, official websites, news, and other data gleaned from social engineering. The goal of this activity was to coerce a target into initiating a financial transaction (e.g. wiring money) to the attacker. The presentation also relayed how machine learning could be used to mitigate these attacks.
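On the defensive side, one common machine-learning approach is to classify incoming mail by the words it contains. The sketch below is a toy naive Bayes classifier in pure Python, with labels and training phrases invented for illustration; it stands in for the (unspecified) mitigation techniques the talk described, not for any particular product.

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs; returns per-label word counts and document totals."""
    counts = {"bec": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing; returns the more likely label."""
    vocab = set(counts["bec"]) | set(counts["ham"])
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))   # class prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

Trained on a handful of wire-request phrases versus ordinary office mail, a classifier like this flags messages whose vocabulary leans toward urgency and payment instructions, which is the linguistic signature BEC lures tend to share.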
“REAL HUMANS, SIMULATED ATTACKS: USABILITY TESTING WITH ATTACK SCENARIOS,” presented by CMU’s CyLab Usable Privacy and Security Laboratory, discussed practical methods for observing user behavior in the presence of risk. The study examined and presented findings from three scenarios: 1) a real-world activity with natural risk, 2) a hypothetical security task with simulated risk, and 3) a real non-security task with simulated risk.
“BIG GAME THEORY HUNTING: THE PECULIARITIES OF HUMAN BEHAVIOR IN THE INFOSEC GAME” discussed how game theory applies to defensive strategy. The talk favored behavioral over traditional game theory as a philosophy of defense, because the latter assumes that players behave in a coldly rational manner (they do not), while the former measures behavior empirically. The talk then explored how an opponent’s moves could be predicted by either “thinking” (modeling how opponents might respond) or “learning” (modeling how prior games influence players’ actions). Lastly, some practical steps were presented for moving theory into practice to improve the defenders’ odds.
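The “learning” style of prediction can be sketched as fictitious play: each side best-responds to the empirical frequency of the opponent’s past moves rather than to an assumed rational strategy. The action names and payoff values below are assumptions chosen for the sketch, not taken from the talk.

```python
from collections import Counter

# Illustrative defender payoffs, keyed by (defender_action, attacker_action).
# Patching pays off against exploits; training pays off against phishing.
DEFENDER_PAYOFFS = {
    ("patch", "exploit"): 1, ("patch", "phish"): -1,
    ("train", "exploit"): -1, ("train", "phish"): 1,
}

def best_response(opponent_history, payoffs, my_actions):
    """Fictitious play: choose the action with the highest expected payoff
    against the opponent's observed move frequencies."""
    freq = Counter(opponent_history)
    total = sum(freq.values()) or 1
    def expected(action):
        return sum(payoffs[(action, opp)] * n for opp, n in freq.items()) / total
    return max(my_actions, key=expected)
```

For example, a defender who has mostly seen phishing lately shifts resources toward training, and shifts back toward patching when exploits dominate the observed history; the empirical record, not an idealized rational opponent, drives the choice.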