Cognitive Security in Cyber
Every year, C3E conferences tackle cybersecurity problems that are widespread in the world today. The Fall C3E conference will focus on both human behavior and cognitive security in cyber. Cognitive security is becoming more important as the spread of fake news becomes more and more common. One of the main goals of this conference is to discuss how the spread of fake news can be stopped in order to protect cognitive security. Below are some potential research topics to explore so that the upcoming Fall conference can host a substantive discussion on how to protect an individual's cognitive security. Also below are references for familiarizing oneself with what cognitive security is about and with the current issues affecting it.
Potential Research Topics
- How does one mitigate attacks after detection?
- How can one deal with bias in decision making at scale?
- How can one detect attacks on cognitive systems?
- With regard to cognitive security, what are the relationships between biases?
- What are some influence mechanisms?
- What kind of impact does group make-up have on cognitive security?
- What are the exact steps of the decision-making process when an individual decides to believe fake news?
Articles on Cognitive Security in Cyber
[1] "Outsmarting Deep Fakes: AI-Driven Imaging System Protects Authenticity"
Researchers at the NYU Tandon School of Engineering developed a technique to counter the sophisticated alteration of photos and videos used to produce deepfakes, which are often weaponized to influence people. The technique uses artificial intelligence (AI) to help determine the authenticity of images and videos.
[2] "Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts"
This article pertains to cognitive security and human behavior. Facebook announced a recent takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups and three Instagram accounts that it says were all involved in coordinated "inauthentic behavior." Facebook says the activity originated geographically from Iran.
[3] "To Fight Deepfakes, Researchers Built a Smarter Camera"
This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.
[4] "Deepfakes are Getting, Better, But They're Still Easy to Spot"
This article pertains to cognitive security. There are deep concerns about the growing ability to create deepfakes, as well as about their malicious use to change how people perceive a public figure.
[5] "From Viruses to Social Bots, Researchers Unearth the Structure of Attacked Networks"
A machine learning model of the protein interaction network has been developed by researchers to explore how viruses operate. This research can be applied to different types of attacks and network models across different fields, including network security. The capacity to determine how trolls and bots influence users on social media platforms has also been explored through this research.
[6] "People Older Than 65 Share the Most Fake News, a New Study Finds"
This article pertains to cognitive security. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic — including party affiliation.
[7] "Could This be the Solution to Stop the Spread of Fake News?"
This article pertains to cognitive security. False news is a growing problem. One study found that a crowdsourcing approach could help detect fake news sources.
[8] "People Are Bad at Spotting Fake News. Can Computer Programs do Better?"
This article pertains to cognitive security. To help sort fake news from truth, programmers are building automated systems that judge the veracity of online stories.
[9] "Microsoft is Trying to Fight Fake News With its Edge Mobile Browser"
This article pertains to cognitive security. The Microsoft Edge mobile browser will use software called NewsGuard, which rates sites based on a variety of criteria, including their use of deceptive headlines, whether they repeatedly publish false content, and transparency regarding ownership and financing.
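The criteria-based rating approach can be illustrated with a small sketch. NewsGuard's actual methodology is its own; the criteria names and point values below are hypothetical and only show how binary judgments like those named in the article could be combined into a single site score.

```python
# Toy site-credibility scorer. The criteria and weights are hypothetical;
# this is not NewsGuard's actual (proprietary) scoring system.
CRITERIA_WEIGHTS = {
    "avoids_deceptive_headlines": 10,
    "does_not_repeatedly_publish_false_content": 22,
    "discloses_ownership_and_financing": 7,
}

def credibility_score(site_assessment: dict) -> int:
    """Sum the weights of every criterion the site satisfies."""
    return sum(
        weight
        for criterion, weight in CRITERIA_WEIGHTS.items()
        if site_assessment.get(criterion, False)
    )

# Example: a transparent site that nonetheless repeats false content.
example = {
    "avoids_deceptive_headlines": True,
    "does_not_repeatedly_publish_false_content": False,
    "discloses_ownership_and_financing": True,
}
print(credibility_score(example))  # 17 out of a possible 39
```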
[10] "Facebook’s Dystopian Definition of 'Fake'"
Facebook’s response to an altered video of Nancy Pelosi has sparked a debate as to whether social media platforms should take down videos that are considered to be “fake”. The definition of "fake" is also discussed.
[11] "Sprawling Disinformation Networks Discovered Across Europe Ahead of EU Elections"
A U.K.-based global citizen activist organization, called Avaaz, conducted an investigation, which revealed the spread of disinformation within Europe via Facebook ahead of EU elections. According to Avaaz, these pages were found to be posting false and misleading content. These disinformation networks are considered to be weapons as they are significant in size and complexity.
[12] "Middle East-Linked Social Media Accounts Impersonated U.S. Candidates Before 2018 Elections"
This article pertains to cognitive security and human behavior. Social media users with ties to Iran are shifting their disinformation efforts by imitating real people, including U.S. congressional candidates.
[13] "The 2020 Campaigns Aren't Ready for Deepfakes"
There is expected to be a surge in deepfakes during the 2020 presidential campaigns. According to experts, little has been done to prepare for fake videos in which candidates are depicted unfavorably in order to sway public perception.
[14] "Want to Squelch Fake News? Let the Readers Take Charge"
An MIT study suggests the use of crowdsourcing to devalue false news stories and misinformation online. Despite differences in political opinions, all groups can agree that fake and hyperpartisan sites are untrustworthy.
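As a rough illustration of the crowdsourcing idea (and not the MIT study's actual method), the sketch below averages hypothetical reader trust ratings per news domain and flags domains that fall below a threshold for down-ranking.

```python
# Minimal crowdsourcing sketch: aggregate hypothetical reader trust ratings
# per news domain and flag low-trust domains. Not the MIT study's method.
from collections import defaultdict
from statistics import mean

# (domain, trust rating from 1 to 5) pairs collected from many readers.
ratings = [
    ("example-news.com", 5), ("example-news.com", 4),
    ("hyperpartisan-site.com", 1), ("hyperpartisan-site.com", 2),
]

by_domain = defaultdict(list)
for domain, score in ratings:
    by_domain[domain].append(score)

TRUST_THRESHOLD = 3.0  # arbitrary cutoff for this illustration
for domain, scores in by_domain.items():
    avg = mean(scores)
    label = "downrank" if avg < TRUST_THRESHOLD else "keep"
    print(f"{domain}: average trust {avg:.1f} -> {label}")
```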
[15] "Breaking Down the Anti-Vaccine Echo Chamber"
Social media echo chambers, in which beliefs are significantly amplified and opposing views are easily blocked, can have real-life consequences. Communication between groups should still take place despite differences in views. The growth of echo chambers around the anti-vaccination movement has been blamed on those who seek to profit from ignorance and fear.
[16] "Russian Twitter Bots Laid Dormant for Months Before Impersonating Activists"
This article pertains to cognitive security. Twitter accounts deployed by Russia’s troll factory in 2016 didn’t only spread disinformation meant to influence the U.S. presidential election. A small handful tried making money too.
[17] "Tech Fixes Can’t Protect Us from Disinformation Campaigns"
Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects associated with disinformation campaigns in order to stop the spread of false information on social media platforms by countries. More focus needs to be placed on why people fall for "fake news".
[18] "Can AI Help to End Fake News?"
Artificial intelligence (AI) has been used in the generation of deep fakes. However, researchers have shown that AI can be used to fight misinformation.
[19] "ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media"
The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media. The partnership also supports the development of methods for combating the spread of both.
[20] "YouTube to Remove Hateful, Supremacist Content"
This article pertains to cognitive security. YouTube is going to remove videos that deny the Holocaust and other “well-documented violent events,” as well as videos that glorify Nazi ideology or that promote groups claiming superiority over others to justify several forms of discrimination.
[21] "Ohio University Study States That Information Literacy Must Be Improved to Stop Spread of 'Fake News'"
A study done by researchers at Ohio University calls for the improvement of information literacy as it was found that most people do not take time to verify whether information is accurate or not before sharing it on social media. The study uses information literacy factors and a theoretical lens to help develop an understanding of why people share “fake news” on social media.
[22] "Fake News Detector Algorithm Works Better Than a Human"
Researchers at the University of Michigan developed an algorithm-based system that can identify fake news stories based on linguistic cues. The system was found to be better at finding fakes than humans.
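The linguistic-cue approach can be sketched as a standard text classifier. The example below is not the University of Michigan system; it is a minimal illustration that learns word-level cues from a tiny placeholder data set using scikit-learn.

```python
# Illustrative linguistic-cue classifier, not the University of Michigan system.
# A real detector would be trained on a large labeled corpus of stories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder training set; labels: 1 = fake, 0 = legitimate.
texts = [
    "SHOCKING!!! You won't believe what this politician did",
    "Miracle cure that doctors don't want you to know about",
    "City council approves budget for road repairs next year",
    "Researchers publish peer-reviewed study on vaccine safety",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["You won't believe this one weird trick"]))
```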
[23] "Peering Under the Hood of Fake-News Detectors"
MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.
[24] "YouTube is Banning Extremist Videos. Will it Work?"
This article pertains to cognitive security. It’s difficult to assess how effective YouTube’s policies will be, as the company didn’t specify how it plans to identify the offending videos, enforce the new rules, or punish offenders.
[25] "Data Craft: The Manipulation of Social Media Metadata"
The manipulation of social media metadata by bad actors for the purpose of creating more powerful disinformation campaigns was explored. It has been argued that disinformation campaigns can be detected and combatted by understanding data craft.
[26] "Study: Twitter Bots Played Disproportionate Role Spreading Misinformation During 2016 Election"
Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources. The research behind this discovery highlights the amplification of misinformation through the use of bots.
[27] "Mathematicians to Help Solve the Fake News Voting Conundrum"
Mathematicians have developed a mathematical model of fake news. This model can be used to help lawmakers mitigate the impact of fake news.
[28] "Disinformation, 'Fake News' and Influence Campaigns on Twitter"
The Knight Foundation performed an analysis on the spread of fake news via Twitter before and after the 2016 U.S. election campaign. Evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.
[29] "How Human Bias Impacts Cybersecurity Decision Making"
Dr. Margaret Cunningham, psychologist and Principal Research Scientist at Forcepoint, conducted a study in which she examined the impact of six different unconscious human biases on decision-making in cybersecurity. Awareness and understanding of cognitive biases in cybersecurity should be increased in order to reduce biased decision-making in activities such as threat analysis and to prevent the design of systems that perpetuate biases.
[30] "What is Digital ad Fraud and how Does it Work?"
Ad fraud is becoming more common among websites. Ad fraud helps fraudsters generate revenue for themselves through fake traffic, fake clicks, and fake installs. It can also help cybercriminals deploy malware on users’ computers.
[31] "The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'"
The spread of deepfakes via social media platforms leads to disinformation and misinformation. There are ways in which the government and social media companies can act to prevent the spread of deepfakes.
[32] "Researchers Develop app to Detect Twitter Bots in any Language"
Language scholars and machine learning specialists collaborated to create a new application that can detect Twitter bots independent of the language used. The detection of bots will help in decreasing the spread of fake news.
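One way a detector can work independently of language is to rely on behavioral metadata rather than tweet text. The heuristic below is a hypothetical sketch of that general idea and is not the researchers' actual application; the feature names and thresholds are invented for illustration.

```python
# Hypothetical language-independent bot heuristic based on account metadata,
# not the researchers' actual application. Thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int

def looks_like_bot(acct: AccountFeatures) -> bool:
    """Flag very high posting rates from young accounts that follow far
    more users than follow them back."""
    follow_ratio = acct.following / max(acct.followers, 1)
    return (
        acct.tweets_per_day > 100
        and acct.account_age_days < 90
        and follow_ratio > 10
    )

print(looks_like_bot(AccountFeatures(240.0, 12, 4800, 30)))  # True
```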
[33] "Deepfake Technology"
Deepfakes’ most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.
[34] "To Detect Fake News, This AI First Learned to Write it"
The researchers' model, Grover, learns to write fake news articles in order to detect them. Naturally, Grover is best at detecting its own fake articles, since in a way the model knows its own processes, but it can also detect articles generated by other models, such as OpenAI’s GPT-2, with high accuracy.
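A much simpler, related idea can be sketched with an off-the-shelf language model: score text by how predictable a generative model such as GPT-2 finds it, since machine-generated text often looks unusually predictable to a similar model. This is not Grover, and the threshold below is arbitrary; it only illustrates the intuition of using a generative model as a detector (assuming the Hugging Face `transformers` and `torch` packages are available).

```python
# Sketch of scoring text with GPT-2 perplexity via Hugging Face transformers.
# Not Grover; the "suspicious" threshold below is arbitrary and illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on the text (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

score = perplexity("The quick brown fox jumps over the lazy dog.")
print(score, "suspicious" if score < 20 else "unremarkable")
```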
[35] "Google’s new Curriculum Teaches Kids how to Detect Disinformation"
The curriculum includes "Don't Fall for Fake" activities that are centered on teaching children critical thinking skills, so that they can tell the difference between credible and non-credible news sources.
[36] "Attackers Are Using Cloud Services to Mask Attack Origin and Build False Trust"
According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security as cloud hosting services are being used by attackers to host malware droppers. The use of this tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.
[37] "Winter Olympics Hack Shows How Advanced Groups Can Fake Attribution"
A malware attack that disrupted the opening ceremony of the 2018 Winter Olympics highlights false flag operations. The malware, called "Olympic Destroyer," contained code derived from other well-known attacks launched by different hacking groups. This led different cybersecurity companies to accuse Russia, North Korea, Iran, or China.
[38] "Millions of Fake Businesses List on Google Maps"
Google handles more than 90% of the world’s online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.
[39] "Social Engineering Attack Nets $1.7M in Government Funds"
Social engineering is the act of manipulating someone into a specific action through online deception. According to Norton, social engineering attempts typically take one of several forms, including phishing, impersonation, and various types of baiting. Social engineering attacks are on the rise, according to the FBI, which reportedly received some 20,373 complaints in 2018 alone, amounting to $1.2 billion in overall losses.
[40] "El Paso and Dayton Tragedy-Related Scams and Malware Campaigns"
In the wake of the recent shootings in El Paso, TX, and Dayton, OH, the Cybersecurity and Infrastructure Security Agency (CISA) advises users to watch out for possible malicious cyber activity seeking to capitalize on these tragic events. Users should exercise caution in handling emails related to the shootings, even if they appear to originate from trusted sources. It is common for hackers to try to capitalize on horrible events that occur to perform phishing attacks.
References
[1] Outsmarting Deep Fakes: AI-Driven Imaging System Protects Authenticity. (2019, May 29). Retrieved from https://www.sciencedaily.com/releases/2019/05/190529131152.htm
[2] Newman, L. (2019, May 28). Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts. Retrieved from https://www.wired.com/story/iran-linked-fake-accounts-facebook-twitter/
[3] Newman, L. (2019, May 28). To Fight Deepfakes, Researchers Built a Smarter Camera. Retrieved from https://www.wired.com/story/detect-deepfakes-camera-watermark/
[4] Barber, G. (2019, May 26). Deepfakes Are Getting Better, But They're Still Easy to Spot. Retrieved from https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot/
[5] From Viruses to Social Bots, Researchers Unearth the Structure of Attacked Networks. (2019, May 29). Retrieved from https://www.sciencedaily.com/releases/2019/05/190529131227.htm
[6] Newton, C. (2019, January 9). People Older Than 65 Share the Most Fake News, a New Study Finds. Retrieved from https://www.theverge.com/2019/1/9/18174631/old-people-fake-news-facebook-share-nyu-princeton
[7] Dizikes, P. (2019, February 11). Could This be the Solution to Stop the Spread of Fake News? Retrieved from https://www.weforum.org/agenda/2019/02/want-to-squelch-fake-news-let-the-readers-take-charge
[8] Temming, M. (2018, July 26). People Are Bad at Spotting Fake News. Can Computer Programs do Better? Retrieved from https://www.sciencenews.org/article/can-computer-programs-flag-fake-news
[9] Warren, T. (2019, January 23). Microsoft is Trying to Fight Fake News With its Edge Mobile Browser. Retrieved from https://www.theverge.com/2019/1/23/18194078/microsoft-newsguard-edge-mobile-partnership
[10] Bogost, I. (2019, May 28). Facebook’s Dystopian Definition of ‘Fake’. Retrieved from https://www.theatlantic.com/technology/archive/2019/05/why-pelosi-video-isnt-fake-facebook/590335/
[11] Sprawling Disinformation Networks Discovered Across Europe Ahead of EU Elections. (2019, May 22). Retrieved from http://www.homelandsecuritynewswire.com/dr20190522-sprawling-disinformation-networks-discovered-across-europe-ahead-of-eu-elections
[12] Vavra, S. (2019, May 28). Middle East-Linked Social Media Accounts Impersonated U.S. Candidates Before 2018 Elections. Retrieved from https://www.cyberscoop.com/middle-east-linked-social-media-accounts-impersonated-u-s-candidates-2018-elections/
[13] Waddell, K. (2019, June 4). The 2020 Campaigns Aren't Ready for Deepfakes. Retrieved from https://www.axios.com/2020-campaigns-arent-ready-for-deepfakes-a1506e77-6914-4e24-b2d1-0c69b6e22162.html
[14] Dizikes, P. (2019, January 28). Want to Squelch Fake News? Let the Readers Take Charge. Retrieved from http://news.mit.edu/2019/reader-crowdsource-fake-news-0128
[15] Breaking Down the Anti-Vaccine Echo Chamber. (2019, May 8). Retrieved from https://blogs.ei.columbia.edu/2019/05/08/breaking-vaccine-echo-chamber/
[16] Stone, J. (2019, June 5). Russian Twitter Bots Laid Dormant for Months Before Impersonating Activists. Retrieved from https://www.cyberscoop.com/russian-twitter-bots-laid-dormant-for-months-before-impersonating-activists/
[17] Grabmeier, J. (2019, April 25). Tech Fixes Can't Protect Us from Disinformation Campaigns. Retrieved from https://news.osu.edu/tech-fixes-cant-protect-us-from-disinformation-campaigns/
[18] Can AI Help to End Fake News? (2019, May 9). Retrieved from https://www.thenakedscientists.com/articles/science-features/can-ai-help-end-fake-news
[19] ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media. (2019, March 12). Retrieved from https://www.adl.org/news/press-releases/adl-partners-with-network-contagion-research-institute-to-study-how-hate-and
[20] Dave, P. (2019, June 5). YouTube to Remove Hateful, Supremacist Content. Retrieved from https://www.reuters.com/article/us-alphabet-youtube-hatespeech-idUSKCN1T623X
[21] Ashburn, M. (2019, March 11). Ohio University Study States That Information Literacy Must Be Improved to Stop Spread of 'Fake News'. Retrieved from https://www.ohio.edu/compass/stories/18-19/03/Fake-News-Khan.cfm
[22] Fake News Detector Algorithm Works Better Than a Human. (2018, August 21). Retrieved from https://news.umich.edu/fake-news-detector-algorithm-works-better-than-a-human/
[23] Peering Under the Hood of Fake-News Detectors. (2019, February 4). Retrieved from https://www.sciencedaily.com/releases/2019/02/190204154024.htm
[24] Martineau, P. (2019, June 5). YouTube is Banning Extremist Videos. Will it Work? Retrieved from https://www.wired.com/story/how-effective-youtube-latest-ban-extremism/
[25] Acker, A. (2018, November 5). Data Craft: The Manipulation of Social Media Metadata. Retrieved from https://apo.org.au/node/202091
[26] Study: Twitter Bots Played Disproportionate Role Spreading Misinformation During 2016 Election. (2018, December 17). Retrieved from https://news.iu.edu/stories/2018/11/iub/releases/20-twitter-bots-election-misinformation.html
[27] Mathematicians to Help Solve the Fake News Voting Conundrum. (2018, November 1). Retrieved from https://www.surrey.ac.uk/news/can-maths-solve-fake-news-voting-conundrum
[28] Disinformation, ‘Fake News’ and Influence Campaigns on Twitter. (2018, October 4). Retrieved from https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter
[29] Zorz, Z. (2019, June 10). How Human Bias Impacts Cybersecurity Decision Making. Retrieved from https://www.helpnetsecurity.com/2019/06/10/cybersecurity-decision-making/
[30] Stewart, R. (2019, June 16). What is Digital ad Fraud and how Does it Work? Retrieved from https://cyware.com/news/what-is-digital-ad-fraud-and-how-does-it-work-4312adf6
[31] The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'. (2019, May 22). Retrieved from https://www.fpri.org/article/2019/06/the-national-security-challenges-of-artificial-intelligence-manipulated-media-and-deepfakes/
[32] Researchers Develop app to Detect Twitter Bots in any Language. (2019, June 18). Retrieved from https://www.helpnetsecurity.com/2019/06/18/detect-twitter-bots/
[33] Townsend, C. (2019). Deepfake Technology. Retrieved from https://www.uscybersecurity.net/deepfake/?fbclid=IwAR3-d0PWMdVz5ytXYHLqqmwAjTow6OmdszFbolvm-auJZGtWHlwSkNJQf5I
[34] Coldewey, D. (2019, June 13). To Detect Fake News, This AI First Learned to Write it. Retrieved from https://techcrunch.com/2019/06/10/to-detect-fake-news-this-ai-first-learned-to-write-it/
[35] Lee, N. (2019, June 24). Google’s new Curriculum Teaches Kids how to Detect Disinformation. Retrieved from https://www.engadget.com/2019/06/24/google-kids-spot-disinformation/
[36] Sanders, J. (2018, December 19). Attackers Are Using Cloud Services to Mask Attack Origin and Build False Trust. Retrieved from https://www.techrepublic.com/article/attackers-are-using-cloud-services-to-mask-attack-origin-and-build-false-trust/
[37] Bing, C. (2018, February 26). Winter Olympics Hack Shows How Advanced Groups Can Fake Attribution. Retrieved from https://www.cyberscoop.com/winter-olympics-hack-attribution-talos-washington-post/
[38] Millions of Fake Businesses List on Google Maps. (2019, June 24). Retrieved from https://www.warc.com/newsandopinion/news/millions_of_fake_businesses_list_on_google_maps/42261
[39] Ropek, L. (2019, August 14). Social Engineering Attack Nets $1.7M in Government Funds. Retrieved from https://www.govtech.com/security/Social-Engineering-Attack-Nets-17M-in-Government-Funds.html
[40] El Paso and Dayton Tragedy-Related Scams and Malware Campaigns. (2019, August 6). Retrieved from https://www.us-cert.gov/ncas/current-activity/2019/08/06/el-paso-and-dayton-tragedy-related-scams-and-malware-campaigns