The challenge problems for each year are listed below and linked to more information about each of the problems.
2023: AI/ML and the Human Element and Resilience, Architecture, and Autonomy
2022: Cybersecurity and Software in the Supply Chain
2020/2021: Economics of Security and Continuous Assurance
2019: Cognitive Security and Human-Machine Teaming
2018: AI/ML Cyber Defense
2016 & 2017: Modeling Consequences of Ransomware on Critical Infrastructures
2015: Cyber Security
2014: Metadata-based Malicious Cyber Discovery
2024 Challenge Problems
C3E brings together a diverse set of experts in creative collaboration to tackle tough intellectual cybersecurity challenges and to point toward novel, practical solutions. The discussions held at the workshop help to inform the Challenge Problems (CPs) for the coming year. At the C3E Workshop on 12-14 September 2023, the tracks examined the meeting's two themes: (1) Cyberpsychology Aspects of Foreign Malign Influence and (2) Generative AI and Large Language Models. Because suggested research topics also remained from the 2022 C3E Workshop, the following are added as options for this year: (3) AI/ML and the Human Element and (4) Resilience, Architecture, and Autonomy.
A follow-on program is available for researchers to address issues raised at this workshop and at previous events. For 2023-2024, the CPs focus on the themes identified above. We will engage 8-10 researchers on a part-time basis to identify and explore specific issues developed around these themes during the C3E event. Researchers will then present their findings at the 2024 C3E Workshop. We have an NSF grant to pay a small honorarium to these researchers for their efforts over the next 9-10 months. Researchers may also apply to conduct research on the 2022-2023 Challenge Problems, as indicated below.
Overall Challenge. The overall challenge is to improve understanding of the issues associated with (1) Cyberpsychology Aspects of Foreign Malign Influence, (2) Generative AI and Large Language Models, or one of the two research themes carried over from the 2022 C3E Workshop.
Objectives. The anticipated outcome will include a description of the critical security events examined and of the research process followed for the effort. That description may include details on how the research was conducted and any issues or limitations associated with the chosen theme. The results might include new models or working software code.
Deliverables. Researchers are required to prepare a ten-minute video presentation to be shown at C3E 2024, a poster that describes their research, and a technical article suitable for publication in a major academic venue.
Researchers might also provide models, working software applications (apps) for open-source systems, or narratives that address issues or solutions related to the theme. This is an opportunity to apply focused research to these themes.
The Challenge Problem for 2023-2024 offers four optional themes based on suggested research topics from the tracks at the recent C3E 2023 workshop and the previous one in 2022.
The following are descriptions of the four general themes. The appendix provides some specific research ideas developed for each theme by the workshop tracks. Researchers are encouraged to address one of the suggested topics identified for a theme.
Challenge Problem 1: Cyberpsychology Aspects of Foreign Malign Influence
Cyberpsychology is an interdisciplinary scientific domain that focuses on the psychological phenomena that emerge from human interaction with digital technology, particularly the Internet. This track theme surveyed the potential benefits and challenges of balancing national security and freedom of speech. Current research often includes identifying, assessing, and countering information campaigns, and is frequently tactical in focus. For this Challenge Problem, consider ways to approach research in this area.
Challenge Problem 2: Generative AI and Large Language Models
A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs present a myriad of both benefits and challenges. The open availability and rapid adoption of LLMs suggest that research and development surrounding them will drive a wave of innovation in AI and machine learning. This ready availability may, unfortunately, stimulate inappropriate adoption before a thorough understanding of the technology is reached. This Challenge Problem will study the potential benefits and challenges of the adoption of LLMs, as well as the challenges confronting the technology itself.
Challenge Problem 3: AI/ML and the Human Element
The challenge is the role of AI/ML in future machine-speed operations in critical systems across many domains, including an exploration of the human element in understanding and shaping important outcomes. Research is needed to address how AI/ML and the human interact: how does the human understand what the AI/ML is doing, and how does a trust relationship develop between the two?
Challenge Problem 4: Resilience, Architecture, and Autonomy
Research is needed on how to design and analyze resilient, autonomous system architectures that continue to deliver required service in the face of compromised components.
Proposal Process – Next Steps
Given the options for research activity included here for the CP, participants are encouraged to review the list of topics and propose one or more for follow-up research for the 2024 C3E Workshop.
If you are interested, please send a short description (1 to 5 pages) of your proposal including metrics to measure your research success to Dr. Don Goff, Co-PI with Dan Wolf for the CP, at the following address: C3E-CP-Proposal@cyberpackventures.com by November 30, 2023.
Please send any questions to the Co-PIs: Don at dgoff@cyberpackventures.com or Dan at dwolf@cyberpackventures.com.
APPENDIX
Challenge Problem Topic #1 - Cyberpsychology Aspects of Foreign Malign Influence
To identify, assess, and counter information campaigns, several research initiatives are needed, especially since current efforts are sometimes just tactically focused. For this Challenge Problem, consider ways to approach research to address the problem in a strategic manner.
Challenge Problem Topic #2 - Generative AI and Large Language Models
Challenge Problem researchers may look at such problems as:
- Trustworthiness
- Validation related to meeting user intentions, and concerns related to automation bias, the ELIZA effect, and fluency.
- Prompt engineering in eliciting high-utility output from large language models.
- Identifying and characterizing the bounds of coherent generation.
- If we know technology like AI and Bots are going to be used for malign influence, is it ethical to use them to promote unity and belief in government to counter the malign influence?
- Human-Computer Interaction (HCI) approaches that address what the appropriate interface is to help a user make sense of LLM generations.
Challenge Problem Topic #3 - AI/ML and the Human Element
Research is needed to address how AI/ML and the human interact. How does the human understand what the AI/ML is doing, and how does a trust relationship develop between the two? Research in the following areas would improve the interaction between the AI/ML and the human.
- Closing the Understanding Gap Between Simulations and Real-World Situations using Realistic Human Models
- Cyber Analyst Assistant AI
- Create a Cyber Analyst Assistant AI.
- Create cyber analyst domain taxonomy of errors and analysis delays.
- Support a hackathon that brings together a team of a cyber analyst, a psychologist, and an AI expert.
- Develop AI-driven provenance methods to aid in understanding advanced persistent threats.
- Trust in AI
- Evaluate current research in Explainable Artificial Intelligence (XAI).
- Is it sufficient for establishing trust in AI?
- What needs to be added to XAI to establish trust in AI?
- Develop a qualitative study of AI cybersecurity users to understand the factors that lead to trust and distrust.
- Develop a taxonomy of uses of AI in cybersecurity (website analysis, phishing detection, malware detection).
- Identify gaps humans could potentially fill based on human cognitive (mental) models that relate to real world situation awareness.
Challenge Problem Topic #4 - Resilience, Architecture, and Autonomy
Research is needed on how to design and analyze resilient, autonomous system architectures that continue to deliver required service in the face of compromised components.
- Active Agents for Resiliency and Autonomy
- Design and create a Recommender System based on AI/ML or related technology to help defensive operators make better decisions through the use of data from sensors and other operational metrics.
- Design and create an Agent based on AI/ML or related technology to ensure correct operations that follow the “Commander’s Intent” (rules, strategies, decisions, processes, etc.).
- Design and create an Attribution (Friend or Foe) System based on AI/ML or related technology that identifies good vs. bad actors in a compromised environment.
- Develop appropriate metrics to drive design decisions and validate that the implementation meets the design specifications.
- Resilient Architectures
- Provide examples of resilient architectures for network offense and defense.
- Provide research on the consequences of automation.
- How do humans effectively manage and understand resilient architectures for scalability?
- Research and develop adaptive honeypots, based on AI/ML or related technology, that react to and learn from ongoing attacks.
- Trust Factor in Resilient and Autonomous Systems
- What is compelling evidence of trustworthiness?
- How do you give AI a “Voice” in strategy decisions?
- What are automation tradeoffs relative to objectives?
- How do you develop autonomous systems with imperfect/incomplete information?