Computational Cybersecurity in Compromised Environments (C3E)
September 12-14, 2023 | Florida Institute of Technology Center for Advanced Manufacturing and Innovative Design (CAMID)

The 2023 Computational Cybersecurity in Compromised Environments (C3E) Symposium will be held September 12-14, 2023, at the Florida Institute of Technology Center for Advanced Manufacturing and Innovative Design (CAMID), located at 2495 Palm Bay Road NE, Palm Bay, FL 32905.

The themes of the 2023 C3E meeting will be Cyberpsychology Aspects of Foreign Malign Influence and Generative AI and Large Language Models.

Cyberpsychology Aspects of Foreign Malign Influence

At C3E, we will consider the current state of practice in research into foreign malign influence. Current research often includes identifying, assessing, and countering information campaigns, and is often tactically focused. Questions such as "Are there gaps in this approach?" and "What are the long-term implications of this approach?" will be considered.

Generative AI and Large Language Models

Large language models are taking the world by storm, bringing with them a myriad of benefits and challenges. Given the open availability and rapid adoption of large language models, research and development surrounding them will likely continue to produce advances, driving a wave of innovation in AI and machine learning, as well as (unfortunately) inappropriate adoption before a thorough understanding of the technology is realized. This theme will survey the potential benefits and challenges of adopting large language models. Participants will also explore challenges confronting the technology itself, such as:

- Trustworthiness
- Validation related to meeting user intentions, and concerns related to automation bias, the ELIZA effect, and fluency
- Prompt engineering to elicit high-utility output from large language models
- Identifying and characterizing the bounds of coherent generation
- Human-computer interaction approaches: what is the appropriate interface to help a user make sense of LLM generations?
 

Attendance for the Symposium is by invitation only. For more information, please contact the organizers: c3e@cps-vo.org.

Sponsors     
Glenn Lilly (NSA)     
Brad Martin (DARPA)

Organization     
Katie Dey (Vanderbilt University)     
Dan Wolf (Cyber Pack Ventures, Inc.)

The workshop is sponsored by the Special Cyber Operations Research and Engineering (SCORE) Interagency Working Group.

2023 Program Agenda

 

TUESDAY, SEPTEMBER 12

The Florida Institute of Technology

0800 - 0830

Check-in | Networking  
 

 

MORNING SESSION

0830 - 0900

Welcome and Opening Remarks  
Sponsor: Glenn Lilly (NSA)  
Host: Marco Carvalho (FIT)  
Co-Chairs

 

0900 - 1000

Plenary Talk: The 3rd Wave of AI and LLMs  
John Launchbury (Galois)

1000 - 1030

Break

 

1030 - 1100

Introduction of Track Themes  
C3E Track Leads

 

LUNCH SESSION

1100 - 1215

Working Lunch and POSTER SESSION  
 

 

AFTERNOON SESSION

1215 - 1500

Track Session  
(includes afternoon break)

 

1500 - 1515

Transition to Plenary Room

 

1515 - 1600

Track Ideas Incubation Panel and Discussion

 

1600 - 1615

Day 1 Agenda Review  
Andy Caufield (Cyber Pack Ventures, Inc.)

1615 - 1630

Challenge Problem Introduction and Overview  
Don Goff and Dan Wolf (Cyber Pack Ventures, Inc.)

1630 - 1700

STUDENT POSTER SESSION


1700

ADJOURN for the day

WEDNESDAY, SEPTEMBER 13

The Florida Institute of Technology

0800 - 0830

Check-in | Networking

 

MORNING SESSION

0830 - 0835

Call to Order  
Co-Chairs

 

0835 - 0935

Plenary Talk: AI and the Human Psyche:  
Vulnerabilities and Opportunities in the Fight Against Malign Influence  
Victoria Romero

0935 - 0950

Transition to Track Rooms / Break

 

0950 - 1150

Track Session

 

LUNCH SESSION

1150 - 1300

Working Lunch  

Lunch Talk: Assessing the Growth and Risk of Generative AI  
John Bansemer (Georgetown)

 

AFTERNOON SESSION

1300 - 1315

Transition to Track Rooms

 

1315 - 1545

Track Session  
(includes afternoon break)

 

1545 - 1600

Transition to Plenary Room

 

1600 - 1645

Track Ideas Incubation Panel and Discussion

 

1645 - 1700

Day 2 Agenda Review  
Andy Caufield (Cyber Pack Ventures, Inc.)

1700

ADJOURN for the Day

 

1830

No-Host Optional Group Dinner at Skewers Mediterranean Grille


THURSDAY, SEPTEMBER 14

The Florida Institute of Technology

0800 - 0830

Check-in | Networking

 

MORNING SESSION

0830 - 0930

Plenary Talk: Aligning Cybersecurity to Reality  
Chris Inglis (Former U.S. National Cyber Director)

 

 

0930 - 0940

Transition to Track Rooms / Break

 

0940 - 1040

Track Session

 

1040 - 1045

Transition to Plenary Room

 

1045 - 1145

Plenary Talk: Semantic Forensics  
Wil Corvey (DARPA)  
 

 

1145 - 1215

Plenary Review of Track Efforts  
Track Leads

 

1215 - 1230

Summary and Closing Remarks  
Sponsor and Co-Chairs

1230

WORKSHOP ADJOURNS / LUNCH PICK-UP


 

C3E Challenge Problems

The challenge problems for each year are listed below, with links to more information about each problem.

2023: AI/ML and the Human Element and Resilience, Architecture, and Autonomy  
2022: Cybersecurity and Software in the Supply Chain   
2020/2021: Economics of Security and Continuous Assurance   
2019: Cognitive Security and Human-Machine Teaming   
2018: AI/ML Cyber Defense   
2016 & 2017: Modeling Consequences of Ransomware on Critical Infrastructures   
2015: Cyber Security   
2014: Metadata-based Malicious Cyber Discovery


 

2024 Challenge Problems

 

C3E brings together a diverse set of experts in creative collaboration to tackle tough intellectual cybersecurity challenges and point toward novel and practical solutions. The discussions held at the workshop help to inform the Challenge Problems (CP) for the coming year. At the C3E Workshop on 12-14 September 2023, the tracks looked at the themes of the meeting: (1) Cyberpsychology Aspects of Foreign Malign Influence and (2) Generative AI and Large Language Models. Because suggested research topics remained from the 2022 C3E Workshop, the following are added as options for this year as well: (3) AI/ML and the Human Element and (4) Resilience, Architecture, and Autonomy.


A follow-on program is available for researchers to address issues raised at this workshop and at previous events. For 2023-2024, the CPs focus on the themes identified above. We will engage 8-10 researchers on a part-time basis to identify and explore specific issues developed around these themes during the C3E event. Researchers will then present their findings at the 2024 C3E Workshop. We have an NSF grant to pay a small honorarium to these researchers for their efforts over the next 9-10 months. Researchers may also apply to conduct research on the 2022-2023 Challenge Problems as indicated below.

 

Overall Challenge. The overall challenge is to improve understanding of the issues associated with the Cyberpsychology Aspects of Foreign Malign Influence and with Generative AI and Large Language Models, or with the two research themes carried over from the 2022 C3E Workshop.

 

Objectives. The anticipated outcome will include a description of the critical security issues examined and the research process followed for the effort. That process may include details on how the research was conducted and possible issues or limitations associated with the chosen theme. The results might include new models or actual software code.

 

Deliverables. Researchers are required to prepare a ten-minute video presentation to be shown at C3E 2024, a poster that describes their research, and a technical article suitable for publication in a major academic venue.

Researchers might also provide models, actual software applications (APPs) for open-source systems, or narratives to address issues or solutions related to the theme. This is an opportunity to apply focused research on these themes.

The Challenge Problem for 2023-2024 offers four optional themes based on suggested research topics from the tracks at the recent C3E 2023 workshop and the previous one in 2022.

The following are descriptions of the four general themes. The appendix provides some specific research ideas developed for each theme by the workshop tracks.  Researchers are encouraged to address one of the suggested topics identified for a theme. 

Challenge Problem 1: Cyberpsychology Aspects of Foreign Malign Influence

Cyberpsychology is an interdisciplinary scientific domain that focuses on the psychological phenomena that emerge from human interaction with digital technology, particularly the Internet. This track theme surveyed the potential benefits and challenges of balancing national security and freedom of speech. Current research often includes identifying, assessing, and countering information campaigns, and is often tactically focused. For this challenge problem, consider ways to approach research in this area.
 

Challenge Problem 2: Generative AI and Large Language Models

A large language model (LLM) is a type of language model notable for its ability to achieve general-purpose language understanding and generation. LLMs present both a myriad of benefits and challenges. The open availability and rapid adoption of LLMs suggest that research and development surrounding them will drive a wave of innovation in AI and machine learning. This ready availability may, unfortunately, also stimulate inappropriate adoption before a thorough understanding of the technology is realized. This Challenge Problem will study the potential benefits and challenges of the adoption of LLMs and the challenges confronting the technology itself.
 

Challenge Problem 3: AI/ML and the Human Element

The challenge is the role of AI/ML in future machine-speed operations in critical systems across ubiquitous domains, including an exploration of the human element in understanding and shaping important outcomes. Research is needed to address how AI/ML and the human interact. How does the human understand what the AI/ML is doing, and how does a trust relationship develop between the two?

Challenge Problem 4: Resilience, Architecture, and Autonomy

Research is needed for resilience, architecture, and autonomy on how to design and analyze system architectures that deliver required service in the face of compromised components.

 

Proposal Process – Next Steps

 

Given the options for research activity included here for the CP, participants are encouraged to review the list of topics and propose one or more for follow-up research for the 2024 C3E Workshop.

If you are interested, please send a short description (1 to 5 pages) of your proposal including metrics to measure your research success to Dr. Don Goff, Co-PI with Dan Wolf for the CP, at the following address: C3E-CP-Proposal@cyberpackventures.com by November 30, 2023.

Please send any questions to the Co-PIs, Don at dgoff@cyberpackventures.com or Dan at dwolf@cyberpackventures.com.

 

APPENDIX

Challenge Problem Topic #1 - Cyberpsychology Aspects of Foreign Malign Influence

To identify, assess, and counter information campaigns, several research initiatives are needed, especially since current efforts are sometimes just tactically focused. For this Challenge Problem, consider ways to approach research to address the problem in a strategic manner.

  • What tools might be developed to actively scan social media to identify and assess false information? (A minimal sketch of such a tool's core loop follows this list.)
  • How can AI be used to counter information campaigns?
  • How can attribution be determined for the source of such campaigns?
  • What is the culture of technology in information campaigns?
  • How can theory-driven countermeasures be discovered and what countermeasures can be employed to counteract foreign malign influence?
  • Can a schema be built in advance of attacks to support audience resistance?
  • How can influence techniques be employed against adversaries’ AI?
  • How can cultural and social components that fall outside of the tech itself be addressed?
  • What are the ethics of defensive and offensive strategies?
  • Can you create a model that synthesizes literature from the last 5-10 years to reach a quantitatively defensible conclusion about foreign malign influence efforts?
  • What are methods/ontologies for small groups to build resilience against polarization from malign influence and how can governments, community, and industry work together to build resilience against malign influence?
  • How effective are memes in evoking prosocial sentiment?
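
To ground the first question above, here is a minimal, entirely illustrative Python sketch of the core loop such a scanning tool might have. The inline "training" posts, labels, and threshold are invented placeholders (not a real misinformation corpus), and the design keeps a human analyst as the decision point.

```python
# Hypothetical sketch: a toy scanner that flags posts for human review.
# The inline training data, labels, and threshold are invented placeholders;
# a real tool would need a vetted corpus and far more careful modeling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "miracle cure suppressed by the government",        # 1 = debunked claim
    "satellites secretly changed the election totals",  # 1
    "city council approves new bike lanes downtown",    # 0 = benign
    "local library extends weekend hours",              # 0
]
labels = [1, 1, 0, 0]

flagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
flagger.fit(posts, labels)

def triage(stream, threshold=0.5):
    """Yield (post, score) pairs that exceed the review threshold."""
    for post in stream:
        score = flagger.predict_proba([post])[0][1]
        if score >= threshold:
            yield post, score  # route to a human analyst; never auto-remove

for post, score in triage(["satellites changed the vote totals, share now!"]):
    print(f"review ({score:.2f}): {post}")
```

Even this toy makes the track's tension visible: the threshold trades analyst workload against missed campaigns, and nothing in the pipeline addresses attribution or intent.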

 

Challenge Problem Topic #2 - Generative AI and Large Language Models

Challenge Problem researchers may look at problems such as:

  • Trustworthiness
  • Validation related to meeting user intentions, and concerns related to automation bias, the ELIZA effect, and fluency
  • Prompt engineering in eliciting high-utility output from large language models (a minimal sketch follows this list)
  • Identifying and characterizing the bounds of coherent generation
  • If we know technology like AI and bots are going to be used for malign influence, is it ethical to use them to promote unity and belief in government to counter the malign influence?
  • Human-Computer Interaction (HCI) approaches that address what the appropriate interface is to help a user make sense of LLM generations
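
As a concrete anchor for the prompt engineering and automation-bias items above, here is a minimal, hedged Python sketch. The prompt template, the JSON schema, and the `call_llm` stub are all assumptions standing in for a real model API; the point is that fluent output is not trusted until it parses and meets the stated user intent.

```python
# Minimal sketch: prompt engineering plus output validation as a guard
# against automation bias. Fluent output is rejected unless it parses
# and satisfies the stated user intent. `call_llm` is a stub, not a real API.
import json

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs; swap in any real LLM client here.
    return '{"summary": "Two hosts beaconed to a known C2 domain.", "confidence": 0.62}'

PROMPT = (
    "You are assisting a cyber analyst. Summarize the alert below in one "
    "sentence and rate your confidence from 0 to 1. Reply ONLY with JSON "
    'of the form {{"summary": str, "confidence": float}}.\n\nAlert: {alert}'
)

def summarize(alert: str) -> dict:
    raw = call_llm(PROMPT.format(alert=alert))
    data = json.loads(raw)  # non-JSON fluency fails here, loudly
    if not isinstance(data.get("summary"), str):
        raise ValueError("missing summary: output failed intent validation")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        raise ValueError("bad confidence: output failed intent validation")
    if conf < 0.7:
        data["summary"] += " [verify manually before acting]"
    return data

print(summarize("netflow: 10.0.0.5 and 10.0.0.9 -> evil.example every 60s"))
```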

 

Challenge Problem Topic #3 - AI/ML and the Human Element

Research is needed to address how AI/ML and the human interact. How does the human understand what the AI/ML is doing, and how does a trust relationship develop between the two? Research in the following areas would improve the interaction between the AI/ML and the human.

  • Closing the Understanding Gap Between Simulations and Real-World Situations Using Realistic Human Models
  • Cyber Analyst Assistant AI
    • Create a Cyber Analyst Assistant AI.
    • Create a cyber analyst domain taxonomy of errors and analysis delays.
    • Support a hackathon that brings together a team of a cyber analyst, a psychologist, and an AI expert.
    • Develop AI-driven provenance methods to aid in understanding advanced persistent threats.
  • Trust in AI
    • Evaluate current research in Explainable Artificial Intelligence (XAI).
      • Is it sufficient for establishing trust in AI?
      • What needs to be added to XAI to establish trust in AI?
    • Develop a qualitative study of AI cybersecurity users to understand the factors that lead to trust and distrust.
      • Develop a taxonomy of uses of AI in cybersecurity (website analysis, phishing detection, malware detection); a minimal sketch follows this list.
      • Identify gaps humans could potentially fill, based on human cognitive (mental) models that relate to real-world situational awareness.
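
To make the taxonomy item concrete, here is a minimal Python sketch of the shape such an artifact could take. The categories, failure modes, and "human gap" notes are illustrative assumptions, not study results.

```python
# Minimal sketch of the suggested taxonomy of AI uses in cybersecurity,
# each entry paired with the gap a human could fill. All entries are
# illustrative assumptions meant only to show the shape of the artifact.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIUse:
    name: str
    failure_modes: List[str] = field(default_factory=list)
    human_gap: str = ""  # what human situation awareness adds

TAXONOMY = [
    AIUse("website analysis", ["stale blocklists", "look-alike domains"],
          "judge the business context of a domain"),
    AIUse("phishing detection", ["adversarial wording evades filters"],
          "recognize the social pretext behind a message"),
    AIUse("malware detection", ["packed or truly novel samples"],
          "correlate detections with the incident timeline"),
]

for use in TAXONOMY:
    print(f"{use.name}: fails on {', '.join(use.failure_modes)}; "
          f"human fills: {use.human_gap}")
```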

 

Challenge Problem Topic #4 - Resilience, Architecture, and Autonomy

Research is needed for resilience, architecture, and autonomy on how to design and analyze system architectures that deliver required service in the face of compromised components.

  • Active Agents for Resiliency and Autonomy
    • Design and create a Recommender System based on AI/ML or related technology to help defensive operators make better decisions through the use of data from sensors and other operational metrics (a minimal sketch follows this list).
    • Design and create an Agent based on AI/ML or related technology to ensure correct operations that follow the “Commander’s Intent” (rules, strategies, decisions, processes, etc.).
    • Design and create an Attribution (Friend or Foe) System based on AI/ML or related technology that identifies good vs. bad actors in a compromised environment.
    • Develop appropriate metrics to drive design decisions and validate that the implementation meets the design specifications.
  • Resilient Architectures
    • Provide examples of resilient architectures for network offense and defense.
    • Provide research on the consequences of automation.
    • How do humans effectively manage and understand resilient architectures for scalability?
    • Research and develop adaptable honeypots through AI/ML or related technology that react to and learn from ongoing attacks.
  • Trust Factor in Resilient and Autonomous Systems
    • What is compelling evidence of trustworthiness?
    • How do you give AI a “Voice” in strategy decisions?
    • What are automation tradeoffs relative to objectives?
    • How do you develop autonomous systems with imperfect/incomplete information?
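
As one concrete reading of the Recommender System item, the sketch below ranks candidate defensive actions by a weighted score over normalized sensor readings. The sensors, actions, and weights are invented for illustration, not an operational model.

```python
# Minimal sketch of a defensive-action recommender: rank actions by a
# weighted score over normalized sensor metrics. Sensors, actions, and
# weights are invented placeholders.

SENSORS = {"failed_logins": 0.9, "egress_volume": 0.4, "new_admin_accounts": 0.8}

# For each candidate action, how strongly each metric argues for it.
ACTIONS = {
    "lock affected accounts": {"failed_logins": 0.7, "new_admin_accounts": 0.3},
    "rate-limit egress":      {"egress_volume": 1.0},
    "isolate segment":        {"failed_logins": 0.3, "egress_volume": 0.3,
                               "new_admin_accounts": 0.4},
}

def recommend(sensors, actions, top_k=2):
    """Return the top_k actions by weighted evidence; the operator decides."""
    scored = sorted(
        ((sum(w * sensors.get(m, 0.0) for m, w in needs.items()), name)
         for name, needs in actions.items()),
        reverse=True,
    )
    return scored[:top_k]

for score, action in recommend(SENSORS, ACTIONS):
    print(f"{score:.2f}  {action}")
```

The design choice worth noting: the recommender surfaces evidence-weighted options but leaves the decision, and the accountability, with the human operator, which ties directly to the trust items above.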

Suggested Reading

 

Generative AI and Large Language Models

 

LLM Safety/Security

  1. OWASP

    https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v05.pdf

    The purpose of our group, as outlined in the OWASP Top 10 for LLM Applications Working Group Charter, is to identify and highlight the top security and safety issues that developers and security teams must consider when building applications leveraging Large Language Models (LLMs). Our objective is to provide clear, practical, and actionable guidance to enable these teams to proactively address potential vulnerabilities in LLM-based applications...  This document, Version 0.5, serves as a crucial milestone in our ongoing journey. It encapsulates the collective insights and understanding of our group, at this early stage, of the unique vulnerabilities inherent to applications leveraging LLMs. It's important to note that this is not the final version of the OWASP Top 10 for LLMs. Instead, consider it a 'preview' of what's to come.
     

  2. Gradient-Based Word Substitution for Obstinate Adversarial Examples Generation in Language Models
    (PREVIOUSLY: Investigating the Existence of “Secret Language” in Language Models)  

    https://arxiv.org/abs/2307.12507

    In this paper, we study the problem of generating obstinate (over-stability) adversarial examples by word substitution in NLP, where input text is meaningfully changed but the model's prediction does not, even though it should. Previous word substitution approaches have predominantly focused on manually designed antonym-based strategies for generating obstinate adversarial examples, which hinders its application as these strategies can only find a subset of obstinate adversarial examples and require human efforts. To address this issue, in this paper, we introduce a novel word substitution method named GradObstinate, a gradient-based approach that automatically generates obstinate adversarial examples without any constraints on the search space or the need for manual design principles. To empirically evaluate the efficacy of GradObstinate, we conduct comprehensive experiments on five representative models (Electra, ALBERT, Roberta, DistillBERT, and CLIP) finetuned on four NLP benchmarks (SST-2, MRPC, SNLI, and SQuAD) and a language-grounding benchmark (MSCOCO). Extensive experiments show that our proposed GradObstinate generates more powerful obstinate adversarial examples, exhibiting a higher attack success rate compared to antonym-based methods. Furthermore, to show the transferability of obstinate word substitutions found by GradObstinate, we replace the words in four representative NLP benchmarks with their obstinate substitutions. Notably, obstinate substitutions exhibit a high success rate when transferred to other models in black-box settings, including even GPT-3 and ChatGPT. 
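
The following is not the paper's GradObstinate implementation, only a rough first-order sketch of the general gradient-guided substitution idea. It assumes the `torch` and `transformers` packages and a fine-tuned Hugging Face checkpoint such as `textattack/bert-base-uncased-SST-2`; the substituted token position is an arbitrary placeholder.

```python
# Rough first-order sketch of gradient-guided word substitution (NOT the
# paper's GradObstinate code). Assumes torch/transformers and an available
# fine-tuned classifier checkpoint; the substituted position is arbitrary.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/bert-base-uncased-SST-2"  # assumed available checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

enc = tok("the movie was surprisingly good", return_tensors="pt")
emb_matrix = model.get_input_embeddings().weight  # [vocab, dim]
inputs_embeds = emb_matrix[enc["input_ids"]].detach().clone().requires_grad_(True)

out = model(inputs_embeds=inputs_embeds, attention_mask=enc["attention_mask"])
pred = out.logits.argmax(-1).item()
out.logits[0, pred].backward()  # gradient of the current-label confidence

pos = 4  # arbitrary content-word position ("surprisingly")
with torch.no_grad():
    grad = inputs_embeds.grad[0, pos]           # [dim]
    delta = emb_matrix - inputs_embeds[0, pos]  # [vocab, dim]
    # First-order estimate of how each replacement shifts the confidence;
    # for an obstinate example, pick text-changing tokens that keep it high.
    scores = delta @ grad
    best = scores.argmax().item()

print("candidate substitution:", tok.convert_ids_to_tokens(best))
```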

 

High-Capacity Model Architectures (Does Size Determine Capability?)

Emergent Abilities of Large Language Models

https://openreview.net/pdf?id=yzkSU5zdwD

Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence raises the question of whether additional scaling could potentially further expand the range of capabilities of language models.

TinyStories: How Small Can Language Models Be and Still Speak Coherent English?

https://arxiv.org/abs/2305.07759

Language models (LMs) are powerful tools for natural language processing, but they often struggle to produce coherent and fluent text when they are small. Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can rarely generate coherent and consistent English text beyond a few words even after extensive training. This raises the question of whether the emergence of the ability to produce coherent English text only occurs at larger scales (with hundreds of millions of parameters or more) and complex architectures (with many layers of global attention).

In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.

 

Methods for LLM Augmentation

Toolformer: Language Models Can Teach Themselves to Use Tools

https://arxiv.org/abs/2302.04761

Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
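
Toolformer's contribution is the self-supervised training; for orientation only, here is a minimal sketch of the execution side of inline tool use. The `[Tool(args)]` marker syntax and the toy calculator are assumptions, not the paper's format.

```python
# Minimal sketch of the execution side of inline tool use (not Toolformer's
# training). Assumes the model emits markers like [Calculator(3*115 + 40)];
# this harness finds each marker, runs the tool, and splices the result in.
import re

def calculator(expr: str) -> str:
    # Only bare arithmetic characters are allowed before eval; a real
    # system would use a proper expression parser instead.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
        return "[tool-error]"
    return str(eval(expr))

TOOLS = {"Calculator": calculator}
CALL = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def run_tools(model_output: str) -> str:
    """Replace each [Tool(args)] marker with the named tool's result."""
    def dispatch(match):
        tool, args = match.group(1), match.group(2)
        return TOOLS[tool](args) if tool in TOOLS else match.group(0)
    return CALL.sub(dispatch, model_output)

print(run_tools("The invoice total is [Calculator(3*115 + 40)] dollars."))
# -> The invoice total is 385 dollars.
```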

Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT  

https://arxiv.org/pdf/2304.11116.pdf

In this paper, we aim to develop a large language model (LLM) with the reasoning ability on complex graph data. Currently, LLMs have achieved very impressive performance on various natural language learning tasks, extensions of which have also been applied to study the vision tasks with data in multiple modalities. However, when it comes to the graph learning tasks, existing LLMs present very serious flaws due to their inherited weaknesses in performing precise mathematical calculation, multi-step logic reasoning, perception about the spatial and topological factors, and handling the temporal progression.

 

Cyberpsychology Aspects of Foreign Malign Influence

 

"Combating Foreign Disinformation on Social Media: Study Overview and Conclusions"

Abstract: How are state adversaries using disinformation on social media to advance their interests? What does the Joint Force—and the U.S. Air Force (USAF) in particular—need to be prepared to do in response? Drawing on a host of different primary and secondary sources and more than 150 original interviews from across the U.S. government, the joint force, industry, civil society, and subject-matter experts from nine countries around the world, researchers examined how China, Russia, and North Korea have used disinformation on social media and what the United States and its allies and partners are doing in response. The authors found that disinformation campaigns on social media may be more nuanced than they are commonly portrayed. Still, much of the response to disinformation remains ad hoc and uncoordinated. Disinformation campaigns on social media will likely increase over the coming decade, but it remains unclear who has the competitive edge in this race; disinformation techniques and countermeasures are evolving at the same time. This overview of a multi-volume series presents recommendations to better prepare for this new age of communications warfare.

https://www.rand.org/pubs/research_reports/RR4373z1.html
 

CSET (2021) - AI and the future of disinformation campaigns - Part 1: The RICHDATA framework

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/
 

CSET (2021) - AI and the future of disinformation campaigns - Part 2

This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2/

Past Workshops


2022

C3E 2022 was held virtually Monday, October 17 through Wednesday, October 19, and explored the theme of Cybersecurity and Software in the Supply Chain. The tracks looked at cybersecurity in compromised environments, focusing on the role of AI and ML in critical systems and the human element, and at impacts on their resilience through the lens of system risk and mitigation.

Fall Workshop: Agenda | Challenge Problems


2021

C3E 2021 was held virtually Wednesday, October 27 and Thursday, October 28, and explored the theme of Supply Chain Cyber Defense. Discussion centered on technology capabilities and potential adaptation of acquisition and business practices, enabled by multiple types of evidence, cyber risk assessment, evidence-backed assessments of systems of systems, and supply-chain issues for the supply-chain management and assessment tools themselves.

Fall Workshop: Agenda | Challenge Problem


2020

C3E 2020 explored the themes of continuous assurance and economics of security. In lieu of in-person Spring and Fall workshops, a virtual speaker discussion series was held from August to October 2020. The goal of the speaker series was to provide a continuing reflection on (new) approaches to system assurance, leveraging system migration (of security and features/capabilities) to high assurance achieved through realizing economies of scale and continuous development/integration. Additional details about the schedule and presentations can be found here: https://sos-vo.org/2020/discussion-series


2019

The C3E Fall 2019 workshop was held September 16-18, 2019 at SRI International in Menlo Park, CA. For 2019, C3E further examined a key element explored during the 2018 Workshop: the role of the human in cyber environments. The workshop focused on the topics of cognitive security and human-machine teaming in cyber.
Fall Workshop: Agenda | Challenge Problem 
Mid-Year Event: Agenda


2018

The C3E Fall 2018 workshop was held September 17-19, 2018 at the Georgia Tech Research Institute in Atlanta, Georgia. The research workshop focused on Machine Learning Vulnerability Analysis and Decision Support Vulnerabilities. A mid-year event was held on May 10 in Annapolis, Maryland.
Fall Workshop: Agenda | Challenge Problem 
Mid-Year Event: Agenda


2017

The C3E Fall 2017 workshop was held at the Georgia Tech Research Institute in Atlanta, Georgia from 23-25 October 2017. The research workshop brought together members of SCORE and distinguished participants from academia, industry, and government to shape "leap ahead" cyber research. C3E 2017 looked at the overarching theme of Anticipating Future Threats and Response in Cyberspace. A mid-year event was held on 11 May in Annapolis, Maryland.

Fall Workshop: Agenda | 2017 Challenge Problem 
Mid-Year Event: Agenda


2016

The C3E Fall 2016 workshop was held at the Georgia Tech Research Institute in Atlanta, Georgia from 17-19 October 2016. The research workshop brought together distinguished participants from academia, industry, and government to shape "leap ahead" cyber research. The fall workshop drew upon past C3E themes of predictive analytics, visualization, decision-making and others. The workshop focused on understanding cyber dependencies and improving analytic context for cyber resilience.

A mid-year event was held on May 11, 2016 at the National Academy of Sciences Keck Center in Washington, D.C. The goal of the meeting was to sharpen the focus of activities for the Fall workshop.

October Workshop: Invitation | Agenda | Venue 
May Mid-Year Event: Agenda | Challenge Problem | Worksheet | Venue


2015

The C3E Fall 2015 workshop was held at the Carnegie Mellon University/Software Engineering Institute from 26-28 October 2015. The research workshop brought together a diverse group of top experts from academia, industry, and government to examine new ways of approaching the cybersecurity challenges facing the Nation and how to enable real-time decision-making in light of the complex and adversarial nature of Cyberspace. The 2015 mid-year event was held 19 June at the Software Engineering Institute.

October Workshop: Invitation | Agenda | Challenge Problem 
June Mid-Year Event: Agenda


2014

The C3E Fall 2014 Workshop was held at the Georgia Tech Research Institute Conference Center on October 19-22, 2014. The fall meeting continued the exploration of work begun during the May 2014 mid-year event by focusing on two areas: security by default and data integrity.

October Workshop: Invitation | Agenda 
May Mid-Year Event: Invitation


2013

The C3E Winter 2014 workshop, originally scheduled for the fall of 2013, was held January 12-15, 2014 at West Point. The January workshop continued the exploration of work begun during the April 2013 mid-year event by focusing on two areas: navigating cyberspace and cyberspace consequences.

January 2014 Workshop (originally scheduled for Fall 2013): Invitation | Agenda  
April Mid-Year Event: Agenda | Summary


2012


2011


2010


2009