Domestic robots and agents are widely sold to the general public, raising ethical issues related to the data such machines harvest. While users show a general acceptance of these robots, concerns remain about information security and privacy. Current research indicates that users trade privacy and security for better use, but a robot's anthropomorphic and social abilities are also known to modulate its acceptance and use. To explore and deepen what the literature has already established on the subject, we examined how users perceived their robot (Replika, Roomba©, Amazon Echo©, Google Home©, or Cozmo©/Vector©) through an online questionnaire exploring acceptance, perceived privacy and security, anthropomorphism, disclosure, perceived intimacy, and loneliness. The results supported the literature regarding the potentially manipulative effects of a robot's anthropomorphism on acceptance, but also on information disclosure, perceived intimacy, security, and privacy.
Authored by E. Zehnder, J. Dinet, F. Charpillet
The ongoing COVID-19 pandemic has resulted in a global tragedy due to the virus's lethal spread. The population's vulnerability grows as a result of a lack of effective therapeutic agents and vaccines against the virus. The spread of the virus can be mitigated by minimizing close contact between people, making social distancing a critical containment tool for COVID-19 prevention. In this paper, social distancing violations committed by people in public places are detected. Since the CDC (Centers for Disease Control and Prevention) recommends that people maintain a minimum distance of 2-3 meters to prevent the spread of COVID-19, the proposed tool detects people who keep less than 2-3 meters of distance between themselves and records each instance as a violation. The goal of this work is therefore to develop a deep learning-based system combining object detection and tracking models for social distancing detection. For object detection, You Only Look Once, Version 3 (YOLOv3) is used in conjunction with the Deep SORT algorithm to balance speed and accuracy. The approach applies the YOLOv3 object recognition paradigm to recognize persons in video segments. An efficient computer vision-based approach centered on legitimate continuous tracking of individuals is presented to assess social distancing in public locations, creating a model that contributes to public safety and detects violations through camera footage.
Authored by S. Thylashri, D. Femi, Thamizh Devi
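The abstract above does not include code; as a rough illustration of the violation check it describes, here is a minimal Python sketch. The detector output format, the pixel-to-meter calibration, and the use of bounding-box bottom centers as ground positions are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch, assuming person bounding boxes (x1, y1, x2, y2) have already
# been produced by a detector such as YOLOv3, and that a flat-ground camera
# calibration of PIXELS_PER_METER is known (hypothetical value below).
PIXELS_PER_METER = 50.0   # hypothetical calibration for a fixed camera
MIN_DISTANCE_M = 2.0      # minimum separation cited in the abstract

def bottom_centers(boxes):
    """Approximate each person's ground position by the bottom-center of the box."""
    boxes = np.asarray(boxes, dtype=float)
    return np.stack([(boxes[:, 0] + boxes[:, 2]) / 2, boxes[:, 3]], axis=1)

def find_violations(boxes):
    """Return index pairs of people closer than MIN_DISTANCE_M."""
    pts = bottom_centers(boxes)
    violations = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dist_m = np.linalg.norm(pts[i] - pts[j]) / PIXELS_PER_METER
            if dist_m < MIN_DISTANCE_M:
                violations.append((i, j, round(dist_m, 2)))
    return violations

# Example: three detections; the first two stand about one meter apart.
print(find_violations([(100, 50, 140, 200), (150, 50, 190, 200), (600, 50, 640, 200)]))
```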
Wireless Sensor Networks (WSN) have supported many multi-agent system applications. Abundant sensor nodes, densely distributed around a base station (BS), collect data and transmit it to the BS node for analysis. The cluster has emerged as an efficient communication structure in resource-constrained environments. However, security remains a major concern due to the vulnerability of sensor nodes. In this paper, we propose a percolation-based secure routing protocol. We leverage a trust score composed of three indexes to select cluster heads (CH) for unevenly distributed clusters. By considering reliability, centrality, and stability, legitimate nodes with social trust and adequate energy are chosen to provide relay service. Moreover, we design a multi-path inter-cluster routing protocol that constructs CH chains for directed inter-cluster data transmission based on percolation, and a transit score measured for on-path CH nodes contributes to load balancing and security. Our simulation results show that our protocol guarantees security while improving the delivery ratio and packet delay.
Authored by Jie Jiang, Pengyu Long, Lijia Xie, Zhiming Zheng
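As a way to make the cluster-head selection concrete, here is an illustrative Python sketch. The abstract only says the trust score combines reliability, centrality, and stability and that chosen nodes need adequate energy; the weights, the energy floor, and the linear combination below are assumptions, not the paper's actual formula.

```python
# Illustrative sketch only: weights, threshold, and the linear scoring formula
# are assumptions; the paper's exact trust-score definition is not given here.
W_RELIABILITY, W_CENTRALITY, W_STABILITY = 0.4, 0.3, 0.3
ENERGY_FLOOR = 0.5  # hypothetical minimum residual energy (normalized)

def trust_score(node):
    return (W_RELIABILITY * node["reliability"]
            + W_CENTRALITY * node["centrality"]
            + W_STABILITY * node["stability"])

def select_cluster_heads(nodes, k):
    """Pick the k highest-trust nodes that also have adequate energy."""
    eligible = [n for n in nodes if n["energy"] >= ENERGY_FLOOR]
    return sorted(eligible, key=trust_score, reverse=True)[:k]

nodes = [
    {"id": 1, "reliability": 0.9, "centrality": 0.7, "stability": 0.8, "energy": 0.9},
    {"id": 2, "reliability": 0.6, "centrality": 0.9, "stability": 0.5, "energy": 0.4},
    {"id": 3, "reliability": 0.8, "centrality": 0.6, "stability": 0.9, "energy": 0.7},
]
print([n["id"] for n in select_cluster_heads(nodes, 2)])  # node 2 fails the energy floor
```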
This paper offers a comparative vector assessment of DDoS and disinformation attacks. The assessed dimensions are as follows: (1) the threat agent, (2) attack vector, (3) target, (4) impact, and (5) defense. The results revealed that disinformation attacks, anchored in astroturfing, resemble DDoS's zombie computers in their method of amplification. Although DDoS affects several layers of the OSI model, disinformation attacks exclusively affect the application layer. Furthermore, even though their payloads and objectives are different, their vector paths and network designs are very similar. In conclusion, this paper strongly recommends classifying disinformation as an actual cybersecurity threat to eliminate inconsistencies in the policies of social networking platforms. The intended target audiences of this paper are IT and cybersecurity experts, computer and information scientists, policymakers, legal and judicial scholars, and other professionals seeking references on this matter.
Authored by Kevin Caramancion
This paper examines audio-based social networking platforms and how their environments can affect the persistence of fake news and mis/disinformation in the wider information ecosystem. This is done by exploring their features and how they compare to those of general-purpose multimodal platforms. A case study on Spotify and its recent issues with free speech and misinformation is the application area of this paper. As a supplement, a demographic analysis of current podcast-streamer statistics is outlined to give an overview of the likely target audience of future deception attacks. In conclusion, this paper offers a recommendation to policymakers and experts on preparing for the future misuse of affordances in social environments that may unintentionally give agents of mis/disinformation the prowess to create and sow discord and deception.
Authored by Kevin Caramancion
The new transformer network architecture proposed in this work can be used to create an intelligent chatbot that learns the communication process and immediately models responses based on what has been said. The essence of the new mechanism is to divide the information flow into two branches containing the history of the dialogue at different levels of granularity. Such a mechanism makes it possible to build and develop the personality of a dialogue agent over the course of the dialogue, that is, to accurately imitate the natural behavior of a person. This gives the interlocutor (client) the feeling of talking to a real person. In addition, modifications to the structure of such a network make it possible to identify a likely attack using social engineering methods. The results obtained after training the created system showed the fundamental possibility of using a neural network of the new architecture to generate responses close to natural ones. Possible options for using such neural network dialogue agents in various fields, and in particular in information security systems, are considered. The new technology can be used in social engineering attack detection systems, which address a significant current problem. The novelty and promise of the proposed neural network architecture also lie in the possibility of creating dialogue systems with a high level of biological plausibility on its basis.
Authored by V. Ryndyuk, Y. Varakin, E. Pisarenko
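To make the two-branch idea more concrete, here is a toy PyTorch sketch of an encoder that reads the dialogue history at two granularities and fuses the results. Everything here (layer sizes, mean pooling, linear fusion, the names DualGranularityEncoder, fine, coarse) is an illustrative assumption; the paper's actual architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class DualGranularityEncoder(nn.Module):
    """Toy sketch of the two-branch idea: one branch reads the fine-grained
    token history, the other a coarse utterance-level summary; their pooled
    states are fused into one dialogue state. Sizes and pooling are assumptions."""
    def __init__(self, vocab_size=1000, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.fine = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.coarse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, token_ids, summary_ids):
        fine = self.fine(self.embed(token_ids)).mean(dim=1)        # pooled fine branch
        coarse = self.coarse(self.embed(summary_ids)).mean(dim=1)  # pooled coarse branch
        return self.fuse(torch.cat([fine, coarse], dim=-1))        # fused dialogue state

state = DualGranularityEncoder()(torch.randint(0, 1000, (2, 50)),
                                 torch.randint(0, 1000, (2, 8)))
print(state.shape)  # torch.Size([2, 64])
```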
The volume of SMS messages sent daily worldwide has continued to grow significantly over the past years. Hence, mobile phones are becoming increasingly vulnerable to SMS spam, exposing users to the risk of fraud and theft of personal data. Filtering messages to detect and eliminate SMS spam is now a critical functionality for which different types of machine learning approaches are still being explored. In this paper, we propose a system for detecting SMS spam using a semi-supervised novelty detection approach based on a one-class SVM classifier. The system is built as an anomaly detector that learns only from normal SMS messages, enabling detection models to be implemented in the absence of labelled SMS spam training examples. We evaluated our proposed system using a benchmark dataset consisting of 747 SMS spam and 4,827 non-spam messages. The results show that our proposed method outperformed traditional supervised machine learning approaches based on binary, frequency, or TF-IDF bag-of-words. The overall accuracy was 98%, with a 100% SMS spam detection rate and only around a 3% false positive rate.
Authored by Suleiman Yerima, Abul Bashar
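The core idea, training only on normal messages and flagging novelties as spam, can be sketched in a few lines of Python with scikit-learn. The TF-IDF features, kernel, and nu value below are illustrative assumptions, not the paper's configuration, and the toy data merely shows the flow.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

# Minimal sketch of the semi-supervised idea: fit only on normal (ham) SMS,
# then flag anything the model considers novel as spam.
ham = ["are we still meeting for lunch", "call me when you get home",
       "thanks for the ride yesterday", "see you at the gym tonight"]

vectorizer = TfidfVectorizer()
X_ham = vectorizer.fit_transform(ham)

# Hyperparameters are illustrative; the paper's settings may differ.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_ham)

tests = ["home in ten minutes", "WINNER!! claim your free prize now"]
preds = detector.predict(vectorizer.transform(tests))  # +1 = normal, -1 = novelty (spam)
print(dict(zip(tests, preds)))
```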
In this paper, we present a unified deep learning-based spam filtering method. The proposed method uses message byte-histograms as a unified representation for all message types (text, images, or any other format). A deep convolutional neural network (CNN) is used to extract high-level features from this representation, and a fully connected neural network performs the classification using the extracted CNN features. We validate our method using several open-source text-based and image-based spam datasets, obtaining an accuracy higher than 94% on all of them.
Authored by Yassine Belkhouche
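The byte-histogram representation itself is simple to sketch. Below is a minimal Python illustration of the idea: any message, regardless of format, becomes a 256-bin histogram fed to a small 1-D CNN with a fully connected head. The layer sizes and normalization choice are assumptions, not the paper's architecture.

```python
import numpy as np
import torch
import torch.nn as nn

# Sketch of the unified representation: any message (text bytes, image bytes, ...)
# becomes a 256-bin byte histogram, normalized by message length (an assumption).
def byte_histogram(message: bytes) -> np.ndarray:
    hist = np.bincount(np.frombuffer(message, dtype=np.uint8), minlength=256)
    return hist / max(len(message), 1)

# Illustrative small CNN + fully connected head; sizes are not the paper's.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(4),                      # 256 -> 64
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(4),                      # 64 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16, 2),                # fully connected head: ham vs. spam logits
)

x = torch.tensor(byte_histogram(b"Congratulations, you won! Click here..."),
                 dtype=torch.float32).view(1, 1, 256)
print(model(x).shape)  # torch.Size([1, 2])
```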
In today's digital world, mobile SMS (short message service) communication has become a part of almost every human life. Meanwhile, every mobile user suffers from the harassment of spam SMS, which constitutes a veritable nuisance to mobile subscribers. As hackers and spammers try to intrude on mobile computing devices, SMS support makes these devices more vulnerable, since an attacker can try to penetrate the system by sending unsolicited messages and may even gain remote access over mobile devices. We propose a novel approach that analyzes message content and extracts features using TF-IDF techniques to efficiently distinguish spam messages from ham messages using different machine learning classifiers. The classifiers used in the proposed work are evaluated with metrics such as accuracy, precision, and recall. In our proposed approach, the accuracy rate is further increased by using a voting classifier.
Authored by Ganesh Ubale, Siddharth Gaikwad
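A compact scikit-learn sketch of this flow appears below: TF-IDF features feed several classifiers whose hard vote decides spam vs. ham. The specific member classifiers are assumptions; the abstract only says different machine learning classifiers are combined in a voting scheme.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier

# Illustrative member classifiers; the paper's exact ensemble is not specified.
voter = VotingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC())],
    voting="hard",  # majority vote; LinearSVC lacks predict_proba, so soft voting is out
)
pipeline = make_pipeline(TfidfVectorizer(), voter)

texts = ["win a free iphone now", "ok see you at five",
         "claim your cash prize", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]
pipeline.fit(texts, labels)
print(pipeline.predict(["free prize waiting for you"]))
```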
In the present paper, the application of filtering methods for feature selection when detecting email spam with the K-NN classifier is examined. The experiments include computing the accuracy and F-measure of the classification of e-mail texts with different feature selection methods, different numbers of selected features, and two ways of measuring the distance between dataset examples when executing the K-NN classifier: Euclidean distance and cosine similarity. The obtained results are summarized and analyzed.
Authored by Tsvetanka Georgieva-Trifonova
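The experimental setup can be sketched directly in scikit-learn: a filter method selects k features before K-NN, and swapping the metric reproduces the Euclidean vs. cosine comparison. The chi-squared filter is one plausible choice of filtering method, used here as an assumption; the paper evaluates several.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

emails = ["limited offer buy now", "project meeting moved to monday",
          "cheap pills no prescription", "minutes from yesterday's meeting"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham (toy data)

for metric in ("euclidean", "cosine"):
    clf = make_pipeline(
        CountVectorizer(),
        SelectKBest(chi2, k=5),  # filter-based feature selection (chi2 is one option)
        KNeighborsClassifier(n_neighbors=1, metric=metric),
    )
    clf.fit(emails, labels)
    print(metric, clf.predict(["free pills special offer"]))
```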
Community question answering (CQA) websites have become very popular platforms, attracting numerous participants who share and acquire knowledge and information on the Internet. However, with the rapid growth of crowdsourcing systems, many malicious users organize collusive attacks against CQA platforms to promote a target (product or service) by posting suggestive questions and deceptive answers. These manipulative, deceptive contents, aggregated into collusive question-and-answer (Q&A) spam groups, can fully control the sentiment around a target and distort users' decisions, polluting the CQA environment and making it less credible. In this paper, we propose a Pattern and Burstiness based Collusive Q&A Spam Detection method (PBCSD) to identify deceptive questions and answers. Specifically, we intensively study the campaign process of crowdsourcing tasks and summarize the clues in the Q&As' vocabulary usage when collusive attacks are launched. Based on these clues, we extract Q&A groups using frequent pattern mining and further purify them by the burstiness of the Q&As' posting times. By designing several discriminative features at the Q&A group level, multiple machine learning based classifiers can be used to judge groups as deceptive or ordinary, and the Q&As in deceptive groups are finally identified as collusive Q&A spam. We evaluate the proposed PBCSD method on a real-world dataset collected from Baidu Zhidao, a well-known CQA platform in China, and the experimental results demonstrate that PBCSD is effective for collusive Q&A spam detection and outperforms a number of state-of-the-art methods.
Authored by Mingming Xu, Lu Zhang, Haiting Zhu
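The two clues PBCSD combines, shared vocabulary patterns and bursty posting times, can be illustrated with a toy Python sketch. Grouping by shared trigrams below is a drastic simplification standing in for frequent pattern mining, and the thresholds are invented for illustration.

```python
from collections import defaultdict

# Toy data: three posts pushing the same product within hours of each other,
# plus one ordinary post. Hours are offsets from an arbitrary start time.
posts = [
    {"id": 1, "text": "is brandX detox tea safe", "hour": 10},
    {"id": 2, "text": "where to buy brandX detox tea", "hour": 11},
    {"id": 3, "text": "brandX detox tea changed my life", "hour": 11},
    {"id": 4, "text": "how do I reset my router", "hour": 300},
]
BURST_WINDOW_HOURS = 24  # illustrative threshold
MIN_GROUP_SIZE = 3       # illustrative threshold

# Step 1: group Q&As by shared 3-word patterns (a very simplified stand-in
# for the paper's frequent pattern mining over vocabulary usage).
groups = defaultdict(list)
for p in posts:
    words = p["text"].split()
    for i in range(len(words) - 2):
        groups[" ".join(words[i:i + 3])].append(p)

# Step 2: keep only groups that are both large enough and temporally bursty.
for pattern, members in groups.items():
    hours = [m["hour"] for m in members]
    if len(members) >= MIN_GROUP_SIZE and max(hours) - min(hours) <= BURST_WINDOW_HOURS:
        print("suspicious collusive group:", pattern, [m["id"] for m in members])
```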
Nowadays, many online social networks (OSN) are very popular among Internet users, who use these platforms to find new connections and share their activities and thoughts. Twitter is one such highly popular social media platform. Surveys indicate it has more than 310 million monthly active users who post around 500+ million tweets per day, which attracts spammers and cyber-criminals to misuse the platform for their malicious benefit. Product advertisement, phishing legitimate users, pornography propagation, hijacking trending news, and sharing malicious links to lure victims for financial gain are common examples of spammer activities. In August 2014, Twitter disclosed that 8.5% of its monthly active users, approximately 23 million, had automatically contacted its servers for regular updates. Thus, for a spam-free environment on Twitter, it is essential to detect and filter these spammers from legitimate users. In this research paper, the effectiveness and features of Twitter spam detection are summarized, and various methods are presented with their benefits and limitations. [1]
Authored by Lipsa Das, Laxmi Ahuja, Adesh Pandey
All of us are familiar with the importance of social media in facilitating communication, and e-mail is one of the most widely used platforms for online communication and information transfer over the Internet. Many people today rely on email, including communications from strangers. Because anyone may send an email or a message, spammers have a great opportunity to compose spam messages targeting our hobbies, interests, and concerns. Spam severely slows down our Internet speeds and is used to collect personal information such as the phone numbers in our contact lists. Considerable work is involved in identifying these fraudsters and the spam content itself. Email spam refers to the practice of sending large numbers of unsolicited messages via email. The recipient bears the bulk of the cost of spam, so it is practically free advertising; spam email is a financially viable form of commercial advertising for hackers because of the low cost of sending email. Anti-spam filters have therefore become increasingly important as the volume of unwanted bulk e-mail (spamming) grows. With the proposed model, we can determine whether a message is spam or not. Machine learning algorithms are discussed in detail and tested on our datasets, with the goal of identifying the one that identifies email spam most accurately and precisely; the paper thus surveys machine learning techniques for detecting unsolicited mass email and spam.
Authored by V. Sasikala, K. Mounika, Sravya Tulasi, D. Gayathri, M. Anjani
Aim: To perform spam detection in social media using the Support Vector Machine (SVM) algorithm and compare its accuracy with the Artificial Neural Network (ANN) algorithm. The dataset has a sample size of 5,489 messages, including spam and ham; 80% of the messages are used for training and 20% for testing. Materials and Methods: Classification was performed by the ANN algorithm (N=10) for spam detection in social media, and the accuracy was compared with the SVM algorithm (N=10), with G power of 80% and an alpha value of 0.05. Results: The accuracy obtained was 98.2% for the ANN algorithm and 96.2% for the SVM algorithm, with a significance value of 0.749. Conclusion: The accuracy of detecting spam using the ANN algorithm appears to be slightly better than that of the SVM algorithm.
Authored by Grandhi Svadasu, M. Adimoolam
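The comparison protocol described above (one dataset, an 80/20 split, accuracy for SVM vs. a neural network) can be sketched in a few lines of scikit-learn. Synthetic features stand in for the 5,489-message spam/ham dataset, and an MLP stands in for the paper's ANN; both substitutions are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the spam/ham dataset; same sample size as the paper.
X, y = make_classification(n_samples=5489, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                        random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```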
Software quality assurance (SQA) is a means and practice of monitoring the software engineering processes and methods used in a project to ensure proper software quality. It encompasses the entire software development life-cycle, including requirements engineering, software design, coding, source code reviews, software configuration management, testing, release management, software deployment, and software integration. It is organized into goals, commitments, abilities, activities, measurements, and verification and validation. In this talk, we will mainly focus on the testing activity of the software development life-cycle, whose main objective is checking that software satisfies the set of quality properties identified by the "ISO/IEC 25010:2011 System and Software Quality Model" standard [1].
Authored by Wissam Mallouli
Evolving, new-age cybersecurity threats have put the information security industry on high alert. These modern cyberattacks include malware, phishing, and attacks leveraging artificial intelligence, machine learning, and cryptocurrency. Our research highlights the importance and role of software quality assurance in raising security standards that will not just protect the system but also handle cyber-attacks better. Following this series of cyber-attacks, we concluded through our research that implementing code review and penetration testing will protect the integrity, availability, and confidentiality of our data. We gathered the user requirements of an application and gained a proper understanding of both the functional and non-functional requirements. We implemented conventional software quality assurance techniques successfully but found that the application software was still vulnerable to potential issues. We therefore proposed two additional stages in the software quality assurance process to address this problem. After implementing this framework, we saw that the maximum number of potential threats were fixed before the first release of the software.
Authored by Ammar Haider, Wafa Bhatti
The increase of autonomy in autonomous surface vehicle development brings new and modified risks and potential hazards; this, in turn, introduces the need for processes and methods to ensure that systems are acceptable for their intended use with respect to dependability and safety concerns. One approach for evaluating software requirements against claims of safety is to employ an assurance case. Much like a legal case, the assurance case lays out an argument and supporting evidence to provide assurance on the software requirements. This paper analyses safety and security requirements relating to autonomous vessels, along with regulations in the automotive and marine industries, before proposing a generic cybersecurity and safety assurance case using the graphical approach of Goal Structuring Notation (GSN).
Authored by Luis-Pedro Cobos, Tianlei Miao, Kacper Sowka, Garikayi Madzudzo, Alastair Ruddle, Ehab Amam
The use of software to support the information infrastructure that governments, critical infrastructure providers, and businesses worldwide rely on for their daily operations and business processes is gradually becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. Notwithstanding, cyber-attacks are becoming stealthier and more sophisticated, which has led to a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. Users have consequently become increasingly concerned about the integrity, security, and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack, and protect the integrity of the products they sell. These efforts are often referred to as "software assurance." Software assurance is becoming very important for organizations critical to public safety and economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something achieved only when software is created using best practices for secure software development. In this paper, we therefore explore the need for information assurance and its importance for both organizations and end users, along with methodologies and best practices for software security and information assurance; we also conducted a survey to understand end users' opinions on the methodologies researched in this paper and their impact.
Authored by Muhammad Khan, Enow Ehabe, Akalanka Mailewa
Aviation is a highly sophisticated and complex System-of-Systems (SoS) with equally complex safety oversight. As novel products with autonomous functions and interactions between component systems are adopted, the number of interdependencies within and among the SoS grows, and these interactions may not always be obvious. Understanding how proposed products (component systems) fit into the context of a larger SoS is essential to promote the safe use of new as well as conventional technology. UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, was written specifically for completely autonomous road vehicles, but the goal-based, technology-neutral features of this standard make it adaptable to other industries and applications.

This paper, using the philosophy of UL 4600, gives guidance for creating an assurance case for products in an SoS context. An assurance argument is a cogent structured argument concluding that an autonomous aircraft system possesses all applicable through-life performance and safety properties. The assurance case process can be repeated at each level in the SoS: aircraft, aircraft system, unmodified components, and modified components. The Original Equipment Manufacturer (OEM) develops the assurance case for the whole aircraft envisioned in the type certification process. Assurance cases are continuously validated by collecting and analyzing Safety Performance Indicators (SPIs), which provide predictive safety information and thus offer an opportunity to improve safety by preventing incidents and accidents. Continuous validation is essential for risk-based approval of autonomously evolving (dynamic) systems, learning systems, and new technology. System variants, derivatives, and components are captured in subordinate assurance cases by their developers. These variants of the assurance case inherently reflect the evolution of the vehicle-level derivatives and options in the context of their specific target ecosystem, and they are nested under the argument put forward by the OEM of components and aircraft for certification credit.

It has become common practice in aviation to address design hazards through operational mitigations. It is also common for hazards noted in one aircraft component system to be mitigated within another component system. Where a component system depends on risk mitigation in another component of the SoS, organizational responsibilities must be stated explicitly in the assurance case. However, current practices do not formalize accounting for these dependencies by the parties responsible for design; consequently, subsequent modifications are made without the benefit of critical safety-related information from the OEMs. The resulting assurance cases, including third-party vehicle modifications, must be scrutinized as part of the holistic validation process. When changes are made to a product represented within the assurance case, their impact must be analyzed and reflected in an updated assurance case. An OEM can facilitate this by integrating affected assurance cases across its customers' supply chains to ensure their validity. The OEM is expected to exercise its sphere-of-control over its product even if it includes outsourced components, and any organization that modifies a product (with or without assurance argumentation information from other suppliers) is accountable for validating the conditions for any dependent mitigations.

For example, the OEM may manage the assurance argumentation by identifying requirements and supporting SPIs that must be applied in all component assurance cases. For their part, component assurance cases must accommodate all spheres-of-control that mitigate the risks they present in their respective contexts. The assurance case must express how interdependent mitigations will collectively assure the outcome. These considerations go well beyond interface requirements and include explicit hazard mitigation dependencies between SoS components. A properly integrated SoS assurance case reflects a set of interdependent systems that could nevertheless be independently developed. Even in this extremely interconnected environment, stakeholders must make accommodations for the independent evolution of products in a manner that protects proprietary information, domain knowledge, and safety data. The collective safety outcome for the SoS is based on the interdependence of mitigations by each constituent component and could not be accomplished by any single component. This dependency must be explicit in the assurance case and should include operational mitigations predicated on people and processes.

Assurance cases could be used to gain regulatory approval of conventional and new technology. They can also serve to demonstrate consistency with a desired level of safety, especially in SoSs for which existing standards may not be adequate. This paper also provides guidelines for preserving alignment between component assurance cases along a product supply chain and the respective SoSs that they support. It shows how assurance is a continuous process that spans product evolution through the monitoring of interdependent requirements and SPIs. The interdependency necessary for a successful assurance case encourages stakeholders to identify and formally accept critical interconnections between related organizations, and the resulting coordination promotes accountability for safety through increased awareness and the cultivation of a positive safety culture.
Authored by Uma Ferrell, Alfred Anderegg
For modern Automatic Test Equipment (ATE), one of the most daunting tasks is conducting Information Assurance (IA). In addition, there is a desire to network ATE to allow information sharing and software deployment. This is complicated by the fact that ATE are typically "unmanaged" systems: most are configured, deployed, and then mostly left alone. The result is systems that are not patched with the latest operating system updates and may in fact be running on legacy operating systems that are no longer supported (such as Windows XP or Windows 7). Much of this has to do with the cost of keeping a system continuously updated and of regression testing the Test Program Sets (TPS) that run on it. Given that an automated test system can have thousands of test programs running on it, the cost and time involved in complete regression testing of all the test programs can be extremely high. Beyond the test programs themselves, some rely on third-party and/or custom-developed software that is required for them to run. Add the requirement to perform software steering through all the test program paths, and the time required to validate a test program could in some cases be measured in months. If system updates are performed once a month, as with some operating system updates, this could consume all the available time of the test station or require a fleet of test stations dedicated solely to the required regression testing. On the other side of the coin, a test system running an old, unpatched operating system is a prime target for viruses and other IA issues. This paper discusses some of the pros and cons of a managed test system and how it might be accomplished.
Authored by William Headrick
State-of-the-art Artificial Intelligence Assurance (AIA) methods validate AI systems based on predefined goals and standards, are applied within a given domain, and are designed for a specific AI algorithm. Existing works do not provide information on assuring subjective AI goals such as fairness and trustworthiness, yet other assurance goals are frequently required in an intelligent deployment, including explainability, safety, and security. Accordingly, issues such as value loading, generalization, context, and scalability arise; however, achieving multiple assurance goals without major trade-offs is generally deemed an unattainable task. In this manuscript, we present two AIA pipelines that are model-agnostic, independent of the domain (such as healthcare, energy, or banking), and provide scores for AIA goals including explainability, safety, and security. The two pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), are scalable and tested with multiple use cases, such as a water distribution network and a telecommunications network, to illustrate their benefits. ALSP optimizes models using a game theory approach; it also logs and scores the actions of an AI model to detect adversarial inputs, and assures the datasets used for training. RFSP identifies the best hyper-parameters using a Bayesian approach and provides assurance scores for subjective goals such as ethical AI using user inputs and statistical assurance measures. Each pipeline has three algorithms that enforce the final assurance scores and other outcomes. Unlike ALSP (which is a parallel process), RFSP is user-driven and its actions are sequential. Data are collected for experimentation, and the results of both pipelines are presented and contrasted.
Authored by Md Sikder, Feras Batarseh, Pei Wang, Nitish Gorentala
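To give a flavor of the RFSP idea, here is a deliberately simplified Python sketch: search over hyper-parameters, then combine model performance with user-weighted subjective goals into a single assurance score. The paper uses a Bayesian search; plain random search stands in here, and the scoring formula, weights, and proxies below are invented assumptions, not the authors' method.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

random.seed(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hypothetical user-supplied weights over assurance goals.
USER_WEIGHTS = {"performance": 0.5, "explainability": 0.3, "fairness": 0.2}

best = None
for _ in range(10):  # random search standing in for the paper's Bayesian approach
    depth = random.choice([2, 4, 8, None])
    perf = cross_val_score(RandomForestClassifier(max_depth=depth, random_state=0),
                           X, y, cv=3).mean()
    # Invented proxy: shallower trees count as more explainable.
    explainability = 1.0 if depth == 2 else 0.6 if depth == 4 else 0.3
    fairness = 0.8  # placeholder: would come from user input / statistical tests
    score = (USER_WEIGHTS["performance"] * perf
             + USER_WEIGHTS["explainability"] * explainability
             + USER_WEIGHTS["fairness"] * fairness)
    if best is None or score > best[0]:
        best = (score, depth)

print("best assurance score %.3f at max_depth=%s" % best)
```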
The daily use of software has become inevitable nowadays; almost all everyday tools and the most varied areas (e.g., medicine or telecommunications) depend on it. The C programming language is one of the most used languages for software development, including operating systems, drivers, embedded systems, and industrial products, and even with the appearance of new languages it remains one of the most used [1]. At the same time, C lacks verification mechanisms, such as array bounds checking, leaving the entire responsibility for the correct management of memory and resources to the developer. These weaknesses are at the root of buffer overflow (BO) vulnerabilities, which rank first in CWE's Top 25 of the most dangerous weaknesses [2]. The exploitation of BO vulnerabilities in safety-critical systems, such as railways and autonomous cars, can have catastrophic effects for manufacturers or endanger human lives.
Authored by João Inácio, Ibéria Medeiros
The FAA proposes a Safety Continuum recognizing that public expectations for safety outcomes vary across aviation sectors with different missions, aircraft, and environments. The purpose is to align the rigor of oversight with public expectations. An aircraft and its variants or derivatives may be used in operations with different expectations, and differences in mission might bring immutable risks for some applications that reuse or revise the original aircraft type design. The continuum enables a more agile design approval process for innovations in the context of dynamic ecosystems, addressing the creation of variants for different sectors and needs. Since an aircraft type design can be reused in various operations under part 91 or 135 with different mission risks, the assurance case will have many branches reflecting the variants and derivatives.

This paper proposes a model for a holistic, performance-based, through-life safety assurance case that focuses applicant and oversight alike on achieving the safety outcomes. It describes the application of the goal-based, technology-neutral features of performance-based assurance cases, extending the philosophy of UL 4600, to the Safety Continuum, and specifically addresses component reuse, including third-party vehicle modifications and changes to the operational concept or ecosystem. The performance-based assurance argument offers a way to combine design approval more seamlessly with the oversight functions by focusing all aspects of the argument and practice together on managing the safety outcomes. The model provides the context to assure that mitigated risks are consistent with an operation's place on the safety continuum, while allowing the applicant to reuse parts of the assurance argument to innovate variants or derivatives. The focus on monitoring performance to constantly verify the safety argument complements compliance checking as a way to assure products are "fit for use". The paper explains how continued operational safety becomes a natural part of monitoring the assurance case for growing variety in a product line by accounting for ecosystem changes. Such a model could be used with the Safety Continuum to promote applicant and operator accountability in delivering the expected safety outcomes.
Authored by Alfred Anderegg, Uma Ferrell
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with the increasing temporal and spatial scale of the modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review recent progress in addressing these gaps.
Authored by Tarek Abdelzaher, Nathaniel Bastian, Susmit Jha, Lance Kaplan, Mani Srivastava, Venugopal Veeravalli
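As a toy illustration of the neuro-symbolic pattern described above (deep models supplying atomic concepts, symbolic rules reasoning over them), here is a minimal Python sketch. The concepts, rules, weights, and the min-based confidence combination are invented for illustration only; they are not the paper's formalism.

```python
# Toy sketch of neuro-symbolic fusion: a neural detector supplies atomic concepts
# with confidences; symbolic rules then derive higher-level conclusions.
neural_concepts = {"tracked_vehicle": 0.9, "on_paved_road": 0.1, "in_forest": 0.8}

# Each rule: (premise concepts, conclusion, rule weight). All premises must
# clear the threshold for the rule to fire. Rules here are purely illustrative.
rules = [
    (("tracked_vehicle", "in_forest"), "possible_concealed_armor", 0.7),
    (("tracked_vehicle", "on_paved_road"), "likely_civilian_equipment", 0.6),
]
THRESHOLD = 0.5

for premises, conclusion, weight in rules:
    if all(neural_concepts.get(p, 0.0) >= THRESHOLD for p in premises):
        # Confidence of the symbolic conclusion: rule weight times weakest premise.
        conf = weight * min(neural_concepts[p] for p in premises)
        print(conclusion, round(conf, 2))
```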
Recent concepts in defense herald an increasing degree of automation in future military systems, with an emphasis on accelerating sensing-to-decision loops at the tactical edge, reducing their network communication footprint, and improving the inference quality of intelligent components in the loop. These requirements pose resource management challenges, calling for operating-system-like constructs that optimize the use of limited computational resources at the tactical edge. This paper describes these challenges and presents IoBT-OS, an operating system for the Internet of Battlefield Things that aims to optimize decision latency, improve decision accuracy, and reduce the corresponding resource demands on computational and network components. A simple case study with initial evaluation results is presented for a target-tracking application scenario.
Authored by Dongxin Liu, Tarek Abdelzaher, Tianshi Wang, Yigong Hu, Jinyang Li, Shengzhong Liu, Matthew Caesar, Deepti Kalasapura, Joydeep Bhattacharyya, Nassy Srour, Jae Kim, Guijun Wang, Greg Kimberly, Shouchao Yao