Human Trust
Human behavior is complex, and that complexity creates a tremendous problem for cybersecurity. The works cited here address a range of human trust issues related to behaviors, deception, enticement, sentiment, and other factors that are difficult to isolate and quantify. All appeared in 2014.
Sousa, S.; Dias, P.; Lamas, D., "A Model for Human-Computer Trust: A Key Contribution For Leveraging Trustful Interactions," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1-6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876935 This article addresses trust in computer systems as a social phenomenon, one that depends on the type of relationship established through the computer or with other individuals. It begins by contextualizing trust theoretically, then situates trust in the field of computer science. It goes on to describe the proposed model, which builds on what one perceives to be trustworthy and is influenced by factors such as the user's history of participation and perceptions. It concludes by positioning the proposed model as a key contribution for leveraging trustful interactions and by proposing that it serve as a complement for fostering users' trust needs in human-computer interaction and computer-mediated interactions.
Keywords: computer mediated communication; human computer interaction; computer science; computer systems; computer-mediated interactions; human-computer interaction; human-computer trust model; participation history; social phenomenon; trustful interaction leveraging; user perceptions; user trust needs; Collaboration; Computational modeling; Computers; Context; Correlation; Educational institutions; Psychology; Collaboration; Engagement; Human-computer trust; Interaction design; Participation (ID#: 15-3619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876935&isnumber=6876860
Kounelis, I.; Baldini, G.; Neisse, R.; Steri, G.; Tallacchini, M.; Guimaraes Pereira, A., "Building Trust in the Human-Internet of Things Relationship," Technology and Society Magazine, IEEE, vol. 33, no. 4, pp. 73-80, Winter 2014. doi: 10.1109/MTS.2014.2364020 The concept of the Internet of Things (IoT) was initially proposed by Kevin Ashton in 1998 [1], where it was linked to RFID technology. More recently, the initial idea has been extended to support pervasive connectivity and the integration of the digital and physical worlds [2], encompassing virtual and physical objects, including people and places.
Keywords: Internet of things; Privacy; Security; Senior citizens; Smart homes; Trust management (ID#: 15-3620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969184&isnumber=6969174
Fei Hao; Geyong Min; Man Lin; Changqing Luo; Yang, L.T., "MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 11, pp. 2944-2955, Nov. 2014. doi: 10.1109/TPDS.2013.309 Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role in establishing social links between mobile users in MSNs. Given the nonsemantic representation of trust between users in existing trust models for social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another who may not be directly connected in the trust graph of an MSN. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context-aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, the fuzzy linguistic technique is used to express the trust between two mobile users and enhance human understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with high precision.
Keywords: fuzzy reasoning; fuzzy set theory; graph theory; mobile computing; security of data; social networking (online); trusted computing; MSN; MobiFuzzyTrust inference mechanism; distributed public virtual social spaces; fuzzy linguistic technique; fuzzy trust inference mechanism; mobile context aware trust model; mobile devices; mobile social networks; mobile users; nonsemantical trust representation; real-world mobile dataset; social links; trust graph; trust models; trust value evaluation; Computational modeling; Context; Context modeling; Mobile communication; Mobile handsets; Pragmatics; Social network services; Mobile social networks; fuzzy inference; linguistic terms; mobile context; trust (ID#: 15-3621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6684155&isnumber=6919360
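The fuzzy linguistic step described above, expressing a numeric trust value as a human-readable term, can be sketched with triangular membership functions. The term names and breakpoints below are illustrative assumptions, not the paper's actual model:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over a trust value in [0, 1]
# (breakpoints invented for illustration).
TERMS = {
    "distrust": (-0.5, 0.0, 0.4),
    "neutral":  (0.2, 0.5, 0.8),
    "trust":    (0.6, 1.0, 1.5),
}

def linguistic_trust(value):
    """Map a numeric trust value to its dominant linguistic term."""
    return max(TERMS, key=lambda term: tri(value, *TERMS[term]))
```

For example, a user with an inferred trust value of 0.9 would be described as "trust", one at 0.1 as "distrust", which is the kind of human-understandable output the paper argues for.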
Dondio, P.; Longo, L., "Computing Trust as a Form of Presumptive Reasoning," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol. 2, pp. 274-281, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.108 This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on a few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence for the efficacy of trust schemes is provided through a detailed experiment on an online community of 80,000 members. Results show that the proposed trust schemes are more effective in trust computation when they are combined and when their plausibility in the selected context is considered.
Keywords: trusted computing; Walton argumentation schemes; presumptive reasoning; trust computing; trust expertise; trust model; trust schemes; Cognition; Communities; Computational modeling; Context; Fuzzy logic; Measurement; Standards; fuzzy logics; online communities; trust (ID#: 15-3622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927635&isnumber=6927590
Frauenstein, E.D.; von Solms, R., "Combatting Phishing: A Holistic Human Approach," Information Security for South Africa (ISSA), 2014, pp. 1-10, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950508 Phishing remains a lucrative market for cyber criminals, mostly because of the vulnerable human element. Through emails and spoofed websites, phishers exploit almost any opportunity, using major events, considerable financial awards, fake warnings, and the trusted reputation of established organizations as a basis to gain their victims' trust. For many years, humans have often been referred to as the 'weakest link' in protecting information. To gain their victims' trust, phishers continue to use sophisticated-looking emails and spoofed websites to trick them, relying on victims' lack of knowledge and lax security behavior, and on organizations' inadequate security measures for protecting themselves and their clients. As such, phishing security controls and vulnerabilities can arguably be classified into three main elements, namely human factors (H), organizational aspects (O), and technological controls (T). All three of these elements have the common feature of human involvement, and as such, security gaps are inevitable. Each element also functions as both a security control and a security vulnerability. A holistic framework for combatting phishing is required, whereby the human feature in all three of these elements is enhanced by means of a security education, training and awareness programme. This paper discusses the educational factors required to form part of a holistic framework, addressing the HOT elements as well as the relationships between these elements in combatting phishing. The development of this framework uses the principles of design science to ensure that it is developed with rigor. Furthermore, this paper reports on the verification of the framework.
Keywords: computer crime; computer science education; human factors; organisational aspects; unsolicited e-mail; HOT elements; emails; awareness programme; cyber criminals; design science principles; educational factors; fake warnings; financial awards; holistic human approach; human factors; lax security behavior; organizational aspects; phishing security controls; security education; security gaps; security training; security vulnerability; spoofed Web sites; technological controls; trusted reputation; ISO; Lead; Security; Training; COBIT; agency theory; human factors; organizational aspects; phishing; security education training and awareness; social engineering; technological controls; technology acceptance model (ID#: 15-3623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950508&isnumber=6950479
Ing-Ray Chen; Jia Guo, "Dynamic Hierarchical Trust Management of Mobile Groups and Its Application to Misbehaving Node Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 49-56, 13-16 May 2014. doi: 10.1109/AINA.2014.13 In military operations or emergency response situations, a commander will very frequently need to assemble and dynamically manage Community of Interest (COI) mobile groups to achieve a critical mission, despite the failure, disconnection, or compromise of COI members. We combine the designs of COI hierarchical management for scalability and reconfigurability with COI dynamic trust management for survivability and intrusion tolerance to compose a scalable, reconfigurable, and survivable COI management protocol for managing COI mission-oriented mobile groups in heterogeneous mobile environments. A COI mobile group in this environment consists of heterogeneous mobile entities, such as communication-device-carried personnel/robots and aerial or ground vehicles operated by humans, exhibiting not only quality of service (QoS) characteristics, e.g., competence and cooperativeness, but also social behaviors, e.g., connectivity, intimacy, and honesty. A COI commander or a subtask leader must measure trust with both social and QoS cognition, depending on mission task characteristics and/or trustee properties, to ensure successful mission execution. In this paper, we present a dynamic hierarchical trust management protocol that can learn from past experience and adapt to changing environment conditions, e.g., an increasing misbehaving node population, evolving hostility, and node density, to enhance agility and maximize application performance. With trust-based misbehaving node detection as an application, we demonstrate how our proposed COI trust management protocol is resilient to node failure, disconnection, and capture events, and can help maximize application performance by minimizing false negatives and positives in the presence of mobile nodes exhibiting vastly distinct QoS and social behaviors.
Keywords: emergency services; military communication; mobile computing; protocols; quality of service; telecommunication security; trusted computing; COI dynamic hierarchical trust management protocol; COI mission-oriented mobile group management; aerial vehicles; agility enhancement; application performance maximization; communication-device-carried personnel; community-of-interest mobile groups; competence; connectivity; cooperativeness; emergency response situations; ground vehicles; heterogeneous mobile entities; heterogeneous mobile environments; honesty; intimacy; intrusion tolerance; military operation; misbehaving node population; node density; quality-of-service characteristics; robots; social behaviors; survivable COI management protocol; trust measurement; trust-based misbehaving node detection; Equations; Mathematical model; Mobile communication; Mobile computing; Peer-to-peer computing; Protocols; Quality of service; Trust management; adaptability; community of interest; intrusion detection; performance analysis; scalability (ID#: 15-3624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838647&isnumber=6838626
Athanasiou, G.; Fengou, M.-A.; Beis, A.; Lymberopoulos, D., "A Novel Trust Evaluation Method For Ubiquitous Healthcare Based On Cloud Computational Theory," Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pp. 4503-4506, 26-30 Aug. 2014. doi: 10.1109/EMBC.2014.6944624 The notion of trust is considered the cornerstone of the patient-psychiatrist relationship. Thus, a trustful background is a fundamental requirement for the provision of effective Ubiquitous Healthcare (UH) services. In this paper, the issue of trust evaluation of UH providers when they register in a UH environment is addressed. For that purpose, a novel trust evaluation method is proposed, based on cloud theory and exploiting user profile attributes. This theory mimics human thinking with regard to trust evaluation and captures the fuzziness and randomness of this uncertain reasoning. Two case studies are investigated through simulation in MATLAB in order to verify the effectiveness of this novel method.
Keywords: cloud computing; health care; trusted computing; ubiquitous computing; uncertainty handling; MATLAB software; UH environment; cloud computational theory; cloud theory; trust evaluation method; ubiquitous healthcare; uncertain reasoning; user profile attributes; Conferences; Generators; MATLAB; MIMICs; Medical services; Pragmatics; TV (ID#: 15-3625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6944624&isnumber=6943513
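The "cloud theory" this paper builds on is commonly realized with a forward normal cloud generator, which turns three numeric characteristics (expectation Ex, entropy En, hyper-entropy He) into (value, membership) drops that capture both fuzziness and randomness. The sketch below shows that generator in its standard textbook form; applying it to UH provider profile attributes, as the paper does, is not reproduced here:

```python
import math
import random

def cloud_drops(ex, en, he, n, seed=0):
    """Forward normal cloud generator: produce n (value, membership)
    drops from expectation ex, entropy en, and hyper-entropy he."""
    rng = random.Random(seed)  # seeded for reproducibility
    drops = []
    for _ in range(n):
        # Randomize the entropy itself (this is where hyper-entropy enters).
        en_prime = rng.gauss(en, he)
        if en_prime == 0:
            en_prime = 1e-9  # avoid division by zero in a degenerate draw
        # Draw a value around ex with the randomized spread.
        x = rng.gauss(ex, abs(en_prime))
        # Membership decays with distance from ex, Gaussian-style.
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
        drops.append((x, mu))
    return drops
```

Each drop's membership lies in (0, 1], with values near Ex getting membership near 1; the scatter of memberships across drops is what encodes the randomness of the uncertain reasoning.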
Howser, G.; McMillin, B., "A Modal Model of Stuxnet Attacks on Cyber-physical Systems: A Matter of Trust," Software Security and Reliability, 2014 Eighth International Conference on, pp. 225-234, June 30-July 2, 2014. doi: 10.1109/SERE.2014.36 Multiple Security Domains Nondeducibility (MSDND) yields results even when the attack hides important information from electronic monitors and human operators. Because MSDND is based upon modal frames, it is able to analyze the event system as it progresses rather than relying on traces of the system. Not only does it provide results as the system evolves, MSDND can also point out attacks designed to be missed by other security models. This work examines information flow disruption attacks such as Stuxnet and formally explains the role that implicit trust in the cyber security of a cyber-physical system (CPS) plays in the success of the attack. The fact that the attack hides behind MSDND can be used to help secure the system: modifications that break MSDND leave the attack nowhere to hide. Modal operators are defined to allow the manipulation of belief and trust states within the model. We show how the attack hides and uses the operator's trust to remain undetected. In fact, trust in the CPS is key to the success of the attack.
Keywords: security of data; trusted computing; CPS; MSDND; Stuxnet attacks; belief manipulation; cyber physical system; cyber security; cyber-physical systems; electronic monitors; event system analysis; human operators; implicit trust; information flow disruption attacks; modal frames; modal model; multiple security domains nondeducibility; security models; trust state manipulation; Analytical models; Bismuth; Cognition; Cost accounting; Monitoring; Security; Software; Stuxnet; cyber-physical systems; doxastic logic; information flow security; nondeducibility; security models (ID#: 15-3626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895433&isnumber=6895396
Godwin, J.L.; Matthews, P., "Rapid Labelling Of SCADA Data To Extract Transparent Rules Using RIPPER," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp. 1-7, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798456 This paper addresses a robust methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics-of-failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner was utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% for the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96% reported. In total, 12 rules were extracted; an independent domain expert provided critical analysis and deemed two-thirds of the rules intuitive in modelling the fundamental degradation behaviour of the wind turbine gearbox.
Keywords: SCADA systems; condition monitoring; failure analysis; gears; knowledge based systems; maintenance engineering; mechanical engineering computing; wind turbines; Mahalanobis distance; RIPPER rule learner; SCADA data rapid labelling; catastrophic failure; failure model; mean kappa statistic; robust prognostic condition index; supervisory control and data acquisition; wind turbine gearbox degradation; Accuracy; Gears; Indexes; Inspection; Maintenance engineering; Robustness; Wind turbines; Condition index; Data mining; prognosis; rule extraction; wind turbine SCADA data (ID#: 15-3627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798456&isnumber=6798433
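The labelling step, flagging bivariate SCADA observations that sit far from the healthy population in Mahalanobis distance, can be illustrated as follows. This sketch uses the classical (non-robust) sample mean and covariance and an assumed threshold of 3, whereas the paper employs a robust derivation of the distance:

```python
import math

def mahalanobis_labels(points, threshold=3.0):
    """Label bivariate points as anomalous (1) or healthy (0) by their
    Mahalanobis distance from the sample mean. Assumes a non-degenerate
    (invertible) 2x2 sample covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Sample covariance matrix entries.
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Closed-form inverse of the 2x2 covariance matrix.
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    labels = []
    for x, y in points:
        dx, dy = x - mx, y - my
        d2 = ixx * dx * dx + 2 * ixy * dx * dy + iyy * dy * dy
        labels.append(1 if math.sqrt(d2) > threshold else 0)
    return labels
```

Labels produced this way can then be fed to a rule learner such as RIPPER to extract human-readable conditions separating healthy from degraded operation.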
Yinping Yang; Falcao, H.; Delicado, N.; Ortony, A., "Reducing Mistrust in Agent-Human Negotiations," Intelligent Systems, IEEE, vol. 29, no. 2, pp. 36-43, Mar.-Apr. 2014. doi: 10.1109/MIS.2013.106 Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.
Keywords: multi-agent systems; software agents; trusted computing; agent-human negotiations; computational agent; face-to-face negotiation; multi-issue negotiation; online interaction; online negotiation setting; socially intelligent agent; software agent; trusting individual; Context; Economics; Educational institutions; Instruments; Intelligent systems; Joints; Software agents; agent-human negotiation; intelligent systems; online negotiation; socially intelligent agents (ID#: 15-3628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6636309&isnumber=6832878
Goldman, A.D.; Uluagac, A.S.; Copeland, J.A., "Cryptographically-Curated File System (CCFS): Secure, Inter-Operable, And Easily Implementable Information-Centric Networking," Local Computer Networks (LCN), 2014 IEEE 39th Conference on, pp. 142-149, 8-11 Sept. 2014. doi: 10.1109/LCN.2014.6925766 The Cryptographically-Curated File System (CCFS) proposed in this work supports the adoption of Information-Centric Networking. CCFS utilizes content names that span trust boundaries, verify integrity, tolerate disruption, authenticate content, and provide non-repudiation. Irrespective of the ability to reach an authoritative host, CCFS provides secure access by binding a chain of trust into the content name itself. Curators cryptographically bind content to a name, which is a path through a series of objects that map human-meaningful names to cryptographically strong content identifiers. CCFS serves as a network layer for storage systems, unifying currently disparate storage technologies. The power of CCFS derives from file hashes and public keys used both as names with which to retrieve content and as a method of verifying that content. We present results from our prototype implementation. Our results show that the overhead associated with CCFS is not negligible, but neither is it prohibitive.
Keywords: information networks; public key cryptography; storage management; CCFS; content authentication; cryptographically strong content identifiers; cryptographically-curated file system; file hashes; information-centric networking; integrity verification; network layer; public keys; storage systems; storage technologies; trust boundaries; File systems; Google; IP networks; Prototypes; Public key; Servers; Content Centric Networking (CCN); Cryptographically Curated File System (CCFS); Delay Tolerant Networking (DTN); Information Centric Networks (ICN); Inter-operable Heterogeneous Storage; Name Orientated Networking (NON); Self Certifying File Systems (ID#: 15-3629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6925766&isnumber=6925725
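The retrieve-and-verify idea at the heart of such systems, a name that is itself a cryptographic digest of the content, can be shown in a few lines. This toy store captures only the self-certifying property; CCFS's curated chains of human-meaningful names and public-key bindings are omitted:

```python
import hashlib

# Toy content-addressed store: a file's name is the SHA-256 hash of its
# bytes, so any retrieved content can be verified against its own name,
# regardless of which host served it.
store = {}

def put(content: bytes) -> str:
    """Store content and return its self-certifying name."""
    name = hashlib.sha256(content).hexdigest()
    store[name] = content
    return name

def get(name: str) -> bytes:
    """Retrieve content and verify it actually matches the requested name."""
    content = store[name]
    if hashlib.sha256(content).hexdigest() != name:
        raise ValueError("content failed integrity check")
    return content
```

Because verification needs only the name and the bytes, an untrusted mirror or cache can serve the content without weakening the trust chain.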
Ormrod, David, "The Coordination of Cyber and Kinetic Deception for Operational Effect: Attacking the C4ISR Interface," Military Communications Conference (MILCOM), 2014 IEEE, pp. 117-122, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.26 Modern military forces are enabled by networked command and control systems, which provide an important interface between the cyber environment, electronic sensors, and decision makers. However, these systems are vulnerable to cyber attack. A successful cyber attack could compromise data within the system, leading to incorrect information being used for decisions, with potentially catastrophic results on the battlefield. Degrading the utility of a system, or the trust a decision maker has in their virtual display, may not be the most effective means of employing offensive cyber effects. The coordination of cyber and kinetic effects is proposed as the optimal strategy for neutralizing an adversary's C4ISR advantage. However, such an approach carries an opportunity cost and is resource intensive. An adversary's cyber dependence can be leveraged to gain tactical and operational advantage in combat, if a military force is sufficiently trained and prepared to attack the entire information network. This paper proposes a research approach intended to broaden the understanding of the relationship between command and control systems and the human decision maker, as an interface for both cyber and kinetic deception activity.
Keywords: Aircraft Command and control systems; Decision making; Force; Kinetic theory; Sensors; Synchronization; Command and control; combat; cyber attack; deception; risk management; trust (ID#: 15-3630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956747&isnumber=6956719
Srivastava, M., "In Sensors We Trust -- A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, p. 1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often at our own initiative, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we, and the services we use, increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society increasingly relies on complex human-cyber-physical systems with sensory information feeding real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers. For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who deceptively extract private information, manipulate beliefs, and subvert decisions. Completely solving these challenges would require a new science of resilient, secure, and trustworthy networked sensing and decision systems, combining the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory; this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-3631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129
Sen, Shayak; Guha, Saikat; Datta, Anupam; Rajamani, Sriram K.; Tsai, Janice; Wing, Jeannette M., "Bootstrapping Privacy Compliance in Big Data Systems," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 327-342, 18-21 May 2014. doi: 10.1109/SP.2014.28 With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for MapReduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
Keywords: Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis (ID#: 15-3632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956573&isnumber=6956545
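Reducing compliance to information flow checking can be pictured as testing inferred (datatype, purpose) flows against a policy's deny clauses. The vocabulary and flow records below are invented for illustration; Legalease's actual clause syntax and Grok's schema mapping are far richer:

```python
# Deny clauses in the spirit of a Legalease policy
# (datatype/purpose names are invented examples).
DENY = {
    ("IPAddress", "Advertising"),
    ("Location", "Advertising"),
}

def find_violations(flows):
    """Each flow is a (datatype, purpose) pair that a Grok-like data
    inventory would infer for a program; compliance checking reduces
    to comparing the inferred flows against the policy's deny clauses."""
    return [flow for flow in flows if flow in DENY]
```

For example, find_violations([("IPAddress", "Abuse"), ("Location", "Advertising")]) would flag only the second flow, since using IP addresses for abuse detection is not denied by this toy policy.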
Shila, D.M.; Venugopal, V., "Design, implementation and security analysis of Hardware Trojan Threats in FPGA," Communications (ICC), 2014 IEEE International Conference on, pp. 719-724, 10-14 June 2014. doi: 10.1109/ICC.2014.6883404 Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with the intention of attacking and crippling the IC, much as viruses infect the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters, including power consumption, timing variation, and utilization, for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates both a system model from a designer perspective, for increasing the security of the device, and an adversary model from an attacker perspective, exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans perform a variety of threats, ranging from sensitive information leaks and denial of service to beating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation, or utilization alone do not necessarily capture the existence of HTTs: only a maximum of 57% of the designed HTTs were detected. In contrast, 86% of the implemented Trojans were detected with HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of the false-alarm and missed-detection probabilities.
Keywords: field programmable gate arrays; integrated logic circuits; invasive software; FPGA testbed; HDM; HTT detectability metric; HTT detection; ICs; RoT; Trojan taxonomy; denial of service; hardware Trojan detection technique; hardware Trojan threats; integrated circuits; missed detection probability; normalized physical parameters; optimal detection threshold; power consumption; root of trust; security analysis; sensitive information leak; summation of false alarm; timing variation; Encryption; Field programmable gate arrays; Hardware; Power demand; Timing; Trojan horses; Design; Hardware Trojans; Resiliency; Security (ID#: 15-3633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883404&isnumber=6883277
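The HDM idea, comparing a weighted combination of normalized physical parameters against a detection threshold, can be sketched as follows. The parameter names, the normalization against a golden (Trojan-free) reference, and the weights are assumptions for illustration; the paper's exact metric and its threshold estimation are not reproduced:

```python
def hdm(measurements, baseline, weights):
    """Hypothetical HDM-style score: a weighted sum of physical parameters
    (e.g. power, timing, utilization) normalized as relative deviations
    from a golden, Trojan-free baseline."""
    return sum(
        w * abs(measurements[name] - baseline[name]) / baseline[name]
        for name, w in weights.items()
    )

def is_malicious(measurements, baseline, weights, threshold):
    """Tag the IC as malicious when the monitored score exceeds the threshold."""
    return hdm(measurements, baseline, weights) > threshold
```

With an assumed baseline of {"power": 100.0, "timing": 10.0, "utilization": 0.5} and weights {"power": 0.5, "timing": 0.3, "utilization": 0.2}, a Trojan that raises power by 15%, timing by 8%, and utilization by 10% yields a score of 0.119, well above a clean chip's score of 0.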
Dickerson, J.P.; Kagan, V.; Subrahmanian, V.S., "Using Sentiment To Detect Bots On Twitter: Are Humans More Opinionated Than Bots?," Advances in Social Networks Analysis and Mining (ASONAM), 2014 IEEE/ACM International Conference on, pp.620,627, 17-20 Aug. 2014
doi: 10.1109/ASONAM.2014.6921650 In many Twitter applications, developers collect only a limited sample of tweets and a local portion of the Twitter network. Given such Twitter applications with limited data, how can we classify Twitter users as either bots or humans? We develop a collection of network-, linguistic-, and application-oriented variables that could be used as possible features, and identify specific features that distinguish well between humans and bots. In particular, by analyzing a large dataset relating to the 2014 Indian election, we show that a number of sentiment-related factors are key to the identification of bots, significantly increasing the Area under the ROC Curve (AUROC). The same method may be used for other applications as well.
Keywords: social networking (online); trusted computing; AUROC; Indian election; Twitter applications; Twitter network; area under the ROC curve; bot detection; sentiment-related factors; Conferences; Nominations and elections; Principal component analysis; Semantics; Syntactics; Twitter (ID#: 15-3635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921650&isnumber=6921526
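The intuition behind the sentiment-related features in the entry above — humans voice more varied opinions than bots — can be illustrated with a dependency-free sketch. The feature (per-account sentiment variance), the toy data, and the single-feature setup are assumptions for illustration; the paper combines many network, linguistic, and application features:

```python
# Toy illustration: score accounts by sentiment variance and measure
# separation between humans and bots with AUROC (rank statistic).

def sentiment_variance(scores):
    """Variance of per-tweet sentiment scores for one account."""
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

def auroc(pos_scores, neg_scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Invented data: humans (positive class) swing between opinions;
# bots repeat near-neutral content.
humans = [[-0.8, 0.9, 0.2, -0.5], [0.7, -0.6, 0.8, -0.9]]
bots = [[0.1, 0.1, 0.0, 0.1], [0.0, 0.05, 0.0, 0.05]]

human_feats = [sentiment_variance(t) for t in humans]
bot_feats = [sentiment_variance(t) for t in bots]
print(auroc(human_feats, bot_feats))  # 1.0 on this toy data
```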
Cailleux, L.; Bouabdallah, A.; Bonnin, J.-M., "A Confident Email System Based On A New Correspondence Model," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.489, 492, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779010 Despite all the current controversies, the email service remains successful. The ease of use of its various features contributed to its widespread adoption. In general, the email system provides all its users with the same set of features, controlled by a single monolithic policy. Such solutions are efficient but limited, because they leave no place for the concept of usage, which denotes a user's intention of communication: private, professional, administrative, official, military. The ability to efficiently send emails from mobile devices creates interesting new opportunities. We argue that the context (location, time, device, operating system, access network...) of the email sender is a new dimension we have to take into account to complete the picture. Context is clearly orthogonal to usage, because the same usage may require different features depending on the context. It is clear that no global policy meets the requirements of all possible usages and contexts. To address this problem, we propose to define a correspondence model which, for a given usage and context, allows one to derive a correspondence type encapsulating the exact set of required features. With this model, it becomes possible to define an advanced email system which copes with multiple policies instead of a single monolithic one. By allowing a user to select the exact policy matching her needs, we argue that our approach reduces risk-taking and allows the email system to evolve from a trusted one into a confident one.
Keywords: electronic mail; human factors; security of data; trusted computing; confident email system; correspondence model; email sender context; email service; email system policy; mobile devices; trusted email system; Context; Context modeling; Electronic mail; Internet; Postal services; Protocols; Security; Email; confidence; correspondence; email security; policy; security; trust (ID#: 15-3636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779010&isnumber=6778899
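The correspondence model in the entry above maps a (usage, context) pair to a correspondence type that encapsulates the exact set of required features. A minimal lookup-table sketch — the usages, contexts, and feature names are invented for illustration:

```python
# Hypothetical correspondence table: the same usage requires different
# features in different contexts, which is the paper's core observation.
POLICIES = {
    ("professional", "corporate_network"): {"sign", "archive"},
    ("professional", "public_wifi"): {"sign", "encrypt", "archive"},
    ("private", "public_wifi"): {"encrypt"},
}

def correspondence_type(usage, context):
    """Derive the exact set of required features for one usage + context."""
    try:
        return POLICIES[(usage, context)]
    except KeyError:
        raise ValueError(f"no policy defined for ({usage}, {context})")

print(correspondence_type("professional", "public_wifi"))
```

Note how "professional" usage demands encryption on public Wi-Fi but not on the corporate network: context is orthogonal to usage, exactly as argued in the abstract.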
Biedermann, S.; Ruppenthal, T.; Katzenbeisser, S., "Data-Centric Phishing Detection Based On Transparent Virtualization Technologies," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.215,223, 23-24 July 2014. doi: 10.1109/PST.2014.6890942 We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter and scale a color-based fingerprint of web pages, which are processed by a browser, from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks which are based on redirection to spoofed web pages, and it can also detect “Man-in-the-Browser” (MitB) attacks. To the best of our knowledge, the architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details of the design and the implementation, and we show results of an evaluation with real-world data.
Keywords: Web sites; cloud computing; computer crime; online front-ends; virtual machines; virtualisation; MitB attack; VM introspection; VMI; antiphishing solution; cloud; color-based fingerprint extraction; color-based fingerprint filtering; color-based fingerprint scaling; component isolation; data-centric phishing detection; human perceptual similarity; man-in-the-browser attack; phishing attacks; spoofed Web pages; transparent virtualization technologies; virtual machines; Browsers; Computer architecture; Data mining; Detectors; Image color analysis; Malware; Web pages (ID#: 15-3637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890942&isnumber=6890911
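The color-fingerprint comparison at the heart of the entry above can be sketched with a coarse color histogram: a page that is perceptually near-identical to a known login page but served from a different origin is suspicious. The quantization scheme, threshold, and toy pixel data are illustrative assumptions, not the paper's parameters:

```python
# Minimal color-fingerprint sketch: quantized RGB histogram plus
# histogram-intersection similarity, with an origin check.

def fingerprint(pixels, bins=4):
    """Coarse RGB histogram: each channel quantized into `bins` buckets."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def similarity(fp_a, fp_b):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(fp_a, fp_b))

def looks_like_phish(page_fp, page_origin, known_fp, known_origin, thresh=0.9):
    """Visually near-identical page served from a foreign origin."""
    return similarity(page_fp, known_fp) >= thresh and page_origin != known_origin

# Toy pages: a spoofed copy reuses the bank's color scheme.
bank = [(20, 60, 200)] * 90 + [(255, 255, 255)] * 10
spoof = [(25, 62, 205)] * 90 + [(250, 250, 250)] * 10

fp_bank = fingerprint(bank)
fp_spoof = fingerprint(spoof)
print(looks_like_phish(fp_spoof, "evil.example", fp_bank, "bank.example"))  # True
```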
Bian Yang; Huiguang Chu; Guoqiang Li; Petrovic, S.; Busch, C., "Cloud Password Manager Using Privacy-Preserved Biometrics," Cloud Engineering (IC2E), 2014 IEEE International Conference on, pp.505, 509, 11-14 March 2014. doi: 10.1109/IC2E.2014.91 Using one password for all web services is not secure, because leakage of the password compromises all the web service accounts, while using independent passwords for different web services is inconvenient for the identity claimant to memorize. A password manager addresses this security-convenience dilemma by storing and retrieving multiple existing passwords using one master password. It also frees the human brain by enabling people to generate strong passwords without worrying about memorizing them. While a password manager provides a convenient and secure way of managing multiple passwords, it centralizes password storage and shifts the risk of password leakage from distributed service providers to a piece of software or a token authenticated by a single master password. Given the concerns about this single-master-password security, biometrics could be used as a second authentication factor to verify ownership of the master password. However, biometrics-based authentication raises more privacy concerns than a non-biometric password manager. In this paper we propose a cloud password manager scheme exploiting privacy-enhanced biometrics, which achieves both security and convenience in a privacy-enhanced way. The proposed password manager scheme relies on a cloud service to synchronize all local password manager clients in encrypted form, which is efficient for deploying updates and secure against untrusted cloud service providers.
Keywords: Web services; authorisation; biometrics (access control); cloud computing; data privacy; trusted computing; Web service account; biometrics based authentication; cloud password manager; distributed service providers; local password manager client synchronization; master password based security; nonbiometric password manager; password leakage risk; password storage; privacy enhanced biometrics; privacy-preserved biometrics; token authentication; untrusted cloud service providers; Authentication; Biometrics (access control); Cryptography; Privacy; Synchronization; Web services; biometrics; cloud; password manager; privacy preservation; security (ID#: 15-3638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903519&isnumber=6903436
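The scheme's core property — the cloud only ever sees the password vault in encrypted form, keyed by a master secret the provider never learns — can be sketched as follows. Key derivation uses the real stdlib PBKDF2; the XOR stream cipher built from SHA-256 is a toy stand-in for an authenticated cipher such as AES-GCM, used only to keep this example dependency-free:

```python
# Sketch: derive a key from the master password, encrypt the vault
# locally, and synchronize only the ciphertext through the cloud.
import hashlib
import os

def derive_key(master_password, salt):
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)

def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
vault = encrypt(key, b"site-a:p4ss; site-b:0ther")  # this blob goes to the cloud
print(decrypt(key, vault))  # only the master-password holder can recover it
```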
Skopik, F.; Settanni, G.; Fiedler, R.; Friedberg, I., "Semi-Synthetic Data Set Generation For Security Software Evaluation," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.156, 163, 23-24 July 2014. doi: 10.1109/PST.2014.6890935 Threats to modern ICT systems are rapidly changing these days. Organizations are no longer mainly concerned about virus infestation, but increasingly need to deal with targeted attacks. Attacks of this kind are specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches promise to ease the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. The wide variety of configuration settings in particular makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an evaluation setup that consists of virtual components which imitate real systems and human user interactions as closely as possible to produce the system events, network flows and logging data of complex ICT service environments. This data is a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.
Keywords: data handling; security of data; ICT security systems; ICT systems; heuristic detection engines; information and communication technology systems; intrusion detection and prevention systems; security software evaluation; self-learning intrusion detection systems; semisynthetic data set generation; virus infestation; Complexity theory; Data models; Databases; Intrusion detection; Testing; Virtual machining; anomaly detection evaluation; scalable system behavior model; synthetic data set generation (ID#: 15-3639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890935&isnumber=6890911
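One minimal way to sketch the idea of imitating user behavior to produce synthetic event logs, as in the entry above, is a first-order Markov model over event types whose transitions are learned from a short recorded trace. The trace, event names, and model choice are invented for illustration; the paper's virtual components are far richer:

```python
# Learn event-transition probabilities from a recorded trace, then
# generate a synthetic event stream that mimics the observed behavior.
import random
from collections import defaultdict

def learn_transitions(trace):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def generate(transitions, start, length, rng):
    events, state = [start], start
    for _ in range(length - 1):
        nxt = transitions.get(state)
        if not nxt:  # terminal event (e.g. logout): stop the session
            break
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        events.append(state)
    return events

trace = ["login", "read_mail", "open_doc", "save_doc", "read_mail",
         "open_doc", "save_doc", "logout"]
transitions = learn_transitions(trace)
synthetic = generate(transitions, "login", 20, random.Random(42))
print(synthetic[:5])
```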
Montague, E.; Jie Xu; Chiou, E., "Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance Between Active and Passive Users in Technology-Mediated Collaborative Encounters," Human-Machine Systems, IEEE Transactions on, vol. 44, no. 5, pp.614, 624, Oct. 2014. doi: 10.1109/THMS.2014.2325859 The aim of this study is to examine the utility of physiological compliance (PC) to understand shared experience in a multiuser technological environment involving active and passive users. Common ground is critical for effective collaboration and important for multiuser technological systems that include passive users since this kind of user typically does not have control over the technology being used. An experiment was conducted with 48 participants who worked in two-person groups in a multitask environment under varied task and technology conditions. Indicators of PC were measured from participants' cardiovascular and electrodermal activities. The relationship between these PC indicators and collaboration outcomes, such as performance and subjective perception of the system, was explored. Results indicate that PC is related to group performance after controlling for task/technology conditions. PC is also correlated with shared perceptions of trust in technology among group members. PC is a useful tool for monitoring group processes and, thus, can be valuable for the design of collaborative systems. This study has implications for understanding effective collaboration.
Keywords: groupware; multiprogramming; physiology; trusted computing; multitask environment; multiuser technological environment; physiological compliance; shared experiences; technology-mediated collaborative encounters; trust; Atmospheric measurements; Biomedical monitoring; Joints; Monitoring; Optical wavelength conversion; Particle measurements; Reliability; Group performance; multiagent systems; passive user; physiological compliance (PC); trust in technology (ID#: 15-3640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837486&isnumber=6898062
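Physiological compliance (PC), as studied in the entry above, is commonly operationalized as the correlation between two collaborators' physiological time series. A Pearson-correlation sketch illustrates the idea; the toy heart-rate traces are invented, and the paper's actual PC indicators for cardiovascular and electrodermal signals may differ:

```python
# Pearson correlation between two users' physiological time series
# as a simple proxy for physiological compliance.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy heart-rate traces: the active and passive user co-vary under load.
active = [72, 75, 80, 86, 84, 78]
passive = [70, 72, 77, 83, 82, 75]
print(round(pearson(active, passive), 3))
```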
Algarni, A.; Yue Xu; Chan, T., "Social Engineering in Social Networking Sites: The Art of Impersonation," Services Computing (SCC), 2014 IEEE International Conference on, pp.797,804, June 27 2014-July 2 2014. doi: 10.1109/SCC.2014.108 Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using the grounded theory method, we develop a model that explains which source characteristics influence Facebook users to judge an attacker as credible, and how.
Keywords: computer crime; fraud; social aspects of automation; social networking (online); Facebook; SNS; attacker; deceptive people; financial abuse; fraudulent people; grounded theory method; human behavior complexity; identity theft; impersonation; large information base; phishing; physical crime; security; sexual abuse; social engineering traps; social engineering victimization; social engineering tactics; social networking sites; threats; user susceptibility; Encoding; Facebook; Interviews; Organizations; Receivers; Security; impersonation; information security management; social engineering; social networking sites; source credibility; trust management (ID#: 15-3641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930610&isnumber=6930500
Riveiro, M.; Lebram, M.; Warston, H., "On Visualizing Threat Evaluation Configuration Processes: A Design Proposal," Information Fusion (FUSION), 2014 17th International Conference on, pp.1,8, 7-10 July 2014 Threat evaluation is concerned with estimating the intent, capability and opportunity of detected objects in relation to our own assets in an area of interest. Inferring whether, and to what degree, a target is threatening is far from a trivial task. Expert operators normally have various support systems at their aid that analyze the incoming data and provide recommendations for action. Since the ultimate responsibility lies with the operators, it is crucial that they trust and know how to configure and use these systems, as well as have a good understanding of their inner workings, strengths and limitations. To limit the negative effects of inadequate cooperation between operators and their support systems, this paper presents a design proposal that aims at making the threat evaluation process more transparent. We focus on the initialization, configuration and preparation phases of the threat evaluation process, supporting the user in analyzing the behavior of the system with respect to the relevant parameters involved in the threat estimations. To do so, we follow a known design process model and implement our suggestions in a proof-of-concept prototype that we evaluate with military expert system designers.
Keywords: estimation theory; expert systems; military computing; design process model; design proposal; expert operators; military expert system designer; proof-of-concept prototype; relevant parameter; threat estimation; threat evaluation configuration process; Data models; Estimation; Guidelines; Human computer interaction; Proposals; Prototypes; Weapons; decision-making; design; high-level information fusion; threat evaluation; transparency; visualization (ID#: 15-3642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916152&isnumber=6915967
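Threat evaluation, as described in the entry above, combines a target's estimated intent, capability, and opportunity relative to a defended asset. A transparent additive score illustrates the kind of parameterized estimate an operator would want to inspect and configure; the weights and input values are invented, not the prototype's model:

```python
# Hypothetical additive threat score over intent, capability, opportunity,
# each expressed in [0, 1], with operator-configurable weights.

def threat_score(intent, capability, opportunity, weights=(0.4, 0.3, 0.3)):
    """Weighted sum of the three factors; returns a score in [0, 1]."""
    wi, wc, wo = weights
    return wi * intent + wc * capability + wo * opportunity

# A fast inbound track with hostile kinematics vs. a distant transit track.
print(round(threat_score(0.9, 0.8, 0.7), 2))  # 0.81
print(round(threat_score(0.1, 0.8, 0.2), 2))  # 0.34
```

Making each weight and factor visible to the operator, rather than hiding them inside the engine, is exactly the transparency the design proposal argues for.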
El Masri, A.; Wechsler, H.; Likarish, P.; Kang, B.B., "Identifying Users With Application-Specific Command Streams," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.232,238, 23-24 July 2014. doi: 10.1109/PST.2014.6890944 This paper proposes and describes an active authentication model based on user profiles built from user-issued commands when interacting with a GUI-based application. Previous behavioral models derived from user-issued commands were limited to analyzing the user's interaction with the *Nix (Linux or Unix) command shell program. Human-computer interaction (HCI) research has explored the idea of building user profiles based on behavioral patterns when interacting with such graphical interfaces, by analyzing the user's keystroke and/or mouse dynamics. However, none had explored the idea of creating profiles by capturing users' usage characteristics when interacting with a specific application, beyond how a user strikes the keyboard or moves the mouse across the screen. We obtain and utilize a dataset of user command streams collected from working with Microsoft (MS) Word to serve as a test bed. User profiles are first built using MS Word commands, and identification takes place using machine learning algorithms. Best performance, in terms of both accuracy and Area under the Curve (AUC) for the Receiver Operating Characteristic (ROC) curve, is reported using Random Forests (RF) and AdaBoost with random forests.
Keywords: biometrics (access control); human computer interaction; learning (artificial intelligence); message authentication; sensitivity analysis; AUC; AdaBoost; GUI-based application; MS Word commands; Microsoft; RF; ROC curve; active authentication model; application-specific command streams; area under the curve; human-computer interaction; machine learning algorithms; random forests; receiver operating characteristic; user command streams; user identification; user profiles; user-issued commands; Authentication; Biometrics (access control); Classification algorithms; Hidden Markov models; Keyboards; Mice; Radio frequency; Active Authentication; Behavioral biometrics; Intrusion Detection; Machine Learning (ID#: 15-3643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890944&isnumber=6890911
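The profiling pipeline in the entry above — per-user profiles from application command streams, then classification — can be sketched without the paper's learners. This dependency-free version replaces Random Forests / AdaBoost with a cosine-similarity nearest-profile rule over bag-of-commands features, just to illustrate the idea; the command names are invented:

```python
# Build normalized bag-of-commands profiles and attribute a session
# to the enrolled user whose profile it most resembles.
import math
from collections import Counter

def profile(commands):
    """Normalized bag-of-commands frequency vector."""
    counts = Counter(commands)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def cosine(p, q):
    dot = sum(p[c] * q.get(c, 0.0) for c in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

def identify(session, profiles):
    """Attribute a session to the enrolled user with the closest profile."""
    sp = profile(session)
    return max(profiles, key=lambda user: cosine(sp, profiles[user]))

profiles = {
    "alice": profile(["Bold", "Paste", "Save", "Bold", "Find", "Save"]),
    "bob": profile(["TrackChanges", "Comment", "Save", "Comment", "Print"]),
}
print(identify(["Bold", "Save", "Find", "Bold"], profiles))  # alice
```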
Saoud, Z.; Faci, N.; Maamar, Z.; Benslimane, D., "A Fuzzy Clustering-Based Credibility Model for Trust Assessment in a Service-Oriented Architecture," WETICE Conference (WETICE), 2014 IEEE 23rd International, pp.56,61, 23-25 June 2014. doi: 10.1109/WETICE.2014.35 This paper presents a credibility model to assess trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to different biases. A category of consumers known as strict raters is usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. To validate this model, a set of experiments is carried out.
Keywords: Web services; customer satisfaction; fuzzy set theory; human computer interaction; pattern clustering; service-oriented architecture; trusted computing; Web services; consumer credibility; consumer ratings; credibility model; fuzzy clustering; majority rating; service-oriented architecture; trust assessment; Clustering algorithms; Communities; Computational modeling; Equations; Robustness; Social network services; Web services; Credibility; Trust; Web Service (ID#: 15-3644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927023&isnumber=6926989
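The idea in the entry above — cluster consumer ratings with fuzzy clustering and read each consumer's credibility off their membership in the majority cluster, so strict raters are down-weighted rather than excluded — can be sketched with a tiny one-dimensional fuzzy c-means. The clustering setup and all numbers are illustrative, not the authors' model:

```python
# Tiny 1-D fuzzy c-means; credibility = membership in the majority cluster.

def fuzzy_cmeans_1d(xs, c=2, m=2.0, iters=50):
    """Returns (centers, memberships) for 1-D data."""
    centers = [min(xs), max(xs)]
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for j in range(c):
                d = abs(x - centers[j]) or 1e-9
                u[i][j] = 1.0 / sum((d / (abs(x - ck) or 1e-9)) ** (2 / (m - 1))
                                    for ck in centers)
        for j in range(c):
            num = sum(u[i][j] ** m * x for i, x in enumerate(xs))
            den = sum(u[i][j] ** m for i in range(len(xs)))
            centers[j] = num / den
    return centers, u

ratings = [4.5, 4.7, 4.6, 4.4, 2.0]  # last consumer is "strict"
centers, u = fuzzy_cmeans_1d(ratings)
majority = max(range(2), key=lambda j: sum(u[i][j] for i in range(len(ratings))))
credibility = [u[i][majority] for i in range(len(ratings))]
print([round(cr, 2) for cr in credibility])  # strict rater: low but non-zero weight
```

The strict rater ends up with a small but non-zero credibility, narrowing the gap with the majority instead of discarding the rating outright.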
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.