Protocol Verification


Verifying the correctness of security protocols is a primary goal of cybersecurity research. Work in this area seeks both new and better algorithms and improved methods for verifying security protocols across a myriad of applications and environments. The papers presented here are from the first half of 2014.

  • Rui Zhou; Rong Min; Qi Yu; Chanjuan Li; Yong Sheng; Qingguo Zhou; Xuan Wang; Kuan-Ching Li, "Formal Verification of Fault-Tolerant and Recovery Mechanisms for Safe Node Sequence Protocol," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.813,820, 13-16 May 2014. doi: 10.1109/AINA.2014.99 Fault tolerance has a huge impact on embedded safety-critical systems, and the Safe Node Sequence Protocol (SNSP) is designed to deliver part of that capability. In this paper, we present a fault-tolerance and recovery mechanism based on the Safe Node Sequence Protocol (SNSP) to strengthen system robustness, from which the correctness of a fault-tolerant prototype system is analyzed and verified. In order to verify the correctness of more than thirty failure modes, we have partitioned the complete protocol state machine into several subsystems, followed by the injection of corresponding fault classes into dedicated independent models. Experiments demonstrate that this method effectively reduces the size of the overall state space, and verification results indicate that the protocol is able to recover from the fault model in a fault-tolerant system and continue to operate as errors occur.
    Keywords: fault tolerance; formal verification; protocols; SNSP; failure modes; fault classes; fault model; fault-tolerant prototype system; fault-tolerant system; protocol state machine; recovery mechanisms; safe node sequence protocol; Fault tolerant systems; Model checking; Protocols; Real-time systems; Redundancy; Tunneling magnetoresistance; Safe Node Sequence Protocol; event-triggered protocol; fault-tolerance; model checking (ID#:14-2221)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838748&isnumber=6838626
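The paper's partition-and-inject strategy can be pictured with a toy model: inject a fault class into a small state machine and check, by exhaustive search, that normal operation is always reachable again. The states, actions, and transition relation below are illustrative stand-ins, not the actual SNSP subsystems.

```python
from collections import deque

# Toy fault-injection model: a node is OPERATIONAL, a fault class pushes it
# into FAULT, and the recovery mechanism should always bring it back.
# States, actions and transitions are invented for illustration.
TRANSITIONS = {
    ("OPERATIONAL", "inject_fault"): "FAULT",
    ("FAULT", "detect"): "RECOVERING",
    ("RECOVERING", "resync"): "OPERATIONAL",
}
ACTIONS = ("inject_fault", "detect", "resync")

def reachable(start):
    """All states reachable from `start` (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for action in ACTIONS:
            nxt = TRANSITIONS.get((state, action))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def recovers(fault_state):
    """The recovery property: normal operation is reachable again."""
    return "OPERATIONAL" in reachable(fault_state)

assert recovers("FAULT")
```

Running the same `recovers` check once per fault class mirrors how the authors verify each injected fault against its own independent submodel, keeping every individual state space small.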
  • Meng Zhang; Bingham, J.D.; Erickson, J.; Sorin, D.J., "PVCoherence: Designing Flat Coherence Protocols For Scalable Verification," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on , vol., no., pp.392,403, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835949 The goal of this work is to design cache coherence protocols for chips with many cores that can be verified with state-of-the-art automated verification methodologies. In particular, we focus on flat (non-hierarchical) coherence protocols, and we use a mostly-automated methodology based on parametric verification (PV). We propose several design guidelines that architects should follow if they want to design protocols that can be parametrically verified. We experimentally evaluate the performance, storage overhead, and scalability of a protocol verified with PV compared to a highly optimized protocol that cannot be verified with PV.
    Keywords: cache storage; formal verification; memory protocols; PV Coherence; automated verification methodology; cache coherence protocol; flat coherence protocol; parametric verification; scalable verification; storage overhead; Coherence; Concrete; Guidelines; Manuals; Model checking; Parametric statistics; Protocols (ID#:14-2222)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835949&isnumber=6835920
  • Kasraoui, M.; Cabani, A; Chafouk, H., "Formal Verification of Wireless Sensor Key Exchange Protocol Using AVISPA," Computer, Consumer and Control (IS3C), 2014 International Symposium on , vol., no., pp.387,390, 10-12 June 2014. doi: 10.1109/IS3C.2014.107 For efficient deployment of the sensor nodes required in many logistic applications, it is necessary to build security mechanisms for secure wireless communication. End-to-end security plays a crucial role in the communication of these networks, providing confidentiality, authentication and, above all, prevention of many high-level attacks. In this paper, we propose a lightweight key exchange protocol, WSKE (Wireless Sensor Key Exchange), for IP-based wireless sensor networks. This protocol adapts the IKEv2 (Internet Key Exchange version 2) mechanisms of IPSEC/6LoWPAN networks. In order to check its security properties, we have used a formal verification tool called AVISPA.
    Keywords: IP networks; Internet; cryptographic protocols; formal verification; wireless sensor networks; AVISPA; IKEv2; IP-based wireless sensor networks; IPSEC-6LoWPAN networks; Internet key exchange version 2 mechanism; end-to-end security; formal verification; lightweight key exchange protocol WSKE; wireless sensor key exchange protocol; Authentication; Communication system security; Internet; Protocols; Wireless communication; Wireless sensor networks; IKEv2; IPSec; Security; WSNs (ID#:14-2223)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845899&isnumber=6845429
  • Sumec, S., "Software Tool For Verification Of Sampled Values Transmitted Via IEC 61850-9-2 Protocol," Electric Power Engineering (EPE), Proceedings of the 2014 15th International Scientific Conference on , vol., no., pp.113,117, 12-14 May 2014. doi: 10.1109/EPE.2014.6839413 Process buses are increasingly used for communication among equipment in substations. In addition to signaling various device statuses using GOOSE messages, it is possible to transmit measured values, which can be used for system diagnostics or other advanced functions. Transmission of such values via Ethernet is well defined in the IEC 61850-9-2 protocol. The paper introduces a tool designed for verification of the sampled values generated by various devices using this protocol.
    Keywords: IEC standards; local area networks; power engineering computing; protocols; substation protection; system buses; Ethernet; GOOSE messages; IEC 61850-9-2 protocol; process bus; software protection system; software tool; substations; Current measurement; Data visualization; Decoding; IEC standards; Merging; Protocols; Ethernet; sampled values; substation (ID#:14-2224)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839413&isnumber=6839392
  • Kammuller, F., "Verification of DNSsec Delegation Signatures," Telecommunications (ICT), 2014 21st International Conference on , vol., no., pp.298,392, 4-7 May 2014. doi: 10.1109/ICT.2014.6845127 In this paper, we present a formal model for the verification of the DNSsec protocol in the interactive theorem prover Isabelle/HOL. Relying on the inductive approach to security protocol verification, this formal analysis provides a more expressive representation than the widely accepted model checking analysis. Our mechanized model allows us to represent the protocol, all its possible traces, and the attacker and his knowledge. The fine-grained model allows us to show origin authentication and replay attack prevention. Most prominently, we succeed in expressing delegation signatures and proving their authenticity formally.
    Keywords: Internet; cryptographic protocols; formal verification; inference mechanisms; theorem proving; DNSsec delegation signatures; DNSsec protocol; Isabelle-HOL; inductive approach; interactive theorem prover; model checking analysis; security protocol verification; Authentication; IP networks; Protocols; Public key; Servers; DNSsec; Isabelle/HOL; authentication; chain of trust; delegation signatures (ID#:14-2225)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845127&isnumber=6845063
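Stripped of the Isabelle/HOL machinery, the delegation-signature idea is a chain-of-trust check: each parent zone vouches for a digest of its child's key, and verification walks the chain from the root down. In this sketch an HMAC stands in for the RSA/ECDSA signatures real DNSSEC uses, and all key names are invented for illustration.

```python
import hashlib
import hmac

# Toy chain-of-trust check in the spirit of DNSSEC delegation (DS/DNSKEY):
# each parent zone "signs" a digest of its child's key. Real DNSSEC uses
# public-key RRSIG records; keyed HMAC stands in here to keep it short.
def sign(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-zone-key"          # hypothetical zone keys
org_key = b"org-zone-key"
example_key = b"example.org-key"

# Parent signs a digest of the child's key (the DS-record analogue).
ds_org = sign(root_key, hashlib.sha256(org_key).digest())
ds_example = sign(org_key, hashlib.sha256(example_key).digest())

def verify_chain(links):
    """links: [(parent_key, child_key, signature), ...] from root downward."""
    return all(
        hmac.compare_digest(sig, sign(parent, hashlib.sha256(child).digest()))
        for parent, child, sig in links
    )

chain = [(root_key, org_key, ds_org), (org_key, example_key, ds_example)]
assert verify_chain(chain)
```

The paper's inductive model proves this property over all possible traces and attacker knowledge, which is precisely what a concrete check like this cannot do on its own.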
  • Voskuilen, Gwendolyn; Vijaykumar, T.N., "Fractal++: Closing the Performance Gap Between Fractal And Conventional Coherence," Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on , vol., no., pp.409,420, 14-18 June 2014. doi: 10.1109/ISCA.2014.6853211 Cache coherence protocol bugs can cause multicores to fail. Existing coherence verification approaches incur state explosion at small scales or require considerable human effort. As protocols' complexity and multicores' core counts increase, verification continues to be a challenge. Recently, researchers proposed fractal coherence which achieves scalable verification by enforcing observational equivalence between sub-systems in the coherence protocol. A larger sub-system is verified implicitly if a smaller sub-system has been verified. Unfortunately, fractal protocols suffer from two fundamental limitations: (1) indirect-communication: sub-systems cannot directly communicate and (2) partially-serial-invalidations: cores must be invalidated in a specific, serial order. These limitations disallow common performance optimizations used by conventional directory protocols: reply-forwarding where caches communicate directly and parallel invalidations. Therefore, fractal protocols lack performance scalability while directory protocols lack verification scalability. To enable both performance and verification scalability, we propose Fractal++ which employs a new class of protocol optimizations for verification-constrained architectures: decoupled-replies, contention-hints, and fully-parallel-fractal-invalidations. The first two optimizations allow reply-forwarding-like performance while the third optimization enables parallel invalidations in fractal protocols. Unlike conventional protocols, Fractal++ preserves observational equivalence and hence is scalably verifiable. 
In 32-core simulations of single- and four-socket systems, Fractal++ performs nearly as well as a directory protocol while providing scalable verifiability whereas the best-performing previous fractal protocol performs 8% on average and up to 26% worse with a single-socket and 12% on average and up to 34% worse with a longer-latency multi-socket system.
    Keywords: Coherence; Erbium; Fractals; Multicore processing; Optimization; Protocols; Scalability (ID#:14-2226)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853211&isnumber=6853187
  • Wang, H., "Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage," Services Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, March 2014. doi: 10.1109/TSC.2014.1 Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. From these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification and public verification.
    Keywords: Cloud computing; Computational modeling; Distributed databases; Indexes; Protocols; Security; Servers (ID#:14-2227)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6762896&isnumber=4629387
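The basic flow of provable data possession is a keyed challenge-response: the client keeps a short tag per outsourced block and spot-checks randomly chosen blocks. The sketch below is deliberately simplified: the server returns the challenged block itself, whereas the pairing-based tags of the actual ID-DPDP scheme let the verifier check possession without retrieving any data.

```python
import hashlib
import os
import secrets

# Minimal challenge-response integrity check in the spirit of provable
# data possession. Keyed hashes stand in for the paper's bilinear-pairing
# tags, so this conveys the protocol flow, not the security guarantees.
def tag(key, index, block):
    return hashlib.sha256(key + index.to_bytes(4, "big") + block).digest()

key = os.urandom(16)
blocks = [os.urandom(64) for _ in range(8)]            # outsourced blocks
tags = [tag(key, i, b) for i, b in enumerate(blocks)]  # kept by the client

def server_respond(i):
    """The cloud returns the challenged block (simplified proof)."""
    return blocks[i]

def client_verify(i, block):
    """The client recomputes the tag for the challenged index."""
    return tag(key, i, block) == tags[i]

challenge = secrets.randbelow(len(blocks))
assert client_verify(challenge, server_respond(challenge))
assert not client_verify(challenge, b"\x00" * 64)  # corruption is caught
```

Challenging a few random indices per round gives high detection probability against a server that has discarded a fraction of the blocks, which is the economic point of PDP-style protocols.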
  • Alrabaee, S.; Bataineh, A; Khasawneh, F.A; Dssouli, R., "Using Model Checking For Trivial File Transfer Protocol Validation," Communications and Networking (ComNet), 2014 International Conference on , vol., no., pp.1,7, 19-22 March 2014. doi: 10.1109/ComNet.2014.6840934 This paper presents verification and model-based checking of the Trivial File Transfer Protocol (TFTP). Model checking is a technique for software verification that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. The TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We aim to create a design model of the TFTP protocol, with an added window size, simulate it, and validate specified properties. The verification has been done using the model checking tool SPIN, which accepts design specifications written in the verification language PROMELA. The results show that TFTP is free of livelocks.
    Keywords: formal verification; transport protocols; Internet user datagram protocol; Promela; SPIN; TFTP protocol; UDP; concurrency defect detection; exhaustive state space search; model based checking tool; software verification; trivial file transfer protocol; Authentication; Protocols; Software engineering; Modeling; Protocol Design; TFTP; Validation (ID#:14-2228)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840934&isnumber=6840902
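What SPIN does for a Promela model can be imitated in miniature: enumerate every reachable state of a stop-and-wait, TFTP-like transfer over a lossy one-message channel, then assert that the transfer can complete and that no reachable state is stuck. This model is an invented illustration, not the authors' Promela specification.

```python
from collections import deque

# Exhaustive state exploration of a toy TFTP-like stop-and-wait transfer,
# in the spirit of what SPIN does for a Promela model. A state is
# (next block the sender offers, next block the receiver expects,
#  the single message in flight or None).
N = 3  # number of blocks to transfer

def successors(state):
    s, r, chan = state
    out = []
    if chan is None and s < N:
        out.append((s, r, ("DATA", s)))          # (re)send current block
    elif chan is not None:
        out.append((s, r, None))                 # lossy channel: drop
        kind, n = chan
        if kind == "DATA" and n == r:
            out.append((s, r + 1, ("ACK", n)))   # accept block, ack it
        elif kind == "DATA":
            out.append((s, r, ("ACK", r - 1)))   # duplicate: re-ack
        elif kind == "ACK" and n == s:
            out.append((s + 1, r, None))         # sender advances
        else:
            out.append((s, r, None))             # stale ack: ignore
    return out

start, goal = (0, 0, None), (N, N, None)
seen, queue = {start}, deque([start])
while queue:                                     # breadth-first search
    for nxt in successors(queue.popleft()):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

assert goal in seen                              # the transfer can complete
deadlocks = [s for s in seen if s != goal and not successors(s)]
assert not deadlocks                             # no reachable stuck state
```

SPIN performs the same kind of search, but with partial-order reduction, fairness constraints, and temporal-logic properties that this sketch omits.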
  • Alezabi, Kamal Ali; Hashim, Fazirulhisyam; Hashim, Shaiful Jahari; Ali, Borhanuddin M., "An efficient authentication and key agreement protocol for 4G (LTE) networks," Region 10 Symposium, 2014 IEEE , vol., no., pp.502,507, 14-16 April 2014. doi: 10.1109/TENCONSpring.2014.6863085 Long Term Evolution (LTE) networks designed by the 3rd Generation Partnership Project (3GPP) represent a widespread technology. LTE is mainly characterized by high data rates, minimum delay, and capacity, thanks to scalable bandwidth and its flexibility. With the rapid and widespread use of LTE networks and the growth of data/video transmission and Internet applications in general, the challenges of securing and speeding up data communication in such networks have also increased. Authentication in LTE networks is a very important process, because most attacks occur during this stage: attackers try to get authenticated, seize network resources, and prevent legitimate users from accessing network services. The basics of the Extensible Authentication Protocol-Authentication and Key Agreement (EAP-AKA) are used in the LTE AKA protocol, called the Evolved Packet System AKA (EPS-AKA) protocol, to secure LTE networks; however, it still suffers from various vulnerabilities such as disclosure of the user identity, computational overhead, Man In The Middle (MITM) attacks, and authentication delay. In this paper, an Efficient EPS-AKA protocol (EEPS-AKA) is proposed to overcome those problems. The proposed protocol is based on the Simple Password Exponential Key Exchange (SPEKE) protocol. Compared to previously proposed methods, our method is faster, since it uses a secret-key method, which is faster than certificate-based methods. In addition, the size of the messages exchanged between the User Equipment (UE) and the Home Subscriber Server (HSS) is reduced, which reduces authentication delay and storage overhead effectively.
The automated validation of internet security protocols and applications (AVISPA) tool is used to provide a formal verification. Results show that the proposed EEPS-AKA is efficient and secure against active and passive attacks.
    Keywords: EEPS-AKA; LTE EPS-AKA; SPEKE (ID#:14-2229)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863085&isnumber=6862973
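SPEKE, the building block of the proposed EEPS-AKA, differs from plain Diffie-Hellman in one step: the group generator is derived from the shared password, so only a party that knows the password arrives at the same session key. A demo-sized sketch follows; the modulus and party labels are illustrative assumptions, and real deployments use a large safe prime.

```python
import hashlib
import secrets

# SPEKE in miniature: derive the Diffie-Hellman generator from the shared
# password, then run an ordinary exponential key exchange.
P = (2**127) - 1          # demo-sized prime modulus; NOT for real use
password = b"correct horse"

def generator(pw):
    # Hash the password to a group element; squaring confines it to the
    # quadratic-residue subgroup, as SPEKE prescribes.
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % P
    return pow(h, 2, P)

g = generator(password)
ue_priv = secrets.randbelow(P - 2) + 1    # User Equipment side
hss_priv = secrets.randbelow(P - 2) + 1   # Home Subscriber Server side
ue_pub, hss_pub = pow(g, ue_priv, P), pow(g, hss_priv, P)

key_ue = pow(hss_pub, ue_priv, P)
key_hss = pow(ue_pub, hss_priv, P)
assert key_ue == key_hss                  # shared session key

# A wrong password yields a different generator, hence a different key.
assert generator(b"wrong") != g
```

Because no certificates are involved, the exchange avoids the certificate-validation round trips the abstract contrasts against, which is where the claimed delay reduction comes from.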
  • Zhuo Hao; Yunlong Mao; Sheng Zhong; Li, L.E.; Haifan Yao; Nenghai Yu, "Toward Wireless Security without Computational Assumptions—Oblivious Transfer Based on Wireless Channel Characteristics," Computers, IEEE Transactions on , vol.63, no.6, pp.1580,1593, June 2014. doi: 10.1109/TC.2013.27 Wireless security has been an active research area since the last decade. A lot of studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because we can build a cryptographic protocol for any polynomial-time computable function using this tool. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics, which does not rely on any computational assumption. We also illustrate the potential broad applications of this protocol by giving two applications, one on private communications and the other on privacy preserving password verification. We have fully implemented this protocol on wireless devices and conducted experiments in real environments to evaluate the protocol. Our experimental results demonstrate that it has reasonable efficiency.
    Keywords: computational complexity; cryptographic protocols; data privacy; transport protocols; wireless channels; 1-out-of-2 oblivious transfer protocol; computational assumptions; cryptographic protocol; cryptographic tools; polynomial-time computable function; privacy preserving password verification; private communications; wireless channel characteristics; wireless devices; wireless security; Channel estimation; Communication system security; Cryptography; Probes; Protocols; Wireless communication; Oblivious transfer; physical channel characteristics; security (ID#:14-2230)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6463377&isnumber=6828816
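One standard way to obtain 1-out-of-2 oblivious transfer without computational assumptions is from pre-shared correlated randomness (Beaver's precomputation technique). In the paper that randomness would be extracted from wireless channel measurements; the sketch below substitutes a trusted dealer, which leaves the online phase unchanged.

```python
import secrets

# 1-out-of-2 oblivious transfer from pre-shared correlated randomness.
# A dealer stands in for the paper's channel-derived randomness: the
# sender holds random pads (r0, r1); the receiver holds a random bit c
# and the corresponding pad r_c.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
c = secrets.randbelow(2)
r_c = (r0, r1)[c]

def run_ot(m0, m1, b):
    """Receiver with choice bit b learns m_b and nothing about the other."""
    d = b ^ c                          # receiver -> sender (masks b with c)
    e0 = xor(m0, (r0, r1)[d])          # sender -> receiver
    e1 = xor(m1, (r0, r1)[1 - d])
    return xor((e0, e1)[b], r_c)       # receiver unmasks its message

m0, m1 = b"secret message A", b"secret message B"   # 16-byte demo messages
assert run_ot(m0, m1, 0) == m0
assert run_ot(m0, m1, 1) == m1
```

The sender never sees b (it is masked by c), and the receiver lacks the pad covering the other message, so both privacy properties hold information-theoretically, matching the paper's goal of avoiding computational assumptions.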
  • Ceccarelli, A; Montecchi, L.; Brancati, F.; Lollini, P.; Marguglio, A; Bondavalli, A, "Continuous and Transparent User Identity Verification for Secure Internet Services," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. doi: 10.1109/TDSC.2013.2297709 Session management in distributed Internet services is traditionally based on username and password, explicit logouts and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such an approach still a single verification is deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact on the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics in the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to contrast security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
    Keywords: Authentication; Bioinformatics; Protocols; Servers; Smart phones; Web services; Security; authentication; mobile environments; web servers (ID#:14-2231)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702439&isnumber=4358699
  • Lei, X.; Liao, X.; Huang, T.; Li, H., "Cloud Computing Service: the Case of Large Matrix Determinant Computation," Services Computing, IEEE Transactions on , vol.PP, no.99, pp.1,1, June 2014. doi: 10.1109/TSC.2014.2331694 The cloud computing paradigm provides an alternative and economical service for resource-constrained clients to perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in the fields of science and engineering, a first step is taken in this paper to design a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea for protecting privacy is to apply transformations to the original matrix to obtain an encrypted matrix, which is sent to the cloud, and then to transform the result returned from the cloud to recover the correct determinant of the original matrix. Afterwards, a randomized Monte Carlo verification algorithm with one-sided error is introduced, whose superiority in designing an inexpensive result verification algorithm for secure outsourcing is well demonstrated. In addition, it is analytically shown that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on the design of other novel secure outsourcing protocols, and inspire companies and working groups to complete the all-inclusive scientific-computation outsourcing software systems that clients demand. It is believed that such software systems can be profitable by providing large-scale scientific computation services to many potential clients. (ID#:14-2232)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839008&isnumber=4629387
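The blind-compute-unmask flow rests on the determinant's multiplicativity: mask A with secret diagonal scalings, let the cloud compute the determinant of the masked matrix, and divide out the masking factors. The paper's actual transformations (and its Monte Carlo result-verification step) are stronger; this pure-stdlib 3x3 example only illustrates the outsourcing pattern.

```python
import random

# Outsourcing det(A) by masking: the client hides A behind secret diagonal
# scalings D1, D2, the cloud computes det(D1 A D2), and the client unmasks
# using det(D1 A D2) = det(D1) det(A) det(D2).
def det3(m):
    """Cofactor expansion for a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(7)
A = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
d1 = [random.uniform(1.0, 2.0) for _ in range(3)]   # secret client masks
d2 = [random.uniform(1.0, 2.0) for _ in range(3)]

# Client -> cloud: only the masked matrix (D1 A D2)_{ij} = d1_i A_ij d2_j.
A_masked = [[d1[i] * A[i][j] * d2[j] for j in range(3)] for i in range(3)]

det_masked = det3(A_masked)                         # computed by the cloud

# Client unmasks the returned result with the secret scale factors.
scale = d1[0] * d1[1] * d1[2] * d2[0] * d2[1] * d2[2]
det_A = det_masked / scale
assert abs(det_A - det3(A)) < 1e-9
```

A diagonal mask alone leaks matrix structure, which is why the paper layers additional transformations and backs the result with a one-sided-error randomized check against a cheating cloud.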
  • Castro Marquez, C.I; Strum, M.; Wang Jiang Chau, "A Unified Sequential Equivalence Checking Approach To Verify High-Level Functionality And Protocol Specification Implementations In RTL Designs," Test Workshop - LATW, 2014 15th Latin American , vol., no., pp.1,6, 12-15 March 2014. doi: 10.1109/LATW.2014.6841905 Formal techniques provide exhaustive design verification, but computational margins have an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has been applied only between circuit descriptions with one-to-one correspondence for states. Applying it between RTL descriptions and high-level reference models requires removing signals, variables, and states exclusive to the RTL description so as to comply with the state correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification with high-level models to also check the signals and protocol implemented in the RTL design. This protocol implementation is compared formally to a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.
    Keywords: electronic design automation; formal specification; formal verification; high level synthesis; protocols; RTL design verification; computational margin; design under verification; design verification; formal technique; high level functionality verification; high level model; high level reference model; protocol specification implementation; unified sequential equivalence checking approach; Abstracts; Calculators; Computational modeling; Data models; Educational institutions; Integrated circuit modeling; Protocols; RTL design; Sequential equivalence checking; communication protocol; formal verification; high-level models; sequence of states (ID#:14-2233)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841905&isnumber=6841893
  • Wang, Junwei; Wang, Haifeng, "Trust-based QoS routing algorithm for Wireless Sensor Networks," Control and Decision Conference (2014 CCDC), The 26th Chinese , vol., no., pp.2492,2495, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852592 With the rapid development of Wireless Sensor Networks (WSNs), not only energy efficiency but also Quality of Service (QoS) support and the validity of packet transmission should be considered under some circumstances. In this paper, after summing up the LEACH protocol's advantages and defects, a trust-based QoS routing algorithm is put forward that combines a trust evaluation mechanism with energy and QoS control. Firstly, energy control and coverage scale are adopted to keep load balance in the phase of cluster head selection. Secondly, a trust evaluation mechanism is designed to increase the credibility of the network in the stage of node clustering. Finally, in the period of information transmission, verification and ACK mechanisms are also applied to guarantee the validity of data transmission. The improved protocol can not only prolong nodes' life expectancy, but also increase the credibility of information transmission and reduce packet loss. Compared to typical routing algorithms in sensor networks, the new algorithm has better performance.
    Keywords: Algorithm design and analysis; Data communication; Delays; Energy efficiency; Quality of service; Routing; Wireless sensor networks; Energy Efficient; LEACH; QoS; Trust Routing; Wireless Sensor Networks (ID#:14-2234)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852592&isnumber=6852105
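The cluster-head election described above can be sketched as a weighted score over residual energy and trust; the 0.6/0.4 weights and the node records below are invented for illustration, not taken from the paper.

```python
# Trust-weighted cluster-head election in the spirit of the improved
# LEACH protocol above: nodes with more residual energy AND a better
# behaviour record are preferred as cluster heads.
def election_weight(node, w_energy=0.6, w_trust=0.4):
    """Combine residual energy with a behaviour-based trust score."""
    return w_energy * node["energy"] + w_trust * node["trust"]

# A well-behaved node with slightly less energy beats a node that has
# full energy but a collapsed trust score (e.g. it drops packets).
honest = {"id": 1, "energy": 0.8, "trust": 0.9}
suspect = {"id": 2, "energy": 1.0, "trust": 0.1}

head = max([honest, suspect], key=election_weight)
assert head is honest
```

Folding trust into the election is what lets the protocol route around misbehaving nodes without abandoning LEACH's energy-balancing rotation.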
  • Madhusudhan, R.; Kumar, S.R., "Cryptanalysis of a Remote User Authentication Protocol Using Smart Cards," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.474,477, 7-11 April 2014. doi: 10.1109/SOSE.2014.84 Remote user authentication using smart cards is a method of verifying the legitimacy of remote users accessing the server through an insecure channel, using smart cards to increase the efficiency of the system. During the last couple of years many protocols to authenticate remote users using smart cards have been proposed, but unfortunately most of them have been proved insecure against various attacks. Recently, Yung-Cheng Lee improved Shin et al.'s protocol and claimed that the result is more secure. In this article, we show that Yung-Cheng Lee's protocol also has defects: it does not provide user anonymity, and it is vulnerable to denial-of-service, session-key-reveal, user impersonation, server impersonation, and insider attacks. Further, it is not efficient in the password change phase, since that phase requires communication with the server and uses a verification table.
    Keywords: computer network security; cryptographic protocols; message authentication; smart cards; Yung-Cheng-Lee's protocol; cryptanalysis; denial-of-service attack; insecure channel; insider attacks; legitimacy verification; password change phase; remote user authentication protocol; server impersonation attack; session key; smart cards; user impersonation attack; verification table; Authentication; Bismuth; Cryptography; Protocols; Servers; Smart cards; authentication; smart card; cryptanalysis; dynamic id (ID#:14-2235)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830951&isnumber=6825948
  • Daesung Choi; Sungdae Hong; Hyoung-Kee Choi, "A Group-Based Security Protocol For Machine Type Communications In LTE-Advanced," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.161,162, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849205 We propose Authentication and Key Agreement (AKA) for Machine Type Communications (MTC) in LTE-Advanced. This protocol is based on an idea of grouping devices so that it would reduce signaling congestion in the access network and overload on the single authentication server. We verified that this protocol is designed to be secure against many attacks by using a software verification tool. Furthermore, performance evaluation suggests that this protocol is efficient with respect to authentication overhead and handover delay.
    Keywords: Long Term Evolution; cryptographic protocols; mobility management (mobile radio); radio access networks; signaling protocols; telecommunication security; AKA; LTE-advanced communication; MTC; access network; authentication and key agreement; group-based security protocol; handover delay; machine type communication; performance evaluation; signaling congestion reduction; single authentication server overhead; software verification tool; Authentication; Computer science; Delays; Handover; Protocols (ID#:14-2236)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849205&isnumber=6849127
  • Liu, W.; Yu, M., "AASR: An Authenticated Anonymous Secure Routing Protocol for MANETs in Adversarial Environment," Vehicular Technology, IEEE Transactions on , vol.PP, no.99, pp.1,1, March 2014. doi: 10.1109/TVT.2014.2313180 Anonymous communications are important for many applications of mobile ad hoc networks (MANETs) deployed in adversarial environments. A major requirement on the network is to provide unidentifiability and unlinkability for mobile nodes and their traffic. Although a number of anonymous secure routing protocols have been proposed, this requirement is not fully satisfied. The existing protocols are vulnerable to attacks using fake routing packets or denial-of-service (DoS) broadcasting, even when node identities are protected by pseudonyms. In this paper, we propose a new routing protocol, authenticated anonymous secure routing (AASR), to satisfy the requirement and defend against these attacks. More specifically, the route request packets are authenticated by a group signature, to defend against potential active attacks without unveiling the node identities. The key-encrypted onion routing, with a route secret verification message, is designed to prevent intermediate nodes from inferring the real destination. Simulation results have demonstrated the effectiveness of the proposed AASR protocol, with improved performance as compared to the existing protocols.
    Keywords: (not provided) (ID#:14-2237)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777291&isnumber=4356907
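The key-encrypted onion can be illustrated in a few lines: the source wraps the route request once per hop, and each forwarder peels exactly one layer with its own key, learning only the next hop. Hash-derived XOR keystreams stand in for the real symmetric ciphers, and the key names are invented for the sketch.

```python
import hashlib

# Key-encrypted onion in miniature: each hop removes one layer, so no
# intermediate node can infer the final destination -- the idea AASR
# combines with its route secret verification message.
def keystream(key, n):
    """Expand a key into n bytes of hash-derived keystream (demo only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key, data):
    """Symmetric toy cipher: XOR with the key's keystream."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

route = [b"key-node-A", b"key-node-B", b"key-node-C"]   # per-hop keys
payload = b"RREQ:destination"

# Source wraps the payload once per hop, innermost layer first.
onion = payload
for key in reversed(route):
    onion = xor_crypt(key, onion)

# Each forwarding node peels exactly one layer with its own key.
for key in route:
    onion = xor_crypt(key, onion)
assert onion == payload
```

In AASR the onion additionally carries a route secret whose verification proves to the destination that the request traversed legitimate hops, a step this sketch omits.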
  • Huaqun Wang; Qianhong Wu; Bo Qin; Domingo-Ferrer, J., "Identity-based Remote Data Possession Checking In Public Clouds," Information Security, IET , vol.8, no.2, pp.114,121, March 2014. doi: 10.1049/iet-ifs.2012.0271 Checking remote data possession is of crucial importance in public cloud storage. It enables users to check whether their outsourced data have been kept intact without downloading the original data. The existing remote data possession checking (RDPC) protocols have been designed in the PKI (public key infrastructure) setting. The cloud server has to validate the users' certificates before storing the data uploaded by the users in order to prevent spam. This incurs considerable costs, since numerous users may frequently upload data to the cloud server. This study addresses the problem with a new model of identity-based RDPC (ID-RDPC) protocols. The authors present the first ID-RDPC protocol proven to be secure assuming the hardness of the standard computational Diffie-Hellman problem. In addition to the structural advantage of eliminating certificate management and verification, the authors' ID-RDPC protocol also outperforms the existing RDPC protocols in the PKI setting in terms of computation and communication.
    Keywords: cloud computing; cryptographic protocols; public key cryptography; storage management; unsolicited e-mail; ID-RDPC protocol; PKI setting; certificate management elimination; cloud server; data outsourcing; identity-based RDPC protocols; identity-based remote data possession checking protocol; public cloud storage; public key infrastructure; spam; standard computational Diffie-Hellman problem (ID#:14-2238)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748545&isnumber=6748540

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.