Privacy Policies - Authentication, authorization, and trust verification are central parts of an access control system. The conditions for granting access in such a system are collected in access policies. Since access conditions are often complex, dedicated languages for defining policies, known as policy languages, are in use.
Authored by Stefan More, Sebastian Ramacher, Lukas Alber, Marco Herzl
Privacy Policies - Companies and organizations inform users of how they handle personal data through privacy policies on their websites. Particular information, such as the purposes of collecting personal data and what data are provided to third parties, is required to be disclosed by laws and regulations. An example of such a law is the Act on the Protection of Personal Information in Japan. In addition to privacy policies, an increasing number of companies are publishing security policies to express compliance and the transparency of corporate behavior. However, it is challenging to keep these policies aligned with legal requirements due to periodic law revisions and rapid business changes. In this study, we developed a method for analyzing privacy policies to check whether companies comply with legal requirements. In particular, the proposed method classifies policy contents using a convolutional neural network and evaluates privacy compliance by comparing the classification results with legal requirements. In addition, we analyzed security policies using the proposed method to confirm whether the combination of privacy and security policies contributes to privacy compliance. We collected and evaluated 1,304 privacy policies and 140 security policies of Japanese companies. The results revealed that over 90% of privacy policies sufficiently describe the handling of personal information by first parties, user rights, and security measures, while over 90% insufficiently describe data retention and the specific audience. These differences depend on industry guidelines and business characteristics. Moreover, security policies were found to improve the compliance rates of 46 out of 140 companies by describing security practices not included in privacy policies.
Authored by Keika Mori, Tatsuya Nagai, Yuta Takata, Masaki Kamizono
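The compliance-evaluation step described above compares classifier output against the set of legally required disclosure categories. A minimal sketch of that comparison follows; the category names are illustrative placeholders, not the paper's actual taxonomy, and the classifier itself (the CNN) is assumed to have already produced one label per policy paragraph.

```python
# Sketch of the compliance check: predicted paragraph labels for one
# policy are compared against legally required disclosure categories.
# Category names are hypothetical, not the paper's real taxonomy.

REQUIRED_CATEGORIES = {
    "first_party_collection", "third_party_sharing",
    "data_retention", "user_rights", "security_measures",
}

def compliance_report(predicted_labels):
    """Return the covered and missing legal categories for one policy."""
    covered = set(predicted_labels) & REQUIRED_CATEGORIES
    missing = REQUIRED_CATEGORIES - covered
    return covered, missing

covered, missing = compliance_report(
    ["first_party_collection", "user_rights", "security_measures"])
# The policy above would be flagged as insufficient on the two
# categories it never describes.
```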
Privacy Policies - A privacy policy is a legal document that informs users about the data practices of an organization. Past research indicates that privacy policies are long, include incomplete information, and are hard to read. Research also shows that users are not inclined to read these long and verbose policies. The solution we propose in this paper is to build tools that can assist users in finding content in privacy policies relevant to their queries using a semantic approach. This paper presents the development of a domain ontology for privacy policies so that the sentences relevant to a privacy question can be automatically identified. For this study, we built an ontology and validated and evaluated it using qualitative and quantitative methods, including competency questions, data-driven evaluation, and user evaluation. The evaluation showed that the amount of text to read was significantly reduced, as users only had to read selected text that ranged from 1% to 30% of a privacy policy. The amount of content selected for reading depended on the query and its associated keywords. This finding shows that the time required to read a policy was significantly reduced, as the ontology directed the user to the content related to a given query. This was also confirmed by the user study session, in which users found the ontology helpful for finding relevant sentences compared to reading the entire policy.
Authored by Jasmin Kaur, Rozita Dara, Ritu Chaturvedi
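The core retrieval idea above, mapping a privacy question to ontology keywords and surfacing only the matching sentences, can be sketched as follows. The concepts, keywords, and toy policy are made up for illustration; the real ontology is far richer.

```python
# Toy ontology-directed retrieval: a concept maps to keywords, and only
# sentences containing a keyword are shown to the user. Concepts and
# keywords here are illustrative, not the paper's ontology.

ONTOLOGY = {
    "data_sharing": {"share", "third party", "disclose"},
    "retention": {"retain", "store", "keep"},
}

def relevant_sentences(policy_text, concept):
    sentences = [s.strip() for s in policy_text.split(".") if s.strip()]
    keywords = ONTOLOGY[concept]
    hits = [s for s in sentences if any(k in s.lower() for k in keywords)]
    fraction = len(hits) / len(sentences)  # share of the policy to read
    return hits, fraction

policy = ("We collect your email. We may share data with a third party. "
          "We retain logs for a year. Contact us anytime.")
hits, fraction = relevant_sentences(policy, "data_sharing")
# Only one of four sentences is surfaced for this query.
```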
Privacy Policies - In the era of the Internet of Things (IoT), smart logistics is quietly rising, but user privacy security has become an important factor hindering its development. Because a privacy policy plays a positive role in protecting user privacy and improving corporate reputation, it has become an important part of smart logistics and a focus of express companies. In this paper, we construct a privacy policy evaluation index system for express companies and, for qualitative indicators that are difficult to evaluate, introduce the cloud model evaluation method, which combines qualitative and quantitative analysis. We comprehensively evaluate the privacy policies of five express companies in China on four indicators: general situation, user informed consent, information security control, and personal rights protection. The results show that, overall, the privacy policies of the five express companies have not reached the "good" level, and there is a certain gap between the privacy policies of different companies. Comparing indicators, the five express companies generally score relatively well; however, the overall score on the information security control index is relatively poor, and the other two indexes vary considerably. The cloud model evaluation method is well suited to evaluating express company privacy policies, and it provides a reference for improving privacy policy formulation and raising the privacy protection level of China's express delivery industry in the era of IoT.
Authored by Qian Zhang, Weihong Xie, Xinxian Pan
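The cloud model mentioned above turns expert scores for a qualitative indicator into three numeric characteristics: expectation (Ex), entropy (En), and hyper-entropy (He). A minimal sketch of the standard backward cloud generator follows; the expert scores are fabricated for illustration, and real evaluations would aggregate many indicators with weights.

```python
import math

# Backward cloud generator: derive (Ex, En, He) from a sample of expert
# scores for one qualitative indicator. Scores below are made up.

def backward_cloud(scores):
    n = len(scores)
    ex = sum(scores) / n                                   # expectation
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in scores) / n
    s2 = sum((x - ex) ** 2 for x in scores) / (n - 1)      # sample variance
    he = math.sqrt(abs(s2 - en ** 2))                      # hyper-entropy
    return ex, en, he

ex, en, he = backward_cloud([7.0, 8.0, 7.5, 6.5, 8.5])
```

Ex locates the indicator's level on the scoring scale, while En and He capture the fuzziness and the dispersion of the experts' opinions, which is how the method keeps qualitative uncertainty visible in a quantitative score.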
Privacy Policies - Data privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) provide guidelines for collecting personal information from individuals and processing it. These frameworks also require service providers to inform their customers of how client data is gathered, used, protected, and shared with other parties. A privacy policy is a legal document used by service providers to inform users about how their personal information is collected, stored, and shared with other parties. Privacy policies are expected to adhere to data privacy regulations. However, it has been observed that some policies deviate from the practices recommended by data protection regulations. Detecting instances where a policy may violate a certain regulation is quite challenging because privacy policy text is long and complex and there are numerous regulations. To address this problem, we have designed an approach to automatically detect whether a policy violates the articles of the GDPR. This paper demonstrates how we use Natural Language Inference (NLI) tasks to compare privacy policy content against the GDPR and detect text that violates it. We provide two designs using the Stanford Natural Language Inference (SNLI) and the Multi-Genre Natural Language Inference (MultiNLI) datasets. The results from both designs are promising, as our approach detected the deviations with 76% accuracy.
Authored by Abdullah Alshamsan, Shafique Chaudhry
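Conceptually, the check above pairs a GDPR article with a policy statement and treats an NLI "contradiction" label as a potential violation. The sketch below stubs out the trained model with a trivial keyword rule so the control flow is visible; the stub is purely an assumption, not the SNLI/MultiNLI-trained classifier the paper uses.

```python
# NLI-based violation check: the GDPR article is the premise, the policy
# statement is the hypothesis, and "contradiction" flags a violation.
# nli_stub is a placeholder for a real trained NLI model.

def nli_stub(premise, hypothesis):
    # Stand-in returning one of {"entailment", "neutral", "contradiction"}.
    if "without consent" in hypothesis and "consent" in premise:
        return "contradiction"
    return "neutral"

def violates(gdpr_article, policy_statement):
    return nli_stub(gdpr_article, policy_statement) == "contradiction"

article = "Processing requires the data subject's consent."
flag = violates(article, "We may sell personal data without consent.")
```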
Privacy Policies - Privacy policies, despite the important information they provide about the collection and use of one's data, tend to be skipped over by most Internet users. In this paper, we seek to make privacy policies more accessible by automatically classifying text samples into web privacy categories. We use natural language processing techniques and multiple machine learning models to determine the effectiveness of each method in the classification task. We also explore the effectiveness of these methods in classifying the privacy policies of Internet of Things (IoT) devices.
Authored by Jasmine Carson, Lisa DiSalvo, Lydia Ray
Privacy Policies - Smart contracts running on a blockchain potentially disclose all of their data to the participants of the chain. Because privacy is important in many areas, smart contracts may therefore not be considered a good option. To overcome this limitation, this paper introduces Stone, a privacy preservation system for smart contracts. With Stone, an arbitrary Solidity smart contract can be combined with a separate privacy policy in JSON, which prevents the storage data in the contract from being publicised. Because this approach is convenient for policy developers as well as smart contract programmers, we envision that it will be practically acceptable for real-world applications.
Authored by Jihyeon Kim, Dahyeon Jeong, Jisoo Kim, Eun-Sun Cho
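The key idea above, a JSON policy kept separate from the contract code that decides which storage fields may be publicised, can be illustrated with a toy filter. The policy shape and field names below are assumptions for illustration only; they are not Stone's actual schema.

```python
import json

# Toy illustration: a JSON privacy policy marks which contract storage
# fields are public, and only those fields are exposed to chain
# participants. The schema is hypothetical, not Stone's real format.

POLICY = json.loads('{"public": ["totalSupply"], "private": ["balances"]}')

def visible_state(storage, policy):
    """Return only the storage entries the policy allows to be public."""
    return {k: v for k, v in storage.items() if k in policy["public"]}

state = {"totalSupply": 1000, "balances": {"alice": 400, "bob": 600}}
exposed = visible_state(state, POLICY)
# Per-user balances never leave the filter; only the aggregate does.
```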
Privacy Policies - The motive behind this research paper is to outline recently introduced social media encryption policies and the impact they will have on user privacy. With close to no data protection laws in the country, all social media platforms pose a threat to one's privacy. The various new privacy policies that have been put in place across different social media platforms tend to take away the user's choice of whether they want their data shared with other social media apps or not. Since WhatsApp, Facebook, and Instagram are all Facebook-owned, any data shared on one platform crosses over into the database of another, regardless of whether you have an account there, completely undermining the concept of consensual data sharing. This paper further discusses how the nature of encryption in India will significantly affect India's newly recognised fundamental right, the Right to Privacy. Various policy developments bring various user violation concerns, and these are the focus of this research paper.
Authored by Akshit Talwar, Alka Chaudhary, Anil Kumar
Privacy Policies - Privacy policies inform users of the data practices and access protocols employed by organizations and their digital counterparts. Research has shown that users often feel that these privacy policies are lengthy and complex to read and comprehend. However, it is critical for people to be aware of the data access practices employed by organizations. Hence, much research has focused on automatically extracting privacy-specific artifacts from policies, predominantly using natural language classification tools. However, these classification tools are designed primarily for the classification of paragraphs or segments of the policies. In this paper, we identify the gap in classifying policies at the segment level and provide an alternate definition of segment classification based on sentence classification. To this end, we train and evaluate sentence classifiers for privacy policies using BERT and XLNet. Our approach improves the prediction quality of existing models and hence surpasses the current baselines for classification models, without requiring additional parameter or model tuning. Using our sentence classifiers, we also study topical structures in the policies of the Alexa top 5000 websites, in order to identify and quantify the diffusion of information pertaining to privacy-specific topics within a policy.
Authored by Andrick Adhikari, Sanchari Das, Rinku Dewri
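The alternate definition above, classifying each sentence and then deriving a segment's labels from its sentences, can be sketched as a simple union over sentence labels. The keyword-based classifier below is a stub standing in for the fine-tuned BERT/XLNet models, and the label names are illustrative.

```python
# Segment classification via sentence classification: a segment's label
# set is the union of its sentences' labels. classify_sentence is a
# keyword stub standing in for a trained BERT/XLNet classifier.

def classify_sentence(sentence):
    labels = set()
    if "collect" in sentence:
        labels.add("First Party Collection")
    if "share" in sentence:
        labels.add("Third Party Sharing")
    return labels

def classify_segment(segment):
    labels = set()
    for sentence in segment.split("."):
        labels |= classify_sentence(sentence.lower())
    return labels

seg = "We collect usage data. We share it with partners."
labels = classify_segment(seg)
```

Working at sentence granularity lets a segment carry several labels at once, which is what the diffusion analysis over the Alexa top 5000 policies quantifies.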
Privacy Policies - Privacy policy statements are an essential approach to self-regulation by website operators in the area of personal privacy protection. However, these policies are often lengthy and difficult to understand, and users appear to actually read the privacy policy in only a few cases. To address these obstacles, we propose the Privacy Policy Analysis Framework for Automatic Annotation and User Interaction (PPAI), which stores, classifies, and categorizes queries on natural language privacy policies. At the core of PPAI is a privacy-centric language model built from a smaller fine-grained dataset of privacy policies, together with a new hierarchy of neural network classifiers that captures both high-level aspects of privacy practices and fine-grained details. Our experimental results show that the eight readability metrics of the dataset exhibit a strong correlation. Furthermore, PPAI's neural network classifier achieves an accuracy of 0.78 on the multi-classification task. The robustness experiments reached higher accuracy than the baseline and remained robust even with a small amount of labeled data.
Authored by Han Ding, Shaohong Zhang, Lin Zhou, Peng Yang
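The reported strong correlation among readability metrics is the kind of claim one checks with a Pearson coefficient over per-policy scores. A minimal sketch follows; the two metric series are fabricated for illustration, not the paper's data.

```python
import math

# Pearson correlation between two readability-metric series, as one
# would use to verify that metrics agree across policies. The scores
# below are fabricated placeholders.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

flesch_kincaid = [12.1, 14.3, 11.8, 15.0, 13.2]   # fabricated scores
gunning_fog    = [14.0, 16.5, 13.9, 17.1, 15.3]   # fabricated scores
r = pearson(flesch_kincaid, gunning_fog)
```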
Predictive Security Metrics - Most IoT systems involve IoT devices, communication protocols, remote clouds, IoT applications, mobile apps, and the physical environment. However, existing IoT security analyses only focus on a subset of these essential components, such as device firmware or communication protocols, and ignore IoT systems' interactive nature, resulting in limited attack detection capabilities. In this work, we propose IOTA, a logic programming-based framework that performs system-level security analysis for IoT systems. IOTA generates attack graphs for IoT systems, showing all of the system resources that can be compromised and enumerating potential attack traces. In building IOTA, we design novel techniques to scan IoT systems for individual vulnerabilities and further create generic exploit models for IoT vulnerabilities. We also identify and model physical dependencies between different devices, as they are unique to IoT systems and are employed by adversaries to launch complicated attacks. In addition, we utilize NLP techniques to extract IoT app semantics based on app descriptions. IOTA automatically translates vulnerabilities, exploits, and device dependencies into Prolog clauses and invokes MulVAL to construct attack graphs. To evaluate vulnerabilities' system-wide impact, we propose two metrics based on the attack graph, which provide guidance on fortifying IoT systems. Evaluation on 127 IoT CVEs (Common Vulnerabilities and Exposures) shows that IOTA's exploit modeling module achieves over 80% accuracy in predicting vulnerabilities' preconditions and effects. We apply IOTA to 37 synthetic smart home IoT systems based on real-world IoT apps and devices. Experimental results show that our framework is effective and highly efficient: among the 27 shortest attack traces revealed by the attack graphs, 62.8% are not anticipated by the system administrator, and it takes only 1.2 seconds to generate and analyze the attack graph for an IoT system consisting of 50 devices.
Authored by Zheng Fang, Hao Fu, Tianbo Gu, Pengfei Hu, Jinyue Song, Trent Jaeger, Prasant Mohapatra
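One of the attack-graph quantities discussed above, the shortest attack trace from an attacker's entry point to a target resource, is a plain breadth-first search over the graph. The sketch below uses a made-up smart-home graph, not IOTA's actual MulVAL output, and the node names are illustrative.

```python
from collections import deque

# Shortest attack trace over a toy attack graph via BFS. The graph is a
# fabricated smart-home example; real graphs come from MulVAL.

ATTACK_GRAPH = {
    "internet": ["router_cve"],
    "router_cve": ["lan_access"],
    "lan_access": ["camera_cve", "hub_cve"],
    "camera_cve": ["video_feed"],
    "hub_cve": ["door_lock"],
    "video_feed": [],
    "door_lock": [],
}

def shortest_trace(graph, source, target):
    """BFS returning the first (hence shortest) path source -> target."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

trace = shortest_trace(ATTACK_GRAPH, "internet", "door_lock")
```

Shorter traces indicate targets that are easier to reach, which is why trace length works as a fortification-priority signal.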
Predictive Security Metrics - With the emergence of Zero Trust (ZT) Architecture, industry leaders have been drawn to the technology because of its potential to handle a high level of security threats. The Zero Trust Architecture (ZTA) is paving the path for a security industrial revolution by eliminating location-based implicit access and focusing on asset, user, and resource security. Software Defined Perimeter (SDP) is a secure overlay network technology that can be used to implement a Zero Trust framework. SDP is a next-generation network technology that allows the network architecture to be hidden from the outside world. It also hides the overlay communication from the underlay network by employing encrypted communications. With encrypted traffic, detecting abnormal behavior of entities on an overlay network becomes exceedingly difficult, so an automated system is required. In this paper, we propose a method for understanding the normal behavior of deployed policies by mapping network usage behavior to the policies. To do this mapping, Apache Spark collects and processes the streaming overlay monitoring data generated by the built-in fabric API, sends extracted metrics to Prometheus for storage, and then uses the data for machine learning training and prediction. The model predicts the cluster-id of the link that each sample belongs to, and the cluster-ids are mapped onto the policies. To validate a policy's legitimacy, the labeled policy's hash is compared to the actual policy hash obtained from the blockchain. The SDP controller is notified of unverified policies for additional action, such as defining new policy behavior or marking uncertain policies.
Authored by Waleed Akbar, Javier Rivera, Khan Ahmed, Afaq Muhammad, Wang-Cheol Song
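The final legitimacy check above reduces to comparing the hash of the policy the model labeled against the policy hash recorded on the blockchain. A minimal sketch follows; the policy text, identifier, and in-memory "chain" are stand-ins for the real fabric and ledger.

```python
import hashlib

# Policy-legitimacy check: compare the labeled policy's hash with the
# hash recorded on chain. The on_chain dict stubs the blockchain lookup.

def policy_hash(policy_text):
    return hashlib.sha256(policy_text.encode()).hexdigest()

on_chain = {"policy-42": policy_hash("allow fabric-a -> fabric-b:443")}

def verify_policy(policy_id, labeled_policy_text):
    return on_chain.get(policy_id) == policy_hash(labeled_policy_text)

ok = verify_policy("policy-42", "allow fabric-a -> fabric-b:443")
tampered = verify_policy("policy-42", "allow fabric-a -> fabric-b:22")
# A mismatch would be reported to the SDP controller.
```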
Predictive Security Metrics - Network security personnel are expected to provide uninterrupted services by handling attacks irrespective of the modus operandi. Multiple defensive approaches to prevent, curtail, or mitigate an attack are the primary responsibilities of security personnel. Since predicting security attacks is an additional technique currently used by most organizations to accurately measure the security risks related to overall system performance, several approaches have been used to predict network security attacks. However, achieving high prediction accuracy, analyzing very large amounts of data, and obtaining a reliable dataset remain the major constraints. Uncertain behavior is subjected to verification and validation by the network administrator. The KDD CUP 99 dataset and the NSL-KDD dataset were both used in the research. NSL-KDD provides 0.997 average micro and macro accuracy, with an average LogLoss of 0.16 and an average LogLossReduction of 0.976. Log-loss reduction ranges from negative infinity to 1, where 1 represents a perfect prediction and 0 represents the mean prediction; it should be as close to 1 as possible for a good model. Log-loss is an evaluation metric that characterizes the accuracy of a classifier, where the prediction input is a probability value between 0.00 and 1.00; it should be as close to zero as possible. This paper proposes a FastTree model for predicting network security incidents. The ML.NET Framework and FastTree Regression Technique have a high prediction accuracy and the ability to analyze large datasets of normal, abnormal, and uncertain behaviors.
Authored by Marcus Magaji, Abayomi Jegede, Nentawe Gurumdimma, Monday Onoja, Gilbert Aimufua, Ayodele Oloyede
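The two metrics discussed above can be made concrete with a short worked example: log-loss penalises confident wrong predictions, and log-loss reduction compares a classifier against always predicting the class prior (1 is perfect, 0 is no better than the prior). The labels and probabilities below are fabricated for illustration.

```python
import math

# Binary log-loss and log-loss reduction, the metrics the abstract
# reports for the NSL-KDD experiments. Inputs are fabricated.

def log_loss(y_true, y_prob):
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_prob)) / len(y_true)

def log_loss_reduction(y_true, y_prob):
    prior = sum(y_true) / len(y_true)          # baseline: predict the prior
    baseline = log_loss(y_true, [prior] * len(y_true))
    return 1 - log_loss(y_true, y_prob) / baseline

y_true = [1, 0, 1, 1, 0]
y_prob = [0.9, 0.1, 0.8, 0.7, 0.2]
ll = log_loss(y_true, y_prob)
llr = log_loss_reduction(y_true, y_prob)
```

A model that merely predicts the class prior gets a reduction of exactly 0, and a model worse than the prior goes negative, which is why the range extends to negative infinity.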
Predictive Security Metrics - Predicting vulnerabilities through source code analysis and using the predictions to guide software maintenance can effectively improve software security. One effective way to predict vulnerabilities is by analyzing the library references and function calls used in code. In this paper, we extract library references and function calls from project files through source code analysis and generate sample sets for statistical learning from these data. We then design and train an integrated learning model that can be used for prediction. The designed model has a high accuracy rate and accomplishes the prediction task well. It also demonstrates the correlation between vulnerabilities and library references and function calls.
Authored by Yiyi Liu, Minjie Zhu, Yilian Zhang, Yan Chen
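The feature-extraction step above, pulling library references and function calls out of source files, can be sketched with Python's `ast` module. The paper does not say which languages it analyzes, so Python parsing here is an illustrative stand-in; the extracted sets would feed the learning model as features.

```python
import ast

# Extract library references (imports) and function calls from Python
# source, as illustrative features for a vulnerability predictor.

SOURCE = """
import os
from subprocess import call
call(["ls"])
os.system("echo hi")
"""

def extract_features(source):
    tree = ast.parse(source)
    imports, calls = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports.update(a.name for a in node.names)
        elif isinstance(node, ast.ImportFrom):
            imports.add(node.module)
        elif isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Name):
                calls.add(f.id)
            elif isinstance(f, ast.Attribute):
                calls.add(f.attr)
    return imports, calls

imports, calls = extract_features(SOURCE)
```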
Predictive Security Metrics - Across the globe, renewable generation integration has been increasing in the last decades to meet ever-increasing power demand and emission targets. Wind power has dominated among the various renewable sources due to its widespread availability and advanced low-cost technologies. However, the stochastic nature of wind power results in power system reliability and security issues, because the uncertain variability of wind power challenges various system operations such as unit commitment and economic dispatch. Such problems can be addressed to a great extent by accurate wind power forecasts. This has attracted wide investigation into obtaining accurate power forecasts using various forecasting models such as time series, machine learning, probabilistic, and hybrid models. These models use different types of inputs, produce forecasts over different time horizons, and have different applications. Different investigations also report forecasting performance using different performance metrics. Limited classification reviews are available for these areas, and a detailed classification will help researchers and system operators develop new, accurate forecasting models. Therefore, this paper presents a detailed review of those areas. It concludes that, even though numerous investigations are available, improving wind power forecasting accuracy is an ever-existing research problem. It also concludes that indicating forecasting performance in financial terms, such as deviation charges, can represent the economic impact of forecasting accuracy improvements.
Authored by Sandhya Kumari, Arjun Rathi, Ayush Chauhan, Nigarish Khan, Sreenu Sreekumar, Sonika Singh
Predictive Security Metrics - A computer operating system vulnerability is a weakness that a threat source might exploit, or use to create a hole, in an information system, system security procedures, internal controls, or implementation. Since information security is a problem for everyone, predicting vulnerabilities is crucial. The typical method of vulnerability prediction involves manually identifying traits that might be related to unsafe code. The Common Vulnerability Scoring System (CVSS) is an open framework for describing the characteristics and severity of software vulnerabilities; Base, Temporal, and Environmental are its three metric categories. In this research, neural networks are utilized to build a predictive model of Windows 10 vulnerabilities using the published vulnerability data in the National Vulnerability Database. Different variants of neural networks that implement backpropagation are trained on operating system vulnerability scores ranging from 0 to 10. Additionally, the research identifies the influential factors using Loess variable importance in neural networks, which shows that access complexity and polarity are only marginally important for predicting operating system vulnerabilities, while confidentiality impact, integrity impact, and availability impact are highly important.
Authored by Freeh Alenezi, Tahir Mehmood
Predictive Security Metrics - This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: they do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics can also be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which may result from the usage of different or non-optimized ML algorithms and hyperparameters, differing hardware resources, or varying numbers of challenge-response pairs in the training phase.
Authored by Fynn Kappelhoff, Rasmus Rasche, Debdeep Mukhopadhyay, Ulrich Rührmair
Identifying Evolution of Software Metrics by Analyzing Vulnerability History in Open Source Projects
Predictive Security Metrics - Software developers mostly focus on functioning code while developing their software, paying little attention to software security issues. Nowadays, security is given priority not only during the development phase but also during the other phases of the software development life cycle (from requirement specification through the maintenance phase). To that end, research has expanded toward dealing with security issues in various phases. Current research has mostly focused on developing different prediction models, and most of them are based on software metrics. The metrics-based models showed high precision but poor recall in prediction. Moreover, they did not separately analyze the role of individual software metrics in the occurrence of vulnerabilities. In this paper, we track the evolution of metrics within the life cycle of a vulnerability, from the version where it was introduced, through the last affected version, to the version where it was fixed. In particular, we studied a total of 250 files from three major releases of Apache Tomcat (8, 9, and 10). We found that four metrics (AvgCyclomatic, AvgCyclomaticStrict, CountDeclMethod, and CountLineCodeExe) show significant changes over the vulnerability history of Tomcat. In addition, we discovered that the Tomcat team prioritizes fixing threatening vulnerabilities, such as Denial of Service, over less severe ones. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on appropriate software metrics. They will also help assess developers' mindsets about fixing different types of vulnerabilities in open source projects.
Authored by Erik Maza, Kazi Sultana
Predictive Security Metrics - Security metrics for software products give a quantifiable assessment of a software system's trustworthiness. Metrics can also help detect vulnerabilities in systems, prioritize corrective actions, and raise the level of information security within the business. There is a lack of studies that identify the measurements, metrics, and internal design properties used to assess software security. Therefore, this paper surveys security measurements used to assess and predict security vulnerabilities. We identified the internal design properties that have been used to measure software security based on the internal structure of the software, as well as the security metrics used in the studies we examined. We also discussed how software refactoring has been used to improve software security. We observed that a software system with low coupling, low complexity, and high cohesion is more secure, and vice versa. Current research directions are identified and discussed.
Authored by Abdullah Almogahed, Mazni Omar, Nur Zakaria, Abdulwadood Alawadhi
Outsourced Database Security - The growing power of cloud computing prompts data owners to outsource their databases to the cloud. To meet the demand for multi-dimensional data processing in the big data era, multi-dimensional range queries, especially over cloud platforms, have received extensive attention in recent years. However, since third-party clouds are not fully trusted, data owners commonly encrypt sensitive data before outsourcing, which has promoted research on encrypted data retrieval. Nevertheless, most existing works suffer from single-dimensional privacy leakage, which severely puts the data at risk. Although a few solutions have been proposed to handle the problem of single-dimensional privacy, they are unsuitable in some practical scenarios due to inefficiency, inaccuracy, and lack of support for diverse data. Aiming at these issues, this paper focuses on secure range queries over encrypted data. We first propose an efficient and private range query scheme for encrypted data based on homomorphic encryption, which can effectively protect data privacy. By using the dual-server model as the framework of the system, we not only achieve multi-dimensional privacy-preserving range queries but also innovatively realize similarity search based on MinHash over ciphertext domains. We then perform a formal security analysis and evaluate our scheme on real datasets. The results show that our proposed scheme is efficient and privacy-preserving. Moreover, we apply our scheme to a shopping website; the low latency demonstrates that our proposed scheme is practical.
Authored by Wentao Wang, Yuxuan Jin, Bin Cao
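The MinHash primitive behind the similarity search above estimates the Jaccard similarity of two sets from compact signatures. A plaintext sketch follows (the paper applies the idea over ciphertext domains, which this does not attempt); the item sets and hash construction are illustrative.

```python
import hashlib

# Plaintext MinHash: agreement between two signatures estimates the
# Jaccard similarity of the underlying sets. Set contents are made up.

def minhash_signature(items, num_hashes=128):
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16)
            for x in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"laptop", "mouse", "keyboard", "monitor"}
b = {"laptop", "mouse", "keyboard", "webcam"}
# True Jaccard similarity is 3/5 = 0.6; the estimate clusters around it.
sim = estimated_jaccard(minhash_signature(a), minhash_signature(b))
```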
Personalized Outsourced Privacy-preserving Database Updates for Crowd-sensed Dynamic Spectrum Access
Outsourced Database Security - The Dynamic Spectrum Access (DSA) paradigm, enabled through Cognitive Radio (CR) appliances, is extremely well suited to solve the spectrum shortage problem. Crowd-sensing has been effectively used for dynamic spectrum access sensing by leveraging the power of the masses. Specifically, in the DSA context, crowd-sensing allows end users to query a DSA database which is updated by crowd-sensing workers. Despite recent research proposals that address the privacy and confidentiality concerns of the querying user and the crowd-sensing workers, personalized privacy-preserving database updates through crowd-sensing workers remain an open problem. To this end, we propose a personalized privacy-preserving database update scheme for the crowd-sensing model based on lightweight homomorphic encryption. We provide substantial experiments based on real-life mobility data sets which show that the proposed protocol provides realistic efficiency and security.
Authored by Laura Truong, Erald Troja, Nikhil Yadav, Syed Bukhari, Mehrdad Aliasgari
Outsourced Database Security - With the rapid development of information technology, electronic information systems have become increasingly popular in medical institutions. To protect the confidentiality of private EHRs, attribute-based encryption (ABE) schemes, which provide one-to-many encryption, are often used as a solution. At the same time, blockchain technology makes it possible to build distributed databases without relying on trusted third-party institutions. This paper proposes a secure and efficient attribute-based encryption scheme with outsourced decryption based on blockchain, which realizes flexible and fine-grained access control and further improves the security of blockchain data sharing.
Authored by Fugeng Zeng, Qigang Deng, Dongxiu Wang
Outsourced Database Security - Efficient sequencing methods produce a large amount of genetic data and make it accessible to researchers, which has led genomics to be considered a legitimate big data field. Hence, outsourcing data to the cloud is necessary, as genomic datasets are large. Data owners encrypt sensitive data before outsourcing to maintain data confidentiality, and outsourcing relieves them of local storage management. Because genomic data is so enormous, safely and effectively answering researchers' queries is challenging. In this paper, we propose a method, PRESSGenDB, for securely performing string and substring searches on encrypted genomic sequence datasets. We leverage searchable symmetric encryption (SSE) and design a new method to handle these queries. In comparison to the state-of-the-art methods, PRESSGenDB supports various types of queries over genomic sequences, such as string searches and substring searches with and without a given start position. Moreover, it supports strings over alphabets rather than just binary sequences of 0s and 1s, and it can search for substrings (patterns) over a whole dataset of genomic sequences rather than just one sequence. Furthermore, by comparing PRESSGenDB's search complexity analytically with the state-of-the-art, we show that it outperforms recent efficient works.
Authored by Sara Jafarbeiki, Amin Sakzad, Shabnam Kermanshahi, Ron Steinfeld, Raj Gaire
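In the SSE spirit described above, a data owner can index every k-mer of each sequence under a keyed token so the server matches queried substrings without seeing plaintext. The toy below is illustrative only: it is not PRESSGenDB's construction, omits the protections a real SSE scheme needs, and its k-mer conjunction can admit false positives.

```python
import hashlib
import hmac

# Toy keyed-token index for substring search over genomic sequences.
# Not a real SSE scheme: leakage and false positives are ignored.

KEY = b"owner-secret-key"  # hypothetical owner key

def token(kmer):
    return hmac.new(KEY, kmer.encode(), hashlib.sha256).hexdigest()

def build_index(sequences, k=3):
    index = {}
    for seq_id, seq in sequences.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(token(seq[i:i + k]), set()).add(seq_id)
    return index

def search(index, substring, k=3):
    """Return ids of sequences containing every k-mer of the substring."""
    ids = None
    for i in range(len(substring) - k + 1):
        hits = index.get(token(substring[i:i + k]), set())
        ids = hits if ids is None else ids & hits
    return ids or set()

index = build_index({"s1": "ACGTAC", "s2": "TTACGG"})
result = search(index, "ACGT")
```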
Outsourced Database Security - Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we discover that the community currently only focuses on preventing malicious behavior from the server and ignores incorrect updates from the client, which are very likely to happen since there is no record on the client to check against. Indeed, most existing VDSSE schemes cannot tolerate incorrect updates from the client; for instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness.
Authored by Dandan Yuan, Shujie Cui, Giovanni Russello
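The failure mode above can be illustrated with a toy counter-based index: with no client-side record to check against, deleting a pair that was never added drives the keyword's live-entry count negative, after which any verification built on those counts is wrong. This models only the bookkeeping, in plaintext, not any particular VDSSE scheme's encryption.

```python
# Toy counter-based index illustrating how an unchecked client delete
# corrupts state in a (V)DSSE-style design. Plaintext, bookkeeping only.

class ToyIndex:
    def __init__(self):
        self.counts = {}          # keyword -> number of live entries

    def add(self, keyword):
        self.counts[keyword] = self.counts.get(keyword, 0) + 1

    def delete(self, keyword):
        # No check that the entry exists: the flaw under discussion.
        self.counts[keyword] = self.counts.get(keyword, 0) - 1

idx = ToyIndex()
idx.add("gene")
idx.delete("protein")            # deletes a pair that was never added
# counts["protein"] is now negative, so any proof or result-size check
# derived from it will reject honest server responses.
```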
Outsourced Database Security - Applications today rely on cloud databases for storing and querying time-series data. While outsourcing storage is convenient, this data is often sensitive, making data breaches a serious concern. We present Waldo, a time-series database with rich functionality and strong security guarantees: Waldo supports multi-predicate filtering, protects data contents as well as query filter values and search access patterns, and provides malicious security in the 3-party honest-majority setting. In contrast, prior systems such as Timecrypt and Zeph have limited functionality and security: (1) these systems can only filter on time, and (2) they reveal the queried time interval to the server. Oblivious RAM (ORAM) and generic multiparty computation (MPC) are natural choices for eliminating leakage from prior work, but both of these are prohibitively expensive in our setting due to the number of roundtrips and bandwidth overhead, respectively. To minimize both, Waldo builds on top of function secret sharing, enabling Waldo to evaluate predicates non-interactively. We develop new techniques for applying function secret sharing to the encrypted database setting where there are malicious servers, secret inputs, and chained predicates. With 32-core machines, Waldo runs a query with 8 range predicates over 2^18 records in 3.03s, compared to 12.88s for an MPC baseline and 16.56s for an ORAM baseline. Compared to Waldo, the MPC baseline uses 9-82x more bandwidth between servers (for different numbers of records), while the ORAM baseline uses 20-152x more bandwidth between the client and server(s) (for different numbers of predicates).
Authored by Emma Dauterman, Mayank Rathee, Raluca Popa, Ion Stoica
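A toy sketch of the secret-sharing substrate such 3-party systems build on: the client additively shares each record among three servers so no single server learns a value, yet per-server sums still reconstruct an aggregate without revealing individual records. This shows plain additive sharing only, not function secret sharing or Waldo's malicious-security machinery.

```python
import secrets

# Additive secret sharing among three servers: shares of a value sum to
# the value modulo MOD, and sums of shares reconstruct sums of values.

MOD = 2 ** 32

def share(value, parties=3):
    """Split value into `parties` additive shares modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

shares = share(1234)

records = [10, 20, 30]
all_shares = [share(r) for r in records]
# Each server locally sums the shares it holds; reconstructing the
# per-server sums yields the aggregate without revealing any record.
server_sums = [sum(col) % MOD for col in zip(*all_shares)]
total = reconstruct(server_sums)
```

Function secret sharing extends this idea so the servers can also evaluate a secret predicate (e.g. a range filter) on shared data non-interactively, which is what lets Waldo avoid ORAM's roundtrips and MPC's bandwidth.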