Privacy Policies and Measurement - Email is one of the oldest and most popular applications on today’s Internet and is used for business and private communication. However, most emails are still susceptible to being intercepted or even manipulated by the servers transmitting the messages. Users with S/MIME certificates can protect their email messages. In this paper, we investigate the market for S/MIME certificates and analyse the impact of the ordering and revocation processes on the users’ privacy. We complete those processes for each vendor and investigate the number of requests, the size of the data transfer, and the number of trackers on the vendor’s Web site. We further collect all relevant documents, including privacy policies, and report on their number of words, readability, and quality. Our results show that users must make at least 86 HTTP requests and transfer at least 1.35 MB to obtain a certificate, and 178 requests and 2.03 MB to revoke a certificate. All but one vendor employ third-party tracking during these processes, which causes between 43 and 354 third-party requests. Our results further show that the vendors’ privacy policies are at least 1701 words long, which takes a user approximately 7 minutes to read; the longest policy takes approximately half an hour. Measurements of the readability of all vendors’ privacy policies indicate that users need a level of education nearly equivalent to a bachelor’s degree to comprehend the texts. We also report on the quality of the policies and find that the vendors achieve compliance scores between 45\% and 90\%. With our method, vendors can measure their impact on the users’ privacy and create better products. Users, in turn, benefit from an analysis of all S/MIME certificate vendors in that they can make an informed choice of vendor based on the objective metrics obtained by our study.
Ultimately, the results help to increase the prevalence of encrypted emails and render society less susceptible to surveillance.
Authored by Tobias Mueller, Max Hartenstein
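The reading-time and readability measurements described above can be approximated with standard formulas. A minimal sketch, assuming an average silent-reading speed of about 238 words per minute and the Flesch-Kincaid grade-level formula with a crude vowel-group syllable heuristic (common conventions, not necessarily the paper's exact method):

```python
import re

def reading_time_minutes(text, wpm=238):
    # Assumes an average adult silent-reading speed of ~238 words/minute
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / wpm

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups, minimum 1
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

policy = ("We collect personal information to provide services. "
          "Data may be shared with carefully selected partners.")
print(reading_time_minutes(policy))
print(flesch_kincaid_grade(policy))
```

At 238 words per minute, a 1701-word policy works out to roughly 7 minutes, matching the figure above.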
Privacy Policies and Measurement - With increased reliance on digital storage for personal, financial, medical, and policy information comes a greater demand for robust digital authentication and cybersecurity protection measures. Current security options include alphanumeric passwords, two-factor authentication, and biometric options such as fingerprint or facial recognition. However, none of these methods is without drawbacks. This project leverages the fact that physical handwritten signatures are still prevalent in society and that the thoroughly trained process and motions of a handwritten signature are unique to every individual. Thus, a writing stylus that can authenticate its user via inertial signature detection is proposed, which classifies inertial measurement features for user identification. The current prototype consists of two triaxial accelerometers, one mounted at each of the stylus’ terminal ends. Features extracted from how the pen is held, stroke styles, and writing speed affect the stylus tip accelerations, which enables unique signature detection and deters forgery attacks. Novel, manual spatiotemporal features relating to such metrics were proposed, and a multi-layer perceptron was utilized for binary classification. Results of a preliminary user study are promising, with overall accuracy of 95.7\%, sensitivity of 100\%, and recall rate of 90\%.
Authored by Divas Subedi, Isabella Yung, Digesh Chitrakar, Kevin Huang
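As a rough illustration of the kind of spatiotemporal features such a stylus might extract from one of its triaxial accelerometers, here is a sketch; the feature names and the jerk approximation are illustrative assumptions, not the paper's actual feature set:

```python
import math

def features(samples, dt=0.01):
    """Extract simple spatiotemporal features from one triaxial
    accelerometer stream; `samples` is a list of (ax, ay, az) tuples
    sampled every `dt` seconds. Illustrative only."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    # Mean absolute change in acceleration magnitude per second,
    # a crude proxy for writing-speed variation
    jerk = sum(abs(mags[i + 1] - mags[i]) for i in range(n - 1)) / ((n - 1) * dt)
    return {"mean": mean, "std": math.sqrt(var), "peak": max(mags), "jerk": jerk}
```

Concatenating one such feature vector per accelerometer would yield the input for a binary classifier such as the multi-layer perceptron mentioned above.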
Privacy Policies and Measurement - Although the number of smart Internet of Things (IoT) devices has grown in recent years, the public’s perception of how effectively these devices secure IoT data has been questioned. Many IoT users do not have a good level of confidence in the security or privacy procedures implemented within IoT smart devices for protecting personal IoT data. Moreover, determining the level of confidence end users have in their smart devices is becoming a major challenge. In this paper, we present a study that focuses on identifying privacy concerns IoT end users have when using IoT smart devices. We investigated multiple smart devices and conducted a survey to identify users’ privacy concerns. Furthermore, we identify five IoT privacy-preserving (IoTPP) control policies that we define and employ in comparing the privacy measures implemented by various popular smart devices. Results from our study show that over 86\% of participants are very or extremely concerned about the security and privacy of their personal data when using smart IoT devices such as Google Nest Hub or Amazon Alexa. In addition, our study shows that a significant number of IoT users may not be aware that their personal data is collected, stored, or shared by IoT devices.
Authored by Daniel Joy, Olivera Kotevska, Eyhab Al-Masri
Privacy Policies and Measurement - We report on the ideas and experiences of adapting Brane, a workflow execution framework, for use cases involving medical data exchange and processing. These use cases impose new requirements on the system to enforce policies encoding safety properties, ranging from access control to legal regulations pertaining to data privacy. Our approach emphasizes users’ control over the extent to which they cooperate in distributed execution, at the cost of revealing information about their policies.
Authored by Christopher Esterhuyse, Tim Muller, Thomas Van Binsbergen, Adam Belloum
Privacy Policies and Measurement - It is estimated that over 1 billion Closed-Circuit Television (CCTV) cameras are operational worldwide. The main advertised benefits of CCTV cameras have always been the same: physical security, safety, and crime deterrence. The current scale and rate of deployment of CCTV cameras bring additional research and technical challenges for CCTV forensics as well as for privacy enhancements.
Authored by Hannu Turtiainen, Andrei Costin, Timo Hämäläinen, Tuomo Lahtinen, Lauri Sintonen
Privacy Policies and Measurement - Modelling and analyzing massive policy discourse networks is of great importance in critical policy studies and has recently attracted increasing research interest. Yet effective representation schemes, quantitative policymaking metrics, and proper analysis methods remain largely unexplored. To address the above challenges, we propose a government policy network built on Latent Dirichlet Allocation embeddings, which models multiple entity types and the complex relationships between them. Specifically, we constructed the government policy network from approximately 700 rural innovation and entrepreneurship policies released by the Chinese central government and eight provincial governments over the past eight years. We verified that the entity degree in the policy network follows a power-law distribution. Moreover, we propose a metric, coordination strength, to evaluate the coordination between central and local departments, and we find that this metric effectively reflects their coordination relationship. This study can serve as a theoretical basis for applications such as policy discourse relationship prediction and departmental collaborative analysis.
Authored by Yilin Kang, Renwei Ou
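The power-law claim above can be checked with a simple diagnostic: regress log frequency against log degree and read off the slope. A minimal sketch (a rough least-squares fit on binned counts, not the rigorous maximum-likelihood estimation usually recommended for power laws):

```python
import math
from collections import Counter

def powerlaw_exponent(degrees):
    """Estimate alpha in p(k) ~ k^(-alpha) from a list of node degrees
    by least-squares on log-log frequency counts. A quick diagnostic,
    not a statistically rigorous fit."""
    counts = Counter(d for d in degrees if d > 0)
    xs = [math.log(d) for d in counts]
    ys = [math.log(counts[d]) for d in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

For example, degree data with frequencies falling off roughly as k^(-2) should yield an exponent near 2.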
Privacy Policies and Measurement - First introduced as a way of recording client-side state, third-party cookies have proliferated widely on the Web, and the vendors behind them have become a fundamental part of the Web ecosystem. However, there is widespread concern that these mechanisms are being abused to track and profile individuals online for commercial, analytical, and various other purposes. This paper builds a platform called “PRIVIS”, which aims to provide unique insights into how the privacy ecosystem is structured and affected, through the analysis of data that stems from real users. First, to showcase what can be learned from this ecosystem through a data-driven analysis across country, time, and first-party categories, PRIVIS visualises data gathered from over 10K Chrome installations. It also equips participants with the means to collect and analyze their own data so that they can assess, from their own perspective, how their browsing habits are shared with third parties. Based on real-user datasets, third-party quantity is not the only measure of web privacy risk; the measure proposed in this paper is how well third-party providers know their users. Second, PRIVIS studies the interplay between user location, special periods (after the epidemic outbreak), and the overall number of third parties observed. The visualisation suggests that lockdown policies facilitate growth in the number of third parties: collectively, there is more third-party activity than both before the lockdowns and during the corresponding periods of the previous year, and among the lockdown stages the first lockdown shows the most aggressive growth.
Authored by Xuehui Hu
Privacy Policies - Authentication, authorization, and trust verification are central parts of an access control system. The conditions for granting access in such a system are collected in access policies. Since access conditions are often complex, dedicated languages – policy languages – for defining policies are in use.
Authored by Stefan More, Sebastian Ramacher, Lukas Alber, Marco Herzl
Privacy Policies - Companies and organizations inform users of how they handle personal data through privacy policies on their websites. Particular information, such as the purposes of collecting personal data and what data are provided to third parties, is required to be disclosed by laws and regulations; an example of such a law is the Act on the Protection of Personal Information in Japan. In addition to privacy policies, an increasing number of companies are publishing security policies to express the compliance and transparency of corporate behavior. However, it is challenging to keep these policies up to date with legal requirements due to periodic law revisions and rapid business changes. In this study, we developed a method for analyzing privacy policies to check whether companies comply with legal requirements. In particular, the proposed method classifies policy contents using a convolutional neural network and evaluates privacy compliance by comparing the classification results with legal requirements. In addition, we analyzed security policies using the proposed method to confirm whether the combination of privacy and security policies contributes to privacy compliance. In this study, we collected and evaluated 1,304 privacy policies and 140 security policies of Japanese companies. The results revealed that over 90\% of privacy policies sufficiently describe the handling of personal information by first parties, user rights, and security measures, while over 90\% insufficiently describe data retention and the specific audience. These differences in the number of descriptions depend on industry guidelines and business characteristics. Moreover, security policies were found to improve the compliance rates of 46 out of 140 companies by describing security practices not included in privacy policies.
Authored by Keika Mori, Tatsuya Nagai, Yuta Takata, Masaki Kamizono
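The compliance evaluation step described above, comparing classifier output with legal requirements, reduces to simple set coverage. A sketch with illustrative category names (the actual requirement list derived from the Japanese regulation is not reproduced here):

```python
# Required disclosure categories: illustrative placeholders, not the
# actual list derived from the Act on the Protection of Personal Information
REQUIRED = {
    "first_party_collection",
    "third_party_sharing",
    "data_retention",
    "user_rights",
    "security_measures",
}

def compliance_rate(predicted_categories):
    """Fraction of required categories for which the policy classifier
    found at least one matching passage."""
    covered = REQUIRED & set(predicted_categories)
    return len(covered) / len(REQUIRED)
```

A policy whose classified paragraphs cover four of the five required categories would score 0.8 under this scheme.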
Privacy Policies - A privacy policy is a legal document in which users are informed about the data practices of an organization. Past research indicates that privacy policies are long, include incomplete information, and are hard to read; research also shows that users are not inclined to read these long and verbose policies. The solution that we propose in this paper is to build tools that can assist users in finding content in privacy policies relevant to their queries using a semantic approach. This paper presents the development of a domain ontology for privacy policies so that the sentences relevant to a privacy question can be automatically identified. For this study, we built an ontology and validated and evaluated it using qualitative and quantitative methods, including competency questions, data-driven evaluation, and user evaluation. The evaluation showed that the amount of text to read was significantly reduced, as users had to read only selected text ranging from 1\% to 30\% of a privacy policy; the amount of content selected for reading depended on the query and its associated keywords. This finding shows that the time required to read a policy was significantly reduced, as the ontology directed the user to the content related to a given query. This was also confirmed by the user study session, whose results indicated that users found the ontology helpful in finding relevant sentences compared to reading the entire policy.
Authored by Jasmin Kaur, Rozita Dara, Ritu Chaturvedi
Privacy Policies - In the era of the Internet of Things (IoT), smart logistics is quietly rising, but user privacy and security have become important factors hindering its development. Because a privacy policy plays a positive role in protecting user privacy and improving corporate reputation, it has become an important part of smart logistics and a focus of express companies. In this paper, through the construction of a privacy policy evaluation index system for express companies, and aiming at qualitative indicators that are difficult to evaluate, we introduce a cloud model evaluation method that combines qualitative and quantitative analysis, and we comprehensively evaluate the privacy policies of five express companies in China on four indicators: general situation, user informed consent, information security control, and personal rights protection. The results show that, overall, the privacy policies of the five express companies have not reached the "good" level, and there is a certain gap between the privacy policies of different companies. Comparing indicators, the five express companies generally score relatively well; however, the overall score on the information security control index is relatively poor, and the other two indexes vary considerably. The cloud model evaluation method has strong applicability for evaluating express company privacy policies, providing a reference for improving privacy policy formulation and raising the privacy protection level of China’s express delivery industry in the IoT era.
Authored by Qian Zhang, Weihong Xie, Xinxian Pan
Privacy Policies - Data privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) provide guidelines for collecting personal information from individuals and processing it. These frameworks also require service providers to inform their customers about how clients’ data is gathered, used, protected, and shared with other parties. A privacy policy is a legal document used by service providers to inform users about how their personal information is collected, stored, and shared, and privacy policies are expected to adhere to data privacy regulations. However, it has been observed that some policies may deviate from the practices recommended by data protection regulations. Detecting instances where a policy may violate a certain regulation is quite challenging because privacy policy text is long and complex and there are numerous regulations. To address this problem, we have designed an approach to automatically detect whether a policy violates the articles of the GDPR. This paper demonstrates how we use Natural Language Inference (NLI) tasks to compare privacy policy content against the GDPR and detect policy text in violation of it. We provide two designs, using the Stanford Natural Language Inference (SNLI) and the Multi-Genre Natural Language Inference (MultiNLI) datasets. The results from both designs are promising, as our approach detected the deviations with 76\% accuracy.
Authored by Abdullah Alshamsan, Shafique Chaudhry
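The NLI framing above can be sketched as pairing each policy segment (premise) with each GDPR article (hypothesis) and flagging pairs scored as contradiction. The `nli_model` callable below is a stand-in for a classifier trained on SNLI or MultiNLI; its interface, the label set, and the threshold are assumptions for illustration:

```python
def detect_violations(policy_segments, gdpr_articles, nli_model, threshold=0.5):
    """Flag (segment, article) pairs the NLI model scores as contradiction.
    `nli_model(premise, hypothesis)` is assumed to return a dict of
    probabilities keyed by "entailment", "neutral", and "contradiction"."""
    flags = []
    for segment in policy_segments:
        for article in gdpr_articles:
            probs = nli_model(segment, article)
            if probs["contradiction"] >= threshold:
                flags.append((segment, article, probs["contradiction"]))
    return flags
```

In practice the stand-in would be replaced by a fine-tuned NLI model; the loop structure and thresholding stay the same.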
Privacy Policies - Privacy policies, despite the important information they provide about the collection and use of one’s data, tend to be skipped over by most Internet users. In this paper, we seek to make privacy policies more accessible by automatically classifying text samples into web privacy categories. We use natural language processing techniques and multiple machine learning models to determine the effectiveness of each method on the classification task. We also explore the effectiveness of these methods in classifying the privacy policies of Internet of Things (IoT) devices.
Authored by Jasmine Carson, Lisa DiSalvo, Lydia Ray
Privacy Policies - Smart contracts running on a blockchain potentially disclose all of their data to the participants of the chain. Therefore, because privacy is important in many areas, smart contracts may not be considered a good option. To overcome this limitation, this paper introduces Stone, a privacy preservation system for smart contracts. With Stone, an arbitrary Solidity smart contract can be combined with a separate privacy policy in JSON, which prevents the contract’s storage data from being publicised. Because this approach is convenient for policy developers as well as smart contract programmers, we envision that it will be practically acceptable for real-world applications.
Authored by Jihyeon Kim, Dahyeon Jeong, Jisoo Kim, Eun-Sun Cho
Privacy Policies - The motive behind this research paper is to outline recently introduced social media encryption policies and the impact that they will have on user privacy. With close to no data protection laws in the country, all social media platforms pose a threat to one’s privacy. The various new privacy policies that have been put in place across different social media platforms tend to take away the user’s choice of whether they want their data shared with other social media apps or not. Since WhatsApp, Facebook, and Instagram are all Facebook-owned, any data shared on one platform crosses over into the database of another, regardless of whether you have an account there, completely undermining the concept of consensual data sharing. This paper further discusses how the nature of encryption in India will significantly affect India’s newly recognised fundamental right, the Right to Privacy. Various policy developments bring various user-violation concerns, and those are the focus of this research paper.
Authored by Akshit Talwar, Alka Chaudhary, Anil Kumar
Privacy Policies - Privacy policies inform users of the data practices and access protocols employed by organizations and their digital counterparts. Research has shown that users often find these privacy policies lengthy and complex to read and comprehend. However, it is critical for people to be aware of the data access practices employed by organizations. Hence, much research has focused on automatically extracting privacy-specific artifacts from the policies, predominantly by using natural language classification tools. However, these classification tools are designed primarily for the classification of paragraphs or segments of the policies. In this paper, we report on our research in which we identify the gap in classifying policies at the segment level and provide an alternate definition of segment classification using sentence classification. To this end, we train and evaluate sentence classifiers for privacy policies using BERT and XLNet. Our approach improves the prediction quality of existing models and hence surpasses the current baselines for classification models, without requiring additional parameter and model tuning. Using our sentence classifiers, we also study topical structures in the Alexa top 5000 website policies, in order to identify and quantify the diffusion of information pertaining to privacy-specific topics in a policy.
Authored by Andrick Adhikari, Sanchari Das, Rinku Dewri
Privacy Policies - Privacy policy statements are an essential approach to self-regulation by website operators in the area of personal privacy protection. However, these policies are often lengthy and difficult to understand, and users appear to actually read the privacy policy in only a few cases. To address these obstacles, we propose a framework, the Privacy Policy Analysis Framework for Automatic Annotation and User Interaction (PPAI), that stores, classifies, and categorizes queries on natural language privacy policies. At the core of PPAI is a privacy-centric language model that consists of a smaller fine-grained dataset of privacy policies and a new hierarchy of neural network classifiers that take into account both high-level aspects and fine-grained details of privacy practices. Our experimental results show that the eight readability metrics of the dataset exhibit a strong correlation. Furthermore, PPAI’s neural network classifier achieves an accuracy of 0.78 on the multi-classification task. The robustness experiments reached higher accuracy than the baseline and remained robust even with a small amount of labeled data.
Authored by Han Ding, Shaohong Zhang, Lin Zhou, Peng Yang
Predictive Security Metrics - Most IoT systems involve IoT devices, communication protocols, remote clouds, IoT applications, mobile apps, and the physical environment. However, existing IoT security analyses only focus on a subset of all the essential components, such as device firmware or communication protocols, and ignore IoT systems’ interactive nature, resulting in limited attack detection capabilities. In this work, we propose IOTA, a logic programming-based framework to perform system-level security analysis for IoT systems. IOTA generates attack graphs for IoT systems, showing all of the system resources that can be compromised and enumerating potential attack traces. In building IOTA, we design novel techniques to scan IoT systems for individual vulnerabilities and further create generic exploit models for IoT vulnerabilities. We also identify and model physical dependencies between different devices, as they are unique to IoT systems and are employed by adversaries to launch complicated attacks. In addition, we utilize NLP techniques to extract IoT app semantics based on app descriptions. IOTA automatically translates vulnerabilities, exploits, and device dependencies into Prolog clauses and invokes MulVAL to construct attack graphs. To evaluate vulnerabilities’ system-wide impact, we propose two metrics based on the attack graph, which provide guidance on fortifying IoT systems. Evaluation on 127 IoT CVEs (Common Vulnerabilities and Exposures) shows that IOTA’s exploit modeling module achieves over 80\% accuracy in predicting vulnerabilities’ preconditions and effects. We apply IOTA to 37 synthetic smart home IoT systems based on real-world IoT apps and devices. Experimental results show that our framework is effective and highly efficient: among 27 shortest attack traces revealed by the attack graphs, 62.8\% are not anticipated by the system administrator.
It only takes 1.2 seconds to generate and analyze the attack graph for an IoT system consisting of 50 devices.
Authored by Zheng Fang, Hao Fu, Tianbo Gu, Pengfei Hu, Jinyue Song, Trent Jaeger, Prasant Mohapatra
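The attack-graph construction that IOTA delegates to MulVAL amounts to forward-chaining exploit rules over known facts until a fixed point is reached. A toy sketch of that idea in Python (fact and exploit names are invented for illustration; the real system emits Prolog clauses and lets MulVAL do the derivation):

```python
def derive_attack_graph(facts, exploits):
    """Forward-chain exploit rules over initial facts, in the spirit of
    MulVAL-style attack-graph generation. Each exploit is a pair
    (preconditions, effect); an effect is derived once all of its
    preconditions are known. Returns the derived fact set and the
    derivation edges (precondition tuple -> effect)."""
    known = set(facts)
    edges = []
    changed = True
    while changed:
        changed = False
        for preconditions, effect in exploits:
            if effect not in known and set(preconditions) <= known:
                known.add(effect)
                edges.append((tuple(sorted(preconditions)), effect))
                changed = True
    return known, edges
```

Chaining two hypothetical exploits, network access plus a camera CVE yielding code execution, and code execution yielding a data leak, reproduces the kind of multi-step attack trace the abstract describes.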
Predictive Security Metrics - With the emergence of the Zero Trust (ZT) architecture, industry leaders have been drawn to the technology because of its potential to handle a high level of security threats. The Zero Trust Architecture (ZTA) is paving the path for a security industrial revolution by eliminating location-based implicit access and focusing on asset, user, and resource security. Software Defined Perimeter (SDP) is a secure overlay network technology that can be used to implement a Zero Trust framework. SDP is a next-generation network technology that allows the network architecture to be hidden from the outside world. It also hides the overlay communication from the underlay network by employing encrypted communications. With encrypted information, detecting abnormal behavior of entities on an overlay network becomes exceedingly difficult; therefore, an automated system is required. In this paper, we propose a method for understanding the normal behavior of deployed policies by mapping network usage behavior to the policy. To perform this mapping, Apache Spark collects and processes the streaming overlay monitoring data generated by the built-in fabric API. It sends the extracted metrics to Prometheus for storage and then uses the data for machine learning training and prediction. The model predicts the cluster ID of the link that a flow belongs to, and the cluster IDs are mapped onto the policies. To validate the legitimacy of a policy, the labeled policy’s hash is compared to the actual policy hash obtained from the blockchain. Unverified policies are reported to the SDP controller for additional action, such as defining new policy behavior or marking uncertain policies.
Authored by Waleed Akbar, Javier Rivera, Khan Ahmed, Afaq Muhammad, Wang-Cheol Song
Predictive Security Metrics - Network security personnel are expected to provide uninterrupted services by handling attacks irrespective of the modus operandi. Multiple defensive approaches to prevent, curtail, or mitigate an attack are the primary responsibilities of security personnel. Considering that predicting security attacks is an additional technique currently used by most organizations to accurately measure the security risks related to overall system performance, several approaches have been used to predict network security attacks. However, achieving high prediction accuracy, analyzing very large amounts of data, and obtaining reliable datasets remain the major constraints. Uncertain behavior is subjected to verification and validation by the network administrator. The KDD CUP 99 dataset and the NSL-KDD dataset were both used in the research. NSL-KDD yields 0.997 average micro and macro accuracy, with an average log-loss of 0.16 and an average log-loss reduction of 0.976. Log-loss reduction ranges from negative infinity to 1, where 1 represents perfect prediction and 0 represents mean prediction; it should be as close to 1 as possible for a good model. Log-loss is an evaluation metric that characterizes the accuracy of a classifier: it measures the performance of a classifier whose prediction input is a probability value between 0.00 and 1.00, and it should be as close to zero as possible. This paper proposes a FastTree model for predicting network security incidents. The ML.NET framework and the FastTree regression technique provide high prediction accuracy and the ability to analyze large datasets of normal, abnormal, and uncertain behaviors.
Authored by Marcus Magaji, Abayomi Jegede, Nentawe Gurumdimma, Monday Onoja, Gilbert Aimufua, Ayodele Oloyede
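The log-loss and log-loss-reduction metrics quoted above are standard and easy to compute directly. A minimal sketch for the binary case, assuming the conventional baseline of always predicting the class prior (the "mean prediction" mentioned in the abstract):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary cross-entropy over predicted probabilities in [0, 1];
    probabilities are clamped to (eps, 1 - eps) to avoid log(0)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def log_loss_reduction(y_true, p_pred):
    """1 means perfect prediction; 0 means no better than predicting
    the class prior; negative values mean worse than the baseline."""
    prior = sum(y_true) / len(y_true)
    baseline = log_loss(y_true, [prior] * len(y_true))
    return 1 - log_loss(y_true, p_pred) / baseline
```

A model that always outputs the prior scores a reduction of 0, while confident correct predictions push the score toward 1, consistent with the 0.976 figure reported above.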
Predictive Security Metrics - Predicting vulnerabilities through source code analysis and using the predictions to guide software maintenance can effectively improve software security. One effective way to predict vulnerabilities is by analyzing the library references and function calls used in code. In this paper, we extract library references and function calls from project files through source code analysis and generate sample sets for statistical learning from these data. We then design and train an integrated learning model that can be used for prediction. The model achieves a high accuracy rate and accomplishes the prediction task well. It also demonstrates the correlation between vulnerabilities and library references and function calls.
Authored by Yiyi Liu, Minjie Zhu, Yilian Zhang, Yan Chen
Predictive Security Metrics - Across the globe, renewable generation integration has been increasing in recent decades to meet ever-increasing power demand and emission targets. Wind power has dominated among the various renewable sources due to its widespread availability and advanced low-cost technologies. However, the stochastic nature of wind power results in power system reliability and security issues, because the uncertain variability of wind power creates challenges for system operations such as unit commitment and economic dispatch. Such problems can be addressed to a great extent by accurate wind power forecasts. This has attracted wide investigation into obtaining accurate power forecasts using various forecasting models, such as time series, machine learning, probabilistic, and hybrid models. These models use different types of inputs, obtain forecasts over different time horizons, and have different applications; different investigations also report forecasting performance using different performance metrics. Limited classification reviews are available for these areas, and a detailed classification will help researchers and system operators develop new, accurate forecasting models. Therefore, this paper presents a detailed review of those areas. It concludes that, even though numerous investigations are available, improving wind power forecasting accuracy remains an ever-present research problem. It also concludes that expressing forecasting performance in financial terms, such as deviation charges, can represent the economic impact of forecasting accuracy improvements.
Authored by Sandhya Kumari, Arjun Rathi, Ayush Chauhan, Nigarish Khan, Sreenu Sreekumar, Sonika Singh
Predictive Security Metrics - A computer operating system vulnerability is a weakness that a threat source might exploit, or use to create a hole, in an information system, system security procedures, internal controls, or their implementation. Since information security is a problem for everyone, predicting vulnerabilities is crucial. The typical method of vulnerability prediction involves manually identifying traits that might be related to unsafe code. The Common Vulnerability Scoring System (CVSS) is an open framework for defining the characteristics and severity of software vulnerabilities; Base, Temporal, and Environmental are its three metric categories. In this research, neural networks are utilized to build a predictive model of Windows 10 vulnerabilities using the published vulnerability data in the National Vulnerability Database. Different variants of neural networks implementing back propagation are used to train on operating system vulnerability scores ranging from 0 to 10. Additionally, the research identifies the influential factors using Loess variable importance in neural networks, which shows that access complexity and polarity are only marginally important for predicting operating system vulnerabilities, while confidentiality impact, integrity impact, and availability impact are highly important.
Authored by Freeh Alenezi, Tahir Mehmood
Predictive Security Metrics - This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong PUFs. These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial PUF complexity analyses, and are much easier and more efficient to apply: they do not require detailed knowledge of various ML methods, substantial computation times, or the availability of an internal parametric model of the studied PUF. Our metrics can also be standardized particularly easily. This avoids the sometimes inconclusive or contradictory findings of existing ML-based security tests, which may result from the usage of different or non-optimized ML algorithms and hyperparameters, differing hardware resources, or varying numbers of challenge-response pairs in the training phase.
Authored by Fynn Kappelhoff, Rasmus Rasche, Debdeep Mukhopadhyay, Ulrich Rührmair
Predictive Security Metrics - Software developers mostly focus on functioning code while developing their software, paying little attention to software security issues. Nowadays, security is given priority not only during the development phase but also during the other phases of the software development life cycle (from requirement specification through the maintenance phase). To that end, research has expanded towards dealing with security issues in various phases. Current research has mostly focused on developing different prediction models, most of which are based on software metrics. The metrics-based models showed high precision but poor recall in prediction. Moreover, they did not separately analyze the role of each individual software metric in the occurrence of vulnerabilities. In this paper, we track the evolution of metrics within the life cycle of a vulnerability, starting from the version in which it was introduced, through the last affected version, to the fixed version. In particular, we studied a total of 250 files from three major releases of Apache Tomcat (8, 9, and 10). We found that four metrics, AvgCyclomatic, AvgCyclomaticStrict, CountDeclMethod, and CountLineCodeExe, show significant changes over the vulnerability history of Tomcat. In addition, we discovered that the Tomcat team prioritizes fixing threatening vulnerabilities, such as Denial of Service, over less severe vulnerabilities. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on the appropriate software metrics. They will also help assess developers’ mindset about fixing different types of vulnerabilities in open source projects.
Authored by Erik Maza, Kazi Sultana