Web Browser Security 2015
Web browsers are vulnerable to a range of threats. To the Science of Security community, they are often the first vector for attacks and are relevant to the issues of compositionality, resilience, predictive metrics, and human behavior. The work cited here was presented in 2015.
Panja, B.; Gennarelli, T.; Meharia, P., "Handling Cross Site Scripting Attacks Using Cache Check to Reduce Webpage Rendering Time with Elimination of Sanitization and Filtering in Light Weight Mobile Web Browser," in Mobile and Secure Services (MOBISECSERV), 2015 First Conference on, pp.1-7, 20-21 Feb. 2015. doi: 10.1109/MOBISECSERV.2015.7072878
Abstract: In this paper we propose a new approach to prevent and detect potential cross-site scripting attacks. Our method, called Buffer Based Cache Check, utilizes both the server side and the client side to detect and prevent XSS attacks, and requires modification of both in order to function correctly. With Cache Check, instead of the server supplying a complete whitelist of all the known trusted scripts to the mobile browser every time a page is requested, the server stores a cache that contains a validated “trusted” instance of the last time the page was rendered, which can be checked against the requested page for inconsistencies. We believe that with our proposed method rendering times in mobile browsers will be significantly reduced, as part of the checking is done by the server, leaving less checking to the mobile browser, which is slower than the server. With our method the entire checking process isn't dumped onto the mobile browser, so the mobile browser should be able to render pages faster: it only checks for “untrusted” content, whereas with other approaches every single line of code is checked by the mobile browser, which increases rendering times.
Keywords: cache storage; client-server systems; mobile computing; online front-ends; security of data; trusted computing; Web page rendering time; XSS attacks; buffer based cache check; client-side; cross-site scripting attacks; filtering; light weight mobile Web browser; sanitization; server-side; trusted instance; untrusted content; Browsers; Filtering; Mobile communication; Radio access networks; Rendering (computer graphics); Security; Servers; Cross site scripting; cache check; mobile browser; webpage rendering (ID#: 15-7951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072878&isnumber=7072857
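The entry above describes comparing a requested page against a cached “trusted” instance of its last rendering. A minimal sketch of that comparison, assuming hashed script fingerprints (the function and data names here are hypothetical, not taken from the paper):

```python
import hashlib

def script_fingerprints(scripts):
    """Hash each inline script so a page can be compared to a trusted cache."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in scripts}

def untrusted_scripts(cached_trusted, requested_scripts):
    """Return only the scripts absent from the server-side trusted cache.

    The browser then inspects just these, instead of re-checking every
    line of the page on each render."""
    trusted = script_fingerprints(cached_trusted)
    return [s for s in requested_scripts
            if hashlib.sha256(s.encode()).hexdigest() not in trusted]
```

Only the flagged scripts would need sanitization in the mobile browser, which is the source of the rendering speedup the abstract claims.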
Rajani, V.; Bichhawat, A.; Garg, D.; Hammer, C., "Information Flow Control for Event Handling and the DOM in Web Browsers," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp.366-379, 13-17 July 2015. doi: 10.1109/CSF.2015.32
Abstract: Web browsers routinely handle private information. Owing to a lax security model, browsers and JavaScript in particular, are easy targets for leaking sensitive data. Prior work has extensively studied information flow control (IFC) as a mechanism for securing browsers. However, two central aspects of web browsers - the Document Object Model (DOM) and the event handling mechanism - have so far evaded thorough scrutiny in the context of IFC. This paper advances the state-of-the-art in this regard. Based on standard specifications and the code of an actual browser engine, we build formal models of both the DOM (up to Level 3) and the event handling loop of a typical browser, enhance the models with fine-grained taints and checks for IFC, prove our enhancements sound and test our ideas through an instrumentation of WebKit, an in-production browser engine. In doing so, we observe several channels for information leak that arise due to subtleties of the event loop and its interaction with the DOM.
Keywords: Internet; Java; online front-ends; security of data; DOM; IFC; JavaScript; Web browsers; WebKit; browser engine; document object model; event handling; event handling mechanism; formal models; in-production browser engine; information flow control; lax security model; sensitive data leakage; Browsers; Context; Instruments; Lattices; Monitoring; Security; Standards (ID#: 15-7952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243745&isnumber=7243713
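The fine-grained taints the paper adds can be pictured as labels drawn from a small lattice. This sketch, assuming a two-point LOW/HIGH lattice (the names are mine, not the paper's), shows the join and flow check an IFC monitor applies at each DOM or event-handler step:

```python
# Security labels form a two-point lattice: LOW may flow to HIGH, not back.
LOW, HIGH = 0, 1

def join(*labels):
    """Least upper bound: combined data is as secret as its most secret input."""
    return max(labels)

def check_flow(source_label, sink_label):
    """An explicit flow is allowed only upward in the lattice."""
    return source_label <= sink_label

# A value built from a HIGH source (e.g. a secret cookie) stays HIGH,
# so writing it to a LOW sink (e.g. an attacker-visible URL) is rejected.
tainted_value = join(LOW, HIGH)
```

Real monitors like the paper's also track implicit flows through control dependencies and event dispatch, which this two-line check omits.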
Hale, M.L.; Hanson, S., "A Testbed and Process for Analyzing Attack Vectors and Vulnerabilities in Hybrid Mobile Apps Connected to Restful Web Services," in Services (SERVICES), 2015 IEEE World Congress on, pp. 181-188, June 27 2015-July 2 2015. doi: 10.1109/SERVICES.2015.35
Abstract: Web traffic is increasingly trending towards mobile devices, driving developers to tailor web content to small screens and customize web apps using mobile-only capabilities such as geo-location, accelerometers, offline storage, and camera features. Hybrid apps provide a cross-platform, device-independent means for developers to utilize these features. They work by wrapping web-based code, i.e., HTML5, CSS, and JavaScript, in thin native containers that expose device features. This design pattern encourages re-use of existing code, reduces development time, and leverages existing web development talent that doesn't depend on platform-specific languages. Despite these advantages, the newness of hybrid apps raises security challenges associated with integrating code designed for a web browser with features native to a mobile device. This paper explores these security concerns and defines three forms of attack that can specifically target and exploit hybrid apps connected to web services. Contributions of the paper include a high-level process for discovering hybrid app attacks and vulnerabilities, definitions of emerging hybrid attack vectors, and a test bed platform for analyzing vulnerabilities. As an evaluation, hybrid attacks are analyzed in the test bed, showing that it provides insight into vulnerabilities and helps assess risk.
Keywords: Web services; mobile computing; program testing; security of data; software engineering; RESTful Web service; Web development; attack vector analysis; hybrid mobile app; mobile device; test bed platform; vulnerability analysis; Accelerometers; Browsers; Cameras; Mobile applications; Mobile communication; Security; Smart phones; attack vectors; hybrid mobile application; thin native containers; vulnerabilities; web browser; web services (ID#: 15-7953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7196523&isnumber=7196486
Hazel, J.J.; Valarmathie, P.; Saravanan, R., "Guarding Web Application with Multi-Angled Attack Detection," in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, pp. 1-4, 25-27 Feb. 2015. doi: 10.1109/ICSNS.2015.7292382
Abstract: An important research issue in the design of web applications is protecting the front-end web application from unauthorized access. Normally the web application is the front end, the database is the back end, and both are accessible using a web browser. The database contains valuable information and is the target for attackers. There are many security issues in the back-end database, and many security measures are implemented in order to protect it. The problem is that the front-end application is set accessible to everyone, and attackers try to compromise the web front-end application, which in turn compromises the back-end database. Therefore, the challenge is to secure the front-end web application and thereby enhance the security of the back-end database. Currently a vulnerability scanner is used to secure the front-end web application. Even though many attacks remain possible, the most common and topmost attacks are “Remote file inclusion attack, Query string attack, Union attack, Cross site scripting attack”. The proposed system is based on a web application design that concentrates mainly on the detection and prevention of the above attacks. Initially, the system shows how these attacks happen in the front-end web application, and then how they are overcome using the proposed algorithms, namely the longest common subsequence algorithm and the brute-force string matching algorithm. Successfully overcoming these attacks enhances back-end security by implementing security in the web front end.
Keywords: Internet; authorisation; database management systems; online front-ends; query processing; Web application; Web browser; Web front end application; back end database; cross site scripting attack; multi-angled attack detection; query string attack; remote file inclusion attack; security issues; security measures; unauthorized access; union attack; Algorithm design and analysis; Browsers; Communication networks; Databases; Force; Reliability; Security; Cross site scripting attack; Query string attack; Remote file inclusion attack; Union attack (ID#: 15-7954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292382&isnumber=7292366
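The abstract names the longest common subsequence algorithm as one of its detection tools. A hedged sketch of how an LCS score against known attack signatures might flag a request (the threshold and signatures are illustrative, not the paper's):

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def looks_malicious(request, signatures, threshold=0.8):
    """Flag a request whose LCS overlap with any known attack signature
    exceeds the threshold, relative to the signature's length."""
    return any(lcs_length(request.lower(), sig) / len(sig) >= threshold
               for sig in signatures)
```

LCS tolerates inserted filler characters between signature tokens, which plain substring matching would miss; that robustness is presumably why the paper pairs it with brute-force string matching.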
Chao Zhang; Niknami, M.; Chen, K.Z.; Chengyu Song; Zhaofeng Chen; Song, D., "JITScope: Protecting Web Users From Control-Flow Hijacking Attacks," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 567-575, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218424
Abstract: Web browsers are one of the most important end-user applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims' systems. Unfortunately, the widely-adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code, and the challenges of protecting JIT compiled code. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing the JIT compiled code from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control flow hijacking attacks.
Keywords: Internet; data protection; online front-ends; source code (software); CFI; Firefox Web browser; Internet resources; JIT compiled code; JIT optimization technique; JITScope; W⊕X policy; Web user protection; arbitrary malicious code; control-flow hijacking attacks; control-flow integrity; just-in-time compilation; source code compilation; Browsers; Engines; Instruments; Layout; Runtime; Safety; Security (ID#: 15-7955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218424&isnumber=7218353
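At its coarsest, the Control-Flow Integrity that JITScope enforces reduces to a set-membership check on indirect branch targets. This toy sketch (all addresses invented) shows the shape of that check, not the paper's actual binary instrumentation:

```python
# Coarse-grained CFI as a lookup: an indirect branch may only land on
# addresses recorded as legitimate targets. JITScope extends the idea to
# code pages emitted by the JIT compiler at runtime.
VALID_TARGETS = {0x1000, 0x1040, 0x10a0}   # entry points of known functions

def cfi_check(target_addr):
    """Return True when the computed branch target is in the valid set;
    a hijacked control transfer (e.g. a jump into shellcode) is rejected."""
    return target_addr in VALID_TARGETS
```

The hard part the paper addresses is keeping this target set current as the JIT emits and discards code, while W⊕X prevents the emitted pages from being overwritten.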
Tajbakhsh, M.S.; Bagherzadeh, J., "A Sound Framework for Dynamic Prevention of Local File Inclusion," in Information and Knowledge Technology (IKT), 2015 7th Conference on, pp. 1-6, 26-28 May 2015. doi: 10.1109/IKT.2015.7288798
Abstract: Web applications play an important role in remote access over the Internet. These applications have many capabilities, such as database access, file read/write, and calculations, just as desktop applications do, but run in web browser environments. Like desktop applications, web applications can be exploited, albeit with different techniques. One of the major known vulnerabilities of web applications is Local File Inclusion. Inclusion in web applications is similar to library imports in desktop applications, where a developer can include previously developed code. If attackers can include their own libraries, they can run malicious code. This research briefly surveys static and dynamic code analysis and suggests a framework for dynamically preventing malicious file inclusions by attackers. We discuss how this framework prevents local file inclusions even if the developer has shipped exploitable source code. The PHP language is used to describe the vulnerability and the prevention framework.
Keywords: Internet; file organisation; libraries; security of data; Internet; Web browser environment; database access; desktop applications; dynamic code analysis; dynamic prevention; local file inclusion; malicious code; malicious file inclusions; remote access; sound framework; source code; static code analysis (ID#: 15-7956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288798&isnumber=7288662
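The framework's goal - rejecting attacker-controlled include paths at runtime - can be sketched as a whitelist resolver. The paper targets PHP; this Python sketch (the directory name is hypothetical) shows the same normalize-then-check-prefix idea:

```python
import os

ALLOWED_INCLUDE_DIR = "/var/www/app/includes"  # hypothetical application path

def safe_include_path(requested):
    """Resolve a user-influenced include name and refuse anything that
    escapes the whitelisted directory (e.g. '../../etc/passwd' or a URL)."""
    if "://" in requested:          # block remote file inclusion outright
        return None
    resolved = os.path.normpath(os.path.join(ALLOWED_INCLUDE_DIR, requested))
    if not resolved.startswith(ALLOWED_INCLUDE_DIR + os.sep):
        return None
    return resolved
```

Doing this check dynamically, at the moment of inclusion, is what lets such a framework protect even source code that is itself exploitable, as the abstract claims.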
Shbair, W.M.; Cholez, T.; Goichot, A.; Chrisment, I., "Efficiently Bypassing SNI-based HTTPS Filtering," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 990-995, 11-15 May 2015. doi: 10.1109/INM.2015.7140423
Abstract: Encrypted Internet traffic is an essential element of security and privacy on the Internet. Surveys show that websites are more and more often served over HTTPS, highlighting a 48% increase in sites using TLS over the past year and confirming the tendency for the Web to become encrypted. This motivates the development of new tools and methods to monitor and filter HTTPS traffic. This paper addresses the latest technique for HTTPS traffic filtering, based on the Server Name Indication (SNI) field of TLS, which has recently been implemented in many firewall solutions. Our main contribution is an evaluation of the reliability of this SNI extension for properly identifying and filtering HTTPS traffic. We show that SNI has two weaknesses, regarding (1) backward compatibility and (2) multiple services using a single certificate. We demonstrate, using a web browser plug-in called “Escape” that we designed and implemented, how these weaknesses can be exploited in practice to bypass firewalls and monitoring systems relying on SNI. The results show positive evaluation (firewall rules successfully bypassed) for all tested websites.
Keywords: Internet; Web sites; cryptography; data privacy; firewalls; hypermedia; information filtering; network servers; online front-ends; telecommunication traffic; transport protocols; Escape; HTTPS filtering; Internet privacy; Internet security; Internet traffic encryption; SNI; Web browser plug-in; Web site; firewall rule; server name indication; Browsers; Cryptography; Filtering; IP networks; Internet; Protocols; Servers (ID#: 15-7957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140423&isnumber=7140257
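The bypass rests on the filter trusting a client-supplied field. This sketch models the weakness rather than implementing the plug-in: the filter only sees what the TLS ClientHello claims, while the encrypted HTTP request can still address the blocked host (all names invented):

```python
BLOCKED_DOMAINS = {"blocked.example"}  # hypothetical firewall blocklist

def sni_filter_allows(client_hello_sni):
    """A naive SNI-based filter can only act on what the ClientHello claims.
    An empty SNI (tolerated for backward compatibility) or the name of
    another site sharing the certificate slips through."""
    return client_hello_sni not in BLOCKED_DOMAINS

def escape_request(target_host):
    """Model of the 'Escape' trick: advertise no SNI while still addressing
    the blocked host at the HTTP layer, inside the encrypted channel."""
    return {"sni": "", "http_host": target_host}
```

In a real client the same effect comes from opening the TLS session without a server name (or with a faked one) before sending the HTTP request, which the firewall cannot read.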
Yuchen Zhou; Evans, D., "Understanding and Monitoring Embedded Web Scripts," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 850-865, 17-21 May 2015. doi: 10.1109/SP.2015.57
Abstract: Modern web applications make frequent use of third-party scripts, often in ways that allow scripts loaded from external servers to make unrestricted changes to the embedding page and access critical resources including private user information. This paper introduces tools to assist site administrators in understanding, monitoring, and restricting the behavior of third-party scripts embedded in their site. We developed Script Inspector, a modified browser that can intercept, record, and check third-party script accesses to critical resources against security policies, along with a Visualizer tool that allows users to conveniently view recorded script behaviors and candidate policies and a Policy Generator tool that aids script providers and site administrators in writing policies. Site administrators can manually refine these policies with minimal effort to produce policies that effectively and robustly limit the behavior of embedded scripts. Policy Generator is able to generate effective policies for all scripts embedded on 72 out of the 100 test sites with minor human assistance. In this paper, we present the designs of our tools, report on what we have learned about script behaviors using them, and evaluate the value of our approach for website administrators.
Keywords: Internet; data privacy; online front-ends; security of data; Policy Generator; Script Inspector; Visualizer tool; Web application; Web browser; Web script; critical resource access; private user information; security policy; third-party script; Advertising; Browsers; Monitoring; Privacy; Robustness; Security; Visualization; Anomaly Detection; Security and Privacy Policy; Web security and Privacy (ID#: 15-7958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163064&isnumber=7163005
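A policy produced by the Policy Generator can be thought of as a per-script allowlist of resource accesses. This sketch (the policy syntax and resource names are invented, not the paper's) shows the kind of check Script Inspector would apply to recorded accesses:

```python
# A policy maps a third-party script to the resources it may touch.
POLICY = {
    "https://cdn.example/analytics.js": {
        "dom.read:#page-title",
        "net.send:cdn.example",
    },
}

def violations(script_url, observed_accesses, policy=POLICY):
    """Return the recorded accesses not covered by the script's policy."""
    allowed = policy.get(script_url, set())
    return sorted(set(observed_accesses) - allowed)
```

An empty result means the script stayed within its policy; anything returned is what an administrator would review, or what a runtime monitor would block.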
Adachi, T.; Omote, K., "An Approach to Predict Drive-by-Download Attacks by Vulnerability Evaluation and Opcode," in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, pp. 145-151, 24-26 May 2015. doi: 10.1109/AsiaJCIS.2015.17
Abstract: Drive-by-download attacks exploit vulnerabilities in Web browsers, and users unknowingly download malware when they access compromised Web sites. A number of detection approaches and tools against such attacks have been proposed so far. In particular, it is becoming easier to identify the vulnerabilities behind attacks, because researchers have analyzed the trends of various attacks in depth. Unfortunately, previous schemes have not used vulnerability information in detection/prediction approaches for drive-by-download attacks. In this paper, we propose a prediction approach for "malware downloading" during drive-by-download attacks (approach-I), which uses vulnerability information. Our experimental results show that approach-I achieves a prediction rate (accuracy) of 92%, an FNR of 15%, and an FPR of 1.0% using Naive Bayes. Furthermore, we propose an enhanced approach (approach-II) which embeds Opcode analysis (dynamic analysis) into approach-I (a static approach). We implement approaches I and II, and compare the three approaches (approach-I, approach-II, and the Opcode approach) using the same datasets in our experiment. As a result, approach-II has a prediction rate of 92% and improves the FNR to 11% using Random Forest, compared with approach-I.
Keywords: Web sites; invasive software; learning (artificial intelligence); system monitoring; FNR; FPR; Opcode analysis; Web browsers; Web sites; attack vulnerabilities; drive-by-download attack prediction; dynamic analysis; malware downloading; naive Bayes; prediction rate; random forest; static approach; vulnerability evaluation; vulnerability information; Browsers; Feature extraction; Machine learning algorithms; Malware; Predictive models; Probability; Web pages; Drive-by-Download Attacks; Malware; Supervised Machine Learning (ID#: 15-7959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153949&isnumber=7153836
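Approach-I feeds vulnerability features to a Naive Bayes classifier. As a self-contained illustration (the binary features and labels are invented, not the paper's dataset), a Bernoulli Naive Bayes with Laplace smoothing fits in a few lines:

```python
import math

def train_nb(samples, labels):
    """Fit Bernoulli Naive Bayes over binary vulnerability features
    (e.g. 'outdated plugin present', 'known CVE on this page')."""
    model = {}
    for cls in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == cls]
        prior = len(rows) / len(samples)
        # Laplace-smoothed per-feature probabilities P(feature=1 | class)
        probs = [(sum(r[i] for r in rows) + 1) / (len(rows) + 2)
                 for i in range(len(samples[0]))]
        model[cls] = (prior, probs)
    return model

def predict_nb(model, x):
    """Pick the class with the highest log-posterior for feature vector x."""
    def score(cls):
        prior, probs = model[cls]
        s = math.log(prior)
        for xi, p in zip(x, probs):
            s += math.log(p if xi else 1 - p)
        return s
    return max(model, key=score)
```

Approach-II would extend the feature vector with Opcode-derived (dynamic) features before training; the classifier itself is unchanged.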
Limin Jia; Sen, S.; Garg, D.; Datta, A., "A Logic of Programs with Interface-Confined Code," in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 512-525, 13-17 July 2015. doi: 10.1109/CSF.2015.38
Abstract: Interface-confinement is a common mechanism that secures untrusted code by executing it inside a sandbox. The sandbox limits (confines) the code's interaction with key system resources to a restricted set of interfaces. This practice is seen in web browsers, hypervisors, and other security-critical systems. Motivated by these systems, we present a program logic, called System M, for modeling and proving safety properties of systems that execute adversary-supplied code via interface-confinement. In addition to using computation types to specify effects of computations, System M includes a novel invariant type to specify the properties of interface-confined code. The interpretation of invariant type includes terms whose effects satisfy an invariant. We construct a step-indexed model built over traces and prove the soundness of System M relative to the model. System M is the first program logic that allows proofs of safety for programs that execute adversary-supplied code without forcing the adversarial code to be available for deep static analysis. System M can be used to model and verify protocols as well as system designs. We demonstrate the reasoning principles of System M by verifying the state integrity property of the design of Memoir, a previously proposed trusted computing system.
Keywords: source code (software); trusted computing; Memoir design; System M program logic; Web browsers; adversary-supplied code; hypervisors; interface-confined code; sandbox; security-critical systems; step-indexed model; trusted computing system; untrusted code; Cognition; Computational modeling; Instruction sets; Radiation detectors; Safety; Semantics; Standards; adversary-supplied code; interface confinement; program logic; safety properties (ID#: 15-7960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7243751&isnumber=7243713
Aditya, S.; Mittal, V., "Multi-Layered Crypto Cloud Integration of oPass," in Computer Communication and Informatics (ICCCI), 2015 International Conference on, pp. 1-7, 8-10 Jan. 2015. doi: 10.1109/ICCCI.2015.7218114
Abstract: One of the most popular forms of user authentication is the text password, due to its convenience and simplicity. Still, passwords are susceptible to being stolen and compromised under various threats and weaknesses. To overcome these problems, a protocol called oPass was proposed. We performed a cryptanalysis of it and found four kinds of attacks that could be carried out against it: use of the SMS service, attacks on oPass communication links, unauthorized intruder access using the master password, and network attacks on an untrusted web browser. One of these is impersonation of the user. To overcome these problems in a cloud environment, we propose a protocol based on oPass that implements multi-layered crypto-cloud integration and can handle this kind of attack.
Keywords: cloud computing; cryptography; SMS service; Short Messaging Service; cloud environment; cryptanalysis; master password; multilayered crypto cloud integration; oPass communication links; oPass protocol; text password; user authentication; user impersonation; Authentication; Cloud computing; Encryption; Protocols; Servers; Cloud; Digital Signature; Impersonation; Network Security; RSA; SMS; oPass (ID#: 15-7961)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218114&isnumber=7218046
Caillat, Benjamin; Gilbert, Bob; Kemmerer, Richard; Kruegel, Christopher; Vigna, Giovanni, "Prison: Tracking Process Interactions to Contain Malware," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 1282-1291, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.297
Abstract: Modern operating systems provide a number of different mechanisms that allow processes to interact. These interactions can generally be divided into two classes: inter-process communication techniques, which a process supports to provide services to its clients, and injection methods, which allow a process to inject code or data directly into another process' address space. Operating systems support these mechanisms to enable better performance and to provide simple and elegant software development APIs that promote cooperation between processes. Unfortunately, process interaction channels introduce problems at the end-host that are related to malware containment and the attribution of malicious actions. In particular, host-based security systems rely on process isolation to detect and contain malware. However, interaction mechanisms allow malware to manipulate a trusted process to carry out malicious actions on its behalf. In this case, existing security products will typically either ignore the actions or mistakenly attribute them to the trusted process. For example, a host-based security tool might be configured to deny untrusted processes from accessing the network, but malware could circumvent this policy by abusing a (trusted) web browser to get access to the Internet. In short, an effective host-based security solution must monitor and take into account interactions between processes. In this paper, we present Prison, a system that tracks process interactions and prevents malware from leveraging benign programs to fulfill its malicious intent. To this end, an operating system kernel extension monitors the various system services that enable processes to interact, and the system analyzes the calls to determine whether or not the interaction should be allowed. Prison can be deployed as an online system for tracking and containing malicious process interactions to effectively mitigate the threat of malware. The system can also be used as a dynamic analysis tool to aid an analyst in understanding a malware sample's effect on its environment.
Keywords: Browsers; Internet; Kernel; Malware; Monitoring; inter-process communication; malware containment; prison; windows (ID#: 15-7962)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336344&isnumber=7336120
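Prison's gating of process interactions can be sketched as a policy lookup over (source, channel, target) triples, plus the attribution rule the abstract emphasizes: actions of a manipulated trusted process should be charged to the manipulator. All process and channel names here are hypothetical:

```python
# Hypothetical policy: which source process may use which interaction
# channel toward which target process.
ALLOWED = {
    ("updater.exe", "create_remote_thread", "app.exe"),
    ("shell.exe", "send_message", "browser.exe"),
}

def interaction_allowed(source, channel, target):
    """Gate a monitored system call the way a kernel extension might:
    triples not in the policy are denied, so malware cannot quietly
    drive a trusted browser onto the network."""
    return (source, channel, target) in ALLOWED

def attribute_action(source, channel, target):
    """When an interaction is allowed, later actions of the target are
    attributed to the *source*, not the manipulated trusted process."""
    return source if interaction_allowed(source, channel, target) else None
```

The real system makes this decision inside a kernel extension across many Windows interaction mechanisms; the sketch only captures the allow/deny-and-attribute shape of that decision.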
Last, D., "Using Historical Software Vulnerability Data to Forecast Future Vulnerabilities," in Resilience Week (RWS), 2015, pp. 1-7, 18-20 Aug. 2015. doi: 10.1109/RWEEK.2015.7287429
Abstract: The field of network and computer security is a never-ending race with attackers, trying to identify and patch software vulnerabilities before they can be exploited. In this ongoing conflict, it would be quite useful to be able to predict when and where the next software vulnerability would appear. The research presented in this paper is the first step towards a capability for forecasting vulnerability discovery rates for individual software packages. This first step involves creating forecast models for vulnerability rates at the global level, as well as the category (web browser, operating system, and video player) level. These models will later be used as a factor in the predictive models for individual software packages. A number of regression models are fit to historical vulnerability data from the National Vulnerability Database (NVD) to identify historical trends in vulnerability discovery. Then, k-NN classification is used in conjunction with several time series distance measurements to select the appropriate regression models for a forecast. 68% and 95% confidence bounds are generated around the actual forecast to provide a margin of error. Experimentation using this method on the NVD data demonstrates the accuracy of these forecasts, as well as the accuracy of the confidence bounds forecasts. Analysis of these results indicates which time series distance measures produce the best vulnerability discovery forecasts.
Keywords: pattern classification; regression analysis; security of data; software packages; time series; computer security; k-NN classification; regression model; software package; software vulnerability data; time series distance measure; vulnerability forecasting; Accuracy; Market research; Predictive models; Software packages; Time series analysis; Training; cybersecurity; vulnerability discovery model; vulnerability prediction (ID#: 15-7963)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287429&isnumber=7287407
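The paper's k-NN step selects regression models by comparing the recent vulnerability-count series to historical ones under a time-series distance. A minimal sketch of that selection (Euclidean distance as one of the several measures the paper tries; the series are invented):

```python
def euclidean(a, b):
    """One of several possible time-series distance measures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_select(recent_counts, history, k=1, distance=euclidean):
    """Pick the k historical vulnerability-count series closest to the
    recent trend; the regression models fitted to those series would
    then drive the forecast."""
    ranked = sorted(history,
                    key=lambda name_series: distance(recent_counts, name_series[1]))
    return [name for name, _ in ranked[:k]]
```

Swapping `distance` for other measures (e.g. dynamic time warping) is exactly the experiment the abstract describes when it asks which distance produces the best forecasts.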
Hyun Lock Choo; Sanghwan Oh; Jonghun Jung; Hwankuk Kim, "The Behavior-Based Analysis Techniques for HTML5 Malicious Features," in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, pp. 436-440, 8-10 July 2015. doi: 10.1109/IMIS.2015.67
Abstract: HTML5, announced in October 2014, contains many more functions than previous HTML versions. It includes media controls for audio, video, canvas, etc., and it is designed to access the browser file system through JavaScript APIs such as the web storage and file reader APIs. In addition, it provides powerful functions to replace existing ActiveX. As the HTML5 standard is adopted, the conversion of web services to HTML5 is being carried out all over the world. Browser developers in particular have high expectations for HTML5, as it provides many mobile functions. However, as much as is expected of HTML5, the damage from malicious attacks using HTML5 is also expected to be large. Script, which is the key to HTML5's functions, differs from existing malware attack types in that a malicious attack can be triggered merely by a user accessing a page in a browser. Existing known attacks can also be reused by bypassing detection systems through the new HTML5 elements. This paper defines unique HTML5 behavior data derived from browser execution data and proposes malware detection by categorizing malicious HTML5 features.
Keywords: Internet; Java; hypermedia markup languages; invasive software; mobile computing; multimedia computing; online front-ends; telecommunication control; HTML versions; HTML5 behavior data; HTML5 elements; HTML5 functions; HTML5 malicious features; HTML5 standard; Java Script API; Web services; Web storage; behavior-based analysis techniques; browser developers; browser execution data; browser file system; detection systems; file reader API; malicious attacks; malware attacks; media controls; mobile functions; Browsers; Engines; Feature extraction; HTML; Malware; Standards; Behavior-Based Analysis; HTML5 Malicious Features; Script-based CyberAttack; Web Contents Security (ID#: 15-7964)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284990&isnumber=7284886
Sanders, S.; Kaur, J., "Can Web Pages Be Classified Using Anonymized TCP/IP Headers?," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2272-2280, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218614
Abstract: Web page classification is useful in many domains, including ad targeting, traffic modeling, and intrusion detection. In this paper, we investigate whether learning-based techniques can be used to classify web pages based only on anonymized TCP/IP headers of traffic generated when a web page is visited. We do this in three steps. First, we select informative TCP/IP features for a given downloaded web page, and study which of these remain stable over time and are also consistent across client browser platforms. Second, we use the selected features to evaluate four different labeling schemes and learning-based classification methods for web page classification. Lastly, we empirically study the effectiveness of the classification methods for real-world applications.
Keywords: Web sites; online front-ends; security of data; telecommunication traffic; transport protocols; TCP/IP header; Web page classification; ad targeting; client browser platforms; intrusion detection; labeling schemes; learning-based classification methods; learning-based techniques; traffic modeling; Browsers; Feature extraction; IP networks; Labeling; Navigation; Streaming media; Web pages; Traffic Classification; Web Page Measurement (ID#: 15-7965)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218614&isnumber=7218353
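The first step above - turning anonymized headers into page-level features - can be sketched with size and direction alone, which survive anonymization. The feature set and the nearest-centroid classifier here are illustrative stand-ins for the paper's richer features and learners:

```python
def header_features(packets):
    """Aggregate anonymized per-packet header fields (size, direction) into
    a page-level vector: [downstream bytes, upstream bytes, packet count]."""
    down = sum(size for size, direction in packets if direction == "down")
    up = sum(size for size, direction in packets if direction == "up")
    return [down, up, len(packets)]

def classify(features, centroids):
    """Nearest-centroid stand-in for the paper's learning-based classifiers."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

The paper's stability study then asks which such features keep the same values for a page across time and across browsers, since only those are usable for deployed classification.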
Thomas, K.; Bursztein, E.; Grier, C.; Ho, G.; Jagpal, N.; Kapravelos, A.; Mccoy, D.; Nappa, A.; Paxson, V.; Pearce, P.; Provos, N.; Abu Rajab, M., "Ad Injection at Scale: Assessing Deceptive Advertisement Modifications," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 151-167, 17-21 May 2015. doi: 10.1109/SP.2015.17
Abstract: Today, web injection manifests in many forms, but fundamentally occurs when malicious and unwanted actors tamper directly with browser sessions for their own profit. In this work we illuminate the scope and negative impact of one of these forms, ad injection, in which users have ads imposed on them in addition to, or different from, those that websites originally sent them. We develop a multi-staged pipeline that identifies ad injection in the wild and captures its distribution and revenue chains. We find that ad injection has entrenched itself as a cross-browser monetization platform impacting more than 5% of unique daily IP addresses accessing Google -- tens of millions of users around the globe. Injected ads arrive on a client's machine through multiple vectors: our measurements identify 50,870 Chrome extensions and 34,407 Windows binaries, 38% and 17% of which are explicitly malicious. A small number of software developers support the vast majority of these injectors, who in turn syndicate from the larger ad ecosystem. We have contacted the Chrome Web Store and the advertisers targeted by ad injectors to alert each to the deceptive practices involved.
Keywords: advertising; online front-ends; profitability; Chrome Web store; Web injection; browser sessions; deceptive advertisement modifications; distribution chains; multistaged pipeline; profit; revenue chains; Browsers; Ecosystems; Google; Internet; Libraries; Pipelines; Security; ad fraud; ad injection; web injection (ID#: 15-7966)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163024&isnumber=7163005
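A server-side check in the spirit of the injection-detection pipeline above could diff the resources a client actually rendered against the set the publisher served. This is a hypothetical sketch, not the authors' actual pipeline; the URLs, trusted-domain list, and reporting mechanism are illustrative assumptions:

```python
# Hypothetical sketch: flag ad-injection candidates by diffing the resource
# URLs a client reports against the set the publisher actually served.
from urllib.parse import urlparse

def injected_resources(served_urls, client_urls, trusted_domains):
    """Return client-side resources the server never sent and whose
    domain is not on the publisher's trusted list."""
    served = set(served_urls)
    suspicious = []
    for url in client_urls:
        if url in served:
            continue  # the publisher itself delivered this resource
        domain = urlparse(url).netloc
        if domain not in trusted_domains:
            suspicious.append(url)
    return suspicious

served = ["https://example.com/app.js", "https://cdn.example.com/ads.js"]
client = served + ["https://injector.example.net/extra-ads.js"]
print(injected_resources(served, client, {"cdn.example.com"}))
```

A real deployment would need client-side telemetry to collect the rendered resource list, which is itself one of the measurement problems the paper addresses.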
Zibordi de Paiva, O.; Ruggiero, W.V., "A Survey on Information Flow Control Mechanisms in Web Applications," in High Performance Computing & Simulation (HPCS), 2015 International Conference on, pp. 211-220, 20-24 July 2015. doi: 10.1109/HPCSim.2015.7237042
Abstract: Web applications are nowadays ubiquitous channels that provide access to valuable information. However, web application security remains problematic, with Information Leakage, Cross-Site Scripting and SQL-Injection vulnerabilities - which all present threats to information - standing among the most common ones. On the other hand, Information Flow Control is a mature and well-studied area, providing techniques to ensure the confidentiality and integrity of information. Thus, numerous works have proposed the use of these techniques to improve web application security. This paper provides a survey of some of these works that propose server-side-only mechanisms, which operate in association with standard browsers. It also provides a brief overview of the information flow control techniques themselves. Finally, we draw a comparison between the surveyed works, highlighting the environments for which they were designed and the security guarantees they provide, and suggesting directions in which they may evolve.
Keywords: Internet; SQL; security of data; SQL-injection vulnerability; Web application security; cross-site scripting; information confidentiality; information flow control mechanisms; information integrity; information leakage; server-side only mechanisms; standard browsers; ubiquitous channels; Browsers; Computer architecture; Context; Security; Standards; Web servers; Cross-Site Scripting; Information Flow Control; Information Leakage; SQL Injection; Web Application Security (ID#: 15-7967)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237042&isnumber=7237005
Deng, YuFeng; Manoharan, Sathiamoorthy, "Review and Analysis of Web Prefetching," in Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, pp. 40-45, 24-26 Aug. 2015. doi: 10.1109/PACRIM.2015.7334806
Abstract: Web caching is widely used to cache resources that have already been used and reuse them in the near future. Prefetching, in comparison, is a technique to cache resources that have never been used. The core of prefetching is prediction - predicting which resources might be used in the near future. Prefetching is a technology that has been actively studied in recent years. Most modern browsers have built-in mechanisms for prefetching. Some modern websites also add prefetching support to enhance performance. Although prefetching can reduce user-perceived latency, it may increase bandwidth requirements, cause security issues, and trigger unexpected actions. This paper reviews the prefetching features of some of the most popular modern web browsers and websites and discusses the problems that prefetching could cause.
Keywords: Bandwidth; Browsers; HTML; IP networks; Prefetching; Servers; Web pages; HTML5 link prefetching; omnibox prediction; prefetching; web caching (ID#: 15-7968)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7334806&isnumber=7334793
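The HTML5 link-prefetching mechanism the survey above reviews is driven by `<link rel="prefetch">`-style hints embedded in pages. A minimal sketch of collecting such hints with Python's standard-library HTML parser (the sample markup is invented for illustration):

```python
# Hypothetical sketch: collect HTML5 prefetch/prerender hints from a page.
from html.parser import HTMLParser

class PrefetchHintParser(HTMLParser):
    PREFETCH_RELS = {"prefetch", "prerender", "dns-prefetch", "preconnect"}

    def __init__(self):
        super().__init__()
        self.hints = []  # (rel, href) pairs found in the page

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower()
        if rel in self.PREFETCH_RELS and a.get("href"):
            self.hints.append((rel, a["href"]))

html = ('<head><link rel="prefetch" href="/next.html">'
        '<link rel="dns-prefetch" href="//cdn.example.com"></head>')
p = PrefetchHintParser()
p.feed(html)
print(p.hints)  # [('prefetch', '/next.html'), ('dns-prefetch', '//cdn.example.com')]
```

Enumerating these hints is also a natural starting point for auditing the bandwidth and security side effects the paper discusses, since each hint names a resource the browser may fetch without an explicit user action.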
Nosheen, F.; Qamar, U., "Flexibility and Privacy Control by Cookie Management," in Digital Information, Networking, and Wireless Communications (DINWC), 2015 Third International Conference on, pp. 94-98, 3-5 Feb. 2015. doi: 10.1109/DINWC.2015.7054224
Abstract: The privacy of internet users is continuously at stake from various directions with the evolution of technology. Modern internet technology poses serious threats to the privacy of users. Unfortunately, while surfing the internet, we are careless about our privacy and allow intrusions on it to a great extent without objection. This facilitates advertisers in tracking user activities on the web through third-party cookies. Researchers have been conducting vigorous research on this topic and have presented solutions to control the leakage of privacy without user consent. But surprisingly, major research activity has been confined to the desktop platform, and little is known about web tracking on mobile devices. We survey current technologies and propose a novel approach for Android-based mobile devices that controls excessive tracking of users. Further, Mozilla Firefox add-ons and other related proposals dealing with cookies and privacy are also analyzed.
Keywords: Android (operating system); Internet; data privacy; mobile computing; security of data; Android based mobile devices; Internet user privacy; Mozilla Firefox add-ons; Web tracking; World Wide Web; cookie management; desktop platform; leakage control; privacy control; privacy intrusion; third party cookies; user activity tracking; Androids; Browsers; Humanoid robots; Internet; Mobile communication; Mobile handsets; Privacy; behavioural tracking; cookies; mobile-web; privacy; third party; tracking (ID#: 15-7969)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054224&isnumber=7054206
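The third-party cookies discussed above are typically identified by comparing a cookie's domain against the domain of the page the user is visiting. A minimal sketch of that classification, using a naive last-two-labels heuristic rather than the approach in the paper (a real tool would consult the Public Suffix List, since e.g. `co.uk` breaks the heuristic):

```python
# Hypothetical sketch: classify a cookie as third-party when its registrable
# domain differs from that of the visited page -- the tracking vector that
# cookie-management tools target.
def registrable_domain(host):
    # Naive heuristic: take the last two DNS labels.
    return ".".join(host.lower().lstrip(".").split(".")[-2:])

def is_third_party(page_host, cookie_domain):
    return registrable_domain(page_host) != registrable_domain(cookie_domain)

print(is_third_party("news.example.com", ".example.com"))     # False: first-party
print(is_third_party("news.example.com", "tracker.adnet.org"))  # True: third-party
```

A blocking policy then becomes a one-line filter: drop any Set-Cookie whose domain is third-party with respect to the top-level page.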
Taguinod, M.; Doupe, A.; Ziming Zhao; Gail-Joon Ahn, "Toward a Moving Target Defense for Web Applications," in Information Reuse and Integration (IRI), 2015 IEEE International Conference on, pp. 510-517, 13-15 Aug. 2015. doi: 10.1109/IRI.2015.84
Abstract: Web applications are a critical component of the security ecosystem as they are often the front door for many companies; as such, vulnerabilities in web applications allow hackers access to companies' private data, which contains consumers' private financial information. Web applications are, by their nature, available to everyone, at any time, from anywhere, and this includes attackers. Therefore, attackers have the opportunity to perform reconnaissance at their leisure, acquiring information on the layout and technologies of the web application, before launching an attack. However, the defender must be prepared for all possible attacks and does not have the luxury of performing reconnaissance on the attacker. The idea behind Moving Target Defense (MTD) is to reduce the information asymmetry between the attacker and defender, ultimately rendering the reconnaissance information misleading or useless. In this paper we take the first steps of applying MTD concepts to web applications in order to create effective defensive layers. We first analyze the web application stack to understand where and how MTD can be applied. The key issue here is that an MTD application must actively prevent or disrupt a vulnerability or exploit, while still providing identical functionality. Then, we discuss our implementation of two MTD approaches, which can mitigate several classes of web application vulnerabilities or exploits. We hope that our discussion will help guide future research in applying MTD concepts to the web application stack.
Keywords: Internet; security of data; MTD concept; Web applications; information asymmetry reduction; moving target defense; security ecosystem; Browsers; Databases; HTML; Layout; Operating systems; Web servers; Abstract Syntax Tree; Automated Conversion; Diversify; Layers; Moving; Randomize; Source Translation; Tiered; Web Software; Web applications (ID#: 15-7970)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301020&isnumber=7300933
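One illustration of the MTD idea above is moving the names an attacker reconnoiters: if form field names are re-derived per session, parameter names harvested in one session are useless in the next. This sketch is an invented example of that principle, not one of the paper's two implemented approaches; the secret key and field names are assumptions:

```python
# Hypothetical sketch: a per-session moving-target transformation that
# randomizes HTML form field names, so reconnaissance of fixed parameter
# names goes stale between sessions.
import hashlib
import hmac

SECRET = b"per-deployment-secret"  # assumed server-side key

def moving_name(session_id, real_name):
    """Derive a session-specific alias for a form field name."""
    mac = hmac.new(SECRET, f"{session_id}:{real_name}".encode(),
                   hashlib.sha256).hexdigest()[:12]
    return f"f_{mac}"

def demap(session_id, alias, known_fields):
    """Server side: translate a submitted alias back to the real name."""
    for name in known_fields:
        if moving_name(session_id, name) == alias:
            return name
    return None  # unknown alias: possibly a stale or forged request

alias = moving_name("sess42", "password")
print(demap("sess42", alias, ["username", "password"]))  # password
print(moving_name("sess43", "password") != alias)        # True: moved
```

The functionality stays identical, which matches the paper's key constraint that an MTD layer must disrupt exploits without changing application behavior.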
Adaimy, R.; El-Hajj, W.; Ben Brahim, G.; Hajj, H.; Safa, H., "A Framework for Secure Information Flow Analysis in Web Applications," in Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 434-441, 24-27 March 2015. doi: 10.1109/AINA.2015.218
Abstract: Huge amounts of data and personal information are being sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company's financial status, while enforcing them is very hard even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in the sense that the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation, called an Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation of the data's confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach vs. existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided very positive feedback on the simplicity of the annotation task.
Keywords: Internet; data integrity; graph theory; security of data; EPDG; PHP Web applications; Web application code; Web applications; annotation tasks; confidentiality policies; extended program dependence graph; generic security enforcement rules; insecure information flows; integrity policies; minimal developer effort; personal information; secure information flow analysis; security background; security-by-construction; Aggregates; Arrays; Browsers; Computer science; Databases; Security; Servers; Database Annotation; Program Dependence Graph; Secure Information Flow; Web Applications Security (ID#: 15-7971)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098003&isnumber=7097928
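The enforcement rules described above boil down to a lattice check: data annotated with a higher confidentiality class must never flow into a sink of a lower class. A minimal sketch of that core check (the label names and levels are illustrative assumptions, not the paper's annotation scheme):

```python
# Hypothetical sketch of a lattice-based information-flow check: an
# assignment is insecure when data flows from a higher confidentiality
# class to a lower one.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def flow_allowed(src_label, dst_label):
    """Confidential data may only flow to equally or more restricted sinks."""
    return LEVELS[src_label] <= LEVELS[dst_label]

# e.g. copying a 'secret' database attribute into a 'public' page variable
print(flow_allowed("secret", "public"))        # False: a leak, report it
print(flow_allowed("public", "confidential"))  # True: upward flow is safe
```

In the paper's framework this check is applied along edges of the EPDG, so a violation is reported at the first program point where the insecure flow originates.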
Zheng Dong; Kapadia, A.; Blythe, J.; Camp, L.J., "Beyond The Lock Icon: Real-Time Detection of Phishing Websites Using Public Key Certificates," in Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1-12, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120795
Abstract: We propose a machine-learning approach to detect phishing websites using features from their X.509 public key certificates. We show that its efficacy extends beyond HTTPS-enabled sites. Our solution enables immediate local identification of phishing sites. As such, this serves as an important complement to the existing server-based anti-phishing mechanisms, which predominantly use blacklists. Blacklisting suffers from several inherent drawbacks in terms of correctness, timeliness, and completeness. Due to the potentially significant lag prior to site blacklisting, there is a window of opportunity for attackers. Other local client-side phishing detection approaches also exist, but primarily rely on page content or URLs, which are arguably easier for attackers to manipulate. We illustrate that our certificate-based approach greatly increases the difficulty of masquerading undetected for phishers, with single-millisecond delays for users. We further show that this approach works not only against HTTPS-enabled phishing attacks, but also detects HTTP phishing attacks with port 443 enabled.
Keywords: Web sites; computer crime; learning (artificial intelligence); public key cryptography; HTTPS-enabled phishing attack; Web site phishing detection; machine-learning approach; public key certificate; server-based antiphishing mechanism; site blacklisting; Browsers; Electronic mail; Feature extraction; Public key; Servers; Uniform resource locators; certificates; machine learning; security (ID#: 15-7972)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120795&isnumber=7120794
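A certificate-feature classifier of the kind described above could look roughly like the following. The paper trains a real machine-learning model; the features, field names, and weights here are illustrative assumptions only, and the certificate is represented as an already-parsed dict:

```python
# Hypothetical sketch: score a site's X.509 certificate with hand-picked
# features and linear weights (stand-ins for a trained model).
def cert_features(cert):
    """cert: dict of already-parsed certificate fields (assumed schema)."""
    return {
        "self_signed": cert["issuer"] == cert["subject"],
        "short_validity": cert["valid_days"] < 30,
        "cn_mismatch": cert["subject_cn"] != cert["requested_host"],
    }

def phishing_score(cert, weights=None):
    """Sum the weights of the features that fire; higher = more suspicious."""
    weights = weights or {"self_signed": 0.4, "short_validity": 0.2,
                          "cn_mismatch": 0.4}
    feats = cert_features(cert)
    return sum(w for name, w in weights.items() if feats[name])

suspicious = {"issuer": "X", "subject": "X", "valid_days": 7,
              "subject_cn": "paypal.com.evil.example",
              "requested_host": "paypal.com"}
print(phishing_score(suspicious))  # all three features fire
```

Because the score is computed locally from the certificate alone, a check like this can run in the handshake path, which is how the paper achieves single-millisecond decision latency without a blacklist lookup.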
Lubbe, Luke; Oliver, Martin, "Beacons and Their Uses for Digital Forensics Purposes," in Information Security for South Africa (ISSA), 2015, pp. 1-6, 12-13 Aug. 2015. doi: 10.1109/ISSA.2015.7335074
Abstract: This article relates to the field of digital forensics with a particular focus on web (World Wide Web) beacons and how they can be utilized for digital forensic purposes. A web beacon, or more commonly “web bug”, is an example of a hidden resource reference in a webpage which, when the webpage is loaded, is requested from a third-party source. The purpose of a web beacon is to track the browsing habits of a particular IP address. This paper proposes a novel technique that utilizes the presence of web beacons to create a unique ID for a website; to test this, a practical investigation is performed. The practical investigation involves automated scanning of web beacons on a number of websites: identifying which beacons are present on a web page and recording their presence, with the results then encoded into a table for human analysis. The results of the investigation show promise and incentivize further research. Real-world implications, future work, and possible improvements on the methods used in this study are finally discussed.
Keywords: Browsers; DNA; Digital forensics; Fingerprint recognition; IP networks; Internet; Servers; Digital forensics; Web analytics; Web beacons; Web bugs (ID#: 15-7973)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335074&isnumber=7335039
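The beacon-presence fingerprint described above can be sketched as follows. The beacon catalogue and hashing step are illustrative assumptions; the paper encodes its results into a table for human analysis rather than hashing them:

```python
# Hypothetical sketch: record which known beacons appear on a page and
# encode that presence vector as a compact site ID.
import hashlib
import re

KNOWN_BEACONS = [           # assumed catalogue of tracker resources
    "google-analytics.com",
    "facebook.com/tr",
    "doubleclick.net",
]

def beacon_vector(page_html):
    """1/0 presence flags for each catalogued beacon, in fixed order."""
    return [1 if re.search(re.escape(b), page_html) else 0
            for b in KNOWN_BEACONS]

def site_id(page_html):
    """Hash the presence vector into a short, comparable identifier."""
    vec = beacon_vector(page_html)
    return hashlib.sha1("".join(map(str, vec)).encode()).hexdigest()[:8]

html = '<img src="https://www.google-analytics.com/collect" width=1 height=1>'
print(beacon_vector(html))  # [1, 0, 0]
```

Two snapshots of the same site would then yield the same ID as long as its beacon set is stable, which is the property the paper's practical investigation probes.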
Sliwa, J.; Jasiul, B.; Podlasek, T.; Matyszkiel, R., "Security Services Efficiency in Disadvantaged Networks," in Vehicular Technology Conference (VTC Spring), 2015 IEEE 81st, pp. 1-5, 11-14 May 2015. doi: 10.1109/VTCSpring.2015.7146075
Abstract: Modern coalition operations require efficient cooperation between partners of allied forces. They usually rely on their national systems equipped with software solutions supporting interoperability. A federation of systems built for the purpose of such operations, however, assumes the independence of the individual systems. To support efficient exchange of information between allies, federated software solutions promoting secure cross-domain information exchange are necessary. Lately, the concept of Federated Mission Networking, following Service Oriented Architecture (SOA), has been developed by NATO. For secure information exchange among SOA-based services, it proposes the Web Authentication standard based on WS-Federation. In this article the authors present the results of testing this standard's efficiency in a disadvantaged network environment built with PR4G radios. The architecture of the solution is presented, along with the necessary information exchange relations and their invocation times.
Keywords: open systems; security of data; NATO; PR4G radios; SOA-based services; WS-Federation; Web authentication standard; allied forces; disadvantaged network environment; disadvantaged networks; federated mission networking; federated software solutions; information exchange relations; interoperability; modern coalition operations; national systems; secure cross-domain information exchange; secure information exchange; security services efficiency; service oriented architecture; Authentication; Browsers; Delays; IP networks; Portals; Standards (ID#: 15-7974)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146075&isnumber=7145573
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.