Work Factor Metrics

2015



It is difficult to measure the relative strengths and weaknesses of modern information systems when the safety, security, and reliability of those systems must be protected. Developers often apply security mechanisms to systems without the ability to evaluate their impact on the overall system. Few efforts are directed at actually measuring the quantifiable impact of information assurance technology on the potential adversary. The research cited here describes analytic tools, methods, and processes for measuring and evaluating software, networks, and authentication. The work cited here was presented in 2015.

 




Murphy, David; Darabi, Hooman; Hao Wu, “25.3 A VCO with Implicit Common-Mode Resonance,” in Solid-State Circuits Conference (ISSCC), 2015 IEEE International, vol., no., pp. 1–3, 22–26 Feb. 2015. doi:10.1109/ISSCC.2015.7063116

Abstract: CMOS VCO performance metrics have not improved significantly over the last decade. Indeed, the best VCO Figure of Merit (FOM) currently reported was published by Hegazi back in 2001 [1]. That topology, shown in Fig. 25.3.1(a), employs a second resonant tank at the source terminals of the differential pair that is tuned to twice the LO frequency (FLO). The additional tank provides a high common-mode impedance at 2×FLO, which prevents the differential pair transistors from conducting in triode and thus prevents the degradation of the oscillator’s quality factor (Q). As a consequence, the topology can achieve an oscillator noise factor (F), defined as the ratio of the total oscillator noise to the noise contributed by the tank, of just below 2, which is equal to the fundamental limit of a cross-coupled LC CMOS oscillator [2]. There are, however, a few drawbacks of Hegazi’s VCO: (1) the additional area required for the tail inductor, (2) the routing complexity demanded of the tail inductor, which can degrade its Q and limit its effectiveness, and (3) for oscillators with wide tuning ranges, the need to independently tune the second inductor, which again can degrade its Q. Moreover, it can be shown that the common-mode impedance of the main tank at 2×FLO also has a significant effect on the oscillator’s performance, which if not properly modeled can lead to disagreement between simulation and measurement, particularly in terms of the flicker noise corner. To mitigate these issues, this work introduces a new oscillator topology that resonates the common-mode of the circuit at 2×FLO, but does not require an additional inductor.
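
For reference, the noise factor described in this abstract can be written compactly as below; this is a standard restatement, and the shorthand symbols N_total and N_tank are introduced here rather than taken from the paper.

\[
F = \frac{N_{\text{total}}}{N_{\text{tank}}} ,
\]

where N_total is the total oscillator noise, N_tank is the noise contributed by the tank alone, and the fundamental limit quoted for a cross-coupled LC CMOS oscillator is approximately 2.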

Keywords: CMOS integrated circuits; network topology; voltage-controlled oscillators; CMOS VCO; differential pair transistors; figure of merit; implicit common-mode resonance; oscillator noise factor; oscillator topology; tail inductor; 1/f noise; Inductors; Phase noise; Voltage-controlled oscillators (ID#: 15-7457)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7063116&isnumber=7062838

 

Jie Li; Veeraraghavan, M.; Emmerson, S.; Russell, R.D., “File Multicast Transport Protocol (FMTP),” in Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, vol., no., pp. 1037–1046, 4–7 May 2015. doi:10.1109/CCGrid.2015.121

Abstract: This paper describes a new reliable transport protocol designed to run on top of a multicast network service for delivery of continuously generated files. The motivation for this work is to support scientific computing Grid applications that require file transfers between geographically distributed data centers. For example, atmospheric research scientists at various universities subscribe to real-time meteorology data that is being distributed by the University Corporation for Atmospheric Research (UCAR). UCAR delivers 30 different feed types, such as radar data and satellite imagery, to over 240 institutions. The current solution uses an application-layer (AL) multicast tree with unicast TCP connections between the AL servers. Recently, Internet2 and other research-and-education networks have deployed a Layer-2 service using OpenFlow/Software Defined Network (SDN) technologies. Our new transport protocol, FMTP, is designed to run on top of a multipoint Layer-2 topology. A key design aspect of FMTP is the tradeoff between file delivery throughput of fast receivers and robustness (a measure of successful reception) of slow receivers. A configurable parameter, called the retransmission timeout factor, is used to trade off these two metrics. In a multicast setting, it is difficult to achieve full reliability without sacrificing throughput under moderate-to-high loads, and throughput is important in scientific computing grids. A backup solution allows receivers to use unicast TCP connections to request files that were not received completely via multicast. For a given load and a multicast group of 30 receivers, robustness increased significantly from 81.4 to 97.5% when the retransmission timeout factor was increased from 10 to 50, with a small drop in average throughput from 85 to 82.8 Mbps.
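
A minimal receiver-side sketch of the trade-off described above, in Python. It assumes, purely for illustration, that a receiver waits up to timeout_factor times the nominal multicast transfer time before falling back to a unicast TCP request; the real FMTP retransmission logic is more involved, and the function and parameter names are hypothetical.

# Sketch: a larger timeout_factor lets slow receivers finish via multicast
# (higher robustness), but holds the sender longer per file (lower throughput).
def receive_file(file_size_bytes, multicast_rate_bps, timeout_factor,
                 multicast_done, request_via_tcp):
    """multicast_done(deadline): True if all blocks arrived within `deadline` seconds;
    request_via_tcp(): fetches the file over the backup unicast TCP connection."""
    nominal_time = 8.0 * file_size_bytes / multicast_rate_bps   # seconds
    deadline = timeout_factor * nominal_time
    if multicast_done(deadline):
        return "multicast"            # counts toward robustness
    return request_via_tcp()          # backup path, costs extra time and bandwidth

# Toy usage: a receiver that always completes multicast reception in time.
print(receive_file(10_000_000, 100_000_000, timeout_factor=50,
                   multicast_done=lambda d: True,
                   request_via_tcp=lambda: "tcp"))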

Keywords: geophysics computing; grid computing; multicast protocols; software defined networking; telecommunication network topology; transport protocols; AL multicast tree; AL servers; FMTP; Internet2; OpenFlow technology; SDN technology; UCAR; University Corporation for Atmospheric Research; application-layer multicast tree; atmospheric research scientists; configurable parameter; continuously generated file delivery; fast receivers; file multicast transport protocol; file request; file-delivery throughput; geographically distributed datacenters; moderate-to-high loads; multicast network service; multipoint Layer-2 topology; radar data; real-time meteorology data; research-and-education networks; retransmission timeout factor; satellite imagery; scientific computing grid applications; slow receivers; software defined network technology; unicast TCP connections; Multicast communication; Network topology; Receivers; Reliability; Throughput; Transport protocols; Unicast; Data distribution in scientific grids; interdatacenter file movement; reliable multicast; transport protocols (ID#: 15-7458)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7152590&isnumber=7152455

 

Thuseethan, S.; Vasanthapriyan, S., “Spider-Web Topology: A Novel Topology for Parallel and Distributed Computing,” in Information and Communication Technology (ICoICT), 2015 3rd International Conference on, vol., no., pp. 34–38, 27–29 May 2015. doi:10.1109/ICoICT.2015.7231392

Abstract: This paper is mainly concerned with the static interconnection network and its topological properties and metrics, particularly for the existing topologies and the proposed one. The interconnection network topology is a key factor in determining the characteristics of parallel computers; a suitable topology provides efficiency increments while performing tasks. In recent years, numerous topologies have become available, with various characteristics that need to be improved. In this research we analyzed existing static interconnection topologies and developed a novel topology by minimizing some degradation factors of topological properties. A novel topology, the Spider-web topology, is proposed and shows a considerable advantage over the existing topologies. Further, one of the major aims of this work is to conduct a comparative study of the existing static interconnection networks against this novel topology by analyzing their properties and metrics. Both the theoretical and experimental comparisons conducted here show that the proposed topology is able to perform better than the existing topologies.
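
For context, static interconnection topologies are usually compared on metrics such as node degree, diameter, and average distance. The short Python sketch below computes these for any adjacency-list graph using breadth-first search; it does not reproduce the Spider-web construction itself, and the toy ring graph is only an example.

from collections import deque

def distances_from(adj, src):
    # BFS hop counts from src over an adjacency-list graph.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def topology_metrics(adj):
    nodes = list(adj)
    all_d = [d for s in nodes for t, d in distances_from(adj, s).items() if t != s]
    return {"nodes": len(nodes),
            "max_degree": max(len(adj[n]) for n in nodes),
            "diameter": max(all_d),
            "avg_distance": sum(all_d) / len(all_d)}

# Toy example: a 4-node ring has diameter 2 and average distance 4/3.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(topology_metrics(ring))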

Keywords: interconnected systems; parallel processing; topology; distributed computing; experimental-based comparison; interconnection network topology; parallel computing; spider-web topology; static interconnection network; theoretical-based comparison; Computers; Multiprocessor interconnection; Network topology; Parallel processing; Routing; Sorting; Topology; Interconnection Network; Parallel Computing; Topology (ID#: 15-7459)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7231392&isnumber=7231384

 

Tsai, T.J.; Friedland, G.; Anguera, X., “An Information-Theoretic Metric of Fingerprint Effectiveness,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 340–344, 19–24 April 2015. doi:10.1109/ICASSP.2015.7177987

Abstract: Audio fingerprinting refers to the process of extracting a robust, compact representation of audio which can be used to uniquely identify an audio segment. Works in the audio fingerprinting literature generally report results using system-level metrics. Because these systems are usually very complex, the overall system-level performance depends on many different factors. So, while these metrics are useful in understanding how well the entire system performs, they are not very useful in knowing how good or bad the fingerprint design is. In this work, we propose a metric of fingerprint effectiveness that decouples the effect of other system components such as the search mechanism or the nature of the database. The metric is simple, easy to compute, and has a clear interpretation from an information theory perspective. We demonstrate that the metric correlates directly with system-level metrics in assessing fingerprint effectiveness, and we show how it can be used in practice to diagnose the weaknesses in a fingerprint design.
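
The metric proposed in the paper is information-theoretic; while its exact definition is not given in the abstract, a simple illustration of the idea is the average per-bit entropy of a binary fingerprint, sketched below in Python. This is an illustrative stand-in, not the authors' metric.

import numpy as np

def mean_bit_entropy(fingerprints):
    # fingerprints: (n_segments, n_bits) binary array.
    # Values near 1.0 bit mean the fingerprint bits are close to uniform
    # and therefore carry more discriminative information per bit.
    p1 = fingerprints.mean(axis=0)
    p1 = np.clip(p1, 1e-12, 1 - 1e-12)    # avoid log(0)
    h = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
    return float(h.mean())

rng = np.random.default_rng(0)
print(mean_bit_entropy(rng.integers(0, 2, size=(1000, 32))))   # close to 1.0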

Keywords: audio coding; audio signal processing; copy protection; signal representation; audio fingerprinting literature; audio representation extraction; audio segment; fingerprint effectiveness; information theoretic metric; search mechanism; system level metrics; system level performance; Accuracy; Databases; Entropy; Information rates; Noise measurement; Signal to noise ratio; audio fingerprint; copy detection (ID#: 15-7460)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177987&isnumber=7177909

 

Cheng Liu; So, H.K.-H., “Automatic Soft CGRA Overlay Customization for High-Productivity Nested Loop Acceleration on FPGAs,” in Field-Programmable Custom Computing Machines (FCCM), 2015 IEEE 23rd Annual International Symposium on, vol., no., pp. 101–101, 2–6 May 2015. doi:10.1109/FCCM.2015.57

Abstract: Compiling high level compute intensive kernels to FPGAs via an abstract overlay architecture has been demonstrated to be an effective way to improve designers’ productivity. However, achieving the desired performance and overhead constraints requires exploration in a complex design space involving multiple architectural parameters, which counteracts the benefit of utilizing an overlay as a productivity enhancer. In this work, a soft CGRA (SCGRA), which provides a unique opportunity to improve the power-performance of the resulting accelerators, is used as an FPGA overlay. With the observation that the loop unrolling factor and SCGRA size typically have a monotonic impact on the loop compute time, and that the loop performance benefit degrades as the two design parameters increase, we used a marginal performance revenue metric to prune the design space to a small feasible design space (FDS) and then performed an intensive customization on the FDS using analytical models of various design metrics such as power and overhead.
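
The pruning step can be pictured as stopping the sweep of a design parameter once its marginal benefit falls off. The Python sketch below is a rough stand-in for the marginal performance revenue idea; the names, the 5% threshold, and the speedup model are hypothetical, not taken from the paper.

def prune_design_space(points, speedup, threshold=0.05):
    # points: design points (e.g., (unroll_factor, scgra_size)) sorted by cost;
    # speedup(p): estimated loop speedup for a design point.
    # Keep points while the marginal speedup per step stays above `threshold`
    # of the previous speedup, i.e., stop at diminishing returns.
    kept, prev = [], None
    for p in points:
        s = speedup(p)
        if prev is not None and (s - prev) < threshold * prev:
            break
        kept.append(p)
        prev = s
    return kept

# Toy usage: speedup saturates at 10x, so large unroll factors get pruned.
pts = [(u, 4) for u in (1, 2, 4, 8, 16, 32)]
print(prune_design_space(pts, speedup=lambda p: min(p[0], 10)))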

Keywords: field programmable gate arrays; logic design; FDS; FPGA; SCGRA size; abstract overlay architecture; accelerator power-performance; designer productivity; feasible design space; field programmable gate array; high-productivity nested loop acceleration; loop compute time; loop performance benefit; loop unrolling factor; productivity enhancer; soft CGRA overlay customization; Acceleration; Computer architecture; Field programmable gate arrays; Finite impulse response filters; Kernel; Measurement; Productivity; Design Productivity; FPGA Acceleration; Nested Loop Acceleration; Soft CGRA (ID#: 15-7461)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160051&isnumber=7160017

 

Syer, M.D.; Nagappan, M.; Adams, B.; Hassan, A.E., “Replicating and Re-Evaluating the Theory of Relative Defect-Proneness,” in Software Engineering, IEEE Transactions on, vol. 41, no. 2, pp. 176–197, Feb. 1 2015. doi:10.1109/TSE.2014.2361131

Abstract: A good understanding of the factors impacting defects in software systems is essential for software practitioners, because it helps them prioritize quality improvement efforts (e.g., testing and code reviews). Defect prediction models are typically built using classification or regression analysis on product and/or process metrics collected at a single point in time (e.g., a release date). However, current defect prediction models only predict if a defect will occur, but not when, which makes the prioritization of software quality improvement efforts difficult. To address this problem, Koru et al. applied survival analysis techniques to a large number of software systems to study how size (i.e., lines of code) influences the probability that a source code module (e.g., class or file) will experience a defect at any given time. Given that 1) the work of Koru et al. has been instrumental to our understanding of the size-defect relationship, 2) the use of survival analysis in the context of defect modelling has not been well studied and 3) replication studies are an important component of balanced scholarly debate, we present a replication study of the work by Koru et al. In particular, we present the details necessary to use survival analysis in the context of defect modelling (such details were missing from the original paper by Koru et al.). We also explore how differences between the traditional domains of survival analysis (i.e., medicine and epidemiology) and defect modelling impact our understanding of the size-defect relationship. Practitioners and researchers considering the use of survival analysis should be aware of the implications of our findings.
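
A minimal sketch of the kind of survival analysis being replicated, assuming the lifelines Python package is available: each source-code module is observed until it experiences a defect or is censored, and a Cox proportional-hazards model relates size (lines of code) to the hazard of a defect. The column names and toy data are illustrative only, not taken from the replication package.

import pandas as pd
from lifelines import CoxPHFitter

# One row per module: size covariate, observation time, and defect indicator.
df = pd.DataFrame({
    "loc":       [120, 850, 40, 2300, 560, 75, 400, 1500],
    "time_days": [300,  90, 365,  45, 200, 120, 365,  300],
    "defective": [  0,   1,   0,   1,   0,   1,   0,    1],   # 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="defective")
cph.print_summary()   # the hazard ratio for 'loc' summarizes the size-defect relationship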

Keywords: program diagnostics; software quality; software reliability; defect modelling; relative defect-proneness theory; size-defect relationship; software system defects; source code module; survival analysis techniques; Analytical models; Data models; Hazards; Mathematical model; Measurement; Predictive models; Software; Cox Models; Cox models; Defect Modelling; Survival Analysis; Survival analysis; defect modeling (ID#: 15-7462)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914599&isnumber=7038242

 

Chuanqi Tao; Gao, Jerry; Bixin Li, “A Model-Based Framework to Support Complexity Analysis Service for Regression Testing of Component-Based Software,” in Service-Oriented System Engineering (SOSE), 2015 IEEE Symposium on, vol., no., pp. 326–331, March 30 2015–April 3 2015. doi:10.1109/SOSE.2015.42

Abstract: Today, software components have been widely used in software construction to reduce the cost of project and speed up software development cycle. During software maintenance, various software change approaches can be used to realize specific change requirements of software components. Different change approaches lead to diverse regression testing complexity. Such complexity is one of the key contributors to the cost and effectiveness of software maintenance. However, there is a lack of research work addressing regression testing complexity analysis service for software components. This paper proposes a framework to measure and analyze regression testing complexity based on a set of change and impact complexity models and metrics. The framework can provide services for complexity modeling, complexity factor classification, and regression testing complexity measurements. The initial study results indicate the proposed framework is feasible and effective in measuring the complexity of regression testing for component-based software.

Keywords: object-oriented programming; program testing; software maintenance; software metrics; complexity factor classification; complexity modeling; component-based software; model-based framework; project cost reduction; regression testing complexity analysis service; regression testing complexity measurements; software change approach; software components; software construction; software development cycle; Analytical models; Complexity theory; Computational modeling; Measurement; Software maintenance; Testing; component-based software regression testing; regression testing complexity; testing service (ID#: 15-7463)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133549&isnumber=7133490

 

Chun-Hung Liu; Beiyu Rong; Shuguang Cui, “Optimal Discrete Power Control in Poisson-Clustered Ad Hoc Networks,” in Wireless Communications, IEEE Transactions on, vol. 14, no. 1, pp. 138–151, Jan. 2015. doi:10.1109/TWC.2014.2334330

Abstract: Power control in a digital handset is practically implemented in a discrete fashion, and usually, such a discrete power control (DPC) scheme is suboptimal. In this paper, we first show that in a Poisson-distributed ad hoc network, if DPC is properly designed with a certain condition satisfied, it can strictly work better than no power control (i.e., users use the same constant power) in terms of average signal-to-interference ratio, outage probability, and spatial reuse. This motivates us to propose an N-layer DPC scheme in a wireless clustered ad hoc network, where transmitters and their intended receivers in circular clusters are characterized by a Poisson cluster process on the plane ℝ². The cluster of each transmitter is tessellated into N-layer annuli with transmit power Pi adopted if the intended receiver is located at the ith layer. Two performance metrics of transmission capacity (TC) and outage-free spatial reuse factor are redefined based on the N-layer DPC. The outage probability of each layer in a cluster is characterized and used to derive the optimal power scaling law Pi ∈ Θ(ηi^(−α/2)), with ηi as the probability of selecting power Pi and α as the path loss exponent. Moreover, the specific design approaches to optimize Pi and N based on ηi are also discussed. Simulation results indicate that the proposed optimal N-layer DPC significantly outperforms other existing power control schemes in terms of TC and spatial reuse.
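
The power scaling law quoted above, restated in standard notation exactly as the abstract describes it, with η_i the probability of selecting power P_i and α the path-loss exponent:

\[
P_i \in \Theta\!\left(\eta_i^{-\alpha/2}\right), \qquad i = 1, \dots, N .
\]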

Keywords: ad hoc networks; probability; stochastic processes; N-layer DPC scheme; Poisson-clustered ad hoc networks; TC; optimal discrete power control; outage probability; outage-free spatial reuse factor; transmission capacity; wireless clustered ad hoc network; Ad hoc networks; Fading; Interference; Power control; Receivers; Transmitters; Wireless communication; Discrete power control; Poisson cluster process; stochastic geometry (ID#: 15-7464)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847161&isnumber=7004094

 

Bulbul, R.; Sapkota, P.; Ten, C.-W.; Wang, L.; Ginter, A., “Intrusion Evaluation of Communication Network Architectures for Power Substations,” in Power Delivery, IEEE Transactions on, vol. 30, no. 3, pp. 1372–1382, June 2015. doi:10.1109/TPWRD.2015.2409887

Abstract: Electronic elements of a substation control system have been recognized as critical cyberassets due to the increased complexity of the automation system that is further integrated with physical facilities. Since this can be executed by unauthorized users, the security investment of cybersystems remains one of the most important factors for substation planning and maintenance. As a result of these integrated systems, intrusion attacks can impact operations. This work systematically investigates the intrusion resilience of the ten architectures between a substation network and others. In this paper, two network architectures comparing computer-based boundary protection and firewall-dedicated virtual local-area networks are detailed, that is, architectures one and ten. A comparison on the remaining eight architecture models was performed. Mean time to compromise is used to determine the system operational period. Simulation cases have been set up with the metrics based on different levels of attackers’ strength. These results as well as sensitivity analysis show that implementing certain architectures would enhance substation network security.

Keywords: firewalls; investment; local area networks; maintenance engineering; power system planning; safety systems; substation automation; substation protection; automation system; communication network architectures; computer-based boundary protection; cybersystems; electronic elements; firewall-dedicated virtual local-area networks; intrusion attacks; intrusion evaluation; intrusion resilience; power substations; security investment; sensitivity analysis; substation control system; substation maintenance; substation network security; substation planning; unauthorized users; Computer architecture; Modems; Protocols; Security; Servers; Substations; Tin; Cyberinfrastructure; electronic intrusion; network security planning; power substation (ID#: 15-7465)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7054545&isnumber=7110680

 

Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li; Mingjin Zhang, “Face Sketch Synthesis via Sparse Representation-Based Greedy Search,” in Image Processing, IEEE Transactions on, vol. 24, no. 8, pp. 2466–2477, Aug. 2015. doi:10.1109/TIP.2015.2422578

Abstract: Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle some nonfacial factors, such as hair style, hairpins, and glasses, if these factors are excluded from the training set. In addition, previous methods only work under well controlled conditions and fail on images whose backgrounds and sizes differ from the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the searching process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search its nearest neighbors (candidate patches) in the whole set of training photo patches with sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest neighbor search area from a local region to the whole image without consuming too much time and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even ignore the alignment and image size aspects of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.
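
A rough Python sketch of the search step described above, assuming a dictionary D (columns are atoms) has already been learned: each patch is replaced by its sparse code, and candidate training patches are ranked by distance in that code space. Orthogonal matching pursuit is used here as a generic sparse solver; the paper's actual solver, purification, and Bayesian synthesis steps are not reproduced.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code(D, patch, n_nonzero=5):
    # Sparse coefficients c such that patch ≈ D @ c.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, patch)
    return omp.coef_

def nearest_training_patches(D, test_patch, train_patches, k=3):
    c_test = sparse_code(D, test_patch)
    codes = np.array([sparse_code(D, p) for p in train_patches])
    order = np.argsort(np.linalg.norm(codes - c_test, axis=1))
    return order[:k]     # indices of candidate patches for the later synthesis step

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))                    # e.g., 8x8 patches, 256 atoms
train = rng.standard_normal((50, 64))
print(nearest_training_patches(D, train[7], train))   # patch 7 ranks itself first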

Keywords: face recognition; feature extraction; greedy algorithms; image representation; Bayesian inference; digital entertainment; face sketch synthesis; hair style; hairpins; image backgrounds; image patches; law enforcement; learnt dictionary; nonfacial factors; objective metric; perceptual metric; photo patch feature dictionary; searching process; sparse coefficients; sparse representation-based greedy search; test photo alignment aspect; test photo image size aspect; training photo patches; training photo-sketch pairs; training set; Bayes methods; Dictionaries; Face; Glass; Hidden Markov models; Image coding; Training; Face sketch synthesis; dictionary learning; fast index; greedy search (ID#: 15-7466)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7084655&isnumber=7086144

 

Abo-Zahhad, M.; Ahmed, S.M.; Sabor, N.; Sasaki, S., “Utilisation of Multi-Objective Immune Deployment Algorithm for Coverage Area Maximisation with Limit Mobility in Wireless Sensors Networks,” in Wireless Sensor Systems, IET, vol. 5, no. 5, pp. 250–261, Oct. 2015. doi:10.1049/iet-wss.2014.0085

Abstract: Coverage is one of the most important performance metrics for a wireless sensor network (WSN) since it reflects how well a sensor field is monitored. The coverage issue in WSNs depends on many factors, such as the network topology, the sensor sensing model and, most importantly, the deployment strategy. Random deployment of the sensor nodes can cause coverage hole formation. This problem is a non-deterministic polynomial-time hard problem. So in this study, a new centralised deployment algorithm based on the immune optimisation algorithm is proposed to relocate the mobile nodes after the initial configuration to maximise the coverage area. Moreover, the proposed algorithm limits the moving distance of the mobile nodes to reduce the energy dissipated in mobility and to ensure the connectivity among the sensor nodes. The performance of the proposed algorithm is compared with previous algorithms using Matlab simulation. Simulation results make clear that the proposed algorithm, based on binary and probabilistic sensing models, improves the network coverage and the redundant covered area with minimum moving energy consumption. Furthermore, the simulation results show that the proposed algorithm also works when obstacles appear in the sensing field.
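
For context, coverage in such studies is typically scored by discretizing the field into grid points and measuring the fraction covered by at least one sensor. The Python sketch below does this with a plain binary disk sensing model; the paper also uses a probabilistic sensing model and an immune optimisation step, neither of which is reproduced here, and the field size and radius are arbitrary.

import numpy as np

def coverage_ratio(sensors, sense_radius, field=(100.0, 100.0), step=1.0):
    # sensors: (n, 2) array of node positions.
    # Returns the fraction of grid points within sense_radius of at least one node.
    xs = np.arange(0.0, field[0], step)
    ys = np.arange(0.0, field[1], step)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=2)
    return float((d2.min(axis=1) <= sense_radius ** 2).mean())

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(30, 2))       # random initial deployment
print(coverage_ratio(nodes, sense_radius=10))   # the objective a redeployment would maximise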

Keywords: computational complexity; mobility management (mobile radio); optimisation; probability; telecommunication network topology; wireless sensor networks; WSN; binary sensing model; centralised deployment algorithm; coverage area maximisation; coverage holes formulation; dissipation energy reduction; immune optimisation algorithm; limit mobility; mobile node relocation; multiobjective immune deployment algorithm; network coverage improvement; network topology; nondeterministic polynomial-time hard problem; performance metrics; probabilistic sensing model; random sensor node deployment; sensor sensing model (ID#: 15-7467)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7277322&isnumber=7277314

 

Yu-Lin Chien; Lin, K.C.-J.; Ming-Syan Chen, “Machine Learning Based Rate Adaptation with Elastic Feature Selection for HTTP-Based Streaming,” in Multimedia and Expo (ICME), 2015 IEEE International Conference on, vol., no., pp. 1–6, June 29 2015–July 3 2015. doi:10.1109/ICME.2015.7177418

Abstract: Dynamic Adaptive Streaming over HTTP (DASH) has become an emerging application nowadays. Video rate adaptation is key to determining the video quality of HTTP-based media streaming. Recent works have proposed several algorithms that allow a DASH client to adapt its video encoding rate to network dynamics. While network conditions are typically affected by many different factors, these algorithms usually consider only a few representative pieces of information, e.g., the predicted available bandwidth or the fullness of the playback buffer. In addition, errors in bandwidth estimation can significantly degrade their performance. Therefore, this paper presents Machine Learning-based Adaptive Streaming over HTTP (MLASH), an elastic framework that exploits a wide range of useful network-related features to train a rate classification model. The distinct properties of MLASH are that its machine learning-based framework can be incorporated with any existing adaptation algorithm and utilize big data characteristics to improve prediction accuracy. We show via trace-based simulations that machine learning-based adaptation can achieve better performance than traditional adaptation algorithms in terms of their target quality of experience (QoE) metrics.
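
A toy sketch of what a rate classification model of this kind might look like in Python with scikit-learn. The feature set, labels, and classifier choice here are hypothetical placeholders; MLASH's actual features and training data come from the paper's traces.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-segment features a DASH client might log:
# [throughput (Mbps), throughput variance, buffer level (s), last bitrate (Mbps)]
X = np.array([[4.8, 0.2, 18.0, 4.0],
              [1.2, 0.9,  5.0, 2.5],
              [9.5, 0.1, 25.0, 6.0],
              [2.0, 1.5,  3.0, 4.0]])
y = np.array([4, 1, 6, 1])   # bitrate class (Mbps) chosen for the next segment

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[3.0, 0.4, 12.0, 2.5]]))   # predicted bitrate class for new conditions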

Keywords: feature selection; hypermedia; learning (artificial intelligence); media streaming; pattern classification; quality of experience; video coding; DASH client; HTTP-based media streaming; HTTP-based streaming; MLASH; QoE metrics; adaptation algorithm; bandwidth estimation; big data characteristics; dynamic adaptive streaming over HTTP; elastic feature selection; machine learning based rate adaptation; machine learning-based adaptation; machine learning-based adaptive streaming over HTTP; machine learning-based framework; network condition; network dynamics; network-related feature; playback buffer; prediction accuracy; rate classification model; representative information; target quality of experience metrics; trace-based simulation; video encoding rate; video quality; video rate adaptation; Bandwidth; Lead; Servers; Streaming media; Training; HTTP Streaming; Machine Learning; Rate Adaptation (ID#: 15-7468)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177418&isnumber=7177375

 

Kaur, P.P.; Singh, H.; Singh, M., “Evaluation of Architecture of Component Based System,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 852–857, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.170

Abstract: Being widely used in industries and organizations, component based engineering is a technology for building systems that is highly successful in providing a product with a high quality of functionality at low cost. This paper can introduce beginners to the field of component based systems and to evaluating their architecture on certain parameters. Component based engineering can help to see how accurately, reliably or securely a system can respond, commonly named the system's non-functional properties. In this paper, the approach to evaluate the architecture of a component based system is based on the non-functional property 'Performance'. The performance attribute ensures the smooth and efficient operation of the software system. The architecture is then evaluated at an early stage of design, which is useful for determining whether the architecture can meet the desired performance specifications or not, thus saving cost. We analyzed the results over standard performance parameters, namely response time, throughput and resource utilization. First, the system over a component based architecture is proposed, here following the SDLC (system development life cycle); next, we model the architecture for performance over a performance model. Here, the MPFQN (multichain PFQN) performance model for an iterative SDLC, which works on component based systems, is used. The system was observed over some assumptions and scheduling disciplines given to various architectural elements. Varying the scheduling disciplines in the model gave varying results on the performance parameters, which are observed using the SHARPE tool. The work studied in this paper was built on assumptions and simulations, which can be extended to a real case study for some organization's system.

Keywords: object-oriented programming; software architecture; SDLC; SHARPE tool; component based engineering; component based system architecture; iterative SDLC; multichain PFQN performance model; performance attribute; system development life cycle; Computational modeling; Computer architecture; Mathematical model; Software; Throughput; Time factors; Unified modeling language; model based evaluation; multichain pfqn; performance metrics; queueing network; system architecture (ID#: 15-7469)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155968&isnumber=7155781

 

Tzoumas, V.; Rahimian, M.A.; Pappas, G.J.; Jadbabaie, A., “Minimal Actuator Placement with Bounds on Control Effort,” in Control of Network Systems, IEEE Transactions on, vol. 3, no. 1, pp. 67–78, March 2016. doi:10.1109/TCNS.2015.2444031

Abstract: We address the problem of minimal actuator placement in a linear system subject to an average control energy bound. First, following the recent work of Olshevsky, we prove that this is NP-hard. Then, we provide an efficient algorithm which, for a given range of problem parameters, approximates up to a multiplicative factor of O(log n), n being the network size, any optimal actuator set that meets the same energy criteria; this is the best approximation factor one can achieve in polynomial time, in the worst case. Moreover, the algorithm uses a perturbed version of the involved control energy metric, which we prove to be supermodular. Next, we focus on the related problem of cardinality-constrained actuator placement for minimum control effort, where the optimal actuator set is selected so that an average input energy metric is minimized. While this is also an NP-hard problem, we use our proposed algorithm to efficiently approximate its solutions as well. Finally, we run our algorithms over large random networks to illustrate their efficiency.
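
A toy Python sketch of the greedy-selection idea behind such supermodular approximation guarantees: each candidate actuator contributes a term to a finite-horizon controllability Gramian, and actuators are added one at a time to reduce an average control-energy proxy, here trace((W + εI)^(−1)) with a small regularizer ε. The exact metric, perturbation, and guarantees are those of the paper; this sketch only illustrates the mechanics under those assumptions.

import numpy as np

def gramian(A, actuators, horizon=20):
    # Finite-horizon controllability Gramian with unit-vector input columns
    # selected by the index list `actuators`.
    n = A.shape[0]
    B = np.eye(n)[:, actuators]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

def greedy_actuators(A, k, eps=1e-3):
    n = A.shape[0]
    chosen = []
    for _ in range(k):
        scores = []
        for c in range(n):
            if c in chosen:
                scores.append(np.inf)
                continue
            W = gramian(A, chosen + [c])
            scores.append(np.trace(np.linalg.inv(W + eps * np.eye(n))))
        chosen.append(int(np.argmin(scores)))   # largest reduction in the control-energy proxy
    return chosen

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((6, 6))   # a small random network
print(greedy_actuators(A, k=2))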

Keywords: Actuators; Aerospace electronics; Approximation algorithms; Approximation methods; Controllability; Measurement; Controllability Energy Metrics; Input Placement; Leader Selection; Minimal Network Controllability; Multi-agent Networked Systems (ID#: 15-7470)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7122316&isnumber=6730648

 

Hale, M.L.; Gamble, R.; Hale, J.; Haney, M.; Lin, J.; Walter, C., “Measuring the Potential for Victimization in Malicious Content,” in Web Services (ICWS), 2015 IEEE International Conference on, vol., no., pp. 305–312, June 27 2015–July 2 2015. doi:10.1109/ICWS.2015.49

Abstract: Sending malicious content to users to obtain personal, financial, or intellectual property has become a multi-billion dollar criminal enterprise. This content is primarily presented in the form of emails, social media posts, and phishing websites. User training initiatives seek to minimize the impact of malicious content through improved vigilance. Training works best when tailored to specific user deficiencies. However, tailoring training requires understanding how malicious content victimizes users. In this paper, we link a set of malicious content design factors, in the form of degradations and sophistications, to their potential to form a victimization prediction metric. The design factors examined are developed from an analysis of over 100 pieces of content from email, social media and websites. We conducted an experiment using a sample of the content and a game-based simulation platform to evaluate the efficacy of our victimization prediction metric. The experimental results and their analysis are presented as part of the evaluation.

Keywords: Internet; computer crime; social networking (online); trusted computing; unsolicited e-mail; e-mails; game-based simulation platform; malicious content; multibillion dollar criminal enterprise; phishing Web sites; social media posts; victimization prediction metric; Degradation; Electronic mail; Games; Measurement; Media; Taxonomy; Training; content assessment; maliciousness; metrics; phishing; trust; trust factors; user training; victimization (ID#: 15-7471)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195583&isnumber=7195533

 

Khabbaz, M.; Assi, C., “Modelling and Analysis of a Novel Deadline-Aware Scheduling Scheme for Cloud Computing Data Centers,” in Cloud Computing, IEEE Transactions on, vol. PP, no. 99, pp. 1–1, October 2015. doi:10.1109/TCC.2015.2481429

Abstract: User Request (UR) service scheduling is a process that significantly impacts the performance of a cloud data center. This is especially true since essential Quality-of-Service (QoS) performance metrics such as the UR blocking probability as well as the data center's response time are tightly coupled to such a process. This paper revolves around the proposal of a novel Deadline-Aware UR Scheduling Scheme (DASS) that has the objective of improving the data center's QoS performance in terms of the above-mentioned metrics. A minority of existing work in the literature targets the formulation of mathematical models for the purpose of characterizing a cloud data center's performance. As a contribution to covering this gap, this paper presents an analytical model, which is developed for the purpose of capturing the system's dynamics and evaluating its performance when operating under DASS. The model's results and their accuracy are verified through simulations. In addition, the performance of the data center achieved under DASS is compared to its counterpart achieved under the more generic First-In-First-Out (FIFO) scheme. The reported results indicate that DASS outperforms FIFO by 11% to 58% in terms of the blocking probability and by 82% to 89% in terms of the system's response time.

Keywords: Analytical models; Bandwidth; Cloud computing; Data models; Mathematical model; Quality of service; Time factors; Analysis; Cloud; Data Center; Modelling; Performance (ID#: 15-7472)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274716&isnumber=6562694

 

Goderie, J.; Georgsson, B.M.; van Graafeiland, B.; Bacchelli, A., “ETA: Estimated Time of Answer Predicting Response Time in Stack Overflow,” in Mining Software Repositories (MSR), 2015 IEEE/ACM 12th Working Conference on, vol., no., pp. 414–417, 16–17 May 2015. doi:10.1109/MSR.2015.52

Abstract: Question and Answer (Q&A) sites help developers deal with the increasing complexity of software systems and third-party components by providing a platform for exchanging knowledge about programming topics. A shortcoming of Q&A sites is that they provide no indication of when an answer is to be expected. Such an indication would help, for example, the developers who posed the questions in managing their time. We try to fill this gap by investigating whether and how the answering time for a question posed on Stack Overflow, a prominent example of Q&A websites, can be predicted considering its tags. To this aim, we first determine the types of answers to be considered valid answers to the question, after which the answering time is predicted based on the similarity of the set of tags. Our results show that the classification is correct in 30%–35% of the cases.
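
A small sketch, in Python, of a tag-similarity predictor in the spirit of the approach described above; the paper's actual features, similarity measure, and time buckets may differ, and the toy data below is invented for illustration.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_answer_time(question_tags, history, k=5):
    # history: list of (tags, minutes_until_first_valid_answer) for past questions.
    # Returns the mean answering time of the k most tag-similar past questions.
    ranked = sorted(history, key=lambda h: jaccard(question_tags, h[0]), reverse=True)
    top = ranked[:k]
    return sum(t for _, t in top) / len(top)

past = [({"python", "pandas"}, 12), ({"java", "spring"}, 45),
        ({"python", "numpy"}, 20), ({"c++", "templates"}, 90)]
print(predict_answer_time({"python", "dataframe"}, past, k=2))   # -> 16.0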

Keywords: question answering (information retrieval); software metrics; Stack Overflow; question and answer sites; software system complexity; third-party components; Communities; Correlation; Prediction algorithms; Time factors; Time measurement; Training; response time; stack overflow (ID#: 15-7473)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180106&isnumber=7180053

 

Yong Jin; Xin Yang; Kula, Raula Gaikovina; Eunjong Choi; Inoue, Katsuro; Iida, Hajimu, “Quick Trigger on Stack Overflow: A Study of Gamification-Influenced Member Tendencies,” in Mining Software Repositories (MSR), 2015 IEEE/ACM 12th Working Conference on, vol., no., pp. 434–437, 16–17 May 2015. doi:10.1109/MSR.2015.57

Abstract: In recent times, gamification has become a popular technique to help online communities stimulate active member participation. Gamification promotes a reward-driven approach, usually measured by response time. A possible concern with gamification could be a trade-off between speedy and quality responses. Conversely, a bias toward easier question selection for maximum reward may exist. In this study, we analyze the distribution of gamification-influenced tendencies on the Q&A Stack Overflow online community. In addition, we define some gamification-influenced metrics related to the response time to a question post. We carried out experiments over a four-month period, analyzing 101,291 members' posts. Over this period, we determined a Rapid Response time of 327 seconds (5.45 minutes). Key findings suggest that around 92% of SO members have fewer rapid responses than non-rapid responses. Accepted answers have no clear relationship with rapid responses. However, we did find that rapid responses significantly contain tags that did not follow their usual tagging tendencies.

Keywords: computer games; question answering (information retrieval); social networking (online); software metrics; Q&A Stack Overflow online community; SO members; active member participation; distribution gamification-influenced tendencies; gamification-influenced member tendencies; gamification-influenced metrics; rapid response time; reward-driven approach; Communities; Context; Data mining; Measurement; Software; Tagging; Time factors; Gamification; Mining Software Repositories; Online Community tendencies (ID#: 15-7474)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7180111&isnumber=7180053

 

Medina, V.; Lafon-Pham, D.; Paljic, A.; Diaz, E., “Physically Based Image Synthesis of Materials: A Methodology Towards the Visual Comparison of Physical vs. Virtual Samples,” in Colour and Visual Computing Symposium (CVCS), 2015, pp. 1–6, 25–26 Aug. 2015. doi:10.1109/CVCS.2015.7274878

Abstract: The assessment of images of complex materials on an absolute scale is difficult for a human observer. Comparing physical and virtual samples side-by-side simplifies the task by introducing a reference. The goal of this article is to study the influence of image exposure on the perception of realism on images of paint materials containing sparkling metallic flakes. We use a radiometrically calibrated DSLR camera to acquire high resolution raw photographs of our physical samples which provide us with radiometric information from the samples. This is combined with the data obtained from the calibration of a stereoscopic display and shutter glasses to transform the raw photographs into images that can be shown by the display, controlling the colorimetric output signal. This ensures that we can transform our data back and forth between a radiometric and a colorimetric representation, minimizing the loss of information throughout the chain of acquisition and visualization. In this article we propose a paired comparison scenario that improves the results from our previous work, focusing on three main aspects: stereoscopy, exposure time, and dynamic range. Our results show that observers consider stereoscopy as the most important factor of the three for judging the similarity of these images to the reference, followed by exposure time and dynamic range, which supports our claims from previous research.

Keywords: Image color analysis; Lighting; Observers; Paints; Radiometry; Stereo image processing; Visualization; Human visual system; Paired comparison; Perceptual quality metrics; Physically-based rendering; Texture perception (ID#: 15-7475)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7274878&isnumber=7274875

 

Erfanian, Aida; Yaoping Hu, “Conflict Resolution Models on Usefulness Within Multi-User Collaborative Virtual Environments,” in 3D User Interfaces (3DUI), 2015 IEEE Symposium on, vol., no., pp. 147–148, 23–24 March 2015. doi:10.1109/3DUI.2015.7131743

Abstract: Conflict resolution models play key roles in coordinating simultaneous interactions in multi-user collaborative virtual environments (VEs). Current conflict resolution models are first-come-first-serve (FCFS) and dynamic priority (DP). Known to be unfair, the FCFS model grants all interaction opportunities to the most agile user. Instead, the DP model permits all users the perception of equality in interaction. Nevertheless, it remains unclear whether the perception of equality in interaction could impact the usefulness of multi-user collaborative VEs. Thus, the present work compared the FCFS and DP models for underlying the usefulness of multi-user collaborative VEs. This comparison was undertaken based on a metric of usefulness (i.e., task focus, decision time, and consensus), which we defined according to the ISO/IEC 205010:2011 standard. This definition remedies the current metrics of usefulness, which actually measure the effectiveness and efficiency of target technologies rather than their usefulness. On our multi-user collaborative VE, we observed that the DP model yielded significantly lower decision time and higher consensus than the FCFS model. There was, however, no significant difference in task focus between the two models. These observations imply a potential to improve multi-user collaborative VEs.

Keywords: IEC standards; ISO standards; human computer interaction; human factors; virtual reality; DP model; FCFS model; ISO/IEC 205010:2011 standard; conflict resolution models; consensus; decision time; dynamic priority model; first-come-first-serve model; multiuser collaborative VE; multiuser collaborative virtual environments; simultaneous interaction coordination; task focus; usefulness metrics; user equality perception; Analysis of variance; Collaboration; Computational modeling; Measurement; Standards; Testing; Virtual environments; Conflict resolution models; multi-user collaborative virtual environments; usefulness (ID#: 15-7476)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131743&isnumber=7131667


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.