Discrete and Continuous Optimization
Discrete and continuous optimization are two mathematical approaches to problem solving. The research works cited here are primarily focused on continuous optimization. They appeared between January and October 2014.
- Biao Zhang; Huihui Yan; Junhua Duan; Liang, J.J.; Hong-yan Sang; Quan-ke Pan, "An Improved Harmony Search Algorithm With Dynamic Control Parameters For Continuous Optimization Problems," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp. 966-971, May 31-June 2, 2014. doi: 10.1109/CCDC.2014.6852303 Abstract: An improved harmony search algorithm is presented for solving continuous optimization problems. In the proposed algorithm, an elimination principle is developed for choosing from the harmony memory, so that harmonies with better fitness have more opportunities to be selected when generating new harmonies. Two key control parameters, pitch adjustment rate (PAR) and bandwidth distance (bw), are dynamically adjusted to favor exploration in the early stages and exploitation in the final stages of the search process, adapting to the different search spaces of the optimization problems. Numerical results on 12 benchmark problems show that the proposed algorithm is more effective than existing HS variants at finding better solutions.
Keywords: optimisation; search problems; HS variants; PAR; bandwidth distance; bw; continuous optimization problems; dynamic control parameters; elimination principle; harmony memory; harmony search algorithm; pitch adjustment rate; search process; search spaces; Algorithm design and analysis; Educational institutions; Electronic mail; Heuristic algorithms; Optimization; Search problems; Vectors; Continuous optimization; Dynamic parameter; Evolutionary algorithms; Harmony search; Meta-heuristics (ID#:14-3154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852303&isnumber=6852105
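The dynamic-parameter idea in the abstract above can be illustrated with a minimal harmony search sketch. The specific schedules (linearly growing PAR, exponentially decaying bw), the parameter values, and the sphere test function are illustrative assumptions, not the authors' exact settings:

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, iters=2000, seed=0):
    """Minimal harmony search: PAR grows and bandwidth bw shrinks over the
    run, favoring exploration early and exploitation late (illustrative)."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for t in range(iters):
        par = 0.3 + 0.6 * t / iters           # pitch adjustment rate grows
        bw = (hi - lo) * 0.05 * (0.999 ** t)  # bandwidth decays
        new = []
        for j in range(dim):
            if rng.random() < hmcr:           # harmony memory consideration
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:        # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                             # random re-initialization
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        fv = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fv < fit[worst]:                   # replace the worst harmony
            hm[worst], fit[worst] = new, fv
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best, val = harmony_search(sphere, dim=5, lo=-10, hi=10)
```

An elimination-style selection over the memory, as the paper proposes, would replace the uniform `rng.randrange(hms)` pick with a fitness-biased one.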
- Li, Yexing; Cai, Xinye; Fan, Zhun; Zhang, Qingfu, "An External Archive Guided Multiobjective Evolutionary Approach Based On Decomposition For Continuous Optimization," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 1124-1130, 6-11 July 2014. doi: 10.1109/CEC.2014.6900340 Abstract: In this paper, we propose a decomposition-based multiobjective evolutionary algorithm that extracts information from an external archive to guide the evolutionary search for continuous optimization problems. The proposed algorithm uses a mechanism to identify promising regions (subproblems) by learning from the external archive to guide the evolutionary search process. To demonstrate the performance of the algorithm, we conduct experiments comparing it with other decomposition-based approaches. The results validate that our proposed algorithm is very competitive.
Keywords: Benchmark testing; Educational institutions; Learning systems; Optimization; Sociology; Statistics; Vectors (ID#:14-3155)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900340&isnumber=6900223
- Chia-Feng Juang; Chi-Wei Hung; Chia-Hung Hsu, "Rule-Based Cooperative Continuous Ant Colony Optimization to Improve the Accuracy of Fuzzy System Design," Fuzzy Systems, IEEE Transactions on, vol.22, no.4, pp. 723-735, Aug. 2014. doi: 10.1109/TFUZZ.2013.2272480 Abstract: This paper proposes a cooperative continuous ant colony optimization (CCACO) algorithm and applies it to address the accuracy-oriented fuzzy systems (FSs) design problems. All of the free parameters in a zero- or first-order Takagi-Sugeno-Kang (TSK) FS are optimized through CCACO. The CCACO algorithm performs optimization through multiple ant colonies, where each ant colony is only responsible for optimizing the free parameters in a single fuzzy rule. The ant colonies cooperate to design a complete FS, with a complete parameter solution vector (encoding a complete FS) that is formed by selecting a subsolution component (encoding a single fuzzy rule) from each colony. Subsolutions in each ant colony are evolved independently using a new continuous ant colony optimization algorithm. In the CCACO, solutions are updated via the techniques of pheromone-based tournament ant path selection, ant wandering operation, and best-ant-attraction refinement. The performance of the CCACO is verified through applications to fuzzy controller and predictor design problems. Comparisons with other population-based optimization algorithms verify the superiority of the CCACO.
Keywords: ant colony optimisation; control system synthesis; fuzzy control; fuzzy systems; vectors; CCACO algorithm; FSs; TSK FS; Takagi-Sugeno-Kang fuzzy systems; accuracy-oriented fuzzy system design problems; ant wandering operation; best-ant-attraction refinement; fuzzy controller; parameter solution vector; pheromone-based tournament ant path selection; predictor design problems; rule-based cooperative continuous ant colony optimization; subsolution component; Algorithm design and analysis; Ant colony optimization; Frequency selective surfaces; Fuzzy systems; Optimization; Probability density function; Vectors; Ant colony optimization; cooperative evolution; evolutionary fuzzy systems; swarm intelligence (SI) (ID#:14-3156)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6555815&isnumber=6867403
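The paper's CCACO is a cooperative, multi-colony design, but the core continuous-ACO mechanism it builds on can be sketched more simply: keep an archive of good solutions and sample new candidates from Gaussians centered on archive members (the ACO_R scheme). All parameter values and the sphere objective below are illustrative assumptions:

```python
import math
import random

def aco_r(f, dim, lo, hi, archive=10, ants=10, q=0.1, xi=0.85,
          iters=300, seed=1):
    """Continuous ACO sketch (ACO_R style): sample new solutions from
    Gaussian kernels centered on members of a solution archive."""
    rng = random.Random(seed)
    sols = sorted(([rng.uniform(lo, hi) for _ in range(dim)]
                   for _ in range(archive)), key=f)
    # Rank-based weights: better-ranked archive members attract more ants.
    w = [math.exp(-(r ** 2) / (2 * (q * archive) ** 2))
         for r in range(archive)]
    for _ in range(iters):
        new = []
        for _ in range(ants):
            k = rng.choices(range(archive), weights=w)[0]
            x = []
            for j in range(dim):
                # Spread proportional to average distance to the chosen member.
                sigma = xi * sum(abs(s[j] - sols[k][j])
                                 for s in sols) / (archive - 1)
                x.append(min(hi, max(lo, rng.gauss(sols[k][j], sigma + 1e-12))))
            new.append(x)
        sols = sorted(sols + new, key=f)[:archive]   # keep the best
    return sols[0], f(sols[0])

sphere = lambda x: sum(v * v for v in x)
best, val = aco_r(sphere, dim=4, lo=-5, hi=5)
```

The cooperative variant in the paper would run one such colony per fuzzy rule and assemble a full rule base by picking one subsolution from each colony.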
- Yi, Peng; Hong, Yiguang, "Distributed Continuous-Time Gradient-Based Algorithm For Constrained Optimization," Control Conference (CCC), 2014 33rd Chinese, pp. 1563-1567, 28-30 July 2014. doi: 10.1109/ChiCC.2014.6896861 Abstract: In this paper, we consider a distributed algorithm based on a continuous-time multi-agent system to solve a constrained optimization problem. The global objective function is the sum of the agents' individual objective functions under a group of convex inequality constraints. Because the local objective functions cannot be known explicitly by all the agents, the problem has to be solved in a distributed manner through cooperation between agents. We propose continuous-time distributed gradient dynamics based on the KKT conditions and Lagrangian multiplier methods to solve the optimization problem. We show that all the agents asymptotically converge to the same optimal solution with the help of a constructed Lyapunov function and the LaSalle invariance principle for hybrid systems.
Keywords: Algorithm design and analysis; Heuristic algorithms; Linear programming; Multi-agent systems; Optimization; Trajectory; Distributed optimization; Lagrangian multiplier method; constrained optimization; continuous-time optimization algorithm; multi-agent systems (ID#:14-3157)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6896861&isnumber=6895198
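To give a feel for continuous-time distributed optimization of a sum of local costs, here is a forward-Euler simulation of the standard unconstrained saddle-point ("PI") consensus dynamics, not the paper's constrained KKT/Lagrangian dynamics. The ring topology, quadratic local costs, and step size are all illustrative assumptions:

```python
# Four agents on a ring each hold f_i(x) = (x - a_i)^2 and cooperatively
# minimize sum_i f_i subject to consensus. Saddle-point dynamics:
#   x' = -grad f(x) - L x - L z,   z' = L x
# At equilibrium L x = 0 (consensus) and the stationarity condition forces
# the consensus value to the global optimum, here mean(a) = 3.
a = [1.0, 2.0, 3.0, 6.0]          # local data held by each agent
n = len(a)
L = [[2, -1, 0, -1],              # graph Laplacian of a 4-agent ring
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [-1, 0, -1, 2]]
x = [0.0] * n                     # primal states
z = [0.0] * n                     # integral (dual-like) states
h = 0.01                          # forward-Euler step size
for _ in range(20000):
    Lx = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
    Lz = [sum(L[i][j] * z[j] for j in range(n)) for i in range(n)]
    x, z = ([x[i] + h * (-2.0 * (x[i] - a[i]) - Lx[i] - Lz[i]) for i in range(n)],
            [z[i] + h * Lx[i] for i in range(n)])
```

Each agent only uses its own gradient and its neighbors' states (the Laplacian rows), which is what makes the scheme distributed.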
- Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S., "Calculating the Complete Pareto Front For A Special Class Of Continuous Multi-Objective Optimization Problems," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 290-297, 6-11 July 2014. doi: 10.1109/CEC.2014.6900297 Abstract: Existing methods for multi-objective optimization usually provide only an approximation of a Pareto front, and there is little theoretical guarantee of finding the real Pareto front. This paper is concerned with the possibility of fully determining the true Pareto front for those continuous multi-objective optimization problems for which there are a finite number of local optima in terms of each single objective function and there is an effective method to find all such local optima. To this end, some generalized theoretical conditions are first given to guarantee a complete cover of the actual Pareto front for both discrete and continuous problems. Then, based on such conditions, an effective search procedure inspired by the rising-sea-level phenomenon is proposed particularly for continuous problems of the concerned class. Even for general continuous problems for which not all local optima are available, the new method may still work well to approximate the true Pareto front. The good practicability of the proposed method is especially underpinned by multi-optima evolutionary algorithms. The advantages of the proposed method in terms of both solution quality and computational efficiency are illustrated by simulation results.
Keywords: Aggregates; Approximation methods; Evolutionary computation; Linear programming; Optimization; Sea level; Search problems; Continuous Problem; Evolutionary algorithm; Local Optima; Multi-Objective Optimization; Pareto Front (ID#:14-3158)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900297&isnumber=6900223
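Any procedure for covering a Pareto front, including the rising-sea-level search above, rests on Pareto dominance checks. A minimal nondominated filter (minimization convention; the sample points are illustrative):

```python
def dominates(u, v):
    """u Pareto-dominates v (minimization): u is no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_front(points):
    """Return the nondominated subset of a finite set of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]
front = pareto_front(pts)   # (2,3) and (3,4) are dominated by (2,2)
```

This brute-force filter is O(n^2) in the number of points; it is enough to verify candidate fronts produced by any multi-objective search.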
- Cao, Lu; Chen, Weisheng, "Distributed Continuous-Time Optimization Based On Lagrangian Functions," Control Conference (CCC), 2014 33rd Chinese, pp. 5796-5801, 28-30 July 2014. doi: 10.1109/ChiCC.2014.6895931 Abstract: Distributed optimization is an emerging research topic in which agents in a network solve a problem by exchanging information, mirroring how optimization problems are tackled cooperatively in real life. In this paper, we introduce two continuous-time algorithms to solve distributed optimization problems with equality constraints, where the cost function is expressed as a sum of functions and each function is associated with an agent. We first construct a continuous dynamic system by utilizing the Lagrangian function and show that the algorithm is locally convergent and globally stable under certain conditions. Then, we modify the Lagrangian function and reconstruct the dynamic system to prove that the new algorithm converges under more relaxed conditions. Finally, we present simulations to validate our theoretical results.
Keywords: Eigenvalues and eigenfunctions; Equations; Heuristic algorithms; Lagrangian functions; Linear programming; Optimization; Vectors; Constrained Optimization; Continuous-Time; Distributed Optimization; Lagrangian Function (ID#:14-3159)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895931&isnumber=6895198
- Kia, S.S.; Cortes, J.; Martinez, S., "Periodic and Event-Triggered Communication For Distributed Continuous-Time Convex Optimization," American Control Conference (ACC), 2014, pp. 5010-5015, 4-6 June 2014. doi: 10.1109/ACC.2014.6859122 Abstract: We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.
Keywords: convex programming; directed graphs; network theory (graphs); Zeno behavior; connected undirected graph; convex function; cost functions; distributed continuous-time algorithm; distributed continuous-time convex optimization; event-triggered communication; global cost function; network optimization problem; periodic communication; privacy preservation properties; strongly connected weight-balanced directed graph; synchronous periodic communication scheme; Algorithm design and analysis; Convergence; Convex functions; Cost function; Privacy; Topology; Control of networks; Optimization algorithms (ID#:14-3160)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859122&isnumber=6858556
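The event-triggered idea in the abstract above is that an agent communicates only when its state has drifted sufficiently from the value it last broadcast, rather than at every step. A toy sketch of such a trigger (the scalar dynamics and threshold are illustrative assumptions, not the paper's trigger law):

```python
# One agent drifts toward a target; it re-broadcasts its state only when
# it differs from the last broadcast value by more than a threshold eps.
def should_broadcast(x, x_last, eps=0.05):
    return abs(x - x_last) > eps

x = 0.0          # agent state
x_last = 0.0     # last value broadcast to neighbors
target = 1.0
broadcasts = 0
for _ in range(200):
    x += 0.02 * (target - x)        # local continuous-like dynamics (Euler)
    if should_broadcast(x, x_last): # event condition
        x_last = x                  # communicate and reset the trigger
        broadcasts += 1
```

Because the state drift per step shrinks as the agent converges, broadcasts become sparse; a positive threshold also rules out the Zeno behavior (infinitely many events in finite time) the paper is careful to exclude.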
- Qingshan Liu; Tingwen Huang; Jun Wang, "One-Layer Continuous- and Discrete-Time Projection Neural Networks for Solving Variational Inequalities and Related Optimization Problems," Neural Networks and Learning Systems, IEEE Transactions on, vol.25, no.7, pp. 1308-1318, July 2014. doi: 10.1109/TNNLS.2013.2292893 Abstract: This paper presents one-layer projection neural networks based on projection operators for solving constrained variational inequalities and related optimization problems. Sufficient conditions for global convergence of the proposed neural networks are provided based on Lyapunov stability. Compared with the existing neural networks for variational inequalities and optimization, the proposed neural networks have lower model complexities. In addition, some improved criteria for global convergence are given. Compared with our previous work, a design parameter has been added in the projection neural network models, and it results in some improved performance. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural networks.
Keywords: discrete time systems; neural nets; optimisation; variational techniques; Lyapunov stability; constrained variational inequalities; one-layer continuous-time projection neural networks; one-layer discrete-time projection neural networks; optimization problems; sufficient conditions; Convergence; Educational institutions; Lyapunov methods; Mathematical model; Neural networks; Optimization; Vectors; Constrained optimization; Lyapunov stability; global convergence; projection neural network; variational inequalities (ID#:14-3162)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680760&isnumber=6828828
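A common form of one-layer projection dynamics is x' = -x + P(x - a*grad f(x)), where P projects onto the constraint set; its equilibria are the constrained optima. Below is a forward-Euler simulation for a small box-constrained quadratic program. The Q, c, box bounds, and gains are illustrative assumptions, not the paper's models:

```python
# Minimize f(x) = 0.5*x'Qx + c'x over the box [lo, hi]^2 by simulating
# the projection dynamics  x' = -x + P(x - alpha * grad f(x)).
Q = [[4.0, 1.0], [1.0, 3.0]]
c = [-1.0, -2.0]
lo, hi = 0.0, 1.0
alpha = 0.2                     # gradient gain inside the projection
h = 0.05                        # Euler step size
clip = lambda v: min(hi, max(lo, v))   # projection onto the box, per coordinate

x = [0.5, 0.5]
for _ in range(5000):
    g = [sum(Q[i][j] * x[j] for j in range(2)) + c[i] for i in range(2)]
    p = [clip(x[i] - alpha * g[i]) for i in range(2)]
    x = [x[i] + h * (p[i] - x[i]) for i in range(2)]
```

Here the unconstrained minimizer (Qx = -c gives x = (1/11, 7/11)) already lies inside the box, so the dynamics settle there; with a tighter box the same code converges to the projected optimum instead.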
- Chouzenoux, Emilie; Pesquet, Jean-Christophe; Florescu, Anisia, "A Multi-Parameter Optimization Approach For Complex Continuous Sparse Modelling," Digital Signal Processing (DSP), 2014 19th International Conference on, pp. 817-820, 20-23 Aug. 2014. doi: 10.1109/ICDSP.2014.6900780 Abstract: The main focus of this work is the estimation of a complex valued signal assumed to have a sparse representation in an uncountable dictionary of signals. The dictionary elements are parameterized by a real-valued vector and the available observations are corrupted with an additive noise. By applying a linearization technique, the original model is recast as a constrained sparse perturbed model. The problem of the computation of the involved multiple parameters is addressed from a nonconvex optimization viewpoint. A cost function is defined including an arbitrary Lipschitz differentiable data fidelity term accounting for the noise statistics, and an ℓ0-like penalty. A proximal algorithm is then employed to solve the resulting nonconvex and nonsmooth minimization problem. Experimental results illustrate the good practical performance of the proposed approach when applied to 2D spectrum analysis.
Keywords: Dictionaries; Digital signal processing; Estimation; Optimization; Signal processing algorithms; Spectral analysis; Vectors; 2D spectrum estimation; continuous compressive sensing; forward-backward algorithm; hard thresholding; multivariate estimation; nonconvex optimization; proximity operator; sparse modeling (ID#:14-3163)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900780&isnumber=6900665
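For an ℓ0 penalty, the proximal operator is hard thresholding, so the forward-backward (proximal gradient) scheme the abstract mentions reduces to iterative hard thresholding for a least-squares data-fidelity term. A minimal sketch on a toy real-valued problem; the matrix, observations, and penalty weight are illustrative assumptions, not the paper's perturbed dictionary model:

```python
import math

def hard_threshold(v, tau):
    """Prox of the l0 penalty: zero out entries whose magnitude is <= tau."""
    return [x if abs(x) > tau else 0.0 for x in v]

def iht(A, y, lam=0.1, step=None, iters=500):
    """Forward-backward iteration (iterative hard thresholding) for
    minimize 0.5*||Ax - y||^2 + lam*||x||_0."""
    m, n = len(A), len(A[0])
    if step is None:
        # Crude safe step: 1 / (Frobenius norm^2) lower-bounds 1/||A||_2^2.
        step = 1.0 / sum(a * a for row in A for a in row)
    tau = math.sqrt(2 * lam * step)   # threshold induced by the l0 prox
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A'r
        x = hard_threshold([x[j] - step * g[j] for j in range(n)], tau)
    return x

# Observations generated from the 1-sparse ground truth x* = [0, 2, 0].
A = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.3, 0.0, 1.0]]
y = [0.0, 2.0, 0.0]
x_hat = iht(A, y)
```

The hard-thresholding prox makes the problem nonconvex and nonsmooth, which is why convergence guarantees for such schemes are more delicate than in the convex (soft-thresholding) case.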
- McDaniel, P.; Rivera, B.; Swami, A., "Toward a Science of Secure Environments," Security & Privacy, IEEE, vol.12, no.4, pp. 68-70, July-Aug. 2014. doi: 10.1109/MSP.2014.81 Abstract: The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Keywords: decision making; optimisation; security of data; agile systems; continuous optimization problem; cyber environment; cyber security collaborative research alliance; cyber-decision-making; detection mechanisms; environmental security; fundamental science; network security; secure environments; software security; Approximation methods; Communities; Computational modeling; Computer security; Decision making; formal security; modeling; science of security; security; systems security (ID#:14-3164)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876248&isnumber=6876237
- Hao Wang; Haibin Ouyang; Liqun Gao; Wei Qin, "Opposition-based Learning Harmony Search Algorithm With Mutation For Solving Global Optimization Problems," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp. 1090-1094, May 31-June 2, 2014. doi: 10.1109/CCDC.2014.6852327 Abstract: This paper develops an opposition-based learning harmony search algorithm with mutation (OLHS-M) for solving global continuous optimization problems. The proposed method differs from the original harmony search (HS) in three aspects. First, an opposition-based learning technique is incorporated into the improvisation process to enlarge the algorithm's search space. Second, a new mutation strategy replaces the original pitch adjustment operation of HS to further improve its search ability. Third, an effective self-adaptive strategy is presented to fine-tune the key control parameters (e.g., harmony memory consideration rate HMCR and pitch adjustment rate PAR) to balance local and global search over the course of the search process. Numerical results demonstrate that the proposed algorithm performs much better than the improved HS variants reported in the recent literature in terms of solution quality and stability.
Keywords: learning (artificial intelligence); optimisation; search problems; OLHS-M; algorithm search space; global continuous optimization problems; global search; local search; mutation strategy; opposition-based learning harmony search algorithm; original pitch adjustment operation; self-adaptive strategy; Algorithm design and analysis; Convergence; Heuristic algorithms; Linear programming; Optimization; Search problems; Vectors; Harmony Search Algorithm; Mutation Operation; Opposition-Based Learning; Search Space; Stability (ID#:14-3165)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852327&isnumber=6852105
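The opposition-based learning trick used by OLHS-M is simple to state: for a candidate x in [a, b]^n, also evaluate its opposite point a + b - x and keep the better of the two. A minimal sketch of OBL-based initialization (population size, bounds, and the sphere objective are illustrative assumptions):

```python
import random

def opposite(x, lo, hi):
    """Opposition-based learning: the opposite of x in the box [lo, hi]^n."""
    return [lo + hi - v for v in x]

def obl_init(f, dim, lo, hi, pop_size=20, seed=3):
    """Generate candidates, evaluate each together with its opposite point,
    and keep the better of the two -- widening the effective search space."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        xo = opposite(x, lo, hi)
        pop.append(min(x, xo, key=f))   # keep whichever scores lower
    return pop

sphere = lambda x: sum(v * v for v in x)
pop = obl_init(sphere, dim=3, lo=-10, hi=10)
```

The same opposite-point check can also be applied during improvisation steps, which is roughly where OLHS-M incorporates it; this sketch only shows the initialization flavor.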
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.