MPC 2012, ISSUE 4
Mathematical Programming Computation, Volume 4, Issue 4, December 2012
Optimal sensitivity based on IPOPT
Hans Pirnay, Rodrigo López-Negrete, L. T. Biegler
We introduce a flexible, open source implementation that provides the optimal sensitivity of solutions of nonlinear programming (NLP) problems, and is adapted to a fast solver based on a barrier NLP method. The program, called sIPOPT, evaluates the sensitivity of the Karush–Kuhn–Tucker (KKT) system with respect to perturbation parameters. It is paired with the open-source IPOPT NLP solver and reuses matrix factorizations from the solver, so that sensitivities to parameters are determined with minimal computational cost. Aside from estimating sensitivities for parametric NLPs, the program provides approximate NLP solutions for nonlinear model predictive control and state estimation. These are enabled by pre-factored KKT matrices and a fix-relax strategy based on Schur complements. In addition, reduced Hessians are obtained at minimal cost, and these are particularly effective for approximating covariance matrices in parameter and state estimation problems. The sIPOPT program is demonstrated on four case studies to illustrate all of these features.
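In outline, the sensitivity computation described here amounts to the following first-order system (a sketch in generic notation, not the paper's: φ denotes the barrier KKT residual and s = (x, λ, ν) the primal-dual unknowns):

    % First-order sensitivity of the KKT system (sketch): at a solution s* of
    % \varphi(s; p_0) = 0, the implicit function theorem gives
    \[
      \frac{\partial \varphi}{\partial s}(s^{*}; p_0)\,\frac{\partial s}{\partial p}
      \;=\; -\,\frac{\partial \varphi}{\partial p}(s^{*}; p_0),
      \qquad
      s(p) \;\approx\; s^{*} + \frac{\partial s}{\partial p}\,(p - p_0),
    \]
    % where \partial\varphi/\partial s is the KKT matrix already factored by
    % IPOPT, so each sensitivity costs only an additional back-solve.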
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 4, December 2012
Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm
Zaiwen Wen, W. Yin, Y. Zhang
The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions—a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Extensive numerical experiments show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms. In addition, convergence of this nonlinear SOR algorithm to a stationary point is analyzed.
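The factorization model behind the method can be sketched as follows (generic notation: M is the matrix to recover, Ω the index set of observed entries, k the target rank):

    % Low-rank factorization model for matrix completion (sketch):
    \[
      \min_{X \in \mathbb{R}^{m \times k},\; Y \in \mathbb{R}^{k \times n},\; Z}
      \ \tfrac{1}{2}\,\lVert XY - Z \rVert_F^{2}
      \quad \text{s.t.} \quad Z_{ij} = M_{ij} \ \ \forall (i,j) \in \Omega .
    \]
    % Fixing two of the three blocks makes the update of the remaining block a
    % linear least-squares problem; the nonlinear SOR scheme over-relaxes these
    % alternating updates with an adaptively adjusted weight \omega \ge 1.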
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 4, December 2012
Maximum-weight stable sets and safe lower bounds for graph coloring
Stephan Held, William Cook, Edward C. Sewell
The best method known for determining lower bounds on the vertex coloring number of a graph is the linear-programming column-generation technique, where variables correspond to stable sets, first employed by Mehrotra and Trick in 1996. We present an implementation of the method that provides numerically safe results, independent of the floating-point accuracy of linear-programming software. Our work includes an improved branch-and-bound algorithm for maximum-weight stable sets and a parallel branch-and-price framework for graph coloring. Computational results are presented on a collection of standard test instances, including the unsolved challenge problems created by David S. Johnson in 1989.
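For reference, the column-generation bound in question is the fractional chromatic number LP, sketched here in generic notation (x_S is indexed by stable sets S of G = (V, E), and π_v denotes the LP duals):

    % Column-generation lower bound for vertex coloring (sketch):
    \[
      \chi_f(G) \;=\; \min \sum_{S} x_S
      \quad \text{s.t.} \quad \sum_{S \ni v} x_S \ge 1 \ \ \forall v \in V,
      \qquad x_S \ge 0 .
    \]
    % Pricing asks for a stable set S with \sum_{v \in S} \pi_v > 1; if none
    % exists the LP is optimal, and \lceil \chi_f(G) \rceil \le \chi(G) is the
    % lower bound, provided the rounding is done with numerically safe bounds.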
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 4, December 2012
A comparative study of SQP-type algorithms for nonlinear and nonconvex mixed-integer optimization
Oliver Exler, Thomas Lehmann, Klaus Schittkowski
We present numerical results of a comparative study of codes for nonlinear and nonconvex mixed-integer optimization. The underlying algorithms are based on sequential quadratic programming (SQP) with stabilization by trust regions, linear outer approximations, and branch-and-bound techniques. The mixed-integer quadratic programming subproblems are solved by a branch-and-cut algorithm. Second-order information is updated by a quasi-Newton update formula (BFGS) applied to the Lagrange function for continuous as well as integer variables. We do not require that the model functions can be evaluated at fractional values of the integer variables. Thus, partial derivatives with respect to integer variables are replaced by descent directions obtained from function values at neighboring grid points, and the number of simulations or function evaluations, respectively, is our main performance criterion for measuring the efficiency of a code. Numerical results are presented for a set of 100 academic mixed-integer test problems. Because not all of our test examples are convex, we reach the best-known solutions in only about 90% of the test runs, but we obtain at least feasible solutions in the remaining cases. The average number of function evaluations of the new mixed-integer SQP code is between 240 and 500, including those needed for one- or two-sided approximations of partial derivatives or descent directions, respectively. In addition, we present numerical results for a set of 55 test problems with some practical background.
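The derivative replacement mentioned above can be sketched as follows (generic notation; y_i is an integer variable with unit grid spacing and e_i the corresponding unit vector):

    % One- and two-sided approximations at neighboring grid points (sketch):
    \[
      \frac{\partial f}{\partial y_i}(x, y) \;\approx\; f(x, y + e_i) - f(x, y)
      \qquad \text{or} \qquad
      \frac{\partial f}{\partial y_i}(x, y) \;\approx\;
      \tfrac{1}{2}\bigl( f(x, y + e_i) - f(x, y - e_i) \bigr),
    \]
    % which is why the number of function evaluations (simulations) is the
    % natural measure of efficiency for these codes.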
Full Text: PDF
MPC 2012, ISSUE 3
Mathematical Programming Computation, Volume 4, Issue 3, September 2012
A computational study of a solver system for processing two-stage stochastic LPs with enhanced Benders decomposition
Victor Zverovich, Csaba I. Fábián, Eldon F. D. Ellison, Gautam Mitra
We report a computational study of two-stage SP models on a large set of benchmark problems and consider the following methods: (i) Solution of the deterministic equivalent problem by the simplex method and an interior point method, (ii) Benders decomposition (L-shaped method with aggregated cuts), (iii) Regularised decomposition of Ruszczyński (Math Program 35:309–333, 1986), (iv) Benders decomposition with regularization of the expected recourse by the level method (Lemaréchal et al. in Math Program 69:111–147, 1995), (v) Trust region (regularisation) method of Linderoth and Wright (Comput Optim Appl 24:207–250, 2003). In this study the three regularisation methods have been introduced within the computational structure of Benders decomposition. Thus second-stage infeasibility is controlled in the traditional manner, by imposing feasibility cuts. This approach allows extensions of the regularisation to feasibility issues, as in Fábián and Szőke (Comput Manag Sci 4:313–353, 2007). We report computational results for a wide range of benchmark problems from the POSTS and SLPTESTSET collections and a collection of difficult test problems compiled by us. Finally the scale-up properties and the performance profiles of the methods are presented.
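The aggregated-cut master problem at the core of methods (ii)-(v) can be sketched as follows (generic notation: scenarios i with probabilities p_i, recourse data (h_i, T_i), and duals π_i^k from iteration k):

    % Benders / L-shaped master with aggregated optimality cuts (sketch):
    \[
      \min_{x \in X,\; \theta}\ \ c^{\top} x + \theta
      \quad \text{s.t.} \quad
      \theta \;\ge\; \sum_{i} p_i\, (\pi_i^{k})^{\top} (h_i - T_i x),
      \qquad k = 1, 2, \dots,
    \]
    % together with feasibility cuts \sigma^{\top}(h_i - T_i x) \le 0 whenever a
    % second-stage problem is infeasible; the regularised variants (iii)-(v) add
    % a proximal, level, or trust-region term around the current iterate.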
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 3, September 2012
Cutting plane versus compact formulations for uncertain (integer) linear programs
Matteo Fischetti, Michele Monaci
Robustness is about reducing the feasible set of a given nominal optimization problem by cutting “risky” solutions away. To this end, the most popular approach in the literature is to extend the nominal model with a polynomial number of additional variables and constraints, so as to obtain its robust counterpart. Robustness can also be enforced by adding a possibly exponential family of cutting planes, which typically leads to an exponential formulation where cuts have to be generated at run time. Both approaches have pros and cons, and it is not clear which is the best one when approaching a specific problem. In this paper we computationally compare the two options on some prototype problems with different characteristics. We first address robust optimization à la Bertsimas and Sim for linear programs, and show through computational experiments that a considerable speedup (up to 2 orders of magnitude) can be achieved by exploiting a dynamic cut generation scheme. For integer linear problems, instead, the compact formulation exhibits a typically better performance. We then move to a probabilistic setting and introduce the uncertain set covering problem where each column has a certain probability of disappearing, and each row has to be covered with high probability. A related uncertain graph connectivity problem is also investigated, where edges have a certain probability of failure. For both problems, compact ILP models and cutting plane solution schemes are presented and compared through extensive computational tests. The outcome is that a compact ILP formulation (if available) can be preferable because it allows for a better use of the rich arsenal of preprocessing/cut generation tools available in modern ILP solvers. For the cases where such a compact ILP formulation is not available, as in the uncertain connectivity problem, we propose a restart solution strategy and computationally show its practical effectiveness.
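For a single uncertain row a_i^T x ≤ b_i with coefficient deviations â_ij (j ∈ J_i), budget Γ_i, and variables y_j bounding |x_j|, the two options compared here look roughly as follows (a sketch of the standard Bertsimas-Sim machinery, not of the paper's exact models):

    % Cutting-plane form: one constraint per subset S \subseteq J_i, |S| \le \Gamma_i,
    \[
      \sum_{j} a_{ij}\, x_j \;+\; \sum_{j \in S} \hat a_{ij}\, y_j \;\le\; b_i .
    \]
    % Compact (dualized) robust counterpart: auxiliary variables z_i, p_{ij} \ge 0 with
    \[
      \sum_{j} a_{ij}\, x_j + \Gamma_i z_i + \sum_{j \in J_i} p_{ij} \le b_i,
      \qquad
      z_i + p_{ij} \ge \hat a_{ij}\, y_j,
      \qquad
      -y_j \le x_j \le y_j .
    \]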
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 3, September 2012
LP and SDP branch-and-cut algorithms for the minimum graph bisection problem: a computational comparison
Michael Armbruster, Marzena Fügenschuh, Christoph Helmberg, Alexander Martin
While semidefinite relaxations are known to deliver good approximations for combinatorial optimization problems like graph bisection, their practical scope is mostly associated with small dense instances. For large sparse instances, cutting plane techniques are considered the method of choice. These are also applicable for semidefinite relaxations via the spectral bundle method, which makes it possible to exploit structural properties like sparsity. In order to evaluate the relative strengths of linear and semidefinite approaches for large sparse instances, we set up a common branch-and-cut framework for linear and semidefinite relaxations of the minimum graph bisection problem. It incorporates separation algorithms for valid inequalities of the bisection cut polytope described in a recent study by the authors. While the problem-specific cuts help to strengthen the linear relaxation significantly, the semidefinite bound profits much more from separating the cycle inequalities of the cut polytope on a slightly enlarged support. Extensive numerical experiments show that this semidefinite branch-and-cut approach without problem-specific cuts is a superior choice to the classical simplex approach exploiting bisection-specific inequalities on a clear majority of our large sparse test instances from VLSI design and numerical optimization.
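One standard semidefinite relaxation of minimum bisection, sketched here for orientation (our notation: L is the weighted graph Laplacian, F the cluster-size bound, and X relaxes yy^T for the ±1 partition vector y; the relaxation and cuts actually used in the paper may differ in detail):

    % Basic SDP relaxation of minimum graph bisection (sketch):
    \[
      \min\ \tfrac{1}{4}\,\langle L, X \rangle
      \quad \text{s.t.} \quad
      \operatorname{diag}(X) = e, \qquad
      \langle e e^{\top}, X \rangle \le (2F - n)^{2}, \qquad
      X \succeq 0 .
    \]
    % The branch-and-cut framework then tightens either this bound or the LP
    % relaxation with bisection-specific and cycle inequalities.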
Full Text: PDF
MPC 2012, ISSUE 2
Mathematical Programming Computation, Volume 4, Issue 2, June 2012
PySP: modeling and solving stochastic programs in Python
Jean-Paul Watson, David L. Woodruff, William E. Hart
Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its wide-spread use. One factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of their deterministic counterparts, which are typically formulated first. A second factor relates to the difficulty of solving stochastic programming models, particularly in the mixed-integer, non-linear, and/or multi-stage cases. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times on large-scale problems. We simultaneously address both of these factors in our PySP software package, which is part of the Coopr open-source Python repository for optimization; the latter is distributed as part of IBM’s COIN-OR repository. To formulate a stochastic program in PySP, the user specifies both the deterministic base model (supporting linear, non-linear, and mixed-integer components) and the scenario tree model (defining the problem stages and the nature of uncertain parameters) in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves passing an extensive form to a standard deterministic solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets’ Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for obtaining approximate solutions to multi-stage stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
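As a flavor of the modeling side, here is a minimal, hypothetical Pyomo reference model of the kind PySP builds on; it is a toy two-stage production problem with names and data invented for illustration, and the actual PySP scenario-tree specification, file layout, and (in Coopr-era releases) the coopr.pyomo import path follow the paper and the Coopr documentation rather than this sketch:

    # Hypothetical deterministic reference model (toy two-stage production problem).
    # PySP pairs a model like this with a separate scenario tree specification that
    # marks Produce as first-stage, Sell as second-stage, and Demand as uncertain.
    from pyomo.environ import (ConcreteModel, Param, Var, Constraint, Objective,
                               NonNegativeReals, minimize)

    model = ConcreteModel()

    # Uncertain datum: a mutable parameter so each scenario can override its value.
    model.Demand = Param(initialize=120.0, mutable=True)

    # First-stage decision (made before demand is known) and second-stage decision.
    model.Produce = Var(within=NonNegativeReals)
    model.Sell = Var(within=NonNegativeReals)

    # Cannot sell more than was produced or more than the realized demand.
    model.SellWithinProduction = Constraint(expr=model.Sell <= model.Produce)
    model.SellWithinDemand = Constraint(expr=model.Sell <= model.Demand)

    # Minimize production cost minus sales revenue (i.e., maximize profit).
    model.TotalCost = Objective(expr=2.0 * model.Produce - 5.0 * model.Sell,
                                sense=minimize)

Given these two inputs, PySP either builds the extensive form for a standard solver or runs Progressive Hedging across the scenarios.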
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 2, June 2012
On optimizing over lift-and-project closures
Pierre Bonami
The strengthened lift-and-project closure of a mixed integer linear program is the polyhedron obtained by intersecting all strengthened lift-and-project cuts obtained from its initial formulation, or equivalently all mixed integer Gomory cuts read from all tableaux corresponding to feasible and infeasible bases of the LP relaxation. In this paper, we present an algorithm for approximately optimizing over the strengthened lift-and-project closure. The originality of our method is that it relies on a cut generation linear programming problem which is obtained from the original LP relaxation by only modifying the bounds on the variables and constraints. This separation LP can also be seen as dual to the cut generation LP used in disjunctive programming procedures with a particular normalization. We study properties of this separation LP, and discuss how to use it to approximately optimize over the strengthened lift-and-project closure. Finally, we present computational experiments and comparisons with recent related works.
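For orientation, the classical disjunctive cut-generation LP (CGLP) that the separation problem is compared against can be sketched as follows (standard notation for the LP relaxation {x : Ãx ≥ b̃}, a fractional point x̄, and the split disjunction x_j ≤ 0 or x_j ≥ 1; the paper's bound-modified separation LP itself is structured differently):

    % Classical CGLP with a normalization constraint (sketch):
    \[
      \min\ \alpha^{\top} \bar x - \beta
      \quad \text{s.t.} \quad
      \alpha = \tilde A^{\top} u - u_0 e_j, \quad
      \alpha = \tilde A^{\top} v + v_0 e_j, \quad
      \beta \le \tilde b^{\top} u, \quad
      \beta \le \tilde b^{\top} v + v_0,
    \]
    \[
      u,\, v,\, u_0,\, v_0 \ge 0,
      \qquad
      \textstyle\sum_i u_i + u_0 + \sum_i v_i + v_0 = 1 .
    \]
    % A negative optimum yields a violated cut \alpha^{\top} x \ge \beta, which is
    % then strengthened using the integrality of the remaining variables.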
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 2, June 2012
A penalty-interior-point algorithm for nonlinear constrained optimization
Frank E. Curtis
Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make their use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques, but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on large sets of problems, including sets of problems that exhibit degeneracy or are infeasible.
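A generic member of this subproblem class, sketched in our notation for min f(x) s.t. c(x) = 0, x ≥ 0 and not necessarily the exact subproblem of the paper: the constraints are relaxed with penalized elastic variables r, s ≥ 0 and all nonnegativities are handled by a log barrier,

    % Generic penalty-interior-point subproblem (sketch):
    \[
      \min_{x,\, r,\, s}\ \ f(x) \;+\; \rho\, e^{\top}(r + s)
      \;-\; \mu \sum_{j} \ln x_j \;-\; \mu \sum_{i} \bigl( \ln r_i + \ln s_i \bigr)
      \quad \text{s.t.} \quad c(x) + r - s = 0 ,
    \]
    % with penalty parameter \rho and barrier parameter \mu; the paper's focus is
    % on updating \rho and \mu so that the iterates converge rapidly either to a
    % solution or to an infeasible stationary point.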
Full Text: PDF
MPC 2012, ISSUE 1
Mathematical Programming Computation, Volume 4, Issue 1, March 2012
Rounding-based heuristics for nonconvex MINLPs
Giacomo Nannicini, Pietro Belotti
We propose two primal heuristics for nonconvex mixed-integer nonlinear programs. Both are based on the idea of rounding the solution of a continuous nonlinear program subject to linear constraints. Each rounding step is accomplished through the solution of a mixed-integer linear program. Our heuristics use the same algorithmic scheme, but they differ in the choice of the point to be rounded (which is feasible for nonlinear constraints but possibly fractional) and in the linear constraints. We propose a feasibility heuristic that aims at finding an initial feasible solution, and an improvement heuristic, whose purpose is to search for an improved solution within the neighborhood of a given point. The neighborhood is defined through local branching cuts or box constraints. Computational results show the effectiveness in practice of these simple ideas, implemented within an open-source solver for nonconvex mixed-integer nonlinear programs.
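The two neighborhoods mentioned above can be sketched as follows (generic notation: x̄ is the binary part of the point around which the improvement heuristic searches, and k, δ are radius parameters):

    % Local branching neighborhood (sketch): points within Hamming distance k of \bar x,
    \[
      \sum_{i :\, \bar x_i = 0} x_i \;+\; \sum_{i :\, \bar x_i = 1} (1 - x_i) \;\le\; k ,
    \]
    % while the box-constraint alternative simply imposes
    % |x_i - \bar x_i| \le \delta_i on the (general integer) variables.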
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 1, March 2012
Globally solving nonconvex quadratic programming problems via completely positive programming
Jieqiu Chen, Samuel Burer
Nonconvex quadratic programming (QP) is an NP-hard problem that optimizes a general quadratic function over linear constraints. This paper introduces a new global optimization algorithm for this problem, which combines two ideas from the literature—finite branching based on the first-order KKT conditions and polyhedral-semidefinite relaxations of completely positive (or copositive) programs. Through a series of computational experiments comparing the new algorithm with existing codes on a diverse set of test instances, we demonstrate that the new algorithm is an attractive method for globally solving nonconvex QP.
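In outline (a sketch for the standard-form case Ax = b, x ≥ 0; the paper handles more general linear constraints and bounds), the nonconvex QP and the doubly nonnegative relaxation obtained from its completely positive reformulation are:

    % Nonconvex QP and its doubly nonnegative (polyhedral-semidefinite) relaxation (sketch):
    \[
      \min_{x}\ \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x
      \quad \text{s.t.} \quad A x = b, \ \ x \ge 0 ,
    \]
    \[
      \min_{x,\, X}\ \tfrac{1}{2}\,\langle Q, X \rangle + c^{\top} x
      \quad \text{s.t.} \quad
      A x = b, \ \
      \operatorname{diag}(A X A^{\top}) = b \circ b, \ \
      \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0, \ \
      x \ge 0, \ X \ge 0 ,
    \]
    % where X relaxes x x^{\top}; finite KKT-based branching closes the gap
    % between these two problems.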
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 1, March 2012
Fast Fourier optimization
Robert J. Vanderbei
Many interesting and fundamentally practical optimization problems, ranging from optics, to signal processing, to radar and acoustics, involve constraints on the Fourier transform of a function. It is well known that the fast Fourier transform (fft) is a recursive algorithm that can dramatically improve the efficiency of computing the discrete Fourier transform. However, because it is recursive, it is difficult to embed into a linear optimization problem. In this paper, we explain the main idea behind the fast Fourier transform and show how to adapt it in such a manner as to make it encodable as constraints in an optimization problem. We demonstrate the approach on a real-world problem from the field of high-contrast imaging. On this problem, the dramatic improvements translate into an ability to solve problems with a much finer grid of discretized points. As we shall show, in general, the “fast Fourier” version of the optimization constraints produces a larger but sparser constraint matrix, and therefore one can think of the fast Fourier transform as a method of sparsifying the constraints in an optimization problem, which is usually a good thing.
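The sparsification idea can be sketched as follows (generic notation: F is the n×n DFT matrix and the A_k are the sparse butterfly factors of the FFT):

    % Dense versus factored Fourier constraints (sketch):
    \[
      y = F x \ \ (\text{about } n^{2} \text{ nonzeros})
      \qquad \text{versus} \qquad
      F = A_m A_{m-1} \cdots A_1, \quad m = \log_2 n ,
    \]
    % with auxiliary vectors z_0 = x,\ z_k = A_k z_{k-1},\ y = z_m: the factored
    % form has more variables and constraints but only O(n \log n) nonzeros in
    % total, i.e., a larger but much sparser constraint matrix.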
Full Text: PDF
Mathematical Programming Computation, Volume 4, Issue 1, March 2012
A primal–dual regularized interior-point method for convex quadratic programs
M. P. Friedlander, D. Orban
Interior-point methods in augmented form for linear and convex quadratic programming require the solution of a sequence of symmetric indefinite linear systems which are used to derive search directions. Safeguards are typically required in order to handle free variables or rank-deficient Jacobians. We propose a consistent framework and accompanying theoretical justification for regularizing these linear systems. Our approach can be interpreted as a simultaneous proximal-point regularization of the primal and dual problems. The regularization is termed exact to emphasize that, although the problems are regularized, the algorithm recovers a solution of the original problem, for appropriate values of the regularization parameters.
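The flavor of the regularization can be sketched as follows (our notation: H is the Hessian of the quadratic objective, X and Z are diagonal matrices built from the primal variables and dual multipliers, and ρ, δ are the primal and dual regularization parameters):

    % Regularized augmented system (sketch): the coefficient matrices take the form
    \[
      \begin{pmatrix} H + X^{-1} Z + \rho I & A^{\top} \\ A & -\delta I \end{pmatrix},
      \qquad \rho,\ \delta > 0 ,
    \]
    % which is symmetric quasi-definite, so it can be factored stably without
    % special safeguards for free variables or rank-deficient A; \rho and \delta
    % correspond to proximal-point terms on the primal and dual problems.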
Full Text: PDF