MPC 2015, ISSUE 4



Mathematical Programming Computation, Volume 7, Issue 4, December 2015

Progress in presolving for mixed integer programming

Gerald Gamrath, Thorsten Koch, Alexander Martin, Matthias Miltenberger, Dieter Weninger

This paper describes three presolving techniques for solving mixed integer programming problems (MIPs) that were implemented in the academic MIP solver SCIP. The task of presolving is to reduce the problem size and strengthen the formulation, mainly by eliminating redundant information and exploiting problem structures. The first method fixes continuous singleton columns and extends results known from duality fixing. The second analyzes and exploits pairwise dominance relations between variables, whereas the third detects isolated subproblems and solves them independently. The performance of the presented techniques is demonstrated on two MIP test sets. One contains all benchmark instances from the last three MIPLIB versions, while the other consists of real-world supply chain management problems. The computational results show that the combination of all three presolving techniques almost halves the solving time for the considered supply chain management problems. For the MIPLIB instances we obtain a speedup of 20% on affected instances while not degrading the performance on the remaining problems.
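
As background, the classic duality-fixing rule that the first technique extends can be illustrated on a relaxation in canonical form. The Python sketch below is only an illustration of that textbook rule, assuming constraints A x >= b with bounds lb <= x <= ub; it is not the SCIP presolver, and all names are hypothetical.

    import numpy as np

    def duality_fixing(c, A, lb, ub):
        """Textbook duality fixing for  min c^T x  s.t.  A x >= b,  lb <= x <= ub
        (an illustrative sketch, not the SCIP implementation).
        Returns a dict {column index: value to fix the variable at}."""
        fixings = {}
        for j in range(A.shape[1]):
            col = A[:, j]
            if c[j] >= 0 and np.all(col <= 0) and np.isfinite(lb[j]):
                # Decreasing x_j never worsens the objective (c_j >= 0) and only
                # increases every row activity (a_ij <= 0), so fix it at its lower bound.
                fixings[j] = lb[j]
            elif c[j] <= 0 and np.all(col >= 0) and np.isfinite(ub[j]):
                # Symmetric argument: fix x_j at its upper bound.
                fixings[j] = ub[j]
        return fixings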

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 4, December 2015

A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees

Xiaocun Que, Frank E. Curtis

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy the algorithm ensures that convergence will continue to such a point. Under suitable assumptions, it is proved that the algorithm converges globally with probability one. The algorithm has been implemented in C++ and the results of numerical experiments illustrate the efficacy of the proposed approach.
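
For orientation, a minimal sketch of the generic gradient sampling direction (not necessarily the paper's exact adaptive variant): at an iterate x_k, gradients are sampled at nearby points x_{k,1}, ..., x_{k,m} within a radius \varepsilon_k at which f is differentiable, and the search direction is the negative of the minimum-norm element of their convex hull,

\[
d_k \;=\; -\operatorname*{arg\,min}\Big\{ \|g\|_2 \;:\; g \in \operatorname{conv}\{\nabla f(x_k),\, \nabla f(x_{k,1}),\, \dots,\, \nabla f(x_{k,m})\} \Big\}.
\]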

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 4, December 2015

PEBBL: an object-oriented framework for scalable parallel branch and bound

Jonathan Eckstein, William E. Hart, Cynthia A. Phillips

Parallel Enumeration and Branch-and-Bound Library (PEBBL) is a C++ class library implementing the underlying operations needed to support a wide variety of branch-and-bound algorithms on MPI-based message-passing distributed-memory parallel computing environments. PEBBL can be customized to support application-specific operations, while managing the generic aspects of branch and bound, such as maintaining the active subproblem pool across multiple processors, load balancing, and termination detection. PEBBL is designed to provide highly scalable performance on large numbers of processor cores. We describe the basics of PEBBL’s architecture, with emphasis on the features most critical to its high scalability, including its flexible two-level load balancing architecture and its support for a synchronously parallel ramp-up phase. We also present an example application: the maximum monomial agreement problem arising from certain machine learning applications. For sufficiently difficult problem instances, we show essentially linear speedup on over 6000 processor cores, demonstrating a new state of the art in scalability for branch-and-bound implementations. We also show how processor cache effects can lead to reproducibly superlinear speedups.

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 4, December 2015

Computational study of decomposition algorithms for mean-risk stochastic linear programs

Lewis Ntaimo, Tanisha G. Cotton

Mean-risk stochastic programs include a risk measure in the objective to model risk averseness for many problems in science and engineering. This paper reports a computational study of mean-risk two-stage stochastic linear programs with recourse based on absolute semideviation (ASD) and quantile deviation (QDEV). The study was aimed at performing an empirical investigation of decomposition algorithms for stochastic programs with quantile and deviation mean-risk measures; analyzing how the instance solutions vary across different levels of risk; and understanding when it is appropriate to use a given mean-risk measure. Aggregated optimality cut and separate cut subgradient-based algorithms were implemented for each mean-risk model. Both types of algorithms show similar computational performance for ASD, whereas the separate cut algorithm outperforms the aggregated cut algorithm for QDEV. The study provides several insights. For example, the results reveal that the risk-neutral approach is still appropriate for most of the standard stochastic programming test instances due to their uniform or normal-like marginal distributions. However, when the distributions are modified, the risk-neutral approach may no longer be appropriate and the risk-averse approach becomes necessary. The results also show that ASD is a more conservative mean-risk measure than QDEV.
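
For reference, a schematic form of the mean-risk objective studied here, with the absolute semideviation written out (QDEV replaces the deviation term by a quantile-based one); the weight \lambda \ge 0 and the notation are generic, not necessarily the paper's:

\[
\min_{x \in X} \; \mathbb{E}\big[f(x,\tilde\omega)\big] \;+\; \lambda\,\mathbb{D}\big[f(x,\tilde\omega)\big],
\qquad
\mathbb{D}_{\mathrm{ASD}}[Y] \;=\; \mathbb{E}\big[(Y - \mathbb{E}[Y])_+\big].
\]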

Full Text: PDF



MPC 2015, ISSUE 3



Mathematical Programming Computation, Volume 7, Issue 3, September 2015

A note on “On fast trust region methods for quadratic models with linear constraints”, by Michael J.D. Powell

Daniel Bienstock, Philip E. Gill, Nick Gould

The following paper, “On fast trust region methods for quadratic models with linear constraints” by Michael J.D. Powell, was submitted to Mathematical Programming Computation in August 2014, and was still in review when we became aware of Mike’s deteriorating health. Unfortunately, despite our best efforts and those of two referees, Mike died before we were able to send him our verdict. Both referees had concerns about points that needed further explanation, but in all likelihood—and after a few clarifying iterations—the paper would ultimately have been published. As we believe that the ideas central to the paper are of general interest to the journal’s readership, we have taken the unusual step of publishing the paper “as is”. Readers might, like the referees, question some of the statements made, but we feel it would be wrong to second-guess Mike’s intentions, and have chosen to leave such statements alone. Mike had a profound effect on computational nonlinear optimization. We dedicate this paper to his memory.

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 3, September 2015

On fast trust region methods for quadratic models with linear constraints

M. J. D. Powell

Quadratic models Q_k(x), x ∈ R^n, of the objective function F(x), x ∈ R^n, are used by many successful iterative algorithms for minimization, where k is the iteration number. Given the vector of variables x_k ∈ R^n, a new vector x_{k+1} may be calculated that satisfies Q_k(x_{k+1}) < Q_k(x_k), in the hope that it provides the reduction F(x_{k+1}) < F(x_k). Trust region methods include a bound of the form ||x_{k+1} − x_k|| ≤ Δ_k. Also we allow general linear constraints on the variables that have to hold at x_k and at x_{k+1}. We consider the construction of x_{k+1}, using only of magnitude n^2 operations on a typical iteration when n is large. The linear constraints are treated by active sets, which may be updated during an iteration, and which decrease the number of degrees of freedom in the variables temporarily, by restricting x to an affine subset of R^n. Conjugate gradient and Krylov subspace methods are addressed for adjusting the reduced variables, but the resultant steps are expressed in terms of the original variables. Termination conditions are given that are intended to combine suitable reductions in Q_k(·) with a sufficiently small number of steps. The reason for our work is that x_{k+1} is required in the LINCOA software of the author, which is designed for linearly constrained optimization without derivatives when there are hundreds of variables. Our studies go beyond the final version of LINCOA, however, which employs conjugate gradients with termination at the trust region boundary. In particular, we find that, if an active set is updated at a point that is not the trust region centre, then the Krylov method may terminate too early due to a degeneracy. An extension to the conjugate gradient method for searching round the trust region boundary receives attention too, but it was also rejected from LINCOA, because of poor numerical results. The given techniques of LINCOA seem to be adequate in practice.
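
In schematic form, the subproblem whose approximate solution is constructed on each iteration combines the trust region bound with the linear constraints (generic notation, assuming the constraints are written as inequalities; the paper's exact conventions may differ):

\[
\min_{x \in \mathbb{R}^n} \; Q_k(x)
\quad \text{subject to} \quad
\|x - x_k\| \le \Delta_k, \qquad a_j^{\mathsf T} x \le b_j, \;\; j = 1,\dots,m,
\]

with the active-set strategy temporarily restricting x to an affine subset on which the reduced problem is treated by conjugate gradient or Krylov iterations.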

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 3, September 2015

Solving the equality generalized traveling salesman problem using the Lin–Kernighan–Helsgaun Algorithm

Keld Helsgaun

The equality generalized traveling salesman problem (E-GTSP) is an extension of the traveling salesman problem (TSP) where the set of cities is partitioned into clusters, and the salesman has to visit every cluster exactly once. It is well known that any instance of E-GTSP can be transformed into a standard asymmetric instance of the TSP, and therefore solved with a TSP solver. This paper evaluates the performance of the state-of-the-art TSP solver Lin–Kernighan–Helsgaun (LKH) on transformed E-GTSP instances. Although LKH is used without any modifications, the computational evaluation shows that all instances in a well-known library of benchmark instances, GTSPLIB, could be solved to optimality in a reasonable time. In addition, it was possible to solve a series of new very-large-scale instances with up to 17,180 clusters and 85,900 vertices. Optima for these instances are not known but it is conjectured that LKH has been able to find solutions of a very high quality. The program’s performance has also been evaluated on a large number of instances generated by transforming arc routing problem instances into E-GTSP instances. The program is free of charge for academic and non-commercial use and can be downloaded in source code.

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 3, September 2015

A parallel quadratic programming method for dynamic optimization problems

Janick V. Frasch, Sebastian Sager, Moritz Diehl

Quadratic programming problems (QPs) that arise from dynamic optimization problems typically exhibit a very particular structure. We address the ubiquitous case where these QPs are strictly convex and propose a dual Newton strategy that exploits the block-bandedness similarly to an interior-point method. Still, the proposed method features the warm-starting capabilities of active-set methods. We give details for an efficient implementation, including tailored numerical linear algebra, step size computation, parallelization, and infeasibility handling. We prove convergence of the algorithm for the considered problem class. A numerical study based on the open-source implementation qpDUNES shows that the algorithm significantly outperforms both well-established general-purpose QP solvers and state-of-the-art tailored control QP solvers on the considered benchmark problems.
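
The "very particular structure" referred to is the stage-wise coupling of an optimal-control QP. A generic form of such a QP (notation assumed for illustration, not necessarily qpDUNES's exact problem statement) is

\[
\begin{aligned}
\min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1}} \quad
& \sum_{k=0}^{N-1}\Big[\tfrac12 \begin{pmatrix} x_k \\ u_k\end{pmatrix}^{\!\mathsf T}\! H_k \begin{pmatrix} x_k \\ u_k\end{pmatrix} + g_k^{\mathsf T}\begin{pmatrix} x_k \\ u_k\end{pmatrix}\Big] + \tfrac12\, x_N^{\mathsf T} P\, x_N \\
\text{subject to}\quad
& x_{k+1} = A_k x_k + B_k u_k + c_k, \qquad k = 0,\dots,N-1, \\
& \underline d_k \le D_k \begin{pmatrix} x_k \\ u_k\end{pmatrix} \le \overline d_k, \qquad k = 0,\dots,N-1,
\end{aligned}
\]

whose KKT matrix is block banded; roughly, the dual Newton strategy dualizes the stage-coupling equalities and applies a Newton-type method to the resulting dual.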

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 3, September 2015

SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints

Kim-Chuan Toh, Liuqin Yang, Defeng Sun

In this paper, we present a majorized semismooth Newton-CG augmented Lagrangian method, called SDPNAL+, for semidefinite programming (SDP) with partial or full nonnegative constraints on the matrix variable. SDPNAL+ is a much enhanced version of SDPNAL introduced by Zhao et al. (SIAM J Optim 20:1737–1765, 2010) for solving generic SDPs. SDPNAL works very efficiently for nondegenerate SDPs but may encounter numerical difficulty for degenerate ones. Here we tackle this numerical difficulty by employing a majorized semismooth Newton-CG augmented Lagrangian method coupled with a convergent 3-block alternating direction method of multipliers introduced recently by Sun et al. (SIAM J Optim, to appear). Numerical results for various large-scale SDPs with or without nonnegative constraints show that the proposed method is not only fast but also robust in obtaining accurate solutions. It outperforms, by a significant margin, two other competitive publicly available codes based on first-order methods: (1) an alternating direction method of multipliers based solver called SDPAD by Wen et al. (Math Program Comput 2:203–230, 2010) and (2) a two-easy-block-decomposition hybrid proximal extragradient method called 2EBD-HPE by Monteiro et al. (Math Program Comput 1–48, 2014). In contrast to these two codes, we are able to solve all 95 difficult SDP problems arising from the relaxations of quadratic assignment problems tested in SDPNAL to an accuracy of 10^{-6} efficiently, while SDPAD and 2EBD-HPE successfully solve 30 and 16 problems, respectively. In addition, SDPNAL+ appears to be the only viable method currently available to solve large-scale SDPs arising from rank-1 tensor approximation problems constructed by Nie and Wang (SIAM J Matrix Anal Appl 35:1155–1179, 2014). The largest rank-1 tensor approximation problem we solved (in about 14.5 h) is nonsym(21,4), whose resulting SDP problem has matrix dimension n = 9261 and number of equality constraints m = 12,326,390.
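
The problem class addressed can be written schematically as a standard SDP with an additional elementwise nonnegativity constraint on part or all of the matrix variable (generic notation):

\[
\min_{X \in \mathbb{S}^n} \; \langle C, X\rangle
\quad \text{subject to} \quad
\mathcal{A}(X) = b, \qquad X \succeq 0, \qquad X \ge 0,
\]

where X \succeq 0 denotes positive semidefiniteness and X \ge 0 denotes entrywise nonnegativity.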

Full Text: PDF



MPC 2015, ISSUE 2



Mathematical Programming Computation, Volume 7, Issue 2, June 2015

The strength of multi-row models

Quentin Louveaux, Laurent Poirrier, Domenico Salvagnin

We develop a method for computing facet-defining valid inequalities for any mixed-integer set P_J. While our practical implementation does not return only facet-defining inequalities, it is able to find a separating cut whenever one exists. The separator is not comparable in speed with the specific cutting-plane generators used in branch-and-cut solvers, but it is general-purpose. We can thus use it to compute cuts derived from any reasonably small relaxation P_J of a general mixed-integer problem, even when there exists no specific implementation for computing cuts with P_J. Exploiting this, we evaluate, from a computational perspective, the usefulness of cuts derived from several types of multi-row relaxations. In particular, we present results with four different strengthenings of the two-row intersection cut model, and multi-row models with up to fifteen rows. We conclude that only fully-strengthened two-row cuts seem to offer a significant advantage over two-row intersection cuts. Our results also indicate that the improvement obtained by going from models with very few rows to models with up to fifteen rows may not be worth the increased computing cost.
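
The multi-row models in question are relaxations read off from m rows of a simplex tableau. A common way to write the underlying m-row intersection cut model (generic notation, possibly differing from the paper's definition of P_J) is

\[
\Big\{ (x, s) \in \mathbb{Z}^m \times \mathbb{R}^n_{\ge 0} \;:\; x = f + \sum_{j=1}^{n} r^j s_j \Big\}, \qquad f \notin \mathbb{Z}^m,
\]

from which one seeks inequalities of the form \sum_j \alpha_j s_j \ge 1 that cut off the fractional basic solution s = 0.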

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 2, June 2015

Generalized alternating direction method of multipliers: new theoretical insights and applications

Ethan X. Fang, Bingsheng He, Han Liu, Xiaoming Yuan

Recently, the alternating direction method of multipliers (ADMM) has received intensive attention from a broad spectrum of areas. The generalized ADMM (GADMM) proposed by Eckstein and Bertsekas is an efficient and simple acceleration scheme of ADMM. In this paper, we take a deeper look at the linearized version of GADMM, where one of its subproblems is approximated by a linearization strategy. This linearized version is particularly efficient for a number of applications arising from different areas. Theoretically, we show the worst-case O(1/k) convergence rate measured by the iteration complexity (k represents the iteration counter) in both the ergodic and nonergodic senses for the linearized version of GADMM. Numerically, we demonstrate the efficiency of this linearized version of GADMM by some rather new and core applications in statistical learning. Code packages in Matlab for these applications are also developed.
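
For reference, one common textbook way to state the generalized (relaxed) ADMM scheme of Eckstein and Bertsekas for \min_{x,z} f(x) + g(z) subject to Ax + Bz = b, in scaled dual form with penalty \beta > 0 and relaxation factor \alpha \in (0, 2); the paper's linearized variant additionally replaces one subproblem by a linearized (proximal) step:

\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\beta}{2}\big\| Ax + Bz^k - b + u^k \big\|^2, \\
h^{k+1} &= \alpha\, A x^{k+1} - (1-\alpha)\,(B z^k - b), \\
z^{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\beta}{2}\big\| h^{k+1} + Bz - b + u^k \big\|^2, \\
u^{k+1} &= u^k + h^{k+1} + B z^{k+1} - b.
\end{aligned}
\]

Setting \alpha = 1 recovers the standard ADMM iteration.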

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 2, June 2015

Solving network design problems via iterative aggregation

Andreas Bärmann, Frauke Liers, Alexander Martin, Maximilian Merkert, Christoph Thurner, Dieter Weninger

In this work, we present an exact approach for solving network design problems that is based on an iterative graph aggregation procedure. The scheme can handle networks with existing, pre-installed capacities. Starting with an initial aggregation, we solve a sequence of network design master problems over increasingly fine-grained representations of the original network. In each step, a subproblem is solved that either proves optimality of the solution or gives a directive where to refine the representation of the network in the subsequent iteration. The algorithm terminates with a globally optimal solution to the original problem. Our implementation uses a standard integer programming solver for solving the master problems as well as the subproblems. The computational results on random and realistic instances confirm the profitable use of the iterative aggregation technique. The computing time often reduces drastically when our method is compared to solving the original problem from scratch.
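
A hypothetical skeleton of the aggregation loop described above, in Python; the three callables and all names are placeholders for illustration only, not part of any published implementation.

    def solve_by_aggregation(original_network, initial_aggregation,
                             solve_master, check_subproblem, refine):
        """Sketch of the iterative aggregation scheme: solve a master problem on a
        coarse (aggregated) network, then either certify global optimality for the
        original network or refine the aggregation and repeat.
        `solve_master` solves the network design problem on the coarse network,
        `check_subproblem` certifies optimality or returns a refinement hint, and
        `refine` disaggregates the indicated part of the network."""
        aggregation = initial_aggregation
        while True:
            design = solve_master(aggregation)                 # master problem (MIP)
            is_optimal, hint = check_subproblem(design, original_network)
            if is_optimal:
                return design                                  # globally optimal design
            aggregation = refine(aggregation, hint)            # finer representation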

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 2, June 2015

On solving a hard quadratic 3-dimensional assignment problem

Hans D. Mittelmann, Domenico Salvagnin

We address the solution of a very challenging (and previously unsolved) instance of the quadratic 3-dimensional assignment problem, arising in digital wireless communications. The paper describes the techniques developed to solve this instance to optimality, from the choice of an appropriate mixed-integer programming formulation, to cutting planes and symmetry handling. Using these techniques we were able to solve the target instance with moderate computational effort (2.5 million nodes and 1 week of computations on a standard PC).

Full Text: PDF



MPC 2015, ISSUE 1



Mathematical Programming Computation, Volume 7, Issue 1, March 2015

Global optimization of nonconvex problems with multilinear intermediates

Xiaowei Bao, Aida Khajavirad, Nikolaos V. Sahinidis, Mohit Tawarmalani

We consider global optimization of nonconvex problems containing multilinear functions. It is well known that the convex hull of a multilinear function over a box is polyhedral, and the facets of this polyhedron can be obtained by solving a linear optimization problem (LP). When used as cutting planes, these facets can significantly enhance the quality of conventional relaxations in general-purpose global solvers. However, in general, the size of this LP grows exponentially with the number of variables in the multilinear function. To cope with this growth, we propose a graph decomposition scheme that exploits the structure of a multilinear function to decompose it into lower-dimensional components, for which the aforementioned LP can be solved very efficiently by employing a customized simplex algorithm. We embed this cutting plane generation strategy at every node of the branch-and-reduce global solver BARON, and carry out an extensive computational study on quadratically constrained quadratic problems, multilinear problems, and polynomial optimization problems. Results show that the proposed multilinear cuts enable BARON to solve many more problems to global optimality and lead to an average 60% CPU time reduction.
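
The simplest instance of the polyhedral convex hull mentioned above is the bilinear case: for y = x_1 x_2 over the box [0,1]^2, the hull is given by the four McCormick inequalities

\[
y \ge 0, \qquad y \ge x_1 + x_2 - 1, \qquad y \le x_1, \qquad y \le x_2 ;
\]

for higher-dimensional multilinear terms the facets are no longer this simple and are obtained from the vertex-based LP referred to in the abstract.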

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 1, March 2015

Alternating proximal gradient method for sparse nonnegative Tucker decomposition

Yangyang Xu

Multi-way data arise in many applications such as electroencephalography classification, face recognition, text mining, and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of the multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which is to decompose a given tensor into the product of a core tensor and several factor matrices with sparsity and nonnegativity constraints. An alternating proximal gradient method is applied to solve the problem. The algorithm is then modified to solve sparse NTD with missing values. The per-iteration cost of the algorithm is estimated to scale well with the data size, and global convergence is established under fairly loose conditions. Numerical experiments on both synthetic and real world data demonstrate its superiority over a few state-of-the-art methods for (sparse) NTD from partial and/or full observations. The MATLAB code, along with demos, is accessible from the author’s homepage.
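
In schematic form, a three-way sparse NTD problem of the kind considered reads (generic regularization weights \lambda, possibly differing from the paper's exact model):

\[
\min_{\mathcal{C} \ge 0,\; A_1, A_2, A_3 \ge 0} \;
\tfrac12\big\| \mathcal{X} - \mathcal{C}\times_1 A_1 \times_2 A_2 \times_3 A_3 \big\|_F^2
+ \lambda_0 \|\mathcal{C}\|_1 + \sum_{n=1}^{3} \lambda_n \|A_n\|_1 ,
\]

where \times_n is the mode-n tensor-matrix product; an alternating proximal gradient method cycles through proximal-gradient updates of the core \mathcal{C} and each factor A_n while the other blocks are held fixed.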

Full Text: PDF



Mathematical Programming Computation, Volume 7, Issue 1, March 2015

Methods for convex and general quadratic programming

Philip E. Gill, Elizabeth Wong

Computational methods are considered for finding a point that satisfies the second-order necessary conditions for a general (possibly nonconvex) quadratic program (QP). The first part of the paper considers the formulation and analysis of an active-set method for a generic QP with both equality and inequality constraints. The method uses a search direction that is the solution of an equality-constrained subproblem involving a “working set” of linearly independent constraints. The method is a reformulation of a method for general QP first proposed by Fletcher, and modified subsequently by Gould. The reformulation facilitates a simpler analysis and has the benefit that the algorithm reduces to a variant of the simplex method when the QP is a linear program. The search direction is computed from a KKT system formed from the QP Hessian and the gradients of the working-set constraints. It is shown that, under certain circumstances, the solution of this KKT system may be updated using a simple recurrence relation, thereby giving a significant reduction in the number of KKT systems that need to be solved. The second part of the paper focuses on the solution of QP problems with constraints in so-called standard form. We describe how the constituent KKT systems are solved, and discuss how an initial basis is defined. Numerical results are presented for all QPs in the CUTEst test collection.
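
In a generic form (sign conventions vary and the paper's notation may differ), the KKT system referred to couples the QP Hessian H with the matrix A_w of working-set constraint gradients: the search direction p and multipliers \lambda satisfy

\[
\begin{pmatrix} H & A_w^{\mathsf T} \\ A_w & 0 \end{pmatrix}
\begin{pmatrix} p \\ \lambda \end{pmatrix}
=
\begin{pmatrix} -g \\ 0 \end{pmatrix},
\]

where g is the gradient of the QP objective at the current iterate; the recurrence mentioned above updates the solution of this system as constraints enter or leave the working set, rather than re-solving it from scratch.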

Full Text: PDF


