- Dimitri Bertsekas
Massachusetts Institute of Technology
Extended Monotropic Programming and Polyhedral Approximations
We propose a unifying framework for polyhedral approximation in
convex optimization. It subsumes classical methods, such as cutting
plane and simplicial decomposition, and also includes new methods and
new versions/extensions of old methods, such as a simplicial
decomposition method for nondifferentiable optimization and a new
piecewise linear approximation method for convex single-commodity
network flow problems. Our framework is based on an extended form of
monotropic programming, a broadly applicable model that includes as
special cases Fenchel duality and Rockafellar's monotropic
programming, and that is characterized by an elegant and symmetric
duality theory. Our algorithm flexibly combines outer and inner
linearization of the cost function. The linearization is progressively
refined by using primal and dual differentiation, and the roles of
outer and inner linearization are reversed in a mathematically
equivalent dual algorithm.
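As a concrete point of reference for the outer-linearization side of such
frameworks, here is a minimal cutting-plane (Kelley-type) sketch in Python;
the test function, its subgradient, the interval, and the tolerance are our
own illustrative choices, not part of the talk.

    # Minimal Kelley cutting-plane (outer linearization) sketch.
    # The test function and tolerances are illustrative stand-ins.
    import numpy as np
    from scipy.optimize import linprog

    def f(x):                      # convex, nondifferentiable test function
        return abs(x - 1.0) + 0.5 * x**2

    def subgrad(x):                # one subgradient of f at x
        return (1.0 if x >= 1.0 else -1.0) + x

    lo, hi = -5.0, 5.0
    x, best, cuts = 0.0, np.inf, []
    for k in range(50):
        fx, g = f(x), subgrad(x)
        best = min(best, fx)                   # best value seen (upper bound)
        cuts.append((g, fx - g * x))           # cut: g*x + b <= t
        # minimize t over the outer model: g_i*x + b_i <= t, lo <= x <= hi
        res = linprog(c=[0.0, 1.0],
                      A_ub=[[gi, -1.0] for gi, _ in cuts],
                      b_ub=[-bi for _, bi in cuts],
                      bounds=[(lo, hi), (None, None)])
        x, model_min = res.x[0], res.x[1]      # model_min is a lower bound
        if best - model_min < 1e-8:            # gap between f and outer model
            break
    print("approximate minimizer:", x)

In the same spirit, a dual version would replace the outer model built from
cuts with an inner model built from convex combinations of sampled points.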
- Samuel Burer
The University of Iowa
Nonconvex Quadratic Programming: Models and Algorithms
We show that any nonconvex quadratic program with binary and
continuous variables, an NP-hard class of problems, can be modeled as
a linear program over a particular convex cone (the cone of so-called
completely positive
matrices). Although this cone is in general intractable, it is known
that it can be approximated to any accuracy by a hierarchy of explicit
polyhedral-semidefinite cones. Hence, our result can be viewed as
a unified approach for a large class of difficult problems, which
includes, for example, all 0-1 integer programs.
We also examine a specific polyhedral-semidefinite approximation of our
model. This gives rise to a theoretically tractable relaxation, which
is nevertheless expensive to solve with off-the-shelf software. We
therefore propose a decomposition technique for solving the relaxation,
which also readily produces lower bounds on the optimal value of the
NP-hard problem.
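For reference, the completely positive representation mentioned above takes
the following shape for min x^T Q x + 2c^T x subject to a_i^T x = b_i,
x >= 0, and x_j binary for j in B (the notation here is ours):

    \begin{aligned}
    \min_{x,\,X}\quad & \langle Q, X \rangle + 2c^{\top}x \\
    \text{s.t.}\quad & a_i^{\top}x = b_i, \qquad a_i^{\top}X a_i = b_i^2,
        \qquad i = 1,\dots,m, \\
    & X_{jj} = x_j, \qquad j \in B, \\
    & \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \in \mathcal{C}^{*},
    \end{aligned}

where C^* is the cone of completely positive matrices. The matrix variable X
stands in for xx^T, and membership in C^* is what absorbs both the
nonconvexity and the binary constraints into a single convex (though
intractable) cone constraint.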
- Jonathan Eckstein
Rutgers University
A Practical Relative Error Criterion for the Classic Augmented
Lagrangian
This talk begins by summarizing some recent computational work on
implementing augmented Lagrangian methods using a box-constrained
conjugate-gradient-like method to (approximately) solve the
subproblems. The combination of features that worked best in
practice, namely the classic quadratic penalty, no proximal
regularizing term, and a relative error criterion to allow highly
approximate solution of the subproblems, lacks a satisfying
convergence theory, even in the convex case.
Inspired by these results, we attempted to prove global convergence in
the convex case for a method of this type, using some form of relative
error criterion that can be applied directly to the augmented
Lagrangian gradient. Eventually we succeeded, but the resulting
method involves a novel auxiliary sequence that undergoes an
extragradient-like update and is Fejér monotone with respect to the
primal solution set, without having to converge to it.
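As a schematic illustration only, the loop below sketches a classic
augmented Lagrangian method whose subproblems are solved approximately,
accepting an inner iterate once the augmented Lagrangian gradient is small
relative to the current constraint violation. Both the specific test and the
equality-constrained quadratic test problem are generic stand-ins of ours,
not the criterion developed in the talk.

    # Schematic augmented Lagrangian loop with a relative-error acceptance
    # test for the subproblems (a generic stand-in for the talk's criterion).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 20, 5
    M = rng.standard_normal((n, n))
    Q = M @ M.T + np.eye(n)                # cost: 0.5*x'Qx - q'x (strongly convex)
    q = rng.standard_normal(n)
    A = rng.standard_normal((m, n))        # constraint: A x = b
    b = rng.standard_normal(m)

    c, sigma = 10.0, 0.1                   # penalty parameter, relative tolerance
    x, lam = np.zeros(n), np.zeros(m)
    L = np.linalg.norm(Q + c * A.T @ A, 2)  # Lipschitz constant of the gradient

    def grad_auglag(x, lam):               # gradient of L_c(x, lam) in x
        return Q @ x - q + A.T @ (lam + c * (A @ x - b))

    for k in range(200):
        for _ in range(5000):              # approximate subproblem solve
            g = grad_auglag(x, lam)
            viol = np.linalg.norm(A @ x - b)
            if np.linalg.norm(g) <= sigma * c * viol + 1e-12:
                break                      # relative-error test passed
            x = x - g / L
        lam = lam + c * (A @ x - b)        # classic multiplier update
        if np.linalg.norm(A @ x - b) < 1e-8:
            break
    print("final constraint violation:", np.linalg.norm(A @ x - b))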
Joint work with Paulo J.S. Silva, University of São Paulo.
- Michael Friedlander
The University of British Columbia
Exact Regularization of Convex Programs
An optimization problem is ill-posed if its solution is not unique or
is acutely sensitive to data perturbations. A common approach to such
problems is to construct a related problem with a well-behaved
solution that deviates only slightly from the original solution set.
The strategy is often used in data fitting applications, and also
within optimization algorithms as a means for stabilizing the solution
process.
This approach is known as regularization, and deviations from
solutions of the original problem are generally accepted as a
trade-off for obtaining solutions with other desirable properties.
In fact, however, there exist necessary and sufficient conditions
under which solutions of the regularized problem continue to be exact
solutions of the original problem. We present these conditions for
general convex programs and give some applications of exact
regularization.
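A small numerical illustration of the phenomenon (the example is our own,
not one from the talk): the linear program below has an entire segment of
optimal solutions, and the quadratically regularized problem selects a point
that is still optimal for the original problem, i.e., the regularization is
exact.

    # Exact regularization on a tiny LP with a non-unique solution set.
    import numpy as np
    from scipy.optimize import linprog, minimize

    c = np.array([1.0, 1.0])     # LP: min c@x  s.t.  x1 + x2 >= 1,  x >= 0
    lp = linprog(c, A_ub=[[-1.0, -1.0]], b_ub=[-1.0], bounds=[(0, None)] * 2)

    delta = 0.1                  # regularization weight
    reg = minimize(lambda x: c @ x + delta * (x @ x), x0=np.array([1.0, 0.0]),
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: x[0] + x[1] - 1.0}],
                   bounds=[(0, None)] * 2)

    print("LP optimal value:      ", lp.fun)      # 1.0
    print("regularized minimizer: ", reg.x)       # approx (0.5, 0.5)
    print("its original objective:", c @ reg.x)   # still 1.0: exact

In this instance exactness holds for every delta > 0; in general it is
guaranteed only for sufficiently small regularization weights, which is where
conditions of the kind presented in the talk come in.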
This is joint work with Paul Tseng.
- Jiawang Nie
University of California, San Diego
Discriminants and Nonnegative Polynomials
For a semialgebraic set K in R^n, let P_d(K) be the cone of
polynomials in R^n of degree at most d that are nonnegative on K. In this
talk we discuss the geometry of the boundary of P_d(K).
When K=R^n and d is even, we
show that its boundary lies on the irreducible hypersurface defined by the
discriminant of a single polynomial. When K is a real algebraic variety, we
show that the boundary of P_d(K) lies on the hypersurface defined by the
discriminant of several polynomials. When K is a general semialgebraic set,
we show that the boundary of P_d(K) lies on a union of hypersurfaces defined
by discriminantal equations. Explicit formulae for the degrees of these
hypersurfaces and discriminants are given. We also prove that typically
P_d(K) does not have a log-polynomial type barrier, but a log-semialgebraic
type barrier exists. Some illustrative examples are shown.
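A univariate illustration of the first result (our own example, with n = 1
and d = 4): a monic quartic p on the boundary of P_4(R) is nonnegative yet
has a real zero, which must therefore be a zero of even multiplicity, so p
has a repeated root and its discriminant vanishes:

    p \in \partial P_4(\mathbb{R})
    \;\Longrightarrow\; p \text{ has a repeated real root}
    \;\Longrightarrow\; \operatorname{Disc}(p) = 0.

Hence the boundary lies on the hypersurface cut out by the discriminant of a
single polynomial, matching the general statement.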
- Jiming Peng
University of Illinois, Urbana-Champaign
Finding the Densest k-Subgraph via Clustering and Spectral Methods
Finding the densest k-subgraph (DKS) is a classical discrete optimization
problem with applications in domains such as machine learning and social
networks. Mathematically, it can be formulated as a binary quadratic
maximization problem with a fixed number (k) of nonzero variables. In spite
of its simple formulation, the problem has been proved to be NP-hard and
admits no PTAS unless P=NP.
In this talk, we first reformulate the DKS problem as a specific clustering
problem of finding a single cluster of size k. We then propose several
effective approximation algorithms based on the reformulated clustering
model. Experimental results illustrate that the proposed algorithms can
quickly locate suboptimal solutions with a quality guarantee.
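For reference, the underlying formulation is: maximize x^T A x subject to
e^T x = k and x in {0,1}^n, where A is the adjacency matrix of the graph and
e the all-ones vector. The greedy peeling heuristic below is a generic
baseline for DKS included purely for illustration; it is not one of the
clustering-based algorithms proposed in the talk.

    # Generic greedy peeling baseline for densest k-subgraph.
    import numpy as np

    def densest_k_subgraph_greedy(A, k):
        """Repeatedly remove the vertex of smallest degree until k remain."""
        alive = set(range(A.shape[0]))
        deg = A.sum(axis=1).astype(float)
        while len(alive) > k:
            v = min(alive, key=lambda i: deg[i])
            alive.remove(v)
            for u in np.nonzero(A[v])[0]:
                if u in alive:
                    deg[u] -= A[v, u]
        return sorted(alive)

    rng = np.random.default_rng(1)
    A = (rng.random((30, 30)) < 0.2).astype(int)
    A = np.triu(A, 1); A = A + A.T          # random undirected graph
    S = densest_k_subgraph_greedy(A, 8)
    print("edges in chosen subgraph:", A[np.ix_(S, S)].sum() // 2)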
This work is joint with M. Heath, P. Jiang and R. Yang.
- Ting Kei Pong
University of Washington
ESDP Relaxation of Sensor Network Localization: Analysis, Extensions and
Algorithms
Recently, Wang, Zheng, Boyd, and Ye proposed an edge-based SDP (ESDP) as a
further relaxation of the SDP relaxation of the sensor network localization
problem. We show that, provided the measured distances are noiseless, a zero
individual trace certifies position accuracy in ESDP interior solutions.
We then propose a robust version of ESDP to handle noise and a fast,
distributed algorithm for its solution.
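For orientation, in the standard SDP-type relaxations (the notation here is
ours, not necessarily that of the talk) the individual trace of sensor j is
the positive-semidefiniteness slack

    Z = \begin{pmatrix} I_d & X \\ X^{\top} & Y \end{pmatrix} \succeq 0,
    \qquad \operatorname{tr}_j := Y_{jj} - \|x_j\|^2 \;\ge\; 0,

and the result above says that tr_j = 0 at an interior ESDP solution
certifies that x_j is the true position of sensor j in the noiseless case.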
This is joint work with Paul Tseng.
- Warren Hare
The University of British Columbia, Okanagan
Identifying Active Manifolds in Regularization Problems
In 2009, Tseng and Yun showed that the l1-regularization of a C^2
function can be approximated by the l1-regularization of a quadratic
approximation to the function. We consider a generalization of this
problem, in which the l1-norm is replaced by a more general nonsmooth
function that is prox-regular and partly smooth with respect to an
underlying active manifold M (the l1-norm satisfies these conditions).
We reexamine Tseng and Yun's algorithm in terms of active set
identification, showing that their method will correctly identify the
active manifold in a finite number of iterations. That is, after a
finite number of iterations, all future iterates will lie in M.
Furthermore, we confirm a conjecture of Tseng that a given class of
local approximations correctly identifies the active manifold from any
point in a neighborhood of a solution.
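To see the identification phenomenon concretely in the l1 case, here is a
plain proximal-gradient (ISTA) sketch on a least-squares-plus-l1 problem; it
is a simplified stand-in for Tseng and Yun's coordinate gradient method, and
the data are synthetic. For the l1-norm, lying on the active manifold simply
means sharing the final support and sign pattern, so one can watch the
support stop changing after finitely many iterations.

    # Finite identification of the l1 active manifold, illustrated with a
    # plain proximal-gradient (ISTA) iteration on least squares + l1.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, lam = 40, 100, 0.5
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

    def soft_threshold(z, t):              # prox operator of t * ||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x, support, last_change = np.zeros(n), None, 0
    for k in range(2000):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
        s = tuple(np.nonzero(x)[0])
        if s != support:                   # record when the support changes
            support, last_change = s, k
    print("support last changed at iteration", last_change,
          "; final support size:", len(support))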
- Yinyu Ye
Stanford University
Rigidity and Localization: An Optimization Perspective
A fundamental problem in wireless ad-hoc and sensor networks is that
of determining the positions of sensor nodes. Often, such a
problem is complicated by the presence of nodes whose positions cannot be
uniquely determined. Most existing work uses the notion of
global rigidity from rigidity theory to address the non-uniqueness issue.
However, such a notion is not entirely satisfactory, as it has been
shown that even if a network localization instance is known to be globally
rigid, the problem of determining the node positions is still
intractable in general. In this talk, we propose to use the notion of
universal rigidity to bridge this disconnect. Although the notion of
universal rigidity is more restrictive than that of global rigidity, it
captures a large class of networks and is much more relevant to the
efficient solvability of the network localization problem. Specifically, we
show that both the problem of deciding whether a given network
localization instance is universally rigid and the problem of determining the
node positions of a universally rigid instance can be solved
efficiently using semidefinite programming (SDP). Then, we give various
constructions of universally rigid instances. In particular, we show
that d-lateration graphs are generically universally rigid, thus demonstrating
not only the richness of the class of universally rigid instances,
but also the fact that d-lateration graphs possess much stronger geometric
properties than previously known. Based on this theory, we introduce
an edge sparsification procedure that provably preserves the
localizability of the original input while greatly reducing the SDP
problem size.
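For reference, the SDP used in this line of work takes the following
standard shape (our notation; a_k are known anchor positions, d the measured
distances, and the columns x_j of X the unknown sensor positions):

    \begin{aligned}
    \text{find}\quad & Z = \begin{pmatrix} I_d & X \\ X^{\top} & Y
        \end{pmatrix} \succeq 0 \\
    \text{s.t.}\quad & Y_{ii} - 2Y_{ij} + Y_{jj} = d_{ij}^2
        \quad \text{for sensor-sensor edges } (i,j), \\
    & \|a_k\|^2 - 2a_k^{\top}x_j + Y_{jj} = d_{kj}^2
        \quad \text{for anchor-sensor edges } (k,j).
    \end{aligned}

Roughly speaking, universal rigidity corresponds to this SDP admitting only
the rank-d solution Y = X^T X, which is why the positions of a universally
rigid instance can be recovered efficiently.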