[Math logo] WEST COAST OPTIMIZATION MEETING: SPRING 2012

University of Washington


Abstracts



  • Chris Aholt (University of Washington)

    Quadratic Sum of Squares Relaxations

    We consider the question of minimizing a quadratic objective function over a real variety described by quadratic polynomials. This problem is NP-hard in general. We show necessary and sufficient conditions under which the first sum of squares relaxation (i.e. the Lagrangian dual) is exact, assuming attainment of both the relaxation and the original objective. We apply the theorem when the objective is squared Euclidean distance to a point, where we see that these methods can effectively tackle image reconstruction problems in computer vision which motivated this work.
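    A toy illustration of the setting (with made-up data): minimizing squared Euclidean distance from a point to the unit circle is a quadratic objective over a quadratic variety, and in this case the first relaxation (the Lagrangian dual) is exact.

```python
import numpy as np

# Toy example (made-up data): minimize squared distance from a point a
# to the unit circle {x : ||x||^2 = 1} -- a quadratic objective over a
# quadratic variety -- and check that the Lagrangian dual is exact.
a = np.array([3.0, 4.0])                 # ||a|| = 5, a point outside the circle
n2 = a @ a

# Primal optimum: the nearest point on the circle is a/||a||,
# so the optimal value is (||a|| - 1)^2.
primal_val = (np.sqrt(n2) - 1.0) ** 2

# Dual function: g(lam) = min_x ||x - a||^2 + lam (||x||^2 - 1).
# For lam > -1 the inner minimizer is x = a/(1 + lam), giving
#   g(lam) = ||a||^2 - ||a||^2 / (1 + lam) - lam.
lams = np.linspace(-0.9, 10.0, 100001)
g = n2 - n2 / (1.0 + lams) - lams
dual_val = g.max()                       # attained near lam = ||a|| - 1 = 4

print(primal_val, dual_val)              # both equal 16: no duality gap
```

    Here both problems attain their optima, matching the hypotheses of the exactness theorem.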


  • Jeff Bilmes (University of Washington)

    Some Recent Results with Submodular Functions

    We'll discuss a number of recent results using submodular functions. First, we'll discuss finding the minimum cost cut on graphs that have submodular cost edge weights (cooperative cut) --- this will include hardness results, approximation algorithms (including an interesting discrete iterative approach), and some applications in both vision and speech. Next, we'll discuss how submodularity is becoming important in the area of machine learning, including some results on active learning and data summarization. This last application also involves new strategies to, in some sense, "learn" submodular functions given a sample of data.

    Joint work with Stefanie Jegelka, Andrew Guillory, and Hui Lin.
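    A minimal sketch of the standard primitive behind such summarization applications (the sets below are made up): greedy maximization of a monotone submodular coverage function under a cardinality constraint, which carries the classic (1 - 1/e) guarantee.

```python
# Greedy maximization of a monotone submodular coverage function
# under a cardinality constraint (toy data; not the talk's algorithms).
def greedy_max_coverage(sets, k):
    """Pick k sets greedily to maximize the number of covered elements."""
    covered, chosen = set(), []
    for _ in range(k):
        # Take the set with the largest marginal gain (submodularity
        # makes these gains diminishing, which powers the guarantee).
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, len(covered))   # [2, 0] covering 7 elements
```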


  • Maryam Fazel (University of Washington)

    Algorithms for rank minimization of structured matrices with applications in system identification and realization

    We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and catalog applications from diverse fields under this framework, including system identification and shape-from-moments estimation.

    Several first-order methods for solving the resulting optimization problem are presented, including alternating direction methods, the proximal point algorithm, and gradient projection methods, and numerical experiments comparing their performance on system identification and system realization problems are discussed.

    Joint work with: Ting Kei Pong, Defeng Sun, and Paul Tseng.
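    A hedged sketch of the workhorse step inside such first-order methods (the matrix and threshold below are made up): the proximal operator of the nuclear norm, computed by soft-thresholding singular values.

```python
import numpy as np

# Proximal operator of the nuclear norm (singular value thresholding),
# the core step inside first-order methods for nuclear norm problems.
# The Hankel matrix and threshold below are made-up toy data.
def svt(M, tau):
    """prox_{tau*||.||_*}(M): soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 Hankel matrix built from the geometric sequence h_k = 0.5^k,
# the kind of structure that arises in system realization.
h = 0.5 ** np.arange(7)
H = np.array([[h[i + j] for j in range(4)] for i in range(4)])

X = svt(H, tau=0.1)
print(np.linalg.matrix_rank(H), np.linalg.matrix_rank(X))  # 1 1
```

    In the structured setting, this prox step is combined with a projection or penalty enforcing the Hankel/Toeplitz linear structure.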


  • Hung Phan (University of British Columbia, Okanagan)

    Restricted Normal Cones: Basic Properties and Applications

    In this talk, I introduce the restricted normal cone, which is a novel generalization of the Mordukhovich (also known as basic or limiting) normal cone. Basic properties are presented. In the case of subspaces, we make a connection to the Friedrichs angle between the subspaces. Restricted normal cones are useful in extending work by Lewis, Luke and Malick on the method of alternating projections for two (possibly nonconvex) sets. An application to the problem of sparsity optimization will also be presented.

    Based on joint work with: Heinz Bauschke (UBC Kelowna), Russell Luke (Goettingen, Germany), and Shawn Wang (UBC Kelowna).
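    A minimal sketch of the kind of two-set feasibility problem this analysis addresses (problem data below are made up): alternating projections between a nonconvex sparsity set and an affine set.

```python
import numpy as np

# Method of alternating projections between a (nonconvex) sparsity set
# and an affine set {x : Ax = b} -- a toy instance of the sparsity
# optimization application; the data below are made up.
def proj_sparse(x, k):
    """Project onto the set of vectors with at most k nonzeros."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]     # keep the k largest magnitudes
    y[idx] = x[idx]
    return y

def proj_affine(x, A, b):
    """Project onto {x : Ax = b} (A assumed to have full row rank)."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]             # a 2-sparse point in the affine set
b = A @ x_true

x = proj_affine(np.zeros(8), A, b)
for _ in range(200):
    x = proj_affine(proj_sparse(x, 2), A, b)

print(np.linalg.norm(A @ x - b))         # feasibility residual (tiny)
```

    The restricted normal cone machinery gives local convergence guarantees for exactly such nonconvex alternating projection schemes.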


  • Ben Recht (University of Wisconsin, Madison)

    The Convex Geometry of Inverse Problems

    Deducing the state or structure of a system from partial, noisy measurements is a fundamental task throughout the sciences and engineering. The resulting inverse problems are often ill-posed because there are fewer measurements available than the ambient dimension of the model to be estimated. In practice, however, many interesting signals or models contain few degrees of freedom relative to their ambient dimension: a small number of genes may constitute the signature of a disease, very few parameters may specify the correlation structure of a time series, or a sparse collection of geometric constraints may determine a sensor network configuration. Discovering, leveraging, or recognizing such low-dimensional structure plays an important role in making inverse problems well-posed.

    In this talk, I will propose a unified approach to transform notions of simplicity and latent low-dimensionality into convex penalty functions. This approach builds on the success of generalizing compressed sensing to matrix completion, and greatly extends the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose general signals into sums of atoms from a simple---but not necessarily discrete---set. These algorithms are derived in an optimization framework that encompasses previous methods based on l1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices. I will provide sharp estimates of the number of generic measurements required for exact and robust estimation of a variety of structured models. I will then detail several example applications and describe how to scale the corresponding algorithms to massive data sets.


  • Johannes Royset (The Naval Postgraduate School)

    Uncertainty quantification using nonparametric estimation and epi-splines

    We address uncertainty quantification in complex systems by a flexible, nonparametric framework for estimation of density and performance functions. The framework systematically incorporates hard information derived from physics-based sensors, field test data, and computer simulations as well as soft information from human sources and experiences. The framework is based on epi-splines for consistent approximation of infinite-dimensional optimization problems arising in the estimation process. Preliminary numerical results indicate that the framework is highly promising.

    This is joint work with R. J-B Wets.
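    A much-simplified analogue of the idea (with made-up data): restricting the estimate to a finite-dimensional family turns an infinite-dimensional maximum-likelihood problem into a finite one, which is the role epi-splines play in the framework above. Here the family is just piecewise-constant densities on a fixed grid, and the soft-information constraints are omitted.

```python
import numpy as np

# Finite-dimensional approximation of density estimation: over the
# family of piecewise-constant densities on a fixed grid, the maximum
# likelihood estimate is the normalized histogram. This is a toy stand-in
# for the epi-spline approximation; samples and grid are made up.
rng = np.random.default_rng(2)
samples = rng.beta(2.0, 5.0, size=1000)      # data supported on [0, 1]

bins = np.linspace(0.0, 1.0, 11)             # 10 equal cells on [0, 1]
counts, _ = np.histogram(samples, bins)
density = counts / (counts.sum() * np.diff(bins))

print(np.sum(density * np.diff(bins)))       # integrates to 1
```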


  • Mohit Singh (McGill University and Microsoft Research)

    Iterative Methods in Combinatorial Optimization

    In this talk we will demonstrate iterative methods as a general technique to analyze linear programming formulations of many combinatorial optimization problems. The technique builds on the result of Jain, who introduced the iterative rounding framework in the 90s. The talk will focus on recent applications to degree bounded network design problems, where the task is to minimize the cost of the network while also satisfying given degree bounds on nodes. The method yields additive approximation algorithms for these problems, adding them to the rather small list of combinatorial optimization problems that admit additive approximation algorithms.
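    A hedged toy version of the rounding loop (graph and problem are made up, and vertex cover is far simpler than the network design problems in the talk): repeatedly solve the LP relaxation, fix a variable with a large value, delete the constraints it satisfies, and re-solve.

```python
import numpy as np
from scipy.optimize import linprog

# Iterative rounding for vertex cover (a toy in the spirit of the
# framework, not the talk's degree-bounded algorithms): solve the LP,
# fix a vertex whose LP value is >= 1/2 into the cover, delete the
# edges it covers, and re-solve until no edges remain.
def iterative_round_vertex_cover(n, edges):
    cover, remaining = set(), list(edges)
    while remaining:
        # LP relaxation: min sum_v x_v  s.t.  x_u + x_v >= 1 per edge.
        A = np.zeros((len(remaining), n))
        for i, (u, v) in enumerate(remaining):
            A[i, u] = A[i, v] = -1.0        # written as -x_u - x_v <= -1
        res = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(remaining)),
                      bounds=[(0, 1)] * n, method="highs")
        v_star = int(np.argmax(res.x))      # some value is always >= 1/2
        cover.add(v_star)
        remaining = [e for e in remaining if v_star not in e]
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
cover = iterative_round_vertex_cover(5, edges)
print(sorted(cover))
```

    Since every fixed variable has LP value at least 1/2, the final cover costs at most twice the LP optimum; Jain-style analyses push such arguments much further to get the additive guarantees discussed in the talk.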


This page was last modified on APRIL 24, 2012