New Paper: An Acceleration Framework for Sparse Optimization Problems

I added a link to the paper "A Multilevel Framework for Sparse Optimization With Application to Inverse Covariance Estimation and Logistic Regression," which will soon appear in the SIAM Journal on Scientific Computing (SISC). The paper describes a framework that accelerates sparse optimization methods that use L1 regularization to obtain sparse solutions. We show how to apply this framework to sparse inverse covariance estimation (the problem solved by GLASSO) and to L1-regularized logistic regression.
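
For context, these are the standard formulations of the two problems in my own notation, which may differ slightly from the paper's; here S is the sample covariance matrix, (x_i, y_i) are labeled training examples, and the L1 term is what induces sparsity:

    \min_{X \succ 0} \; -\log\det X + \mathrm{tr}(SX) + \lambda \|X\|_1
    \quad \text{(sparse inverse covariance estimation / GLASSO)}

    \min_{w} \; \sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i\, w^\top x_i)\bigr) + \lambda \|w\|_1
    \quad \text{(L1-regularized logistic regression)}

In both cases the non-smooth L1 term keeps most variables at zero during the optimization, which is what the multilevel framework exploits.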

Released Code for Block-Coordinate Descent Inverse Covariance Estimation

I released the code for the paper "A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation," presented at NIPS 2014. The algorithm includes a flag that enables the multilevel acceleration, which is especially useful for large-scale problems with thousands to millions of variables. The code runs in Matlab and includes some functions written in C that require compilation. It also calls functions from METIS 5.0.2 to partition the neighbors in every sweep. The released version was tested on Windows, although it should work on other platforms as well.
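
If you want to try it, a rough usage sketch is below. The file and function names (bcd_update.c, bcdic) are placeholders of mine, not the actual names in the release; check the package README for the real compile steps and entry point:

    % Compile the C sources once (MEX); actual file names may differ:
    mex bcd_update.c                 % placeholder C source name

    % Build a sample covariance matrix and run the solver:
    n = 500;  p = 2000;              % samples, variables
    A = randn(n, p);                 % synthetic data
    S = cov(A);                      % p-by-p sample covariance
    lambda = 0.5;                    % L1 regularization parameter
    useMultilevel = true;            % flag enabling the multilevel acceleration
    Theta = bcdic(S, lambda, useMultilevel);   % placeholder entry point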

You are welcome to try it and contact me with any comments you may have. I would also like to know if anybody manages to run it on Linux or Mac.

Optimization Workshop OPT2014 at NIPS

Last week, I received notice that the work with Eran Treister and Irad Yavneh was accepted to the optimization workshop (OPT2014) at NIPS 2014. This is a follow-up to the sparse inverse covariance work, in which we present an acceleration framework based on multilevel techniques. The framework reduces the amount of computation by defining a hierarchy of levels and updating only a subset of the active set of non-zero elements at each level. We tested the framework on the QUIC and BCD-IC algorithms with very interesting results, in particular for large-scale problems, where running times are reduced by up to 10x.
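
To make the idea concrete, here is a schematic Matlab sketch of a multilevel sweep as I understand it; the coarsening rule below (keeping a prefix of the finer level) is a placeholder, and baseSolverIter stands for one restricted iteration of the underlying method (QUIC or BCD-IC):

    function X = multilevel_sweep(X, activeSet, baseSolverIter, numLevels)
    % Schematic multilevel sweep: build a hierarchy of nested subsets of the
    % active set and apply the base solver restricted to each subset, from
    % the coarsest (smallest) level up to the full active set.
        levels = cell(numLevels, 1);
        levels{numLevels} = activeSet;   % finest level: all candidate non-zeros
        for l = numLevels-1:-1:1
            prev = levels{l+1};
            % Placeholder coarsening: keep half the entries of the finer level.
            % (The actual framework selects entries by an importance measure.)
            levels{l} = prev(1:ceil(numel(prev)/2));
        end
        for l = 1:numLevels
            % Restricted update: only the entries in levels{l} may change.
            X = baseSolverIter(X, levels{l});
        end
    end

The saving comes from the coarse levels, where only a small fraction of the active set is updated while the remaining entries stay fixed.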

See you at NIPS 2014 and at the OPT2014 workshop.