Category Archives: Research News
Today I received the announcement that the paper “Fusion of Ultrasound Harmonic Imaging with Clutter Removal Using Sparse Signal Separation” was accepted for presentation at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015).
The work introduces a novel idea for reducing speckle noise by fusing the fundamental and second-harmonic signals acquired simultaneously: clutter artifacts are removed while the two harmonic signals are fused. We base the solution on our previous work on clutter mitigation using morphological component analysis (MCA) and on the idea of joint sparsity. The method produces images improved in both clutter mitigation and speckle-noise reduction.
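To give a flavor of the MCA-style sparse separation behind this line of work (this is a toy 1-D illustration, not the paper's implementation, which operates on ultrasound harmonic data with learned dictionaries), the sketch below assumes the smooth, tissue-like component is sparse under a DCT dictionary and the spiky, clutter-like component is sparse in the standard basis, and separates them by alternating soft-thresholding with a decreasing threshold. All signals and parameter values here are invented for illustration.

```python
import numpy as np

def soft(v, t):
    # entrywise soft-thresholding (proximal operator of the l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

N = 128
k = np.arange(N)
# orthonormal DCT-II matrix: its rows serve as the dictionary for the smooth part
C = np.sqrt(2.0 / N) * np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / N)
C[0, :] /= np.sqrt(2.0)

# synthetic signal: a smooth oscillation plus a few strong spikes
smooth = np.cos(2 * np.pi * 3 * k / N)
spikes = np.zeros(N)
spikes[[20, 70, 100]] = 3.0
y = smooth + spikes

x_smooth = np.zeros(N)
x_spikes = np.zeros(N)
for t in np.linspace(1.0, 0.01, 50):   # decreasing threshold schedule
    # update the smooth component: threshold the residual in the DCT domain
    x_smooth = C.T @ soft(C @ (y - x_spikes), t)
    # update the spike component: threshold the residual in the standard basis
    x_spikes = soft(y - x_smooth, t)
```

In the actual method, the fixed transforms are replaced by dictionaries learned off-line, and the component identified as clutter is discarded rather than kept.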
The conference will take place April 19–24, 2015, in the wonderful city of Brisbane, Australia.
I released the code for the paper “A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation” that was presented at NIPS 2014. The algorithm includes a flag that enables the multilevel acceleration, which is very useful for large-scale problems with thousands to millions of variables. The code runs in Matlab and includes some functions in C that require compilation. It also calls functions from METIS 5.0.2 to partition the neighbors in every sweep. The released version was tested on Windows, although it should work on other platforms as well.
You are welcome to try it and contact me with any comments you may have. I would like to know if anybody manages to run it on Linux or macOS.
Today I found that the work “A Block-Coordinate Descent Approach for Large-Scale Sparse Inverse Covariance Estimation,” joint with Eran Treister, was published on the NIPS 2014 proceedings website. I will publish the code for this algorithm and for the multilevel framework in a few days.
I hope you enjoy it, and please send me your comments!
Last week I received the notice that the work with Eran Treister and Irad Yavneh was accepted to the optimization workshop at NIPS 2014. This is a follow-up to the sparse inverse covariance work, in which we present an acceleration framework based on multilevel techniques. The framework reduces the number of computations by defining a hierarchy of levels and updating only a subset of the active set of non-zero elements at each level. We tested the framework on the QUIC and BCD-IC algorithms with very interesting results, in particular for large-scale problems, where running times are reduced by up to 10x.
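To give a flavor of the multilevel idea, here is a toy sketch on a lasso problem rather than the actual inverse-covariance objective (the parameter choices and subset-selection rule are my own simplifications, not the paper's): coordinate-descent sweeps are restricted to a hierarchy of nested subsets of the likely active set, sweeping from the coarsest (smallest) subset up to the full active set.

```python
import numpy as np

def cd_sweep(A, y, x, lam, idx):
    # one coordinate-descent sweep of the lasso 0.5*||y - A x||^2 + lam*||x||_1,
    # restricted to the coordinates in idx
    for j in idx:
        r = y - A @ x + A[:, j] * x[j]          # residual with x[j] removed
        z = A[:, j] @ r
        x[j] = np.sign(z) * max(abs(z) - lam, 0.0) / (A[:, j] @ A[:, j])
    return x

def multilevel_lasso(A, y, lam, n_cycles=5):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_cycles):
        # likely active set: coordinates violating optimality or already nonzero
        g = A.T @ (A @ x - y)
        active = np.where((np.abs(g) > lam) | (x != 0))[0]
        # hierarchy of nested subsets, each half the size of the previous one,
        # keeping the coordinates with the largest gradient magnitudes
        levels, idx = [], active
        while len(idx) > 1:
            levels.append(idx)
            idx = idx[np.argsort(-np.abs(g[idx]))[: max(1, len(idx) // 2)]]
        for idx in reversed(levels or [active]):  # coarsest subset first
            x = cd_sweep(A, y, x, lam, idx)
    return x

# demo on synthetic data with a sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [2.0, -1.5, 3.0]
y = A @ x_true
x_hat = multilevel_lasso(A, y, lam=0.1)
```

The coarse sweeps are cheap because they touch few coordinates, yet they resolve the dominant entries before the finer sweeps refine the rest; the actual framework applies this reasoning to the inverse-covariance updates of QUIC and BCD-IC.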
See you at NIPS 2014 and at the OPT 2014 workshop.
A few days ago I received the notification that the work I submitted with my friend and colleague Eran Treister back in June was accepted to NIPS 2014. The NIPS 2014 conference will be held in Montreal, Canada, December 8–11. Our work presents a new algorithm for solving the sparse inverse covariance estimation problem in high dimensions, where memory is a limiting factor. In the work we show that the algorithm is faster than previous methods on problems with thousands to millions of variables, and that it can run on a single server with 64GB of memory thanks to its reduced memory usage.
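For readers unfamiliar with the problem: the sparse inverse covariance (graphical lasso) estimate minimizes an ℓ1-regularized negative log-likelihood over positive-definite precision matrices. The numpy sketch below only evaluates that objective on a tiny synthetic example; it is not the block-coordinate algorithm from the paper, whose point is precisely to minimize this at scales where the matrix cannot be stored densely.

```python
import numpy as np

def graphical_lasso_objective(X, S, lam):
    # f(X) = tr(S X) - log det(X) + lam * ||X||_1 (off-diagonal entries),
    # where S is the empirical covariance and X the candidate precision matrix
    sign, logdet = np.linalg.slogdet(X)
    if sign <= 0:
        return np.inf  # determinant not positive, so X cannot be positive definite
    off_l1 = np.abs(X).sum() - np.abs(np.diag(X)).sum()
    return np.trace(S @ X) - logdet + lam * off_l1

# tiny demo: empirical covariance of synthetic data, identity as initial guess
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 4))
S = np.cov(Z, rowvar=False)
f0 = graphical_lasso_objective(np.eye(4), S, lam=0.1)
```

A block-coordinate method repeatedly minimizes this objective over one block of entries of X while holding the rest fixed, which is what makes the memory footprint small enough for millions of variables.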
Our work on Sparse Signal Separation for Clutter Reduction in Echocardiography using Off-line Learned Dictionaries was accepted for presentation at the IEEE 28th Convention of Electrical and Electronics Engineers in Israel. The conference will be held in Eilat in December 2014. The work is about removing clutter artifacts from ultrasound images using sparse representations, morphological component analysis, and off-line dictionary learning.
The poster that I presented on the new work with my friend and colleague Eran Treister won 2nd place at the CS Faculty Research Day. The event was held last Monday at the CS faculty building, and the visitors included undergraduate students, professors, and people from industry.
The work presents a state-of-the-art method for computing a sparse inverse of the covariance matrix in huge dimensions (hundreds of thousands of variables). The method can handle a 100K-by-100K matrix in about 10 hours on a quad-core computer with 8GB of memory.