
Learning an Alternating Bregman Network for Non-convex and Non-smooth Optimization Problems


Indexed by:Symposium

Date of Publication:2017-01-01

Included Journals:EI, CPCI-S

Volume:10559

Page Number:11-27

Key Words:Non-convex optimization; Alternating direction method; Sparse approximation; Learning-based algorithm; Optimization network

Abstract:Recently, non-convex and non-smooth problems have received considerable interest in the fields of image processing and machine learning. Conventional algorithms rely on carefully designed initializations, and their parameters cannot be tuned adaptively across iterations to match various real-world data. To address these problems, we propose an alternating Bregman network (ABN), which discriminatively learns all of its parameters from training pairs and is then applied directly to test data without additional operations. Specifically, the parameters of ABN are adaptively learned from training data to drive the objective value rapidly toward the optimum and thereby obtain a desired solution in practice. Furthermore, the base algorithm of ABN is an alternating method with Bregman modification (AMBM), which solves each subproblem with a designated Bregman distance. AMBM is more general and flexible than previous approaches, while at the same time it is proved to attain the best known convergence result for general non-convex and non-smooth optimization problems. Thus, the proposed ABN is an efficient and provably convergent algorithm that rapidly reaches desired solutions in practice. We apply ABN to the sparse coding problem with an l(0) penalty, and the experimental results verify the efficiency of the proposed algorithm.
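The abstract's core idea, solving each subproblem of a non-convex, non-smooth objective with an added Bregman distance, can be illustrated on the cited l(0) sparse coding problem. The sketch below is not the authors' code: it uses the simplest Bregman choice, a quadratic term (mu/2)*||z - z_k||^2, under which the z-subproblem reduces to a hard-thresholded gradient step (iterative hard thresholding); all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's ABN): alternating
# proximal steps with a quadratic Bregman modification for
#     min_z  0.5*||D z - x||^2 + lam*||z||_0 .
import numpy as np

def hard_threshold(v, tau):
    """Prox of tau*||.||_0: zero out entries with v_i^2 <= 2*tau."""
    out = v.copy()
    out[v ** 2 <= 2.0 * tau] = 0.0
    return out

def ambm_sparse_code(D, x, lam=0.05, mu=None, iters=200):
    """l0 sparse coding via Bregman-modified (quadratic) proximal steps.

    mu is the Bregman weight; defaulting it to the Lipschitz constant
    ||D||_2^2 of the smooth term's gradient guarantees monotone descent
    of the objective (in ABN this step size would be learned instead).
    """
    if mu is None:
        mu = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ z - x)               # gradient of the smooth term
        z = hard_threshold(z - grad / mu, lam / mu)
    return z

# Toy usage: code a signal generated from a 2-sparse ground truth.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]
x = D @ z_true
z_hat = ambm_sparse_code(D, x, lam=0.05)
```

The learned variant in the paper replaces the hand-set step size and threshold with parameters trained from data; the fixed-parameter loop above only shows the underlying AMBM iteration structure.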
