Abstract
Sparse coding is typically solved with iterative optimization techniques such as the ISTA algorithm. To accelerate estimation, neural networks have been proposed that approximate the sparse codes by unfolding ISTA and learning its weights. However, owing to the uncertainty inherent in a neural network, such models yield only one possible approximation at a fixed computational cost and within a tolerable error. Moreover, since sparse coding is an inverse problem, the optimal approximation is often not unique. Motivated by these observations, we propose Learned ISTA with Mixture Sparsity Network (LISTA-MSN), a framework for sparse coding that learns to predict the distribution of possible approximations conditioned on the input data. By sampling from the predicted distribution, LISTA-MSN obtains a more precise approximation of the sparse codes. Experiments on synthetic data and real image data demonstrate the effectiveness of the proposed method.
L. Li and X. Long contributed equally.
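To make the setting concrete, below is a minimal sketch of the two ingredients the abstract builds on: a plain ISTA iteration for the lasso problem, one unfolded LISTA layer with learned weights, and mixture-density-style sampling in the spirit of LISTA-MSN. The function names, the Gaussian-mixture parameterization, and all hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the l1 norm; the shrinkage step in ISTA/LISTA.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(y, A, lam, n_iter=100):
    # Classic ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

def lista_layer(x, y, We, S, theta):
    # One unfolded ISTA step with learned weights We, S and threshold theta;
    # stacking K such layers gives a K-layer LISTA network.
    return soft_threshold(We @ y + S @ x, theta)

def sample_code(pis, mus, sigmas, rng=None):
    # Mixture-density-style sampling (hypothetical parameterization): draw one
    # sparse-code estimate from a Gaussian mixture whose parameters
    # (pis, mus, sigmas) a trained network would predict conditioned on y.
    rng = rng or np.random.default_rng()
    k = rng.choice(len(pis), p=pis)
    return rng.normal(mus[k], sigmas[k])
```

The key contrast with plain LISTA is that the network's output is a distribution over sparse codes rather than a single point estimate, so drawing samples can recover approximations that a deterministic unfolding cannot.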
Acknowledgment
This work was supported in part by NSFC under contracts U20B2070 and 61976199 (Dr. Liansheng Zhuang), and in part by NSFC under contract 61836011 (Dr. Houqiang Li).
Cite this paper
Li, L., Long, X., Zhuang, L., Wang, S.: Learn the Approximation Distribution of Sparse Coding with Mixture Sparsity Network. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision (PRCV 2021). Lecture Notes in Computer Science, vol. 13022. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88013-2_32