Learn the Approximation Distribution of Sparse Coding with Mixture Sparsity Network

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13022)

Abstract

Sparse coding is typically solved by iterative optimization techniques such as the ISTA algorithm. To accelerate the estimation, neural networks have been proposed that approximate the sparse codes by unfolding ISTA and learning its weights. However, because of the uncertainty in the neural network, such models can only produce one possible approximation at a fixed computation cost and with a tolerable error. Moreover, since sparse coding is an inverse problem, the best possible approximation is often not unique. Motivated by these observations, we propose a novel framework, Learned ISTA with Mixture Sparsity Network (LISTA-MSN), which learns to predict the distribution of the best possible approximations conditioned on the input data. By sampling from the predicted distribution, LISTA-MSN obtains a more precise approximation of the sparse codes. Experiments on synthetic data and real image data demonstrate the effectiveness of the proposed method.

L. Li and X. Long contributed equally.
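
As background for the abstract, the following is a minimal NumPy sketch of classical ISTA [3], the iterative baseline that LISTA-style networks unfold. It is illustrative only: the function names are our own, and the LISTA-MSN architecture itself is not reproduced here because its details are not part of this page.

    import numpy as np

    def soft_threshold(x, theta):
        # Element-wise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

    def ista(D, y, lam, n_iter=100):
        # Minimize 0.5 * ||y - D @ z||_2^2 + lam * ||z||_1 by alternating a
        # gradient step on the quadratic term with a shrinkage step.
        L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
        return z

    # Toy usage: recover a sparse code from a random dictionary.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    z_true = np.zeros(256)
    z_true[rng.choice(256, size=5, replace=False)] = 1.0
    y = D @ z_true
    z_hat = ista(D, y, lam=0.1, n_iter=500)

LISTA [11] unfolds a fixed number of such iterations into network layers and trains the matrices and thresholds end to end; according to the abstract, LISTA-MSN goes further by predicting a mixture distribution over codes conditioned on the input and sampling from it to refine the approximation.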

References

  1. Ablin, P., Moreau, T., Massias, M., Gramfort, A.: Learning step sizes for unfolded sparse coding. In: Advances in Neural Information Processing Systems, pp. 13100–13110 (2019)

  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011). https://doi.org/10.1007/978-1-4419-9467-7

  3. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  4. Bishop, C.M.: Mixture density networks. Technical report, Aston University (1994)

  5. Blumensath, T., Davies, M.E.: Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 14(5–6), 629–654 (2008). https://doi.org/10.1007/s00041-008-9035-z

  6. Chen, X., Liu, J., Wang, Z., Yin, W.: Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. In: Advances in Neural Information Processing Systems, pp. 9061–9071 (2018)

  7. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289 (2015)

  8. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Ann. Stat. 32(2), 407–499 (2004)

  9. Friedman, J., Hastie, T., Höfling, H., Tibshirani, R.: Pathwise coordinate optimization. Ann. Appl. Stat. 1(2), 302–332 (2007)

  10. Gold, S., Rangarajan, A.: Softmax to softassign: neural network algorithms for combinatorial optimization. J. Artif. Neural Netw. 2(4), 381–399 (1996)

  11. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th International Conference on Machine Learning, pp. 399–406 (2010)

  12. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017

  13. Lee, H., Ekanadham, C., Ng, A.Y.: Sparse deep belief net model for visual area V2. In: Advances in Neural Information Processing Systems (2007)

  14. Liu, J., Chen, X., Wang, Z., Yin, W.: ALISTA: analytic weights are as good as learned weights in LISTA. In: International Conference on Learning Representations (2019)

  15. Mairal, J., Bach, F.R., Ponce, J., Sapiro, G., Zisserman, A.: Discriminative learned dictionaries for local image analysis. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  16. Moreau, T., Bruna, J.: Understanding trainable sparse coding via matrix factorization. In: International Conference on Learning Representations (2017)

  17. Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010). https://doi.org/10.1109/TIP.2010.2050625

  18. Zhou, J.T., et al.: SC2Net: sparse LSTMs for sparse coding. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)

Acknowledgment

This work was supported in part by NSFC under contracts U20B2070 and No. 61976199 (Dr. Liansheng Zhuang), and in part by NSFC under contract No. 61836011 (Dr. Houqiang Li).

Author information

Corresponding author

Correspondence to Liansheng Zhuang.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Li, L., Long, X., Zhuang, L., Wang, S. (2021). Learn the Approximation Distribution of Sparse Coding with Mixture Sparsity Network. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_32

  • DOI: https://doi.org/10.1007/978-3-030-88013-2_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science, Computer Science (R0)
