Abstract
When using kernel interpolation techniques to construct a surrogate model from given data, the choice of interpolation points is crucial for the quality of the surrogate. For vector-valued target functions approximated by matrix-valued kernel models, the selection problem is further complicated: not only the points but also the directions onto which the data is projected must be determined.
We therefore propose variants of matrix P-greedy algorithms that iteratively select suitable sets of point-direction pairs with which the approximation space is enriched. We show that the selected pairs yield quasi-optimal convergence rates, and we investigate the approximation quality of the different variants experimentally.
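The greedy selection of point-direction pairs described above can be sketched for the common separable case K(x, y) = k(x, y) B, where k is a scalar kernel and B a symmetric positive definite matrix. The sketch below is a hypothetical illustration under these assumptions, not the paper's exact algorithm: it repeatedly picks the candidate pair (x, v) maximizing the matrix-valued power function v^T (K(x,x) - K(x,X) K(X,X)^{-1} K(X,x)) v, restricting directions to the canonical basis for simplicity.

```python
import numpy as np

def gaussian(X, Y, eps=1.0):
    # Scalar Gaussian kernel matrix k(x, y) = exp(-eps * |x - y|^2)
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2)

def matrix_p_greedy(X_cand, B, n_pairs, eps=1.0):
    """Hypothetical sketch of a matrix P-greedy selection for the
    separable kernel K(x, y) = k(x, y) * B, with directions restricted
    to the canonical unit vectors."""
    q = B.shape[0]
    directions = np.eye(q)
    selected = []  # list of (candidate index, direction index) pairs
    for _ in range(n_pairs):
        # Evaluate the squared power function P(x, v)^2 for every
        # candidate point x and canonical direction v.
        P2 = np.empty((len(X_cand), q))
        for i, x in enumerate(X_cand):
            Kxx = gaussian(x[None], x[None], eps)[0, 0] * B
            if selected:
                pts = np.array([X_cand[j] for j, _ in selected])
                V = directions[[d for _, d in selected]]   # chosen directions, rows v_j
                kxX = gaussian(x[None], pts, eps)[0]
                # cross[j] = k(x, x_j) * (B v_j)^T, so cross.T has columns K(x, x_j) v_j
                cross = kxX[:, None] * (B @ V.T).T
                # Gram matrix of the chosen pairs: G_jl = k(x_j, x_l) v_j^T B v_l
                G = gaussian(pts, pts, eps) * (V @ B @ V.T)
                S = Kxx - cross.T @ np.linalg.solve(G, cross)
            else:
                S = Kxx
            P2[i] = np.diag(S)  # v^T S v for each canonical direction v
        i_best, d_best = np.unravel_index(np.argmax(P2), P2.shape)
        selected.append((i_best, d_best))
    return selected
```

Once the pairs are selected, the surrogate is obtained by interpolating the data projected onto the chosen directions at the chosen points; already-selected pairs have (numerically) vanishing power function and are therefore not picked again.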
Acknowledgements
The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—EXC 2075—390740016.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Wittwar, D., Haasdonk, B. (2021). Convergence Rates for Matrix P-Greedy Variants. In: Vermolen, F.J., Vuik, C. (eds) Numerical Mathematics and Advanced Applications ENUMATH 2019. Lecture Notes in Computational Science and Engineering, vol 139. Springer, Cham. https://doi.org/10.1007/978-3-030-55874-1_119
DOI: https://doi.org/10.1007/978-3-030-55874-1_119
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-55873-4
Online ISBN: 978-3-030-55874-1
eBook Packages: Mathematics and Statistics