Abstract
In this paper, we study data-dependent regret bounds for online kernel selection in the setting of online classification with the hinge loss. Existing work achieves only \(O(\Vert f\Vert ^2_{\mathcal {H}_{\kappa }}T^\alpha )\) regret bounds with \(\frac{1}{2}\le \alpha <1\), where \(\kappa \in \mathcal {K}\) is a kernel from a preset candidate set. Such worst-case regret bounds cannot show that kernel selection improves on single-kernel learning in benign environments. We develop two adaptive online kernel selection algorithms and obtain the first high-probability regret bounds depending on \(\mathcal {A}(\mathcal {I}_T,\kappa )\), a variant of kernel alignment. If some kernel in the candidate set matches the data well, then our algorithms improve the learning performance significantly and reduce the time complexity. Our results also justify using kernel alignment as a criterion for evaluating kernel functions. The first algorithm has an \(O(T/K)\) per-round time complexity and enjoys a \(O(\Vert f\Vert ^2_{\mathcal {H}_{i^*}} \sqrt{K\mathcal {A}(\mathcal {I}_T,\kappa _{i^*})})\) high-probability regret bound. The second algorithm has a \(\tilde{O}(\beta ^{-1} \sqrt{T\mathcal {A}(\mathcal {I}_T,\kappa _{i^*})})\) per-round time complexity and achieves a \(\tilde{O}(\Vert f\Vert ^2_{\mathcal {H}_{{i^*}}}K^{\frac{1}{2}}\beta ^{\frac{1}{2}} T^{\frac{1}{4}}\mathcal {A}(\mathcal {I}_T,\kappa _{i^*})^{\frac{1}{4}})\) high-probability regret bound, where \(\beta \ge 1\) is a balancing factor and \(\kappa _{i^*}\in \mathcal {K}\) is the kernel minimizing \(\mathcal {A}(\mathcal {I}_T,\kappa )\).
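The abstract uses kernel alignment as a data-dependent criterion for judging how well a candidate kernel matches the data. The quantity \(\mathcal {A}(\mathcal {I}_T,\kappa )\) is a variant defined in the paper body; as a rough illustration of the underlying idea, the sketch below computes the classical kernel–target alignment \(\langle K, yy^\top\rangle_F / (\Vert K\Vert_F\,\Vert yy^\top\Vert_F)\), not the exact variant used here. The Gaussian kernel, bandwidth, and synthetic data are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_matrix(X, gamma=1.0):
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_target_alignment(K, y):
    # <K, yy^T>_F / (||K||_F * ||yy^T||_F), in [-1, 1]
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sign(X[:, 0])                 # labels correlated with the first feature
K = gaussian_kernel_matrix(X, gamma=0.5)
a = kernel_target_alignment(K, y)
```

A kernel with higher alignment agrees more closely with the label structure, which is the intuition behind selecting the kernel that optimizes \(\mathcal {A}(\mathcal {I}_T,\kappa )\) over the candidate set.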
This work was supported in part by the National Natural Science Foundation of China under grant No. 62076181.
Notes
The code is available at https://github.com/JunfLi-TJU/KARegret-OKS.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Liao, S., Li, J. (2021). High-Probability Kernel Alignment Regret Bounds for Online Kernel Selection. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science(), vol 12975. Springer, Cham. https://doi.org/10.1007/978-3-030-86486-6_5
Print ISBN: 978-3-030-86485-9
Online ISBN: 978-3-030-86486-6