
Online Algorithms with Limited Data Retention (Extended Abstract)

Authors: Nicole Immorlica, Brendan Lucier, Markus Mobius, and James Siderius



File

LIPIcs.FORC.2024.10.pdf
  • Filesize: 0.49 MB
  • 8 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.FORC.2024.10

Author Details

Nicole Immorlica
  • Microsoft Research, Cambridge, MA, USA
Brendan Lucier
  • Microsoft Research, Cambridge, MA, USA
Markus Mobius
  • Microsoft Research, Cambridge, MA, USA
James Siderius
  • Tuck School of Business at Dartmouth, Hanover, NH, USA

Acknowledgements

The authors thank Rad Niazadeh, Stefan Bucher, the Simons Institute for the Theory of Computing, participants at the CS and Law conference, and seminar participants at the 2024 SIGecom Winter Meetings and the 2022 C3.ai DTI Workshop on Data, Learning, and Markets.

Cite As

Nicole Immorlica, Brendan Lucier, Markus Mobius, and James Siderius. Online Algorithms with Limited Data Retention (Extended Abstract). In 5th Symposium on Foundations of Responsible Computing (FORC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 295, pp. 10:1-10:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024) https://doi.org/10.4230/LIPIcs.FORC.2024.10

Abstract

We introduce a model of online algorithms subject to strict constraints on data retention. An online learning algorithm encounters a stream of data points, one per round, generated by some stationary process. Crucially, each data point can request that it be removed from memory m rounds after it arrives. To model the impact of removal, we do not allow the algorithm to store any information or calculations between rounds other than a subset of the data points (subject to the retention constraints). At the conclusion of the stream, the algorithm answers a statistical query about the full dataset. We ask: what level of performance can be guaranteed as a function of m?
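As a concrete illustration of the model, the sketch below simulates the retention constraint in Python. It is illustrative only: the stream interface and the names run_stream, select, and answer are ours, not the paper's. It enforces exactly the two rules above: a point that requests removal is deleted m rounds after arrival, and the only state carried between rounds is a subset of the retained data points.

    import numpy as np

    def run_stream(stream, m, select, answer):
        # Simulate the retention-constrained online model (hypothetical
        # interface). `stream` yields (x, wants_removal) pairs; `select`
        # is the algorithm's rule for which points to keep; `answer`
        # computes the final statistical query over what remains.
        memory = []  # entries are (arrival_round, point, wants_removal)
        for t, (x, wants_removal) in enumerate(stream):
            # Enforce retention: drop flagged points m rounds after arrival.
            memory = [(s, y, r) for (s, y, r) in memory
                      if not (r and t - s >= m)]
            memory.append((t, x, wants_removal))
            # The algorithm may discard more points voluntarily, but may
            # not store any other information between rounds.
            memory = select(memory, t)
        return answer([y for (_, y, _) in memory])

    # Example: the baseline that retains all data as long as possible.
    # With every point requesting removal, memory ends up holding only
    # the m most recent points, whose average has squared error on the
    # order of d/m for i.i.d. data.
    rng = np.random.default_rng(0)
    pts = [(rng.standard_normal(3), True) for _ in range(1000)]
    est = run_stream(pts, m=10,
                     select=lambda mem, t: mem,
                     answer=lambda ys: np.mean(ys, axis=0))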
We illustrate this framework for multidimensional mean estimation and linear regression problems. We show it is possible to obtain an exponential improvement over a baseline algorithm that retains all data as long as possible. Specifically, we show that m = Poly(d, log(1/ε)) retention suffices to achieve mean squared error ε after observing O(1/ε) d-dimensional data points. This matches the error bound of the optimal, yet infeasible, algorithm that retains all data forever. We also show a nearly matching lower bound on the retention required to guarantee error ε. One implication of our results is that data retention laws are insufficient to guarantee the right to be forgotten even in a non-adversarial world in which firms merely strive to (approximately) optimize the performance of their algorithms. Our approach makes use of recent developments in the multidimensional random subset sum problem to simulate the progression of stochastic gradient descent under a model of adversarial noise, which may be of independent interest.
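To see why stochastic gradient descent is relevant to mean estimation, note that the mean of the data distribution minimizes f(w) = E[||w − x||²]/2, and SGD on f with step size η_t = 1/(t + 1) reproduces the running sample average exactly. The sketch below demonstrates only this identity; it is not the paper's algorithm, and the name sgd_running_mean is ours. The paper's algorithm cannot store the iterate w between rounds, so it instead retains a small subset of points whose combination tracks w, treating the discarded information as adversarial noise controlled via multidimensional random subset sum results.

    import numpy as np

    def sgd_running_mean(stream):
        # SGD on f(w) = E[||w - x||^2] / 2: the stochastic gradient at w
        # given sample x is (w - x), and step size eta_t = 1/(t + 1)
        # makes w_t the exact average of x_1, ..., x_t -- the error of
        # the infeasible benchmark that retains all data forever.
        w = None
        for t, x in enumerate(stream):
            x = np.asarray(x, dtype=float)
            if w is None:
                w = x.copy()           # w_1 = x_1
            else:
                eta = 1.0 / (t + 1)
                w = w - eta * (w - x)  # running-average update
            yield w

    # Sanity check: the final iterate equals the sample mean.
    rng = np.random.default_rng(1)
    xs = rng.standard_normal((100, 4))
    final = list(sgd_running_mean(xs))[-1]
    assert np.allclose(final, xs.mean(axis=0))

On this reading, the m = Poly(d, log(1/ε)) guarantee amounts to approximately replaying this SGD trajectory using only the retained points.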

Subject Classification

ACM Subject Classification
  • Theory of computation → Design and analysis of algorithms
Keywords
  • online algorithms
  • machine learning
  • data
  • privacy
  • law
