- research-article, August 2024
A Framework for Adversarial Streaming Via Differential Privacy and Difference Estimators
Abstract: Classical streaming algorithms operate under the (not always reasonable) assumption that the input stream is fixed in advance. Recently, there has been growing interest in designing robust streaming algorithms that provide provable guarantees even ...
- research-article, January 2025
Private truly-everlasting robust-prediction
ICML'24: Proceedings of the 41st International Conference on Machine Learning, Article No.: 1896, Pages 46591–46604. Private everlasting prediction (PEP), recently introduced by Naor et al. [2023], is a model for differentially private learning in which the learner never publicly releases a hypothesis. Instead, it provides black-box access to a "prediction oracle" that ...
- research-article, May 2024
Black-box differential privacy for interactive ML
NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems, Article No.: 3381, Pages 77313–77330. In this work we revisit an interactive variant of joint differential privacy, recently introduced by Naor et al. [2023], and generalize it towards handling online processes in which existing privacy definitions seem too restrictive. We study basic ...
- research-article, May 2024
Private everlasting prediction
NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems, Article No.: 2390, Pages 54785–54804. A private learner is trained on a sample of labeled points and generates a hypothesis that can be used for predicting the labels of newly sampled points while protecting the privacy of the training set [Kasiviswanathan et al., FOCS 2008]. Past research ...
- research-article, May 2024
Adaptive data analysis in a balanced adversarial model
NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems, Article No.: 1120, Pages 25748–25760. In adaptive data analysis, a mechanism gets n i.i.d. samples from an unknown distribution Ɗ and is required to provide accurate estimations to a sequence of adaptively chosen statistical queries with respect to Ɗ. Hardt and Ullman [2014] and Steinke and ...
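The standard way such adaptively chosen statistical queries are answered in the differential-privacy literature is by adding noise to empirical means. The sketch below is a minimal, hypothetical illustration of that setting (plain Laplace noise, illustrative sample size and ε); it is not the mechanism or the lower-bound construction studied in the paper above.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism_answers(sample, queries, epsilon=0.5):
    """Answer statistical queries q: X -> [0, 1] on `sample` with Laplace noise.

    Each empirical mean has sensitivity 1/n, so adding Laplace(1/(n*epsilon))
    noise makes a single answer epsilon-differentially private.  This is the
    textbook mechanism, shown only to illustrate the adaptive-statistical-query
    setting; it is not the mechanism analyzed in the paper above.
    """
    n = len(sample)
    answers = []
    for q in queries:
        empirical_mean = np.mean([q(x) for x in sample])
        noisy = empirical_mean + rng.laplace(scale=1.0 / (n * epsilon))
        answers.append(float(np.clip(noisy, 0.0, 1.0)))
    return answers

# Toy usage: samples from an unknown Bernoulli distribution, two queries.
sample = rng.binomial(1, 0.3, size=1000)
queries = [lambda x: float(x == 1), lambda x: float(x == 0)]
print(laplace_mechanism_answers(sample, queries))
```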
- research-article, July 2023
Concurrent shuffle differential privacy under continual observation
ICML'23: Proceedings of the 40th International Conference on Machine Learning, Article No.: 1415, Pages 33961–33982. We introduce the concurrent shuffle model of differential privacy. In this model we have multiple concurrent shufflers permuting messages from different, possibly overlapping, batches of users. Similarly to the standard (single) shuffle model, the privacy ...
- research-article, June 2023
Optimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization
STOC 2023: Proceedings of the 55th Annual ACM Symposium on Theory of Computing, Pages 472–482, https://doi.org/10.1145/3564246.3585148. The problem of learning threshold functions is a fundamental one in machine learning. Classical learning theory implies sample complexity of O(ξ⁻¹ log(1/β)) (for generalization error ξ with confidence 1−β). The private version of the problem, however, ...
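For context, the simplest baseline for privately learning a threshold over a finite grid is the exponential mechanism, which conveys the flavor of the problem even though its sample complexity is far from the bounds this paper establishes. The sketch below is a hedged illustration of that baseline (grid size, ε, and data are made up), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def private_threshold(xs, labels, domain, epsilon=1.0):
    """Pick a threshold t from `domain` via the exponential mechanism.

    Utility of t = number of points labeled correctly by the classifier
    1[x >= t]; changing one example changes the utility by at most 1, so
    sampling t with probability proportional to exp(eps * utility / 2) is
    eps-differentially private.  This is a generic baseline for private
    threshold learning, not the (more sample-efficient) algorithm from the
    STOC 2023 paper listed above.
    """
    utilities = np.array([np.sum((xs >= t).astype(int) == labels) for t in domain])
    weights = np.exp(epsilon * (utilities - utilities.max()) / 2.0)
    probs = weights / weights.sum()
    return domain[rng.choice(len(domain), p=probs)]

# Toy usage over the grid {0, 1, ..., 99} with true threshold 40.
domain = np.arange(100)
xs = rng.integers(0, 100, size=500)
labels = (xs >= 40).astype(int)
print(private_threshold(xs, labels, domain))
```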
- Article, April 2023
On Differential Privacy and Adaptive Data Analysis with Bounded Space
Abstract: We study the space complexity of the two related fields of differential privacy and adaptive data analysis. Specifically,
- Under standard cryptographic assumptions, we show that there exists a problem P that requires exponentially more space to be ...
- research-article, February 2023
Tricking the hashing trick: a tight lower bound on the robustness of CountSketch to adaptive inputs
AAAI'23/IAAI'23/EAAI'23: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, Article No.: 813, Pages 7235–7243, https://doi.org/10.1609/aaai.v37i6.25882. CountSketch and Feature Hashing (the "hashing trick") are popular randomized dimensionality reduction methods that support recovery of ℓ2-heavy hitters (keys i where v_i² > ε‖v‖₂²) and approximate inner products. When the inputs are not adaptive (do not ...
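For readers unfamiliar with the data structure, the following is a minimal, oblivious CountSketch sketch (illustrative depth, width, and key universe); it only recalls how heavy-hitter estimates are formed and makes no claim about the adaptive-robustness bounds proved in the paper above.

```python
import numpy as np

class CountSketch:
    """Minimal CountSketch: d hash tables of width w with random signs.

    A vector is seen as a stream of (key, delta) updates; the median of the
    d signed counters is the usual coordinate estimator.  This oblivious toy
    version only recalls the data structure the paper studies; it says
    nothing about robustness to adaptively chosen inputs.
    """
    def __init__(self, depth=5, width=256, universe=10**5, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # Random hash buckets and signs for a fixed key universe [0, universe).
        self.bucket = rng.integers(0, width, size=(depth, universe))
        self.sign = rng.choice([-1, 1], size=(depth, universe))

    def update(self, key, delta=1.0):
        for r in range(self.depth):
            self.table[r, self.bucket[r, key]] += self.sign[r, key] * delta

    def estimate(self, key):
        return float(np.median(
            [self.sign[r, key] * self.table[r, self.bucket[r, key]]
             for r in range(self.depth)]))

# Toy usage: one heavy key among many light ones.
cs = CountSketch()
for _ in range(1000):
    cs.update(key=42)
for k in range(1, 200):
    cs.update(key=k)
print(cs.estimate(42), cs.estimate(7))
```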
- research-article, November 2022
Adversarially Robust Streaming Algorithms via Differential Privacy
Journal of the ACM (JACM), Volume 69, Issue 6, Article No.: 42, Pages 1–14, https://doi.org/10.1145/3556972. A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms ...
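The connection this paper establishes is that protecting an algorithm's internal randomness with differential privacy yields adversarial robustness. A common toy rendering of that idea is to run several independent oblivious copies and release only a privatized aggregate of their outputs; the sketch below illustrates this with a clipped noisy mean, where the number of copies, clipping range, and ε are assumptions rather than the paper's construction or parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def private_aggregate(copy_outputs, output_range=2000.0, epsilon=1.0):
    """Release a differentially private aggregate of k independent copies.

    Each copy's output is clipped to [0, output_range]; changing a single
    copy then moves the average by at most output_range / k, so Laplace
    noise of that scale divided by epsilon is epsilon-DP with respect to
    the copies.  This is only a toy rendering of the "robustness via DP on
    the internal randomness" connection described above, with illustrative
    parameters rather than the paper's actual construction.
    """
    k = len(copy_outputs)
    clipped = np.clip(copy_outputs, 0.0, output_range)
    noisy = clipped.mean() + rng.laplace(scale=output_range / (k * epsilon))
    return float(noisy)

# Toy usage: 9 independent oblivious estimators of the same stream statistic.
copies = np.array([1000 + rng.normal(scale=20) for _ in range(9)])
print(private_aggregate(copies))
```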
- research-article, June 2022
Dynamic algorithms against an adaptive adversary: generic constructions and lower bounds
STOC 2022: Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, Pages 1671–1684, https://doi.org/10.1145/3519935.3520064. Given an input that undergoes a sequence of updates, a dynamic algorithm maintains a valid solution to some predefined problem at any point in time; the goal is to design an algorithm in which computing a solution to the updated input is done more ...
- research-article, January 2022
Differentially Private Learning of Geometric Concepts
SIAM Journal on Computing (SICOMP), Volume 51, Issue 4, Pages 952–974, https://doi.org/10.1137/21M1406428. We present efficient differentially private algorithms for learning unions of polygons in the plane (which are not necessarily convex). Our algorithms are $(\alpha,\beta)$-probably approximately correct and $(\varepsilon,\delta)$-differentially private ...
- research-article, June 2024
On the sample complexity of privately learning axis-aligned rectangles
NIPS '21: Proceedings of the 35th International Conference on Neural Information Processing Systems, Article No.: 2167, Pages 28286–28297. We revisit the fundamental problem of learning Axis-Aligned Rectangles over a finite grid Xᵈ ⊆ ℝᵈ with differential privacy. Existing results show that the sample complexity of this problem is at most $\min\left\{ d{\cdot}\log|X| \;,\; d^{1.5}{\cdot}\...
- research-article, June 2024
Differentially private multi-armed bandits in the shuffle model
NIPS '21: Proceedings of the 35th International Conference on Neural Information Processing Systems, Article No.: 1911, Pages 24956–24967. We give an (ε, δ)-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\left(\left(\sum_{a:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\...
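The shuffle model referred to in this entry has three stages: each user applies a local randomizer to their own data, a trusted shuffler permutes the resulting messages, and an analyzer works only on the anonymized multiset. The sketch below illustrates those stages for estimating a single arm's mean reward; it is a hedged toy (illustrative ε and sample size), not the bandit algorithm or the privacy-amplification analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_randomizer(reward, epsilon=1.0):
    # Each user privatizes their own [0, 1] reward before sending it
    # (sensitivity 1, so Laplace(1/epsilon) noise gives epsilon-LDP).
    return float(np.clip(reward, 0.0, 1.0) + rng.laplace(scale=1.0 / epsilon))

def shuffler(messages):
    # The trusted shuffler only permutes messages, hiding who sent what.
    return rng.permutation(messages)

def analyzer(messages):
    # The server sees only the anonymized multiset and averages it.
    return float(np.mean(messages))

# Toy usage: estimating one arm's mean reward in the shuffle model.
rewards = rng.binomial(1, 0.6, size=500)
reports = [local_randomizer(r) for r in rewards]
print(analyzer(shuffler(reports)))
```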
- Article, August 2021
Separating Adaptive Streaming from Oblivious Streaming Using the Bounded Storage Model
Abstract: Streaming algorithms are algorithms for processing large data streams, using only a limited amount of memory. Classical streaming algorithms typically work under the assumption that the input stream is chosen independently from the internal state ...
- research-article, April 2021
Algorithmic Stability for Adaptive Data Analysis
SIAM Journal on Computing (SICOMP), Volume 50, Issue 3, Pages STOC16-377–STOC16-405, https://doi.org/10.1137/16M1103646. Adaptivity is an important feature of data analysis: the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all ...
- research-article, January 2021
Locally private k-means clustering
The Journal of Machine Learning Research (JMLR), Volume 22, Issue 1, Article No.: 176, Pages 7964–7993. We design a new algorithm for the Euclidean k-means problem that operates in the local model of differential privacy. Unlike in the non-private literature, differentially private algorithms for the k-means objective incur both additive and multiplicative ...
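A naive way to see why local-model k-means must pay additive error is to have every user perturb their own point before sending it and then run plain Lloyd iterations on the noisy points. The sketch below does exactly that (the clipping bound, ε, and data are illustrative); it is a deliberately lossy baseline, not the algorithm from this paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def localize(point, epsilon=1.0, bound=1.0):
    """Each user perturbs their own point before sending it to the server.

    Points are assumed to lie in a box of radius `bound`; per-coordinate
    Laplace noise scaled to the L1 sensitivity (2 * bound * dim) gives a
    simple, very lossy epsilon-LDP randomizer.  It is a naive baseline
    illustrating the additive error of the local model, not the JMLR
    paper's algorithm.
    """
    clipped = np.clip(point, -bound, bound)
    scale = 2.0 * bound * len(point) / epsilon
    return clipped + rng.laplace(scale=scale, size=len(point))

def kmeans(points, k=3, iters=10):
    # Plain Lloyd iterations on the noisy points held by the server.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = np.argmin(dists, axis=1)
        centers = np.array([points[assign == j].mean(axis=0)
                            if np.any(assign == j) else centers[j]
                            for j in range(k)])
    return centers

# Toy usage: three well-separated clusters in the plane.
data = np.concatenate([rng.normal(loc=c, scale=0.05, size=(200, 2))
                       for c in [(-0.8, -0.8), (0.0, 0.8), (0.8, -0.8)]])
noisy = np.array([localize(p) for p in data])
print(kmeans(noisy))
```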
- research-article, January 2021
Learning Privately with Labeled and Unlabeled Examples
Abstract: A private learner is an algorithm that, given a sample of labeled individual examples, outputs a generalizing hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. (FOCS 2008) gave a generic construction of ...
- research-article, December 2020
Private learning of halfspaces: simplifying the construction & reducing the sample complexity
NIPS '20: Proceedings of the 34th International Conference on Neural Information Processing Systems, Article No.: 1172, Pages 13976–13985. We present a differentially private learner for halfspaces over a finite grid G in ℝᵈ with sample complexity ≈ d^{2.5} · 2^{log*|G|}, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a d² factor. The building block for our learner is ...