User profiles for Felix Yu
Felix Xinnan Yu, Sr. Staff Research Scientist, Google New York. Verified email at google.com. Cited by 19441
Federated learning: Strategies for improving communication efficiency
Federated Learning is a machine learning setting where the goal is to train a high-quality
centralized model while training data remains distributed over a large number of clients each …
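The communication-reduction idea behind this line of work can be sketched with a toy round of federated averaging in which each client sparsifies its update with a random mask before sending it. This is a minimal illustration, not the paper's method: all function names, the learning rate, and the mask-based compressor are illustrative stand-ins for the structured and sketched updates the paper actually studies.

```python
import numpy as np

def client_update(weights, grad, lr=0.1, keep_frac=0.1, rng=None):
    # One local gradient step, then sparsify the update with a random
    # mask and rescale so the compressed update is unbiased in
    # expectation. A simple stand-in for the paper's sketched updates.
    rng = rng if rng is not None else np.random.default_rng(0)
    update = -lr * grad
    mask = rng.random(update.shape) < keep_frac   # keep ~10% of entries
    return update * mask / keep_frac              # rescale: E[masked] = update

def server_aggregate(weights, client_updates):
    # The server only ever sees the compressed updates and averages them.
    return weights + np.mean(client_updates, axis=0)

rng = np.random.default_rng(42)
w = np.zeros(8)
grads = [rng.normal(size=8) for _ in range(3)]    # one toy round, 3 clients
w = server_aggregate(w, [client_update(w, g, rng=rng) for g in grads])
```

Each client transmits only the surviving entries, trading a higher-variance update for roughly a 10x reduction in upload size.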
Simplified models for LHC new physics searches
…, J Wacker, W Waltenberger, I Yavin, F Yu… - Journal of Physics G …, 2012 - iopscience.iop.org
This document proposes a collection of simplified models relevant to the design of new-physics
searches at the Large Hadron Collider (LHC) and the characterization of their results. …
A field guide to federated optimization
Federated learning and analytics are distributed approaches for collaboratively learning
models (or statistics) from decentralized data, motivated by and designed for privacy protection. …
cpSGD: Communication-efficient and differentially-private distributed SGD
Distributed stochastic gradient descent is an important subroutine in distributed learning. A
setting of particular interest is when the clients are mobile devices, where two important …
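One of the two ingredients cpSGD combines, unbiased low-bit encoding of gradients, can be sketched as stochastic k-level quantization. This sketch covers only the communication-saving half; the binomial noise cpSGD adds for differential privacy is omitted, and all names are illustrative.

```python
import numpy as np

def stochastic_quantize(v, k=4, rng=None):
    # Round each entry to one of k grid points spanning [min(v), max(v)],
    # choosing up or down with a probability that preserves the entry's
    # expectation (so the quantizer is unbiased).
    rng = rng if rng is not None else np.random.default_rng(0)
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    scaled = (v - lo) / (hi - lo) * (k - 1)        # map into [0, k-1]
    floor = np.floor(scaled)
    frac = scaled - floor
    levels = floor + (rng.random(v.shape) < frac)  # round up w.p. frac
    return lo + levels / (k - 1) * (hi - lo)

g = np.array([0.0, 0.3, -0.2, 1.0])
q = stochastic_quantize(g)   # every entry lands on {-0.2, 0.2, 0.6, 1.0}
```

Each coordinate then needs only log2(k) bits plus the two range endpoints, instead of a full float.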
Long-lived particles at the energy frontier: the MATHUSLA physics case
…, B Tweedie, SM West, C Young, F Yu… - Reports on progress …, 2019 - iopscience.iop.org
We examine the theoretical motivations for long-lived particle (LLP) signals at the LHC in a
comprehensive survey of standard model (SM) extensions. LLPs are a common prediction of …
An exploration of parameter redundancy in deep networks with circulant projections
We explore the redundancy of parameters in deep neural networks by replacing the
conventional linear projection in fully-connected layers with the circulant projection. The circulant …
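The core trick in the circulant-projection snippet above is that multiplying by an n x n circulant matrix is a circular convolution, so it can be done with FFTs. A minimal sketch (NumPy, names illustrative):

```python
import numpy as np

def circulant_project(r, x):
    # Multiply x by the n x n circulant matrix whose first column is r,
    # via the convolution theorem: O(n log n) time and only n stored
    # parameters, versus O(n^2) for the dense fully-connected projection
    # it replaces -- the parameter redundancy the paper exploits.
    return np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)))

n = 4
r = np.array([1.0, 2.0, 3.0, 4.0])
# explicit circulant matrix for comparison: C[i, j] = r[(i - j) % n]
C = np.array([[r[(i - j) % n] for j in range(n)] for i in range(n)])
x = np.array([0.5, -1.0, 2.0, 0.0])
y = circulant_project(r, x)   # equals C @ x
```

A fully-connected layer built this way stores one vector r per output block instead of a full weight matrix.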
Pre-training tasks for embedding-based large-scale retrieval
We consider the large-scale query-document retrieval problem: given a query (e.g., a question),
return the set of relevant documents (e.g., paragraphs containing the answer) from a large …
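The retrieval setup above scores queries against pre-embedded documents by inner product. A toy sketch of that two-tower pattern, with a deterministic hash-based embedder standing in for the learned Transformer encoders the paper actually pre-trains (everything here is illustrative):

```python
import numpy as np
import zlib

def token_vec(tok, dim=64):
    # Deterministic pseudo-random unit vector per token: a toy stand-in
    # for a learned token embedding.
    rng = np.random.default_rng(zlib.crc32(tok.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def embed(text, dim=64):
    # Mean-pool token vectors, then L2-normalize so the inner product
    # below is a cosine similarity.
    v = np.mean([token_vec(t, dim) for t in text.lower().split()], axis=0)
    return v / np.linalg.norm(v)

docs = ["paris is the capital of france",
        "gradient descent minimizes a loss function"]
query = "capital of france"
scores = np.array([embed(query) @ embed(d) for d in docs])
```

Because documents are embedded independently of the query, the document tower can be run offline and the scores served from an approximate nearest-neighbor index.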
Self-supervised learning for large-scale item recommendations
Large-scale recommender models find the most relevant items from huge catalogs, and they
play a critical role in modern search and recommendation systems. To model the input space …
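The self-supervised signal in this line of work is typically contrastive: two augmented "views" of the same item should embed closer together than views of different items. A generic InfoNCE sketch of that objective (the paper's specific feature augmentations are not shown, and all names are illustrative):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # Contrastive (InfoNCE) loss between two views of a batch of item
    # embeddings: matching rows are positives, all other rows in the
    # batch serve as in-batch negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = positives

views = np.eye(4)                            # 4 perfectly separated items
loss_aligned = info_nce(views, views)        # near 0: positives dominate
loss_shuffled = info_nce(views, np.roll(views, 1, axis=0))  # large: mismatched
```

In-batch negatives make the loss cheap at recommender scale: one batch of B items yields B positives and B(B-1) negatives with no extra sampling.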
Feddm: Iterative distribution matching for communication-efficient federated learning
Federated learning (FL) has recently attracted increasing attention from academia and
industry, with the ultimate goal of achieving collaborative training under privacy and …
Designing category-level attributes for discriminative visual recognition
Attribute-based representation has shown great promise for visual recognition due to its
intuitive interpretation and cross-category generalization property. However, human efforts are …