University of Exeter, UK
Stars
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
Preprint: Asymmetry in Low-Rank Adapters of Foundation Models
Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models
An easy-to-use federated learning platform
Latest Advances on Federated LLM Learning
Official implementation of paper: SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
Material for The Mathematical Engineering of Deep Learning. See https://deeplearningmath.org
Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization"
Example ML projects that use the Determined library.
✨✨A curated list of latest advances on Large Foundation Models with Federated Learning
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight)
Source code of (quasi-)Givens Orthogonal Fine-Tuning, integrated into the peft library
AISystem covers AI systems, including AI chips, AI compilers, AI inference and training frameworks, and the rest of the full-stack low-level AI technologies
An implementation of the BERT model and its related downstream tasks, based on the PyTorch framework. @月来客栈
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Personalized federated learning codebase for research
"Efficient Federated Learning for Modern NLP", to appear at MobiCom 2023.
Comprehensive and timely academic information on federated learning (papers, frameworks, datasets, tutorials, workshops)
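Several of the starred repositories above (loralib, PEFT, DoRA, LoRA-XS, PiSSA, FedEx-LoRA, Fed-SB) implement variants of LoRA. As a quick reminder of the core idea they share, here is a minimal NumPy sketch of the low-rank update y = x(W + (α/r)·BA)ᵀ; the function and variable names are illustrative, not taken from any of these libraries.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with a LoRA adapter: y = x @ (W + (alpha/r) * B @ A).T

    W: (out, in) frozen base weight; A: (r, in) and B: (out, r) are the
    trainable low-rank factors -- only A and B are updated when fine-tuning.
    """
    delta = (alpha / r) * (B @ A)   # low-rank weight update, rank <= r
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))            # B starts at zero, so the adapter is a no-op
x = rng.normal(size=(2, d_in))

y = lora_forward(x, W, A, B, alpha=8, r=r)
assert np.allclose(y, x @ W.T)      # with B = 0, output equals the base model's
```

Because B is initialized to zero, training starts exactly at the pretrained model; federated variants such as FedEx-LoRA and Fed-SB then differ mainly in how the per-client A and B factors are aggregated.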