Peking University, Beijing, China
Stars
A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/Pallas/JAX).
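For reference, the computation such a fused kernel accelerates can be written in a few lines of plain JAX. The sketch below is only the mathematical baseline (unfused scaled dot-product attention), not this repository's API; the function name and shapes are illustrative.

```python
import jax.numpy as jnp
from jax.nn import softmax

# Reference (unfused) scaled dot-product attention in plain JAX: the quantity a
# Flash Attention kernel computes tile-by-tile without materializing the full
# seq_len x seq_len score matrix in memory.
def reference_attention(q, k, v):
    # q, k, v: (seq_len, head_dim)
    scores = q @ k.T / jnp.sqrt(q.shape[-1])   # (seq_len, seq_len)
    return softmax(scores, axis=-1) @ v        # (seq_len, head_dim)
```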
Programming assignments for Optimization Methods (Convex Optimization), Fall 2023, taught by Zaiwen Wen at Peking University (PKU WenZW).
Machine learning algorithms for many-body quantum systems
Train transformer language models with reinforcement learning.
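As a rough illustration of what training a language model with reinforcement learning involves, here is a minimal REINFORCE-style update on a Hugging Face causal LM. This is a conceptual sketch, not TRL's actual API; the constant reward stands in for a reward model or preference signal, and real pipelines add a baseline and a KL penalty to a reference model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Conceptual REINFORCE-style sketch (not the TRL API): sample a continuation,
# score it, and push up its log-probability in proportion to the reward.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = tokenizer("The movie was", return_tensors="pt")
prompt_len = prompt["input_ids"].shape[1]
generated = model.generate(**prompt, do_sample=True, max_new_tokens=16,
                           pad_token_id=tokenizer.eos_token_id)

reward = torch.tensor(1.0)  # stand-in for a reward model / human preference score

# Re-run the forward pass to get log-probs of the sampled tokens under the policy.
logits = model(generated).logits[:, :-1, :]
logprobs = torch.log_softmax(logits, dim=-1)
token_logprobs = logprobs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)
response_logprob = token_logprobs[:, prompt_len - 1:].sum()

loss = -reward * response_logprob  # REINFORCE objective (no baseline, no KL term)
loss.backward()
optimizer.step()
```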
A framework for few-shot evaluation of language models.
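A hedged sketch of the harness's Python entry point is shown below; the `simple_evaluate` call follows the v0.4-style API, and the model name, task, and keyword arguments are illustrative and may differ across versions.

```python
import lm_eval

# Evaluate a Hugging Face model on a single benchmark task (zero-shot).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"]["hellaswag"])
```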
An implementation combining FermiNet with effective core potentials (ECP). For the paper, see https://arxiv.org/abs/2108.11661.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
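A minimal sketch of how these transformations compose on an ordinary numerical function: `grad` differentiates, `vmap` vectorizes over a batch, and `jit` compiles the result via XLA. The loss function and shapes are illustrative.

```python
import jax
import jax.numpy as jnp

# An ordinary Python+NumPy-style function to transform.
def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                              # gradient w.r.t. w
batched_grad = jax.vmap(grad_loss, in_axes=(None, 0))   # share w, map over a batch of x
fast_batched_grad = jax.jit(batched_grad)               # compile for CPU/GPU/TPU

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)
print(fast_batched_grad(w, xs).shape)  # (4, 3): one gradient per batch element
```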
Efficient and Accurate Neural-Network Ansatz for Quantum Monte Carlo
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
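A minimal usage sketch of the library's high-level `pipeline` API, which bundles tokenizer, model, and post-processing; the default sentiment model is downloaded on first use, and the input string and printed output are illustrative.

```python
from transformers import pipeline

# Build a ready-made text-classification pipeline and run it on one sentence.
classifier = pipeline("sentiment-analysis")
print(classifier("JAX made my quantum Monte Carlo code much faster."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```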
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
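A short sketch assuming the fair-esm package API: load a pretrained ESM-2 checkpoint and extract per-residue embeddings, following the pattern in the project README. The protein sequence and layer index here are illustrative.

```python
import torch
import esm

# Load a pretrained ESM-2 protein language model and its alphabet/tokenizer.
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("protein1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
labels, strs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33], return_contacts=False)
per_residue = out["representations"][33]  # (batch, seq_len, hidden_dim) embeddings
```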
An implementation of the Fermionic Neural Network for ab-initio electronic structure calculations
Official Repository for the Uni-Mol Series Methods
Graphormer is a general-purpose deep learning backbone for molecular modeling.