- Peking University
- Beijing
- http://net.pku.edu.cn/~cuibin/
Stars
PKU-DAIR / Hetu-Galvatron
Forked from AFDWang/Hetu-Galvatron. Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs).
A comprehensive guide for beginners in the field of data management and artificial intelligence.
SysBench: Can Large Language Models Follow System Messages?
A Flexible and Powerful Parameter Server for large-scale machine learning
A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award)
Implementation of MFES-HB [AAAI'21] along with Hyperband and BOHB
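MFES-HB builds on Hyperband, whose core primitive is successive halving: start many configurations on a small budget, keep the best fraction, and rerun the survivors with more budget. Below is a minimal, self-contained Python sketch of that inner loop with a placeholder objective; the names and the random objective are illustrative only, not this repo's actual API.

import random

def evaluate(config, budget):
    # Placeholder objective: pretend to train `config` with `budget`
    # resources (e.g., epochs) and return a validation loss (lower is better).
    return random.random() / budget

def successive_halving(n_configs=27, min_budget=1, eta=3):
    # Sample initial configurations (here a single log-uniform learning rate).
    configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        # Evaluate every surviving configuration at the current budget.
        scored = [(evaluate(c, budget), c) for c in configs]
        scored.sort(key=lambda t: t[0])
        # Keep the top 1/eta fraction and grant them eta times more budget.
        configs = [c for _, c in scored[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

print(successive_halving())

Hyperband itself wraps this loop in several brackets that trade off the number of configurations against the starting budget; BOHB and MFES-HB replace the random sampling with model-based suggestions.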
PKU-DAIR / GAMLP
Forked from zwt233/GAMLP. Code of GAMLP for Open Graph Benchmark (KDD'22).
PKU-DAIR / RIM
Forked from zwt233/RIM. RIM: Reliable Influence-based Active Learning on Graphs (NeurIPS'21 Spotlight)
PKU-DAIR / mindware
Forked from thomas-young-2013/mindware. An efficient open-source AutoML system for automating the machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
PKU-DAIR / HyperTune
Forked from thomas-young-2013/HyperTune. Efficient Hyper-parameter Tuning at Scale (VLDB'22)
First-place solution to Track 2 (Automated Hyperparameter Optimization) of the QQ Browser 2021 AI Algorithm Competition (CIKM AnalytiCup 2021).
An experimental evaluation of database configuration tuning.
A new cardinality estimation (CardEst) benchmark to bridge AI and DBMS.
PKU-DAIR / Hetu
Forked from Hsword/Hetu. A high-performance distributed deep learning system targeting large-scale and automated distributed training.
cuibinpku / angel
Forked from Angel-ML/angel. A Flexible and Powerful Parameter Server for large-scale machine learning
PKU-DAIR / open-box
Forked from thomas-young-2013/open-box. Generalized and Efficient Blackbox Optimization System
Generalized and Efficient Blackbox Optimization System.
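OpenBox exposes a small define-and-optimize Python interface. A minimal sketch in the style of its quick-start docs follows; exact module paths and the objective's return format vary across OpenBox releases, so treat those details as assumptions and check the repo's README.

import numpy as np
from openbox import Optimizer, space as sp

# Search space: the two continuous variables of the Branin test function.
space = sp.Space()
x1 = sp.Real("x1", -5.0, 10.0, default_value=0.0)
x2 = sp.Real("x2", 0.0, 15.0, default_value=0.0)
space.add_variables([x1, x2])

def branin(config):
    # Objective to minimize; OpenBox passes in a sampled configuration.
    x1, x2 = config["x1"], config["x2"]
    y = ((x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5 / np.pi * x1 - 6) ** 2
         + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)
    return {"objectives": [y]}  # return format assumed from recent docs

# Run Bayesian optimization for 50 trials and inspect the best result.
opt = Optimizer(branin, space, max_runs=50, task_id="branin_demo")
history = opt.run()
print(history)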
An efficient open-source AutoML system for automating the machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.