sailfish009/pytorch-lamb
Implementation of LAMB (https://arxiv.org/abs/1904.00962) for large-batch, large-learning-rate training.

The paper doesn't specify clamp values for ϕ, so I use 10.
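In the paper, ϕ is the scaling function applied to a layer's weight norm before it enters the trust ratio. A minimal sketch of that clamping in plain Python, illustrative of the idea rather than this repo's exact code (the exact clamp interval and the fallback when a norm vanishes are assumptions here):

```python
def clamped_trust_ratio(weight_norm, update_norm, clamp_value=10.0):
    """LAMB trust ratio with the weight norm passed through
    phi(x) = clamp(x, 0, clamp_value); clamp_value=10 as in this repo.
    """
    phi = min(max(weight_norm, 0.0), clamp_value)
    if phi == 0.0 or update_norm == 0.0:
        # Assumption: fall back to an unscaled (Adam-like) step
        # when either norm is zero.
        return 1.0
    return phi / update_norm
```

The clamp keeps a single layer with a huge weight norm from inflating its own step size without bound.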

Bonus: TensorboardX logging (example below).

Try the sample

git clone git@github.com:cybertronai/pytorch-lamb.git
cd pytorch-lamb
pip install -e .
python test_lamb.py
tensorboard --logdir=runs

Sample results

At --lr=.02, the Adam optimizer is unable to train.

Red: python test_lamb.py --batch-size=512 --lr=.02 --wd=.01 --log-interval=30 --optimizer=adam

Blue: python test_lamb.py --batch-size=512 --lr=.02 --wd=.01 --log-interval=30 --optimizer=lamb
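The gap comes from LAMB's layer-wise adaptation: the base learning rate is multiplied per layer by the trust ratio, so a large --lr=.02 is scaled down for layers whose Adam-style update is large relative to their weights. A hedged sketch with illustrative numbers (not taken from the run above):

```python
def effective_lr(base_lr, weight_norm, update_norm, clamp_value=10.0):
    """Per-layer LAMB step size: base_lr times the clamped trust ratio.
    Illustrative helper; names and the zero-norm fallback are assumptions."""
    phi = min(max(weight_norm, 0.0), clamp_value)
    if phi == 0.0 or update_norm == 0.0:
        return base_lr  # assumption: unscaled step when a norm vanishes
    return base_lr * phi / update_norm

# A layer whose update norm dwarfs its weight norm takes a much smaller
# step than Adam's flat base_lr:
effective_lr(0.02, weight_norm=0.5, update_norm=5.0)  # 0.002 vs Adam's 0.02
```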
