Implemented SGD optimizer with momentum and added tests. by joshbrown901 · Pull Request #217 · nntile/nntile · GitHub

Open · wants to merge 1 commit into base: main
Conversation

joshbrown901

I implemented the SGD optimizer with momentum and updated the repository at the kernel, StarPU, and tensor levels. I also added tests for SGD with momentum in the home directory to verify that the optimizer works correctly.

@@ -34,6 +34,7 @@
#include <nntile/kernel/prod_inplace.hh>
#include <nntile/kernel/randn.hh>
#include <nntile/kernel/relu.hh>
#include <nntile/kernel/sgd_momentum.hh>
Member: rename as sgd_step

* distributed-memory heterogeneous systems based on StarPU runtime system.
*
* @file include/nntile/kernel/sgd_momentum.hh
* SGD_MOMENTUM low-level kernels
Member: Fused SGD


// SGD_MOMENTUM operation on a buffer
template<typename T>
void cpu(Index nelems, T *data)
Member: Totally incorrect API: where are the gradients, and parameters such as the learning rate?

{

template<typename T>
void cuda(cudaStream_t stream, Index nelems, T *data)
Member: The parameters of the function are incorrect.
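Both comments point at the same gap: an SGD-with-momentum update cannot be computed from a single data buffer. As a hedged reference sketch (hypothetical names, not nntile's actual kernel API), the step needs the parameters, their gradients, a persistent velocity buffer, and the hyperparameters:

```python
# Hypothetical sketch, not nntile's API: a complete SGD-with-momentum step.
# Update rule: v = momentum * v + g;  p = p - lr * v  (all in place).
def sgd_momentum_step(params, grads, velocity, lr, momentum):
    """Apply one in-place SGD-with-momentum update over flat buffers."""
    for i in range(len(params)):
        velocity[i] = momentum * velocity[i] + grads[i]
        params[i] -= lr * velocity[i]

# Example: one step on a 3-element parameter vector with zero initial velocity.
p = [1.0, 2.0, 3.0]
g = [0.1, 0.1, 0.1]
v = [0.0, 0.0, 0.0]
sgd_momentum_step(p, g, v, lr=0.5, momentum=0.9)
# p is now approximately [0.95, 1.95, 2.95]; v is [0.1, 0.1, 0.1]
```

Any kernel signature for this operation, on CPU or CUDA, therefore needs at least the gradient and velocity buffers plus `lr` and `momentum` alongside `nelems` and the parameter data.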

@@ -127,10 +128,11 @@ void init()
gemm::init();
hypot::init();
hypot_scalar_inverse::init();
nrm2::init();
nrm2::init();
Member: why remove the space?

Member: do not put binary files into the git repo

Member: do not put binary files into the git repo

Member: tests shall appear in the test directory

# NNTile is software framework for fast training of big neural networks on
# distributed-memory heterogeneous systems based on StarPU runtime system.
#
# @file wrappers/python/nntile/optimizer/sgd.py
Member: take a look at wrappers/python/optimizer/adam.py

@@ -516,6 +516,15 @@ void def_mod_tensor(py::module_ &m)
m.def("relu_fp32", &relu<fp32_t>);
m.def("relu_fp32_fast_tf32", &relu<fp32_fast_tf32_t>);

// Add optimizer step functions for Tensor<T>
m.def("sgd_momentum_async_fp64", &sgd_momentum_async<fp64_t>);
Member: This function call shall also be propagated into wrappers/python/nntile/functions.py (find adam_step).
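For context, functions.py-style wrappers typically expose a single front-end that dispatches to the dtype-specific compiled binding. A minimal sketch of that pattern, with stand-in names (the compiled `sgd_momentum_async_fp32`/`fp64` symbols are from this PR; everything else here is hypothetical):

```python
# Hypothetical sketch of a dtype-dispatch wrapper in the style of
# wrappers/python/nntile/functions.py. The lambdas stand in for the
# compiled bindings registered via pybind11 (m.def) in the PR.
_BINDINGS = {
    "fp32": lambda p, g, v, lr, mu: "sgd_momentum_async_fp32 called",
    "fp64": lambda p, g, v, lr, mu: "sgd_momentum_async_fp64 called",
}

def sgd_momentum_async(params, grads, velocity, lr, momentum, dtype="fp32"):
    """Dispatch one asynchronous SGD-momentum step to the dtype binding."""
    try:
        binding = _BINDINGS[dtype]
    except KeyError:
        raise TypeError(f"Unsupported dtype: {dtype}") from None
    return binding(params, grads, velocity, lr, momentum)
```

The real wrapper would pass tensor handles rather than return strings; the point is only the dispatch-by-dtype shape that `adam_step` follows.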

Labels: None yet
Projects: None yet
2 participants