Support mixed and low precision training by iden-kalemaj · Pull Request #764 · pytorch/opacus · GitHub

Support mixed and low precision training #764


Open
wants to merge 1 commit into base: main

Conversation

iden-kalemaj
Contributor

Summary:
We add support for mixed and low precision training in Opacus.

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
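For illustration only (a minimal sketch, not code from this PR): mixed precision DP-SGD with the "hooks" grad_sample_mode, where only the forward pass runs under torch.amp autocast. The model, data, and hyperparameters below are placeholders; per the summary, "ghost" would be used the same way by changing grad_sample_mode.

```python
import torch
from opacus import PrivacyEngine

# Toy model and data; all names and hyperparameters are placeholders.
model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 16), torch.randint(0, 2, (64,))
)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="hooks",  # per this PR, "ghost" also supports mixed precision
    poisson_sampling=False,    # turned off only to keep the sketch simple
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in data_loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    # Only the forward pass runs under autocast; the per-sample gradient
    # hooks then fire during loss.backward().
    with torch.amp.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```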

@facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 30, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request May 30, 2025
Summary:

We add support for mixed and low precision training in Opacus. 

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request May 30, 2025
Summary:
Pull Request resolved: pytorch#764

We add support for mixed and low precision training in Opacus.

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request Jun 1, 2025
Summary:

We add support for mixed and low precision training in Opacus. 

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch" and "ew".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request Jun 2, 2025
Summary:

We add support for mixed and low precision training in Opacus. 

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch" and "ew".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

@iden-kalemaj mentioned this pull request Jun 2, 2025
iden-kalemaj added a commit to iden-kalemaj/opacus that referenced this pull request Jun 2, 2025
Summary:

We add support for mixed and low precision training in Opacus. 

Mixed precision training is supported with the "hooks" and "ghost" grad_sample_modes.

Low-precision training is additionally supported with "functorch" and "ew".

Why there is no functorch support for mixed precision training: the backward pass with functorch performs both a forward and a backward pass to compute per-sample gradients. This forward pass happens outside of the torch.amp context, so it cannot handle mixed precision.

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

Summary:

We add support for mixed and low precision training in Opacus. 

Mixed precision training is supported with the "hooks", "ghost", and "functorch" grad_sample_modes.

Low-precision training is additionally supported with "ew".

Support for low and mixed precision training is GPU dependent.

Differential Revision: D72415906
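As a hedged sketch of the low-precision path described in this final summary (not code from the PR): model weights and inputs are kept in bfloat16 throughout and no autocast context is used, here with the "ew" (ExpandedWeights) grad_sample_mode. All names and hyperparameters are placeholders, and whether a given module/dtype combination works is GPU and PyTorch-version dependent, as the summary notes.

```python
import torch
from opacus import PrivacyEngine

# Model weights and inputs kept in bfloat16 end to end; placeholders throughout.
model = torch.nn.Linear(16, 2).cuda().to(torch.bfloat16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 16, dtype=torch.bfloat16), torch.randint(0, 2, (64,))
)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ew",   # ExpandedWeights; low precision only, per the summary
    poisson_sampling=False,  # turned off only to keep the sketch simple
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in data_loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    # No autocast: the whole forward/backward runs in bfloat16.
    # The loss is computed in fp32 for numerical stability.
    loss = criterion(model(x).float(), y)
    loss.backward()
    optimizer.step()
```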
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D72415906

Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported
Projects
None yet
Development
2 participants