Tags · andrewjrae/pytorch · GitHub

Tags

ciflow/macos/72993

remove redundant computation

ciflow/win/72932

[ZeRO] Add ctor support for multiple param groups (pytorch#72578)

Summary:
Pull Request resolved: pytorch#72578

**Overview**
This adds `ZeroRedundancyOptimizer` constructor support for multiple parameter groups (i.e. passing an `iterable` of `dict`s instead of an `iterable` of `torch.Tensor` as the `parameters` argument) to mirror the API for non-sharded optimizers.

Fixes pytorch#71347 and pytorch#59973.

This also modifies `test_collect_shards()` to skip on ROCm.

**Test Plan**
I adjusted the existing constructor test, and I added a test checking parity among three cases: constructing with two parameter groups up front, constructing with one parameter group and adding the second afterward via `add_param_group()`, and a non-sharded optimizer.

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D34106940

Pulled By: awgu

fbshipit-source-id: 7e70fc0b3cec891646e0698eaedf02ff4354c128
(cherry picked from commit 40f2d45)
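A minimal sketch of the new constructor usage. The single-process `gloo` group, model layers, and learning rates below are illustrative assumptions so the snippet runs standalone; they are not from the PR:

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer

# Assumed single-process setup so the sketch is runnable on its own.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 4))

# An iterable of dicts, mirroring the non-sharded optimizer API: each
# group can carry its own hyperparameters (here, a per-group lr).
param_groups = [
    {"params": model[0].parameters(), "lr": 1e-2},
    {"params": model[1].parameters(), "lr": 1e-3},
]
opt = ZeroRedundancyOptimizer(
    param_groups, optimizer_class=torch.optim.SGD, lr=1e-2
)
assert len(opt.param_groups) == 2

dist.destroy_process_group()
```

Previously the constructor accepted only an iterable of `torch.Tensor`, so per-group hyperparameters had to be added after construction via `add_param_group()`.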

ciflow/macos/73094

Update on "[quant] Add ConvTranspose reference module - Reland pytorch#73031"

Summary:
Add ConvTranspose reference module

Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_2d

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D34352228](https://our.internmc.facebook.com/intern/diff/D34352228)

[ghstack-poisoned]

ciflow/libtorch/73011

Test Test test

ciflow/cuda/73076

Sparse CSR CPU: implement addmm(dense, sparse, sparse) -> dense

This PR adds the ability to multiply two sparse CSR matrices and add the
result to a dense matrix. It uses an MKL function, and only the CPU path
is implemented for now.
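A small sketch of the operation, assuming a PyTorch build with MKL (the matrices below are illustrative):

```python
import torch

# out = beta * D + alpha * (S1 @ S2): dense bias plus the product of
# two sparse CSR matrices, producing a dense result (CPU/MKL path).
D = torch.eye(3)
S1 = torch.tensor([[1., 0., 0.],
                   [0., 2., 0.],
                   [0., 0., 3.]]).to_sparse_csr()
S2 = torch.tensor([[0., 1., 0.],
                   [1., 0., 0.],
                   [0., 0., 1.]]).to_sparse_csr()
out = torch.addmm(D, S1, S2)

# Check against the dense reference computation.
assert torch.allclose(out, D + S1.to_dense() @ S2.to_dense())
```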

ciflow/cuda/73075

Enable CSR inputs for torch.sparse.mm

Previously `torch.sparse.mm` supported only COO and dense inputs.

Derivatives are supported with respect to the dense input for
`sparse_csr x dense -> dense`.

The implementation of `torch.sparse.mm` is now bound directly to the
ATen function.
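A short sketch of the CSR input path and the supported backward, with illustrative tensors:

```python
import torch

# sparse_csr x dense -> dense, with autograd flowing to the dense input.
A = torch.tensor([[1., 0.],
                  [0., 2.]]).to_sparse_csr()
B = torch.randn(2, 3, requires_grad=True)

out = torch.sparse.mm(A, B)   # dense result
out.sum().backward()          # derivative w.r.t. the dense input B

# d(sum(A @ B))/dB = A^T @ ones, i.e. row i scaled by column sums of A.
assert torch.allclose(B.grad, torch.tensor([[1., 1., 1.],
                                            [2., 2., 2.]]))
```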

ciflow/cuda/71756

Merge remote-tracking branch 'upstream/viable/strict' into lu-solve-batch-expand

ciflow/binaries/73011

Test Test test

ciflow/binaries/72795

Merge branch 'master' of https://github.com/mszhanyi/pytorch into zhanyi/ignoreioenv

ciflow/binaries/72306

Update on "Add BUILD_LAZY_CUDA_LINALG option"

When enabled, the build generates a `torch_cuda_linalg` library, which depends on cuSOLVER and MAGMA and registers dynamic bindings to it from LinearAlgebraStubs.

Differential Revision: [D33992795](https://our.internmc.facebook.com/intern/diff/D33992795)

[ghstack-poisoned]
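As a sketch, the option would be toggled at build time; the env-var form of the standard PyTorch source build shown below is an assumption, not part of this diff:

```shell
# Build PyTorch from source with the CUDA linalg backend split out into
# the separately loaded torch_cuda_linalg library.
BUILD_LAZY_CUDA_LINALG=1 python setup.py develop
```

The intent, per the summary, is that the main CUDA library no longer links cuSOLVER/MAGMA directly; the stubs load `torch_cuda_linalg` lazily on first use.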