Tags · ROCm/pytorch · GitHub
Tags

v2.3.0

Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
[release/2.3] pin sympy==1.12.1 and skip pytorch-nightly installation… (#1529)

This PR pins sympy==1.12.1 in the .ci/docker/requirements-ci.txt file.
It also skips the pytorch-nightly installation in docker images.

Installing pytorch-nightly is needed to prefetch the mobilenet_v2 and
v3 models for some tests. This came from 85bd6bc.
Models are downloaded on first use into /root/.cache/torch/hub.
But the pytorch-nightly installation also overrides the
.ci/docker/requirements-ci.txt settings and upgrades some python
packages (sympy from 1.12.0 to 1.13.0), which causes several
'dynamic_shapes' tests to fail.
Skipping the model prefetch affects these tests, which still pass
without errors (but **internet access is required**):

- python test/mobile/model_test/gen_test_model.py mobilenet_v2
- python test/quantization/eager/test_numeric_suite_eager.py -k test_mobilenet_v3

Issue ROCm/frameworks-internal#8772

Also, in case of issues, these models can be prefetched after building
pytorch and before testing.
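A minimal sketch of what such a manual prefetch step could look like. The `pytorch/vision` hub repo path and the `mobilenet_v3_small` variant name are assumptions for illustration; this message only names mobilenet_v2 and "v3 models":

```python
import os

def hub_cache_dir(home="/root"):
    # Default torch.hub cache location; matches the /root/.cache/torch/hub
    # path mentioned above when the CI container runs as root.
    return os.path.join(home, ".cache", "torch", "hub")

def prefetch_models(names=("mobilenet_v2", "mobilenet_v3_small")):
    # Requires internet access, as the commit message notes. The model
    # names beyond mobilenet_v2 are illustrative assumptions.
    import torch
    for name in names:
        torch.hub.load("pytorch/vision", name)

print(hub_cache_dir())  # /root/.cache/torch/hub
```

Running `prefetch_models()` once after the build would populate the hub cache so the tests above do not need to download anything at test time.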

(cherry picked from commit b92b34d)
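For reference, the pin this message describes amounts to a single constraint line in .ci/docker/requirements-ci.txt (a sketch; the file's surrounding entries are omitted):

```
sympy==1.12.1
```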

v2.1.2

[release/2.1] pin sympy==1.12.1 and skip pytorch-nightly installation… (#1530)

This PR pins sympy==1.12.1 in the .ci/docker/requirements-ci.txt file.
It also skips the pytorch-nightly installation in docker images.

Installing pytorch-nightly is needed to prefetch the mobilenet_v2 and
v3 models for some tests. This came from 85bd6bc.
Models are downloaded on first use into /root/.cache/torch/hub.
But the pytorch-nightly installation also overrides the
.ci/docker/requirements-ci.txt settings and upgrades some python
packages (sympy from 1.12.0 to 1.13.0), which causes several
'dynamic_shapes' tests to fail.
Skipping the model prefetch affects these tests, which still pass
without errors (but **internet access is required**):

- python test/mobile/model_test/gen_test_model.py mobilenet_v2
- python test/quantization/eager/test_numeric_suite_eager.py -k test_mobilenet_v3

Issue ROCm/frameworks-internal#8772

Also, in case of issues, these models can be prefetched after building
pytorch and before testing.

(cherry picked from commit b92b34d)

v2.2.1

[release/2.2] Pin the required package versions (#1522)

merge-20230301-prev

Before merging IFU-2023-03-01

rocm2.8

PyTorch wheel compiled with ROCm 2.8

enable_distributed_nccl_before_memcpy_fix

Fix Caffe2 CI job for new Windows AMI

Summary: Pull Request resolved: pytorch#21410

Differential Revision: D15660575

Pulled By: ezyang

fbshipit-source-id: cfc0f325b0fbc22282686a4d12c7a53236d973d4

v0.3.1

Scopes 0.3.1 backport (pytorch#5153)

* Introduce scopes during tracing (pytorch#3016)

* Fix segfault during ONNX export

* Further fix to tracing scope (pytorch#4558)

* Set missing temporary scope in callPySymbolicMethod

* Use expected traces in all scope tests

* Fix tracking of tracing scopes during ONNX pass (pytorch#4524)

* Fix tracking of tracing scopes during ONNX pass

* Use ResourceGuard to manage setting a temporary current scope in Graph

* Add tests for ONNX pass scopes

* Remove unused num_classes argument

* Expose node scopeName to python (pytorch#4200)

* Inherit JIT scopes when cloning only when it's correct

It's correct only when the new graph owns the same scope tree
as the original one. We can end up with dangling pointers otherwise.

* Fixes after cherry-picking, still one test to go

* Fix for last failing test after scope cherry-pick

* Fix linting issue

v0.3.0

Backport transposes optimization to v0.3.0 (pytorch#3994)

* Optimizer: optimize transposes in variety of circumstances (pytorch#3509)

* Optimizer: Optimize transposes in variety of circumstances

- No-op transposes
- Consecutive transposes (fuse them)
- Transposes into Gemm (fuse them into transA/transB parameter)

* touch up out of date comment

* Backporting optimizer changes
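The no-op and consecutive-transpose fusions described above can be sketched on plain permutation lists (the function names are illustrative, not the optimizer's actual API; the Gemm transA/transB fusion is not modeled here):

```python
# Sketch of the transpose fusions described above, operating on
# permutation lists rather than real ONNX graph nodes.

def fuse_transposes(p1, p2):
    """Compose two consecutive Transpose perms into a single perm.

    Output axis i of the second transpose reads axis p2[i] of the
    intermediate tensor, which is axis p1[p2[i]] of the original input.
    """
    return [p1[i] for i in p2]

def is_noop(perm):
    """A Transpose whose perm is the identity can be removed entirely."""
    return perm == list(range(len(perm)))

# Two identical swaps of the first two axes cancel out:
combined = fuse_transposes([1, 0, 2], [1, 0, 2])
print(combined, is_noop(combined))  # [0, 1, 2] True
```

When fusion yields an identity perm, both Transpose nodes can be dropped, which is how the "consecutive" and "no-op" cases listed above compose.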

v0.2.0

fix static linkage and make THD statically linked

v0.1.12

version bump
