functionalization: add native fill() op by bdhirsh · Pull Request #76084 · pytorch/pytorch · GitHub
functionalization: add native fill() op #76084


Closed
wants to merge 9 commits

Conversation

bdhirsh (Contributor) commented Apr 20, 2022

Addresses the `fill_` issue in pytorch/torchdynamo#88.

Adds out-of-place `fill.Tensor` and `fill.Scalar` ops so that `fill_()` can be properly functionalized.

I ended up giving `fill` a derivative formula because I think we want to consider it a "base op" as part of tracing. The decomposition I wrote for it just calls back into `fill_()`, so we don't want to run that decomposition as part of tracing.
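The core idea can be sketched with a toy model (illustrative only, not PyTorch internals): functionalization replaces an in-place op like `fill_()` with its out-of-place counterpart and rebinds the mutated reference, so the resulting trace contains only pure ops.

```python
# Toy sketch of functionalization (not PyTorch internals): the in-place
# fill_() becomes a call to the out-of-place fill() plus a rebinding of
# the name, leaving only pure (non-mutating) ops in the trace.

def fill(tensor, value):
    # Out-of-place: return a fresh "tensor" with every element set to value.
    return [value] * len(tensor)

def functionalized_fill_(env, name, value):
    # The mutation becomes a pure call plus a rebinding of the name.
    env[name] = fill(env[name], value)

env = {"x": [1, 2, 3]}
functionalized_fill_(env, "x", 0)
print(env["x"])  # [0, 0, 0]
```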

Stack from ghstack:
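For intuition, a plausible backward rule for an out-of-place `fill(self, value)` can be sketched as follows (a hedged illustration, not the actual `derivatives.yaml` entry): the output ignores `self`'s values, so the gradient with respect to `self` is zero, and every output element equals `value`, so the gradient with respect to a scalar-like `value` is the sum of the incoming gradient.

```python
# Hedged sketch of a plausible backward rule for fill(self, value) -> Tensor
# (illustrative only, not PyTorch's actual derivative formula).

def fill_backward(grad_output):
    grad_self = [0.0] * len(grad_output)  # output does not depend on self's values
    grad_value = sum(grad_output)         # value fans out to every output element
    return grad_self, grad_value

grad_self, grad_value = fill_backward([0.1, 0.2, 0.3])
print(grad_self)  # [0.0, 0.0, 0.0]
```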

facebook-github-bot (Contributor) commented Apr 20, 2022

💊 CI failures summary and remediations

As of commit 1788d58 (more details on the Dr. CI page):


🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (1/1)

Step: "Test"

2022-04-25T22:06:27.9921699Z Generated XML report: test-reports/python-unittest/test_ops_gradients/TEST-TestGradientsCPU-20220425215722.xml
2022-04-25T22:06:28.3805106Z Running test_ops_jit ... [2022-04-25 22:06:28.380122]
2022-04-25T22:06:28.3805629Z Executing ['/opt/conda/bin/python', 'test_ops_jit.py', '-v', '--import-slow-tests', '--import-disabled-tests'] ... [2022-04-25 22:06:28.380209]
2022-04-25T22:06:28.3832512Z OMP: Error #15: Initializing libiomp5.so, but found unknown library already initialized.
2022-04-25T22:06:28.3833614Z OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
2022-04-25T22:06:28.4754993Z Traceback (most recent call last):
2022-04-25T22:06:28.4755264Z   File "test/run_test.py", line 1063, in <module>
2022-04-25T22:06:28.4757844Z     main()
2022-04-25T22:06:28.4758128Z   File "test/run_test.py", line 1041, in main
2022-04-25T22:06:28.4760939Z     raise RuntimeError(err_message)
2022-04-25T22:06:28.4761276Z RuntimeError: test_ops_jit failed! Received signal: SIGIOT
2022-04-25T22:06:28.7271518Z 
2022-04-25T22:06:28.7272072Z real	9m10.071s
2022-04-25T22:06:28.7272595Z user	28m21.387s
2022-04-25T22:06:28.7272888Z sys	0m24.780s
2022-04-25T22:06:28.7273151Z + cleanup
2022-04-25T22:06:28.7275236Z + retcode=1
2022-04-25T22:06:28.7275557Z + set +x
2022-04-25T22:06:28.7309110Z ##[error]Process completed with exit code 1.
2022-04-25T22:06:28.7353026Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-04-25T22:06:28.7353291Z with:

❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See GitHub Actions build pull / pytorch-xla-linux-bionic-py3.7-clang8 / build (1/1)

Step: "Build" ❄️

2022-04-25T21:46:20.4119772Z ++ [[ pytorch-xla-linux-bionic-py3.7-clang8 == *linux-trusty-py3.6-gcc7* ]]
2022-04-25T21:46:20.4120167Z ++ BUILD_TEST_LIBTORCH=0
2022-04-25T21:46:20.4120375Z ++ [[ '' == *xla* ]]
2022-04-25T21:46:20.4120671Z ++ [[ pytorch-xla-linux-bionic-py3.7-clang8 == *centos* ]]
2022-04-25T21:46:20.4121099Z ++ [[ pytorch-xla-linux-bionic-py3.7-clang8 == *linux-bionic* ]]
2022-04-25T21:46:20.4121346Z ++ which conda
2022-04-25T21:46:20.4128376Z /opt/conda/bin/conda
2022-04-25T21:46:20.4128876Z ++ conda install -q -y cmake
2022-04-25T21:46:22.9102959Z Collecting package metadata (current_repodata.json): ...working... failed
2022-04-25T21:46:22.9104130Z 
2022-04-25T21:46:22.9104836Z CondaHTTPError: HTTP 409 CONFLICT for url <https://repo.anaconda.com/pkgs/r/linux-64/current_repodata.json>
2022-04-25T21:46:22.9105389Z Elapsed: 00:00.042786
2022-04-25T21:46:22.9105639Z CF-RAY: 701a45610c809c18-IAD
2022-04-25T21:46:22.9105760Z 
2022-04-25T21:46:22.9105884Z An HTTP error occurred when trying to retrieve this URL.
2022-04-25T21:46:22.9106184Z HTTP errors are often intermittent, and a simple retry will get you on your way.
2022-04-25T21:46:22.9106363Z 
2022-04-25T21:46:22.9106533Z If your current network has https://www.anaconda.com blocked, please file
2022-04-25T21:46:22.9106815Z a support request with your network engineering team.
2022-04-25T21:46:22.9106973Z 
2022-04-25T21:46:22.9107138Z 'https://repo.anaconda.com/pkgs/r/linux-64'

This comment was automatically generated by Dr. CI.

@@ -98,6 +98,8 @@
'replace_', # only used by the functionalization pass, doesn't need to be exposed to python
'zero', # only used by the functionalization pass, doesn't need to be exposed to python
'copy', # only used by the functionalization pass
'fill.Tensor', # only used by the functionalization pass
'fill.Scalar', # only used by the functionalization pass
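The effect of a skip-list like the one in the hunk above can be sketched as follows (names are hypothetical, not the actual `gen_python_functions.py` logic): ops on the list simply get no Python binding generated.

```python
# Hypothetical sketch of how a binding skip-list is typically consumed
# (illustrative only, not PyTorch's actual codegen logic).
SKIP_PYTHON_BINDINGS = {
    'replace_', 'zero', 'copy', 'fill.Tensor', 'fill.Scalar',
}

def should_generate_binding(op_name):
    # Ops used only by the functionalization pass get no Python binding.
    return op_name not in SKIP_PYTHON_BINDINGS

print(should_generate_binding('fill.Tensor'))  # False
print(should_generate_binding('add.Tensor'))   # True
```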
bdhirsh (Contributor Author) commented:

This info should probably get moved into native_functions.yaml at some point

@bdhirsh bdhirsh requested a review from ezyang April 20, 2022 13:46
bdhirsh added 4 commits April 21, 2022 16:36
bdhirsh (Contributor Author) commented Apr 25, 2022

@pytorchbot merge this please

pytorchmergebot (Collaborator) commented:

Merge failed due to Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x b36afa294f5b0e0c49b910a67f39a4ed6027cc74 returned non-zero exit code 1
Auto-merging aten/src/ATen/native/Fill.cpp
Auto-merging aten/src/ATen/native/native_functions.yaml
Auto-merging test/test_functionalization.py
CONFLICT (content): Merge conflict in test/test_functionalization.py
Auto-merging tools/autograd/derivatives.yaml
Auto-merging tools/autograd/gen_python_functions.py
CONFLICT (content): Merge conflict in tools/autograd/gen_python_functions.py
error: could not apply b36afa2... functionalization: add native fill() op
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".

Raised by https://github.com/pytorch/pytorch/actions/runs/2222996651

@bdhirsh bdhirsh reopened this Apr 25, 2022
facebook-github-bot pushed a commit that referenced this pull request Apr 26, 2022
Summary:
Pull Request resolved: #76084

Approved by: https://github.com/ezyang

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/ea5209c9fd78a34849600875d0f3f5b994201833

Reviewed By: osalpekar

Differential Revision: D35938183

Pulled By: bdhirsh

fbshipit-source-id: 73c318c03e49671cdaee997807e2dba2a0c57198
@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/210/head branch April 29, 2022 14:17
5 participants