fix CompositeImplicitAutograd ops improperly labeled by bdhirsh · Pull Request #69169 · pytorch/pytorch · GitHub

fix CompositeImplicitAutograd ops improperly labeled #69169


Closed
wants to merge 6 commits

Conversation

Contributor
@bdhirsh bdhirsh commented Nov 30, 2021

I checked derivatives.yaml, and it doesn't look like logical_not/and/xor are meant to work with autograd. Those three ops are currently registered as CompositeImplicitAutograd, though, which implies that they do work with autograd. This PR updates them to be CompositeExplicitAutograd instead.

This came up because I'm trying to improve the error checking in external backend codegen, and because these ops are improperly labeled they incorrectly trigger my new error checks for XLA (see #67090).
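
For context, all three of these ops produce boolean outputs, so there is nothing for autograd to differentiate. A quick sanity check (plain PyTorch, shown only for illustration; not part of this PR):

```
import torch

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)

# All three ops return bool tensors, so their outputs can never require grad.
print(torch.logical_not(x).dtype)             # torch.bool
print(torch.logical_and(x, y).requires_grad)  # False
print(torch.logical_xor(x, y).requires_grad)  # False
```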

Stack from ghstack:

Differential Revision: D32739976

@pytorch-probot
pytorch-probot bot commented Nov 30, 2021
CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/ceb78497ffe70ce794afe6644ae81d5537198860/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor
facebook-github-bot commented Nov 30, 2021


💊 CI failures summary and remediations

As of commit ceb7849 (more details on the Dr. CI page):


  • 7/7 failures introduced in this PR

🕵️ 7 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.6-clang9 / test (noarch, 1, 1, linux.2xlarge) (1/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T17:10:52.3276224Z RuntimeError: test_ops failed!
2021-12-09T17:10:49.5119933Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsMETA-20211209165441.xml
2021-12-09T17:10:49.5876775Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20211209165441.xml
2021-12-09T17:10:49.6428339Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsCPU-20211209165441.xml
2021-12-09T17:10:49.7317140Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitMETA-20211209165441.xml
2021-12-09T17:10:49.8096258Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsMETA-20211209165441.xml
2021-12-09T17:10:52.3271309Z Traceback (most recent call last):
2021-12-09T17:10:52.3271993Z   File "test/run_test.py", line 1057, in <module>
2021-12-09T17:10:52.3273031Z     main()
2021-12-09T17:10:52.3273647Z   File "test/run_test.py", line 1035, in main
2021-12-09T17:10:52.3275699Z     raise RuntimeError(err_message)
2021-12-09T17:10:52.3276224Z RuntimeError: test_ops failed!
2021-12-09T17:10:52.5784813Z 
2021-12-09T17:10:52.5785800Z real	16m15.633s
2021-12-09T17:10:52.5786221Z user	31m18.806s
2021-12-09T17:10:52.5786493Z sys	1m2.599s
2021-12-09T17:10:52.5786756Z + cleanup
2021-12-09T17:10:52.5787370Z + retcode=1
2021-12-09T17:10:52.5787823Z + set +x
2021-12-09T17:10:52.5826035Z ##[error]Process completed with exit code 1.

See GitHub Actions build linux-bionic-py3.6-clang9 / test (default, 1, 2, linux.2xlarge) (2/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T17:09:24.5153630Z RuntimeError: test_ops failed!
2021-12-09T17:09:20.0501591Z Generating XML reports...
2021-12-09T17:09:20.6094417Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20211209165450.xml
2021-12-09T17:09:21.4505394Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20211209165450.xml
2021-12-09T17:09:21.5300874Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20211209165450.xml
2021-12-09T17:09:21.5880602Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsCPU-20211209165450.xml
2021-12-09T17:09:24.5143745Z Traceback (most recent call last):
2021-12-09T17:09:24.5151253Z   File "test/run_test.py", line 1057, in <module>
2021-12-09T17:09:24.5151905Z     main()
2021-12-09T17:09:24.5152772Z   File "test/run_test.py", line 1035, in main
2021-12-09T17:09:24.5153200Z     raise RuntimeError(err_message)
2021-12-09T17:09:24.5153630Z RuntimeError: test_ops failed!
2021-12-09T17:09:24.7459210Z 
2021-12-09T17:09:24.7459818Z real	14m38.965s
2021-12-09T17:09:24.7460251Z user	30m13.668s
2021-12-09T17:09:24.7460579Z sys	1m3.817s
2021-12-09T17:09:24.7460854Z + cleanup
2021-12-09T17:09:24.7461135Z + retcode=1
2021-12-09T17:09:24.7461392Z + set +x
2021-12-09T17:09:24.7499431Z ##[error]Process completed with exit code 1.

See GitHub Actions build linux-xenial-py3.6-clang7-asan / test (default, 1, 2, linux.2xlarge) (3/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T18:55:09.9711033Z RuntimeError: test_ops failed!
2021-12-09T18:54:52.8477215Z Generating XML reports...
2021-12-09T18:54:53.5813029Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20211209165954.xml
2021-12-09T18:54:54.5927010Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20211209165954.xml
2021-12-09T18:54:54.6836274Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20211209165954.xml
2021-12-09T18:54:54.7599051Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsCPU-20211209165954.xml
2021-12-09T18:55:09.9700160Z Traceback (most recent call last):
2021-12-09T18:55:09.9701313Z   File "test/run_test.py", line 1057, in <module>
2021-12-09T18:55:09.9705348Z     main()
2021-12-09T18:55:09.9705984Z   File "test/run_test.py", line 1035, in main
2021-12-09T18:55:09.9710290Z     raise RuntimeError(err_message)
2021-12-09T18:55:09.9711033Z RuntimeError: test_ops failed!
2021-12-09T18:55:10.3621522Z + cleanup
2021-12-09T18:55:10.3621957Z + retcode=1
2021-12-09T18:55:10.3622257Z + set +x
2021-12-09T18:55:10.3657344Z ##[error]Process completed with exit code 1.

See GitHub Actions build win-vs2019-cuda11.3-py3 / test (force_on_cpu, 1, 1, windows.4xlarge) (4/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T19:00:30.0070871Z RuntimeError: test_ops failed!
2021-12-09T19:00:26.5932505Z Generating XML reports...
2021-12-09T19:00:26.5933197Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestCommonCPU-20211209184009.xml
2021-12-09T19:00:26.5934432Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestGradientsCPU-20211209184009.xml
2021-12-09T19:00:26.5935428Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestJitCPU-20211209184009.xml
2021-12-09T19:00:26.5936408Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestMathBitsCPU-20211209184009.xml
2021-12-09T19:00:30.0067608Z Traceback (most recent call last):
2021-12-09T19:00:30.0069276Z   File "run_test.py", line 1057, in <module>
2021-12-09T19:00:30.0069617Z     main()
2021-12-09T19:00:30.0070045Z   File "run_test.py", line 1035, in main
2021-12-09T19:00:30.0070464Z     raise RuntimeError(err_message)
2021-12-09T19:00:30.0070871Z RuntimeError: test_ops failed!
2021-12-09T19:00:30.2261793Z 
2021-12-09T19:00:30.2262242Z (base) C:\actions-runner\_work\pytorch\pytorch\test>popd
2021-12-09T19:00:30.2266390Z 
2021-12-09T19:00:30.2266844Z (base) C:\actions-runner\_work\pytorch\pytorch>if ERRORLEVEL 1 exit /b 1 
2021-12-09T19:00:30.2302491Z + cleanup
2021-12-09T19:00:30.2302880Z + retcode=1
2021-12-09T19:00:30.2303149Z + set +x
2021-12-09T19:00:30.2356271Z ##[error]Process completed with exit code 1.

See GitHub Actions build linux-xenial-py3.6-gcc7 / test (default, 1, 2, linux.2xlarge) (5/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T17:08:44.8639094Z RuntimeError: test_ops failed!
2021-12-09T17:08:40.4790879Z Generating XML reports...
2021-12-09T17:08:41.0301095Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20211209165347.xml
2021-12-09T17:08:41.8718665Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20211209165347.xml
2021-12-09T17:08:41.9503793Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20211209165347.xml
2021-12-09T17:08:42.0079353Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsCPU-20211209165347.xml
2021-12-09T17:08:44.8633914Z Traceback (most recent call last):
2021-12-09T17:08:44.8634475Z   File "test/run_test.py", line 1057, in <module>
2021-12-09T17:08:44.8636249Z     main()
2021-12-09T17:08:44.8636601Z   File "test/run_test.py", line 1035, in main
2021-12-09T17:08:44.8638647Z     raise RuntimeError(err_message)
2021-12-09T17:08:44.8639094Z RuntimeError: test_ops failed!
2021-12-09T17:08:45.1278916Z + cleanup
2021-12-09T17:08:45.1279411Z + retcode=1
2021-12-09T17:08:45.1280375Z + set +x
2021-12-09T17:08:45.1314076Z ##[error]Process completed with exit code 1.

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 1, 2, linux.2xlarge) (6/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T17:08:58.3536818Z RuntimeError: test_ops failed!
2021-12-09T17:08:54.0105402Z Generating XML reports...
2021-12-09T17:08:54.5658458Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestCommonCPU-20211209165337.xml
2021-12-09T17:08:55.4120697Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestGradientsCPU-20211209165337.xml
2021-12-09T17:08:55.4915707Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestJitCPU-20211209165337.xml
2021-12-09T17:08:55.5503075Z Generated XML report: test-reports/python-unittest/test_ops/TEST-TestMathBitsCPU-20211209165337.xml
2021-12-09T17:08:58.3531838Z Traceback (most recent call last):
2021-12-09T17:08:58.3532393Z   File "test/run_test.py", line 1057, in <module>
2021-12-09T17:08:58.3534038Z     main()
2021-12-09T17:08:58.3534417Z   File "test/run_test.py", line 1035, in main
2021-12-09T17:08:58.3536340Z     raise RuntimeError(err_message)
2021-12-09T17:08:58.3536818Z RuntimeError: test_ops failed!
2021-12-09T17:08:58.6114829Z + cleanup
2021-12-09T17:08:58.6115170Z + retcode=1
2021-12-09T17:08:58.6116249Z + set +x
2021-12-09T17:08:58.6152221Z ##[error]Process completed with exit code 1.

See GitHub Actions build win-vs2019-cpu-py3 / test (default, 2, 2, windows.4xlarge) (7/7)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-12-09T18:21:04.3075947Z RuntimeError: test_ops failed!
2021-12-09T18:21:01.1979051Z Generating XML reports...
2021-12-09T18:21:01.1979726Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestCommonCPU-20211209180152.xml
2021-12-09T18:21:01.1980769Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestGradientsCPU-20211209180152.xml
2021-12-09T18:21:01.1981843Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestJitCPU-20211209180152.xml
2021-12-09T18:21:01.1982817Z Generated XML report: test-reports\dist-gloo\test_ops\TEST-TestMathBitsCPU-20211209180152.xml
2021-12-09T18:21:04.3072830Z Traceback (most recent call last):
2021-12-09T18:21:04.3074248Z   File "run_test.py", line 1057, in <module>
2021-12-09T18:21:04.3074602Z     main()
2021-12-09T18:21:04.3075090Z   File "run_test.py", line 1035, in main
2021-12-09T18:21:04.3075557Z     raise RuntimeError(err_message)
2021-12-09T18:21:04.3075947Z RuntimeError: test_ops failed!
2021-12-09T18:21:04.5009251Z 
2021-12-09T18:21:04.5009715Z (base) C:\actions-runner\_work\pytorch\pytorch\test>popd
2021-12-09T18:21:04.5014477Z 
2021-12-09T18:21:04.5014914Z (base) C:\actions-runner\_work\pytorch\pytorch>if ERRORLEVEL 1 exit /b 1 
2021-12-09T18:21:04.5047497Z + cleanup
2021-12-09T18:21:04.5047857Z + retcode=1
2021-12-09T18:21:04.5048111Z + set +x
2021-12-09T18:21:04.5192484Z ##[error]Process completed with exit code 1.

This comment was automatically generated by Dr. CI.

@bdhirsh
Contributor Author
bdhirsh commented Nov 30, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@@ -922,10 +922,14 @@
- func: logical_not(Tensor self) -> Tensor
  device_check: NoCheck   # TensorIterator
  variants: function, method
  dispatch:
    CompositeExplicitAutograd: logical_not
Contributor

You also need to add a backward formula that says the input is not differentiable, right?

e.g.

- name: new_empty(Tensor self, int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  self: non_differentiable
  output_differentiability: [False]

Contributor Author

ah, right. thanks!

Contributor

I feel like we should do this by default; it's a common thing folks miss.

Contributor
@soulitzer soulitzer Nov 30, 2021

Do you mean to add this because we don't want to raise an error if the inputs require grad? Is it possible for it not to raise an error here even without the formula, because the output is bool... i.e., the number of differentiable outputs is 0?

Edit: or perhaps, because it doesn't have any differentiability info at all, it would automatically skip to adding the torch checks without first checking the dtypes of the outputs.

Contributor

Without this PR, logical_not does not raise an error when passed inputs that require_grad:

>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> torch.logical_not(x)
tensor([False, False, False])

So we probably want to replicate that behavior, and the only way I know of to do that is to add the backward formula (otherwise, the operator errors out, right?).

Contributor

Ah yeah, you are definitely right here. Codegen has no way of checking the dtype of the output, so it can only rely on output_differentiability, which can only exist if the formula exists. More generally, though, the logic is slightly more involved.
For example, new_zeros is composite but doesn't have a formula, and shouldn't error because it is listed in DONT_REQUIRE_DERIVATIVE.
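
To make the new_zeros behavior above concrete (a small illustration, not part of this PR):

```
import torch

base = torch.randn(2, requires_grad=True)
out = base.new_zeros(4)   # composite op with no derivative formula, listed in DONT_REQUIRE_DERIVATIVE
print(out.requires_grad)  # False -- no error; the output simply doesn't track gradients
```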

Collaborator

Yes, we want to make sure that these do not fail when passed an input that requires gradient. Only the output should not require grad.

Comment on lines +982 to +983
  dispatch:
    CompositeExplicitAutograd: logical_or_
Contributor
@zou3519 zou3519 Nov 30, 2021

(I think) This PR fixes the root cause of the composite compliance test failure for these operators:

# Pre-existing condition; Needs to be fixed
DecorateInfo(unittest.expectedFailure, 'TestCommon', 'test_composite_compliance'),

You'll probably have to remove those expected failures.

bdhirsh added a commit that referenced this pull request Nov 30, 2021
ghstack-source-id: 9d53e6d
Pull Request resolved: #69169
@bdhirsh
Contributor Author
bdhirsh commented Nov 30, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

  output_differentiability: [False]

- name: logical_and(Tensor self, Tensor other) -> Tensor
  output_differentiability: [False]
Contributor Author

@soulitzer what's the best way to declare these ops as non-differentiable? I guess the best-case scenario would be if we can error out immediately in the forward pass if someone tries to call torch.logical_and(torch.ones(2, requires_grad=True)).

I'm getting codegen errors that say RuntimeError: output_differentiability=False for inplace operation (version_counter won't get updated). I see from the docs that another option is to add these ops to DONT_REQUIRE_DERIVATIVE in gen_variable_type?

Contributor

It is pretty counterintuitive but declaring these ops as non-differentiable actually makes it so that they do not error when someone passes in a tensor that has requires_grad=True. The default behavior when no formula is specified is actually to error immediately in the forward pass.

A good example of why that is the case is eq: we wouldn't want users to have to detach every time they want to compare two tensors. Maybe it is less clear what we really want to do in the logical_{and,or} cases, but adding it would be good to avoid bc-breaking issues.
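
For reference, eq already has exactly the behavior being described: comparing tensors that require grad works fine, and the bool output doesn't track gradients (a minimal check, shown only for illustration; not from this thread):

```
import torch

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
eq = torch.eq(a, b)                # no error, even though both inputs require grad
print(eq.dtype, eq.requires_grad)  # torch.bool False
```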

I'm actually wondering how outdated that error is, since we update the version counter in the ADInplaceOrView kernel now. Perhaps we should just remove it?

It is also weird that eq does a similar thing (i.e., it has an in-place variant) but doesn't run into the error. I'm not sure why eq is able to work, but if you look into that perhaps you can emulate how it is done there.

Adding to DONT_REQUIRE_DERIVATIVE should also work, so I guess we can do that as an easy workaround for now.

Contributor Author

A good example of why that is the case is eq: we wouldn't want users to have to detach every time they want to compare two tensors.

Ah okay, then keeping the current behavior of not immediately erroring out in the forward pass sounds reasonable, to not be BC-breaking.

Adding to DONT_REQUIRE_DERIVATIVE should also work, so I guess we can do that as a easy workaround for now.

I'll try this for now, thanks! I looked at eq and I don't think I'm doing anything differently for logical_and/xor/not.

bdhirsh added a commit that referenced this pull request Dec 1, 2021
ghstack-source-id: a3b6877
Pull Request resolved: #69169
@bdhirsh
Contributor Author
bdhirsh commented Dec 1, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bdhirsh
Contributor Author
bdhirsh commented Dec 2, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

bdhirsh added a commit that referenced this pull request Dec 2, 2021
ghstack-source-id: 61cf798
Pull Request resolved: #69169
@bdhirsh
Contributor Author
bdhirsh commented Dec 2, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bdhirsh
Contributor Author
bdhirsh commented Dec 9, 2021

@soulitzer @zou3519 I think this is ready for another round of review

bdhirsh added a commit that referenced this pull request Dec 9, 2021
ghstack-source-id: 469db1b
Pull Request resolved: #69169
@bdhirsh
Contributor Author
bdhirsh commented Dec 9, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor
@soulitzer soulitzer left a comment

LGTM

… for a composite op with no derivative formula"

Error if someone tries to modify the yaml by adding a composite op that would result in losing autograd for that op (because we can no longer use the composite formula).

This fails locally (correctly) on master because `logical_and` is improperly labeled as `CompositeExplicitAutograd` but doesn't have a derivative formula. So I'll merge this after #69169 goes through.

Failure error:
```
AssertionError:
Found an entry for 'logical_not' in the yaml. This operator is composite, which means that by default it will decompose into base operators. If decomposing is fine, then you can simply remove the entry from the yaml file. If you wish to provide an explicit kernel directly for this composite op, you can do so by:
 (a) moving the operator name under the 'autograd' entry in the yaml file
 (b) writing a custom Autograd Function for the operator
```

[ghstack-poisoned]
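
Roughly, the check described above amounts to something like the following sketch (hypothetical Python written for illustration; the function name, arguments, and call below are made up and are not the actual codegen code):

```
# Hypothetical sketch: reject backend-yaml entries for composite ops unless the
# op has been moved under the 'autograd' section of the yaml.
def check_no_composite_in_backend_yaml(supported_ops, autograd_ops, composite_implicit_ops):
    for op in supported_ops:
        if op in composite_implicit_ops and op not in autograd_ops:
            raise AssertionError(
                f"Found an entry for '{op}' in the yaml. This operator is composite, "
                "which means that by default it will decompose into base operators."
            )

# Example: this would raise, because logical_not is CompositeImplicitAutograd upstream.
check_no_composite_in_backend_yaml({"logical_not"}, set(), {"logical_not"})
```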
bdhirsh added a commit that referenced this pull request Dec 10, 2021
… op with no derivative formula"

PaliC added a commit that referenced this pull request Dec 10, 2021
Summary:
Pull Request resolved: #69169

I checked `derivatives.yaml`, and it doesn't look like `logical_not/and/xor` are meant to work with autograd. Those 3 ops are currently set as `CompositeImplicitAutograd` though, implying that they do work with autograd. Updating them to be CompositeExplicitAutograd instead.

This came up because I'm trying to improve the error checking in external backend codegen, and these ops being improperly labeled incorrectly triggers my new error checks for XLA (see #67090)

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D32739976

Pulled By: bdhirsh

fbshipit-source-id: a756dd9e0b87276368063c8f4934be59dca371d3
desertfire pushed a commit that referenced this pull request Dec 13, 2021
desertfire pushed a commit that referenced this pull request Dec 14, 2021
bdhirsh added a commit that referenced this pull request Dec 14, 2021
… for a composite op with no derivative formula"

Differential Revision: [D33018607](https://our.internmc.facebook.com/intern/diff/D33018607)

[ghstack-poisoned]
bdhirsh added a commit that referenced this pull request Dec 14, 2021
… op with no derivative formula"

@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/165/head branch December 14, 2021 15:16
@facebook-github-bot
Contributor

This pull request has been reverted by 41c344d. To re-land this change, follow these steps.
