Back out "DispatchKeySet perf improvements" by bdhirsh · Pull Request #74858 · pytorch/pytorch · GitHub

Back out "DispatchKeySet perf improvements" #74858


Closed
bdhirsh wants to merge 1 commit from gh/bdhirsh/189/head

Conversation

bdhirsh (Contributor) commented Mar 28, 2022

Stack from ghstack (oldest at bottom):

Original commit changeset: c7695e16dba3

Original Phabricator Diff: D34227615

The Milan Assistant is crashing, see P489979944

After backing out the changes D34227615 and D34227616, it works fine now

Differential Revision: [D35192317](https://our.internmc.facebook.com/intern/diff/D35192317/)

[ghstack-poisoned]
facebook-github-bot (Contributor) commented Mar 28, 2022


💊 CI failures summary and remediations

As of commit 49af97d (more details on the Dr. CI page):


  • 3/4 failures introduced in this PR
  • 1/4 broken upstream at merge base c96ced8 on Mar 28 from 6:46am to 8:46am

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build pull / linux-bionic-rocm4.5-py3.7 / test (default, 2, 2, linux.rocm.gpu) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-28T23:56:07.6724682Z FAIL [0.013s]: test_caching_pinned_memory (__main__.TestCuda)
2022-03-28T23:56:07.6642876Z   test_scatter_cpu_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6649003Z   test_scatter_cpu_neg_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6655232Z   test_scatter_cpu_sizes (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6663826Z   test_scatter_gpu (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6670303Z   test_scatter_gpu_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6677927Z   test_scatter_gpu_neg_dim (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6684770Z   test_scatter_gpu_sizes (__main__.TestCudaComm) ... skip: only one GPU detected (0.001s)
2022-03-28T23:56:07.6722475Z   test_scatter_namedtuple (__main__.TestCudaComm) ... skip: Test needs multiple GPUs (0.003s)
2022-03-28T23:56:07.6723166Z 
2022-03-28T23:56:07.6723797Z ======================================================================
2022-03-28T23:56:07.6724682Z FAIL [0.013s]: test_caching_pinned_memory (__main__.TestCuda)
2022-03-28T23:56:07.6726143Z ----------------------------------------------------------------------
2022-03-28T23:56:07.6727043Z Traceback (most recent call last):
2022-03-28T23:56:07.6728427Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1780, in wrapper
2022-03-28T23:56:07.6729360Z     method(*args, **kwargs)
2022-03-28T23:56:07.6730131Z   File "test_cuda.py", line 1370, in test_caching_pinned_memory
2022-03-28T23:56:07.6731369Z     self.assertNotEqual(t.data_ptr(), ptr, msg='allocation re-used too soon')
2022-03-28T23:56:07.6732954Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2186, in assertNotEqual
2022-03-28T23:56:07.6734055Z     self.assertEqual(x, y, msg, atol=atol, rtol=rtol, **kwargs)
2022-03-28T23:56:07.6735266Z AssertionError: AssertionError not raised : allocation re-used too soon
2022-03-28T23:56:07.6735859Z 
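For context, test_caching_pinned_memory exercises the CUDA caching host allocator: it records a pinned tensor's data_ptr(), starts an asynchronous copy, frees the tensor, allocates again, and expects a different address because a cached pinned block should not be reused while the copy may still be in flight. A minimal sketch of that pattern (illustrative only, not the actual test in test_cuda.py; sizes and variable names are made up, and a CUDA device is required):

import torch

t = torch.ones(256).pin_memory()     # page-locked host tensor from the caching allocator
ptr = t.data_ptr()                   # remember the host address of the cached block
gpu_t = t.cuda(non_blocking=True)    # asynchronous host-to-device copy
del t                                # returns the block to the allocator's cache
t = torch.ones(256).pin_memory()     # immediately request a new pinned block
assert t.data_ptr() != ptr, "allocation re-used too soon"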

2 failures not recognized by patterns:

Job | Step | Action
GitHub Actions pull / linux-docs / build-docs (cpp) | Unknown | 🔁 rerun
GitHub Actions pull / linux-xenial-py3.7-gcc5.4 / test (docs_test, 1, 1, linux.2xlarge) | Test | 🔁 rerun

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


facebook-github-bot pushed a commit that referenced this pull request Mar 29, 2022
Summary:
Pull Request resolved: #74858

Original commit changeset: c7695e16dba3

Original Phabricator Diff: D34227615 (c0491c9)

The Milan Assistant is crashing, see P489979944

After backing out the changes D34227615 (c0491c9) and D34227616 (2cbddc0), it works fine now
ghstack-source-id: 152380988

(Note: this ignores all push blocking failures!)

Test Plan:
Test on Milan with "get weather utterance"

buck build fbsource//fbandroid/mode/opt fbsource//fbandroid/mode/milan_build_rdk  //fbandroid/apps/wearable/system/speechservice:speechservice_target30_xhdpi_armv7_release_debug_keystore -c  pt.has_backtaces=1

Reviewed By: phding

Differential Revision: D35192317

fbshipit-source-id: e38081810a569b45ca037e019ec1c8773971534d
facebook-github-bot deleted the gh/bdhirsh/189/head branch on April 2, 2022, 14:16