make logsumexp composite compliant by bdhirsh · Pull Request #77130 · pytorch/pytorch · GitHub

make logsumexp composite compliant #77130


Closed
wants to merge 16 commits

Conversation

bdhirsh
Contributor
@bdhirsh bdhirsh commented May 10, 2022

@facebook-github-bot
Contributor
facebook-github-bot commented May 10, 2022

🔗 Helpful links

✅ No Failures (0 Pending)

As of commit 5345f09 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@bdhirsh bdhirsh requested a review from ezyang May 11, 2022 14:28
bdhirsh added 3 commits May 11, 2022 07:31
`logsumexp` advertises as `CompositeExplicitAutograd`, so it should go through the dispatcher when calling other ops (`logsumexp.out`).

This came up because `logsumexp.out` calls view ops internally (`squeeze()`), so I want to opt it out of LTC/XLA with a new `CompositeExplicitAutogradNonFunctional` alias key. Calling `logsumexp` should then know not to use the native implementation of `logsumexp.out` when running on the LTC/XLA backend.

[ghstack-poisoned]
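For context, a minimal pure-Python sketch of the log-sum-exp reduction this PR is about (not the ATen kernel itself; the helper name and 1-D signature are illustrative). It uses the standard max-subtraction trick for numerical stability:

```python
import math

def logsumexp(xs):
    # Stable log-sum-exp: shift by the max before exponentiating,
    # so large inputs don't overflow exp().
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```

In the native `logsumexp.out` kernel, the intermediate max reduction is kept with its reduced dimension and later collapsed via `squeeze()` when `keepdim=False` — that view op is what motivates opting the composite out of LTC/XLA here.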
bdhirsh added 5 commits May 17, 2022 19:58
bdhirsh added 6 commits May 19, 2022 09:00
@bdhirsh
Contributor Author
bdhirsh commented May 24, 2022

@pytorchbot merge

@github-actions
Contributor

Hey @bdhirsh.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request May 26, 2022
Summary:
Pull Request resolved: #77130

make logsumexp composite compliant

Approved by: https://github.com/ezyang

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/6185478edbe3b7cb58dd18f7479ef6d0baba031e

Reviewed By: mehtanirav

Differential Revision: D36668393

Pulled By: bdhirsh

fbshipit-source-id: 8623bd11874afcbcea83d5f651bc676562bc970e
@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/228/head branch May 28, 2022 14:16
4 participants