functionalization: add native fill() op #76084
Conversation
💊 CI failures summary and remediations: as of commit 1788d58 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
@@ -98,6 +98,8 @@
     'replace_',  # only used by the functionalization pass, doesn't need to be exposed to python
     'zero',  # only used by the functionalization pass, doesn't need to be exposed to python
     'copy',  # only used by the functionalization pass
+    'fill.Tensor',  # only used by the functionalization pass
+    'fill.Scalar',  # only used by the functionalization pass
This info should probably get moved into native_functions.yaml at some point
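For context, a minimal sketch (not part of this diff) of how these overloads stay reachable even without top-level Python bindings, via the `torch.ops` namespace:

```python
import torch

x = torch.zeros(2, 2)

# The Scalar overload of the new out-of-place op:
y = torch.ops.aten.fill.Scalar(x, 1.5)

# The Tensor overload takes a 0-dim tensor as the fill value:
z = torch.ops.aten.fill.Tensor(x, torch.tensor(2.5))

print(y)  # tensor filled with 1.5; x itself is left untouched
print(z)  # tensor filled with 2.5
```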
Addresses `fill_` issue in pytorch/torchdynamo#88. Adding out-of-place `fill.Tensor` and `fill.Scalar` ops, that way `fill_()` can be properly functionalized. I ended up giving `fill` a derivative formula, because I think that we want to consider it a "base op" as part of tracing. The decomposition I wrote for it just calls back into `fill_()`, so we don't want to run that decomposition as part of tracing.
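As a rough sketch of what "properly functionalized" means here (using `torch.func.functionalize` together with `make_fx`; that API postdates this PR, so treat it as an illustration rather than the exact mechanism):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.func import functionalize

def f(a):
    b = a * 2
    b.fill_(3.0)  # in-place mutation inside the traced region
    return b

# Functionalization rewrites the mutation: the traced graph should
# contain the out-of-place aten::fill instead of fill_.
gm = make_fx(functionalize(f))(torch.zeros(4))
print(gm.code)
```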
@pytorchbot merge this please
Merge failed due to Command. Raised by https://github.com/pytorch/pytorch/actions/runs/2222996651
Summary: Pull Request resolved: #76084
Approved by: https://github.com/ezyang
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/ea5209c9fd78a34849600875d0f3f5b994201833
Reviewed By: osalpekar
Differential Revision: D35938183
Pulled By: bdhirsh
fbshipit-source-id: 73c318c03e49671cdaee997807e2dba2a0c57198
Addresses `fill_` issue in pytorch/torchdynamo#88.

Adding out-of-place `fill.Tensor` and `fill.Scalar` ops, that way `fill_()` can be properly functionalized.

I ended up giving `fill` a derivative formula, because I think that we want to consider it a "base op" as part of tracing. The decomposition I wrote for it just calls back into `fill_()`, so we don't want to run that decomposition as part of tracing (a sketch of that decomposition follows below).

Stack from ghstack:
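A minimal sketch of the decomposition described above, written as a hypothetical standalone function (the real registration lives in PyTorch's decomposition tables): an out-of-place fill can be expressed as a clone followed by the in-place op, which is exactly why it must not run during tracing:

```python
import torch

def fill_decomposition(self: torch.Tensor, value: float) -> torch.Tensor:
    # Out-of-place fill expressed via the in-place op: copy first,
    # then mutate the copy. Running this during tracing would
    # reintroduce the fill_() mutation that functionalization is
    # trying to remove -- hence fill is treated as a "base op"
    # with its own derivative formula instead.
    return self.clone().fill_(value)

x = torch.zeros(3)
y = fill_decomposition(x, 7.0)
print(x)  # tensor([0., 0., 0.]) -- input unchanged
print(y)  # tensor([7., 7., 7.])
```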