xpu: installed pytorch is missing aten xpu ops headers (ATen/ops/cat_xpu_dispatch.h and others) · Issue #145902 · pytorch/pytorch
Open
@dvrogozh

Description

Seen with commit 635b98f.

As first noted in #132945 (comment), pytorch built with the XPU backend is missing a range of ATen operator headers ("missing" meaning they are generated, but not actually installed). Specifically, only the headers generated from stock pytorch sources (i.e. from ./aten/src/ATen/native/native_functions.yaml) are available:

$ find torch/include/ATen/ops -name "*xpu*"
torch/include/ATen/ops/baddbmm_xpu_dispatch.h
torch/include/ATen/ops/bmm_xpu_dispatch.h
torch/include/ATen/ops/mm_xpu_dispatch.h
torch/include/ATen/ops/_addmm_activation_xpu_dispatch.h
torch/include/ATen/ops/addmm_xpu_dispatch.h
torch/include/ATen/ops/addmv_xpu_dispatch.h
torch/include/ATen/ops/addbmm_xpu_dispatch.h

For CUDA, many more headers are generated (from the same .yaml file):

$ find torch/include/ATen/ops -name "*cuda*" | wc -l
604

For XPU, however, the missing headers are generated separately by torch-xpu-ops. They are defined by https://github.com/intel/torch-xpu-ops/blob/main/yaml/native/native_functions.yaml. These XPU headers are generated, but not installed.

These missing headers might be needed by third-party projects. At the pytorch level, they are used in some tests; for example, see #138088, which needs ATen/ops/cat_*_dispatch.h and ATen/ops/norm_*_dispatch.h.

Can the missing aten xpu headers be installed?

I think the current challenge is that torch-xpu-ops generates not only XPU-specific files from https://github.com/intel/torch-xpu-ops/blob/main/yaml/native/native_functions.yaml, but generic files as well, i.e. we get duplicate files generated both from stock pytorch sources and from xpu sources. For example:

$ find . -name _aminmax.h
./build/aten/src/ATen/ops/_aminmax.h
./build/xpu/ATen/ops/_aminmax.h
./torch/include/ATen/ops/_aminmax.h

I guess there are 2 ways forward:

  1. Filter out the generic (non-XPU) header files and install only the XPU-specific ones
  2. Move XPU header generation to stock pytorch and drop the torch-xpu-ops-level customization
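Option 1 could be sketched roughly as follows. This is not the actual build logic; it simulates the two codegen output trees shown above with illustrative stand-in files, and computes the set of headers unique to the torch-xpu-ops tree, which would be the set to install.

```shell
#!/bin/sh
# Sketch (illustrative paths, not the real build system): simulate the stock
# and torch-xpu-ops codegen output trees, then compute which headers exist
# only in the torch-xpu-ops tree.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/build/aten/src/ATen/ops" "$tmp/build/xpu/ATen/ops"

# Stock codegen output (from pytorch's own native_functions.yaml)
touch "$tmp/build/aten/src/ATen/ops/_aminmax.h"
# torch-xpu-ops codegen output: a duplicated generic header plus an XPU-only one
touch "$tmp/build/xpu/ATen/ops/_aminmax.h"
touch "$tmp/build/xpu/ATen/ops/cat_xpu_dispatch.h"

ls "$tmp/build/aten/src/ATen/ops" | sort > "$tmp/stock.txt"
ls "$tmp/build/xpu/ATen/ops" | sort > "$tmp/xpu.txt"

# comm -13 prints only lines unique to the second file: the XPU-only headers
xpu_only=$(comm -13 "$tmp/stock.txt" "$tmp/xpu.txt")
echo "$xpu_only"
rm -rf "$tmp"
```

The duplicated generic header (_aminmax.h) is filtered out, leaving only the XPU-specific dispatch header as an install candidate.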

CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @albanD

cc @seemethere @malfet @osalpekar @atalman @gujinghui @EikanWang @fengyuan14 @guangyey

Metadata

Assignees: none
Labels: module: binaries (anything related to official binaries that we release to users), module: build (build system issues), module: xpu (Intel XPU related issues), triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects status: TODO
Milestone: none

Relationships

None yet

Development

No branches or pull requests

Issue actions

    0