It would be great to support ONNX Runtime (both the Training and Inferencing versions) and DirectML. Used as back ends, they can speed up both the training and the inference process.
ONNX Runtime runs on any OS and supports many kinds of GPU through its execution providers, including CUDA (NVIDIA), ROCm (AMD), oneDNN (Intel), Metal (Apple M1), and other devices. Its performance should also be much better than OpenCL's.
https://github.com/microsoft/onnxruntime
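To illustrate the idea, here is a minimal sketch of how a back end could pick the best available execution provider. The provider names follow ONNX Runtime's naming conventions, but the `pick_provider` helper itself is hypothetical, not part of the library:

```python
# Preference order for execution providers, best first.
# Names follow ONNX Runtime conventions; the helper is illustrative only.
PREFERRED = [
    "CUDAExecutionProvider",    # NVIDIA GPUs
    "ROCMExecutionProvider",    # AMD GPUs
    "CoreMLExecutionProvider",  # Apple devices
    "CPUExecutionProvider",     # universal fallback
]

def pick_provider(available):
    """Return the first preferred provider present in `available`."""
    for name in PREFERRED:
        if name in available:
            return name
    return "CPUExecutionProvider"  # always fall back to CPU

# Example: on a machine reporting CPU and CUDA, CUDA wins.
print(pick_provider(["CPUExecutionProvider", "CUDAExecutionProvider"]))
# → CUDAExecutionProvider
```

In the real library, a list like this is passed as the `providers` argument when creating an `InferenceSession`, so the rest of the engine code stays unchanged regardless of the device.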
DirectML supports any kind of GPU on Windows, and its code-migration cost is much lower than ONNX Runtime's. Its performance should also be better than OpenCL's.
https://github.com/microsoft/DirectML
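The low migration cost shows up clearly when DirectML is used through ONNX Runtime's DirectML build: switching the back end is just a different provider list. The sketch below is hypothetical and does not import onnxruntime itself, so it stays self-contained:

```python
def session_providers(use_directml):
    """Build the provider list that would be passed to an InferenceSession.

    Toggling DirectML on or off is a one-line change; "DmlExecutionProvider"
    is ONNX Runtime's name for its DirectML back end.
    """
    if use_directml:
        return ["DmlExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

print(session_providers(True))
# → ['DmlExecutionProvider', 'CPUExecutionProvider']
```

Everything else in the engine (model loading, input/output handling) would remain identical, which is why the migration effort is small.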