Mnist_mind example fails to run #357
Comments
You selected CPU inference; you need to select Ascend.
create flowunit 'mnist_infer' failed. -> current environment does not support the inference type: 'mindspore:ascend'
That change has no effect on my side.
modelbox-tool driver -info -details
```
Device Information:
  name: 1  name: 2  name: 3  name: 4  name: 5  name: 6  name: 7  name: 0
Driver Information:
  driver name: device-cpu
  driver name: graphconf-graphvize
  driver name: acl_inference
  driver name: crop
  driver name: padding
  driver name: resize
  driver name: video_decoder
  driver name: base64_decoder
  driver name: buff_meta_mapping
  driver name: crop
  driver name: data_source_generator
  driver name: obs
  driver name: restful
  driver name: url
  driver name: vcn_restful
  driver name: vis
  driver name: data_source_parser
  driver name: draw_bbox
  driver name: httpserver_async
  driver name: httpserver_sync
  driver name: image_decoder
  driver name: image_rotate
  driver name: mean
  driver name: modeldecrypt-plugin
  driver name: normalize
  driver name: obs
  driver name: webhook
  driver name: output_broker
  driver name: packed_planar_transpose
  driver name: padding
  driver name: python
  driver name: resize
  driver name: video_decoder
  driver name: video_demuxer
  driver name: video_encoder
  driver name: video_input
  driver name: yolov3_postprocess
  driver name: inference
  driver name: python
  driver name: yolo_postprocess
FlowUnit Information:
  flowunit name: crop
  flowunit name: padding
  flowunit name: resize
  flowunit name: video_decoder
  flowunit name: base64_decoder
  flowunit name: buff_meta_mapping
  flowunit name: crop
  flowunit name: data_source_generator
  flowunit name: data_source_parser
  flowunit name: draw_bbox
  flowunit name: httpserver_async
  flowunit name: httpserver_sync_receive
  flowunit name: httpserver_sync_reply
  flowunit name: image_decoder
  flowunit name: image_rotate
  flowunit name: mean
  flowunit name: normalize
  flowunit name: output_broker
  flowunit name: packed_planar_transpose
  flowunit name: padding
  flowunit name: resize
  flowunit name: video_decoder
  flowunit name: video_demuxer
  flowunit name: video_encoder
  flowunit name: video_input
```
There is nothing MindSpore-related in that output. Did you build ModelBox yourself, or is it the one bundled in the image?
I'm using the image. And the Python installation has:
Also, how did you enter the image? Try logging in over ssh; some environment variables may be missing.
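The check suggested above can be done directly inside the container. A minimal sketch (the CANN toolkit path below is an assumption; adjust it to your image's layout):

```shell
# Entering a container with `docker exec` does not start a login shell, so
# profile scripts that export the Ascend variables may never be sourced.
# Print the variables the MindSpore flowunit typically needs:
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
echo "PYTHONPATH=$PYTHONPATH"

# Hypothetical CANN toolkit path -- adjust to your install.
if [ -f /usr/local/Ascend/ascend-toolkit/set_env.sh ]; then
    . /usr/local/Ascend/ascend-toolkit/set_env.sh
fi
```

If the two variables print empty over `docker exec` but are populated over ssh, the missing environment is the likely cause.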
/usr/local/lib/python3.8/dist-packages/mindspore |
Run ldd on the mindspore-flowunit.so file and check whether its dependencies are satisfied.
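Such a dependency check could look like the sketch below (the .so path is an assumption; locate the real file with `find` first):

```shell
# Locate the MindSpore flowunit library (the path varies between images):
find /usr/local -name '*mindspore*.so*' 2>/dev/null

# Hypothetical result path -- substitute whatever find printed above.
SO=/usr/local/lib/libmodelbox-unit-mindspore.so
if [ -f "$SO" ]; then
    # Any line containing "not found" is a missing shared-library dependency,
    # which would stop ModelBox from loading the mindspore inference driver.
    ldd "$SO" | grep 'not found' || echo 'all dependencies resolved'
fi
```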
OK, thank you.
I've run into this problem too. How was it resolved?
From the error message, the CPU build of the MindSpore inference engine is missing. The image usually ships with the NPU build of MindSpore; if you want CPU inference, you need to install the CPU build of MindSpore.
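Installing the CPU build might look like this (the version pin is an assumption; match it to the MindSpore version the image targets, e.g. 1.9.0):

```shell
# The plain "mindspore" wheel on PyPI is the CPU build.
pip3 install mindspore==1.9.0

# Quick sanity check that the CPU backend loads:
python3 -c "import mindspore as ms; ms.set_context(device_target='CPU'); print(ms.__version__)"
```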
With both CPU and Ascend I get the error: current environment does not support the inference type: 'mindspore:cpu'.
```
EE8888: Inner Error!
Create stream failed, ret:207000
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:425 Init
```
Is there any other way to locate the problem? What configuration is still needed after pulling the image?
If you want Ascend inference:
If you just want to test mnist on CPU, you can refer to this: — or run the one-click setup script: (the -m flag means use a mirror inside China)
The driver is already installed on the host OS: version: 1.0, runtime_running_version=[1.84.15.2.220:6.0.2]. All the examples except the inference unit run fine, and now I want to run inference on the NPU. Is the driver installation / version above correct?
Change the inference unit's device type to ascend.
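In ModelBox the device type lives in the flowunit's toml configuration. A minimal sketch, assuming the file path below and that the key is named `device` (both are assumptions — check the config your project actually generated):

```shell
# Hypothetical path to the inference flowunit's config -- adjust to your project.
TOML=src/flowunit/mnist_infer/mnist_infer.toml

# Switch the flowunit from CPU to Ascend inference.
sed -i 's/^device *= *"cpu"/device = "ascend"/' "$TOML"
grep '^device' "$TOML"
```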
System information (please provide as much relevant information as possible):
Using Docker
modelbox/modelbox-develop-mindspore_1.9.0-cann_6.0.1-d310p-ubuntu-x86_64:latest
npu -info
Describe the current behavior:
Running the Mnist_mind example fails.
Error message: request invalid, job config is invalid, Not found, build graph failed, please check graph config. -> create flowunit 'mnist_infer' failed. -> current environment does not support the inference type: 'mindspore:cpu'
Also, after creating a new unit, its functionality cannot be selected in the editor.
Describe the expected behavior:
Standalone code to reproduce the issue:
The om model with acl_inference also fails to run.
Provide a reproducible test case that is the bare minimum necessary to replicate the problem; include relevant screenshots and logs if possible.
Logs
Please provide the ModelBox runtime logs; the log path is /var/log/modelbox.
Other info