Deploying Custom model #12
After pressing the continue button, the process sometimes halts after 5, 10, or 15 minutes, and I encounter the following error message: "libusb_handle_events() failed with LIBUSB_ERROR_NO_DEVICE, unable to purge FTDI RX buffers: LIBUSB_ERROR_NO_DEVICE, error while flushing MPSSE queue: -4, Error while calling vexriscv_is_cpu_running, Could not fetch register 'pc'; remote failure reply 'E0E'." How can I extend the runtime of this process?
ztachip currently supports the layers typically found in CNN models such as MobileNet and SSD-MobileNet.
If you need other layers, you can follow the examples under apps/nn/; there is a file for each layer type's implementation.
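For orientation only, here is a rough sketch of the "one file per layer" idea those examples follow. The class and method names below are hypothetical, not ztachip's actual API; the real interfaces are in the existing layer files under apps/nn/ and their kernels under apps/nn/kernels.

```cpp
// Hypothetical sketch of a per-layer implementation file (placeholder names,
// not ztachip's real API). Each layer pairs a host-side class like this with
// an accelerated kernel under apps/nn/kernels.
#include <cstddef>
#include <algorithm>

// Example layer: ReLU over a flat float tensor.
class NnLayerRelu {
public:
    explicit NnLayerRelu(size_t numElements) : m_numElements(numElements) {}

    // Reference (CPU) execution; a real layer would instead launch the
    // corresponding accelerated kernel.
    void Execute(const float *input, float *output) const {
        for (size_t i = 0; i < m_numElements; ++i)
            output[i] = std::max(0.0f, input[i]);
    }

private:
    size_t m_numElements;
};
```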
Based on my understanding, I need to create a folder named "pose estimation", let's assume, where I will write two files: posestimation.cpp and posestimation.h. Within this folder, I will also create a subfolder called "kernels," in which I should include three files: posestimation.h, posestimation.m, and posestimation.p. In addition, I need to download a .tflite model and store it in a folder named "fs." Lastly, I must update the code in vision.cpp and test.cpp as needed. Is that correct?
ztachip's support for tflite is generic, so you don't really have to create anything specific for pose estimation.
The way you use ztachip for pose estimation or any other model is the same.
Reference https://github.com/ztachip/ztachip/raw/master/Documentation/visionai_programmer_guide.pdf, section 2.7.x.
vision_ai.cpp is a good reference for your application.
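As a very rough illustration of the application flow the guide describes (load a .tflite graph, bind input/output buffers, run inference), here is a hedged sketch. All type names, function names, and file paths below are hypothetical placeholders, not ztachip's real API; vision_ai.cpp in the repository shows the actual usage.

```cpp
// Hypothetical sketch of the application flow only (placeholder names and
// paths; not ztachip's real API). See vision_ai.cpp for the real code.
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for a loaded .tflite graph executed on ztachip.
struct TfliteGraphStub {
    bool Load(const std::string &path) { (void)path; return true; }   // stub
    void Run(const std::vector<uint8_t> &in, std::vector<float> &out) {
        (void)in;
        out.assign(17 * 3, 0.0f);  // e.g. 17 keypoints x (x, y, score)
    }
};

int main() {
    TfliteGraphStub graph;
    if (!graph.Load("fs/pose_estimation.tflite"))   // model path is an assumption
        return 1;
    std::vector<uint8_t> frame(257 * 257 * 3);      // camera frame resized to model input
    std::vector<float> keypoints;
    graph.Run(frame, keypoints);                    // same call pattern for any model
    return 0;
}
```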
There is probably an error with your USB driver. You may want to double-check your USB connection settings.
@bionicimager did you deploy the custom model for pose estimation? Please let me know if you are able to do it.
I never had a chance to do this one.
However, if you would like to collaborate on this, I can give you some pointers.
Basically you would port Google's TensorFlow Lite stack to RISC-V; Google even has a RISC-V accelerated version in assembly. Then simply replace the important AI functions that take most of the processing time (around 99% in most cases), such as convolution and FCN, with the ztachip versions. These important ones are already accelerated by ztachip, so it is much faster.
This way you can run almost any TensorFlow Lite model on ztachip.
The TensorFlow support is under SW/apps/nn/*.cpp, with the accelerated functions under SW/apps/nn/kernels.
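To make the "swap the heavy ops" idea concrete, here is a minimal sketch assuming a simple 1-D convolution and a stubbed accelerator hook. None of the names below are TensorFlow Lite's or ztachip's actual interfaces; the sketch only illustrates routing the expensive call to an accelerated implementation while leaving the rest of the stack untouched.

```cpp
// Minimal sketch of swapping a heavy op for an accelerated one (hypothetical
// names; not the real TensorFlow Lite or ztachip interfaces).
#include <cstddef>
#include <vector>

using Conv1dFn = void (*)(const float *, size_t, const float *, size_t, float *);

// Portable reference implementation (what a stock CPU kernel would do).
static void Conv1dReference(const float *in, size_t n,
                            const float *k, size_t kn, float *out) {
    for (size_t i = 0; i + kn <= n; ++i) {
        float acc = 0.0f;
        for (size_t j = 0; j < kn; ++j) acc += in[i + j] * k[j];
        out[i] = acc;
    }
}

// Stand-in for an accelerated kernel (the real ones live under SW/apps/nn/kernels).
static void Conv1dAccelerated(const float *in, size_t n,
                              const float *k, size_t kn, float *out) {
    Conv1dReference(in, n, k, kn, out);  // placeholder: delegate to reference
}

// The rest of the stack calls convolution only through this pointer, so the
// accelerated version can be dropped in without touching the model code.
static Conv1dFn g_conv1d = Conv1dAccelerated;

int main() {
    std::vector<float> in{1, 2, 3, 4, 5}, kernel{1, 0, -1}, out(3);
    g_conv1d(in.data(), in.size(), kernel.data(), kernel.size(), out.data());
    return 0;
}
```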
I want to deploy my custom model on ztachip. Could you assist me with the deployment process or point me towards documentation?