Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
WSL2
MediaPipe Tasks SDK version
0.10.18
Task name (e.g. Image classification, Gesture recognition etc.)
convert model
Programming Language and version (e.g. C++, Python, Java)
Python
Describe the actual behavior
Runtime error when generating the CPU model
Describe the expected behaviour
The model converts successfully and a .tflite file is generated
Standalone code/steps you may have used to try to get what you need
I used the stock LLM inference conversion example from the MediaPipe GitHub repository (text-to-text)
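For reference, the conversion follows the stock MediaPipe LLM conversion flow. A minimal sketch is below; the checkpoint path, tokenizer path, model type, and output paths are illustrative placeholders, not the exact values from my run:

```python
# Minimal sketch of the MediaPipe LLM checkpoint conversion
# (paths and model_type are illustrative placeholders).
from mediapipe.tasks.python.genai import converter

config = converter.ConversionConfig(
    input_ckpt="/path/to/checkpoint",          # placeholder checkpoint directory
    ckpt_format="safetensors",
    model_type="GEMMA_2B",                     # placeholder model type
    backend="cpu",                             # "gpu" succeeds; "cpu" triggers the RET_CHECK failure
    output_dir="/tmp/intermediate",
    combine_file_only=False,
    vocab_model_file="/path/to/checkpoint",    # placeholder tokenizer path
    output_tflite_file="/tmp/model_cpu.tflite",
)
converter.convert_checkpoint(config)
```

Switching only `backend` from `"gpu"` to `"cpu"` is what changes the outcome from a successful conversion to the runtime error below.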
Other info / Complete Logs
Running the conversion with the gpu backend works and the resulting model loads on device (though inference is very slow). The cpu backend aborts the process with the runtime error:
RuntimeError: INTERNAL: ; RET_CHECK failure (external/odml/odml/infra/genai/inference/utils/xnn_utils/model_ckpt_util.cc:116) tensor
I tried different Ubuntu versions; all produced the same runtime error with the cpu backend and worked fine with the gpu backend.