Describe the issue

After exporting the ONNX model with Hugging Face Optimum, I run the following script to check the model, and the checker reports no issues:
```python
import onnx

onnx_model = onnx.load("/content/deta_test/model.onnx")
onnx.checker.check_model(onnx_model)
```
However, when I try to run inference with ONNX Runtime, I get the following error:
```
---------------------------------------------------------------------------
InvalidGraph                              Traceback (most recent call last)
<ipython-input-7-53b3f77fe0ed> in <cell line: 11>()
      9 # Step 2: Load the ONNX model
     10 model_path = "/content/deta_test/model.onnx"  # Update with your model path
---> 11 ort_session = ort.InferenceSession(model_path)
     12
     13 # Step 3: Prepare inputs for inference

1 frames
/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in _create_inference_session(self, providers, provider_options, disabled_optimizers)
    478
    479         if self._model_path:
--> 480             sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    481         else:
    482             sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /content/deta_test/model.onnx failed:This is an invalid model. Type Error: Type 'tensor(bool)' of input parameter (/model/Equal_7_output_0) of operator (CumSum) in node (/model/CumSum_1) is invalid.
```
To reproduce

Environment: Google Colab

Inference code:
```python
import numpy as np
import onnxruntime as ort

batch_size = 1
pixel_values = np.random.rand(batch_size, 3, 800, 1066).astype(np.float32)
pixel_mask = np.random.randint(0, 2, size=(batch_size, 800, 1066)).astype(np.int64)

model_path = "/content/deta_test/model.onnx"  # Update with your model path
ort_session = ort.InferenceSession(model_path)

input_names = [inp.name for inp in ort_session.get_inputs()]
inputs = {
    input_names[0]: pixel_values,
    input_names[1]: pixel_mask,
}
outputs = ort_session.run(None, inputs)
print(outputs)
```
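As an aside, once the model does load, a common pitfall with hand-built random test inputs is a dtype or shape mismatch against what the session actually expects. A small sketch that derives the feed dict from the session's reported input metadata instead (the helper name, the partial type mapping, and the dynamic-dimension handling are all mine):

```python
import numpy as np

# ONNX Runtime reports input types as strings like "tensor(float)".
# Partial mapping to NumPy dtypes -- extend as needed.
ORT_TO_NUMPY = {
    "tensor(float)": np.float32,
    "tensor(double)": np.float64,
    "tensor(int32)": np.int32,
    "tensor(int64)": np.int64,
    "tensor(bool)": np.bool_,
}


def random_feed(input_metas, dynamic_dim=1):
    """Build a feed dict of random arrays matching each input's dtype/shape.

    `input_metas` is an iterable of objects with .name, .type and .shape
    attributes, as returned by InferenceSession.get_inputs(); symbolic or
    unknown dimensions are replaced by `dynamic_dim`.
    """
    feed = {}
    for meta in input_metas:
        shape = [d if isinstance(d, int) else dynamic_dim for d in meta.shape]
        dtype = ORT_TO_NUMPY[meta.type]
        if np.issubdtype(dtype, np.floating):
            arr = np.random.rand(*shape).astype(dtype)
        else:
            # Integer/bool inputs (e.g. a pixel mask): random 0/1 values.
            arr = np.random.randint(0, 2, size=shape).astype(dtype)
        feed[meta.name] = arr
    return feed
```

With this, the run above would become `outputs = ort_session.run(None, random_feed(ort_session.get_inputs()))`, which keeps the test inputs in sync with the exported signature.
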
Urgency
No response
Platform
Web Browser
OS Version
Google Colab
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.19.2
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response