How to process the output of a yolov5n model #188
-
My Axis camera is an AXIS P1467-LE Bullet Camera, and I was able to successfully upload a yolov5n tflite model to it. I want to use that model, but I am struggling with post-processing its output. The exported tflite model has only 1 output tensor instead of the usual 4 found in the SSD mobilenetv2 model (bounding boxes, classes, scores, and the number of detections). How can I extract these from the inference output and draw the bounding boxes correctly on my image? I already tried exporting yolov5n with NMS to get the 4 outputs, but then the model becomes unsupported by the camera because it can't allocate the tensors. I also tried the solution from https://stackoverflow.com/questions/65824714/process-output-data-from-yolov5-tflite with the Inference client instead of the tflite interpreter, but with no luck: the output image still had no bounding boxes. Can you please provide example code showing how to actually use the yolov5n tflite model on an Axis camera?

On another note: I tried the SSD mobilenetv2 tflite model provided in the ACAP model zoo, and it has a different architecture than the one I trained from the TensorFlow model zoo, which makes the ACAP tflite model significantly faster on my Axis camera than TensorFlow's tflite model. Where can I get the TensorFlow weights for that SSD mobilenetv2 tflite model, so that I can train it on my custom data?
Replies: 5 comments
-
Hi @HardcoreBudget
Here you can find some guidelines on how to post-process yolo output: AxisCommunications/axis-model-zoo#45
Not sure if I understood the second part: what version was faster? And what weights are you looking for?
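For reference, here is a minimal NumPy sketch of that kind of post-processing, assuming the standard YOLOv5 TFLite export: a single output of shape (1, 25200, 85) for a 640x640 input, laid out as cx, cy, w, h, objectness, then one score per class, with coordinates normalized to 0..1. The thresholds and helper names are just illustrative, and if the model is quantized you first need to dequantize the output with the tensor's scale/zero-point:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box (x1, y1, x2, y2) against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-6)

def decode_yolov5(pred, conf_thres=0.25, iou_thres=0.45, img_w=640, img_h=640):
    """pred: float array of shape (1, 25200, 5 + num_classes)."""
    pred = pred[0]
    obj = pred[:, 4]
    cls_scores = pred[:, 5:]
    scores = obj * cls_scores.max(axis=1)   # overall confidence per candidate
    classes = cls_scores.argmax(axis=1)

    # Drop low-confidence candidates
    keep = scores > conf_thres
    boxes, scores, classes = pred[keep, :4], scores[keep], classes[keep]

    # cx, cy, w, h (normalized) -> x1, y1, x2, y2 in pixels
    cx, cy, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    boxes = np.stack([(cx - w / 2) * img_w, (cy - h / 2) * img_h,
                      (cx + w / 2) * img_w, (cy + h / 2) * img_h], axis=1)

    # Plain per-class NMS
    final = []
    for c in np.unique(classes):
        idx = np.where(classes == c)[0]
        idx = idx[np.argsort(-scores[idx])]
        while idx.size:
            best = idx[0]
            final.append(best)
            rest = idx[1:]
            if rest.size == 0:
                break
            idx = rest[iou(boxes[best], boxes[rest]) < iou_thres]
    return boxes[final], scores[final], classes[final]
```

The returned boxes, scores and classes can then be drawn on the image the same way as with the 4-output SSD models.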
-
Hello @Corallo |
-
It is most likely that your version of SSD mobilenetv2 was fine, but quantized per channel. That can have a large impact on latency.
-
@Corallo Now for the SSD mobilenetv2: I have been quantizing TensorFlow's model zoo SSD mobilenetv2 with per-tensor quantization, as recommended in the documentation, but the architecture of the ACAP model is different when viewed in Netron, and it is much faster on the camera.
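For context, this is roughly the kind of per-tensor, full-integer conversion I mean (a simplified sketch, assuming a TF 2.x SavedModel export; the paths and `calibration_images` are placeholders for my own files and representative data, and `_experimental_disable_per_channel` is an experimental converter attribute whose name may change between TensorFlow versions):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few hundred preprocessed frames from your own data
    for image in calibration_images:
        yield [image[np.newaxis, ...].astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_v2/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# Experimental: force per-tensor instead of per-channel weight quantization
converter._experimental_disable_per_channel = True

tflite_model = converter.convert()
with open("ssd_mobilenet_v2_per_tensor.tflite", "wb") as f:
    f.write(tflite_model)
```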
-
Hi,
That version of SSD mobilenet comes from here: https://coral.ai/models/object-detection/