CoreML conversion/export and usage (non-max suppression) #5157
👋 Hello @rajahaseeb147, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

```bash
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
I found that the CoreML export worked when using coremltools 5.0b3 and setting the model to eval in export.py. This gives four outputs: three for the detectors and one for the concatenated output (with 640 px that's a 1 x 25200 x 85 MLMultiArray). No NMS though; I wrote that myself. With this setup I get 15 FPS inference on an A13 CPU (SE II), and 45-50 FPS at 320 px.
@softmatic Hi and thank you for your reply. I used 5.0b3 and now I have the concat layer. Did you upload your code to any repo? I would appreciate it if I could take a look at your NMS part and how you process the concatenated output array. Thanks again!
@softmatic On a side note, I worked with the https://github.com/dbsystel/yolov5-coreml-tools repo and my final model now includes NMS!
No repo, but here's a snippet that I use for testing (775 is the concat layer in my model with yolov5s; yours will be different, use Netron to find it):

```swift
// Raw detections keyed by class index, which makes per-class NMS easier later.
var detections = [String: [[Float]]]()

if let output = try? cml.prediction(image: pixelBuffer) {
    let confidenceThreshold: Float = 0.3
    // Flatten the 1 x 25200 x 85 MLMultiArray from the concat layer into a [Float].
    let multiArray = output._775
    let output1D = Array(UnsafeBufferPointer(
        start: multiArray.dataPointer.assumingMemoryBound(to: Float.self),
        count: multiArray.count))
    let rows = multiArray.shape[1].intValue // 25200 @ 640x640

    for i in 0..<rows {
        // Each row is [cx, cy, w, h, objectness, class scores...].
        let confidence = output1D[(i * 85) + 4]
        if confidence > confidenceThreshold {
            let row = Array(output1D[(i * 85)..<((i + 1) * 85)])
            let classes = Array(row.dropFirst(5))
            let classIndex: Int = classes.firstIndex(of: classes.max() ?? 0) ?? 0
            // Convert the centre-based box to top-left x/y plus width/height.
            let detection: [Float] = [row[0] - row[2] / 2, row[1] - row[3] / 2,
                                      row[2], row[3], confidence, Float(classIndex)]
            if detections[String(classIndex)] != nil {
                detections[String(classIndex)]!.append(detection)
            } else {
                detections[String(classIndex)] = [detection]
            }
        }
    }
}
```

I put the detections in a dict with the class no. as the key. That makes it easier to do NMS later for each class (see the sketch below). Obviously, going through the rows one by one like this is sub-optimal. It should be possible to speed this up with either vDSP or a compute shader.
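For completeness, here is a minimal greedy per-class NMS sketch over a `detections` dictionary shaped like the one above; the `iou` helper and the 0.45 IoU threshold are illustrative assumptions, not code from the original comment.

```swift
// Greedy per-class non-max suppression over [x, y, w, h, confidence, classIndex] rows.
// Assumes boxes are already in top-left x/y, width/height form as built above.
func iou(_ a: [Float], _ b: [Float]) -> Float {
    let x1 = max(a[0], b[0]), y1 = max(a[1], b[1])
    let x2 = min(a[0] + a[2], b[0] + b[2]), y2 = min(a[1] + a[3], b[1] + b[3])
    let intersection = max(0, x2 - x1) * max(0, y2 - y1)
    let union = a[2] * a[3] + b[2] * b[3] - intersection
    return union > 0 ? intersection / union : 0
}

func nonMaxSuppression(_ detections: [String: [[Float]]], iouThreshold: Float = 0.45) -> [[Float]] {
    var kept: [[Float]] = []
    for (_, boxes) in detections {
        // Highest-confidence boxes first within each class.
        var candidates = boxes.sorted { $0[4] > $1[4] }
        while let best = candidates.first {
            kept.append(best)
            candidates.removeFirst()
            // Drop everything that overlaps the kept box too much.
            candidates.removeAll { iou($0, best) > iouThreshold }
        }
    }
    return kept
}
```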
@softmatic Thanks a lot for sharing this and explaining. Appreciate it!!
As a follow-up, some experimenting shows that the loop is considerably faster than using compactMap or map to filter the rows with a confidence above the threshold, e.g.:

```swift
let output1D = Array(UnsafeBufferPointer(
    start: output._775.dataPointer.assumingMemoryBound(to: Float.self),
    count: output._775.count))
let candidates: [Int] = output1D.enumerated().compactMap { index, element in
    (index % 85) != 4 ? nil : element > confidenceThreshold ? (index - 4) / 85 : nil
}
```
@softmatic Thanks a lot, I will check this out as well. I made my model work with the repo that I mentioned above, so now my model has NMS integrated into it. I will also use the concatenated model with your NMS script and then compare the results from both models. Appreciate it :))
Got it up to 60 FPS (320x320) on an SE II with some optimization. Video.
@softmatic Great work, can you elaborate on your optimization?
@softmatic That looks pretty good! I still haven't tested my model on a video feed, just images for now. May I ask what the key optimization step was?
@rajahaseeb147 Thanks! I had forked the TF Lite iOS sample as a starting point but found that their way of resizing the pixel buffer before inference took almost 20 ms. Swapping this part for a CI-based solution reduced this to 2 ms. I also draw the overlays with a CALayer rather than an overlay UIView, which is another speedup. Finally, I made the size of the video capture depend on the model size; there is no point in capturing Full HD if you scale it down to 320x320 anyway. BTW, I also tried CoreML INT8 quantization to see if inference is faster, but to my surprise it didn't make any difference (just a smaller model). This is very different behaviour from TF Lite, where you get a 2-3x speedup.
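For reference, a rough sketch of that kind of resize, assuming "CI" here means Core Image; the function name, the preallocated output buffer, and the square target size are illustrative assumptions, since the original code is not public.

```swift
import CoreImage
import CoreVideo

// Scale a camera CVPixelBuffer down to the model's input size with Core Image,
// rendering into a preallocated output buffer to avoid per-frame allocations.
// The square target size and buffer handling are assumptions for illustration.
let ciContext = CIContext(options: [.useSoftwareRenderer: false])

func resize(_ pixelBuffer: CVPixelBuffer, toSquare side: CGFloat, into output: CVPixelBuffer) {
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    let scaleX = side / CGFloat(CVPixelBufferGetWidth(pixelBuffer))
    let scaleY = side / CGFloat(CVPixelBufferGetHeight(pixelBuffer))
    let scaled = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    ciContext.render(scaled, to: output)
}
```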
Were you able to get the export repo working with the current version of YOLOv5, or are you using an older version? I've been struggling to get export to work.
@rajahaseeb147 Were you able to get the NMS working from the export repo above on the current YOLOv5 version or a previous version? I've been struggling to get export to work.
@tylercollins590 Hi mate. Have you seen the linked repo? What I did was use the source code from YOLOv5 v4.0 instead of v6.0 during conversion, and it worked. I will also leave a link to my repo just in case: https://github.com/rajahaseeb147/Yolov5Export/tree/main/poetry_yolov5
@tylercollins590 I'm using the latest version (checked out on Wednesday). Here are my requirements (Python 3.8.10): absl-py==0.15.0

Export command:

```bash
python export.py --weights yolov5s.pt --imgsz 320 320 --include coreml
```

As written above, I had to set the model to eval (as per the coreml export docs) for export (line 118 in export.py) or I wouldn't get the concat layer. Output of the export command is here, the converted model is here. Note that this model does not do NMS; you'll have to implement that yourself.
@softmatic
@softmatic @tylercollins590 I found out that if you use Apple Vision, it automatically resizes the input buffer to the dimensions required by the model, so there is no need to resize it manually! I actually tested it and it detects fine without any resizing. However, my inference speed is slow at the moment.
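As a rough illustration of that, a minimal Vision setup where the framework does the scaling; the generated model class name `yolov5s` and the crop/scale option are assumptions, not code from this thread.

```swift
import Vision
import CoreML

// Vision scales/crops the incoming pixel buffer to the model's input size,
// so there is no need to resize it manually. The generated class name
// `yolov5s` is an assumption; use whatever class Xcode generated for your model.
let vnModel = try! VNCoreMLModel(for: yolov5s(configuration: MLModelConfiguration()).model)

let request: VNCoreMLRequest = {
    let req = VNCoreMLRequest(model: vnModel) { request, _ in
        // Raw MLMultiArray features for an NMS-less export, or
        // VNRecognizedObjectObservation when the model ends in an NMS layer.
        print(request.results ?? [])
    }
    req.imageCropAndScaleOption = .scaleFill // Vision handles the resize here
    return req
}()

func detect(on pixelBuffer: CVPixelBuffer) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```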
I am getting around 27 FPS now on the new M1 iPad Pro 11.
@softmatic hi! Thank you for your guidelines on how to export CoreML in evaluation mode. However, it didn't work for me :(
In case someone is still struggling with this, I solved it by simply switching to Linux 😅
@MBichurin, my setup was WSL 2 (standard Ubuntu distro) on Windows 11. This works for training and export, with the restriction that only one of my two GPUs is used for training (PyTorch uses an NCCL version that is not compatible with WSL 2 at the moment, see NVIDIA/nccl#442).
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@pytholic @softmatic @tylercollins590 @abhimanyuchadha96 @MBichurin good news 😃! Your original issue may now be fixed ✅ in PR #6195. This PR adds support for YOLOv5 CoreML inference.

```python
!python export.py --weights yolov5s.pt --include coreml  # CoreML export
!python detect.py --weights yolov5s.mlmodel  # CoreML inference (macOS-only)
!python val.py --weights yolov5s.mlmodel  # CoreML validation (macOS-only)
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.mlmodel')  # CoreML PyTorch Hub model
```
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
@glenn-jocher Thank you for the update. Cheers!
Hello, I tried various methods to use this tool (https://github.com/dbsystel/yolov5-coreml-tools) that you have used, but I encountered the issues mentioned in the repo. I tried all the solutions they proposed and wasn't able to solve them. Converting YOLOv5 to CoreML has been bothering me for three days; do you have any other methods? Thank you so much! 🙏🙏🙏
@evelynTen hi, have you seen this comment? Also, what kind of errors are you facing?
I tried switching to YOLOv5 v4.0, but I ran into issue 1; I think maybe I did something wrong. The good news is that I used your repo and it worked!!! One question: did you retrain your model with YOLOv5 v4.0? I experimented with yolov5s.pt from YOLOv5 v6.0 and it reported the error "can not find SPPF", but v4.0's yolov5s.pt file is fine. Thank you~ :)
Well, that is a blessing in disguise then xD. In my case, I did not retrain my model with the v4.0 version. The model was v6.0, but the source code during conversion was v4.0.
One more thing: when you download the v4.0 source code for conversion, do you change the number of classes (nc) in yolov*.yaml to your own number of classes?
Thanks for the reminder! I changed nc in this file: https://github.com/pytholic/Yolov5Export/blob/main/poetry_yolov5/yolov5/models/yolov5s.yaml, and also the range in this file: https://github.com/pytholic/Yolov5Export/blob/main/poetry_yolov5/yolov5-coreml-tools/src/coreml_export/main.py, both to 2, which is my number of classes. But I still got the error:
When I ran yolov5s.pt from v4.0, everything was fine.
What happens when you run yolov5s.pt from version 6.0?
It gave the same error. Sad...
I just tried retraining my model with v4.0, but the system automatically downloaded v6.1's yolov5s.pt... OMG, the error came up again...
@evelynTen somehow I missed your comment. Sorry!!
Submitted a pull request #7263 which has an updated export.py script so that the exported CoreML model has an NMS layer.
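For reference, once the exported model ends in an NMS layer, Vision should hand back ready-made observations; here is a minimal sketch of reading them, assuming the model is driven through a VNCoreMLRequest (this is not code from the PR).

```swift
import Vision

// Reading results from a CoreML model whose pipeline ends in an NMS layer:
// Vision surfaces them as VNRecognizedObjectObservation with class labels
// and normalized (lower-left-origin) bounding boxes.
func handleResults(_ results: [VNObservation]) {
    for case let observation as VNRecognizedObjectObservation in results {
        guard let top = observation.labels.first else { continue }
        print(top.identifier, top.confidence, observation.boundingBox)
    }
}
```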
@mshamash Really appreciate you putting together this PR. Does this work on the current version of YOLOv5 as well as the v6 models?
@tylercollins590 I tested it on the current version of the YOLOv5 models, "v6.1", as well as "v5" models. I haven't tested it on "v6" models, but I am fairly confident it would work on those too.
@mshamash That's great news! I'm going to give your PR a try myself. Thanks again for the effort.
@mshamash Thank you for this code. It works well. One thing I noticed is that the CoreML NMS layer outputs confidences that are much higher than the raw confidences output from CoreMLExportModel. Do you have any idea why that is?
Hi, and thank you for the repository.
I trained my model on a custom dataset and am now trying to export it as a CoreML model to use in iOS. However, I am facing some difficulties.
Is it normal?
In the converted model, I lose the concatenation part (combining the outputs of the three levels) and also NMS. I then manually implemented NMS in Swift, but I really want it to be integrated into the model itself. Can you help with this? I think I am maybe missing something during the conversion step.
When I test my code on iOS, I end up with the 10 boxes with the highest scores. However, some of these boxes have exactly the same score (even up to 7 decimal points), which I think should not be possible because the boxes are different. So when I apply NMS to keep only the best box, it doesn't really work because some boxes have the same score. Can you give me some idea regarding this?
I have also attached an image of the converted mlmodel with its properties. Thank you in advance!!