
feat: example of how to use ONNX Runtime (Ort) in algorithms #1358

Merged: 17 commits from onnxruntime into main on Apr 17, 2024

Conversation

@wdconinc (Contributor) commented Apr 2, 2024

Briefly, what does this PR introduce?

This PR adds the hooks for ONNX Runtime (CPU only) for fast inference, along with an example algorithm that is enabled by default but doesn't actually do anything (it runs a trivial identity model).

What kind of change does this PR introduce?

- [ ] Bug fix (issue #__)
- [x] New feature (issue: use ONNX for ML)
- [ ] Documentation update
- [ ] Other: __

Please check if this PR fulfills the following:

- [ ] Tests for the changes have been added
- [ ] Documentation has been added / updated
- [x] Changes have been communicated to collaborators @rahmans1

Does this PR introduce breaking changes? What changes might users need to make to their code?

No.

Does this PR change default behavior?

No.
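
(For readers unfamiliar with the ONNX Runtime C++ API that these hooks wrap, CPU-only inference follows roughly the pattern sketched below. This is an illustrative sketch, not code from this PR: the tensor shape, variable names, and the `main()` wrapper are assumptions; the model file name matches the example model used later in this thread.)

```cpp
// Sketch: CPU-only inference with the ONNX Runtime C++ API (ORT >= 1.13).
#include <onnxruntime_cxx_api.h>

#include <array>
#include <cstdint>
#include <iostream>

int main() {
  // One environment and one session per model; default options use the CPU provider.
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ort-example");
  Ort::SessionOptions options;
  Ort::Session session(env, "identity_gemm_w1x1_b1.onnx", options);

  // Look up the input/output tensor names from the model itself.
  Ort::AllocatorWithDefaultOptions allocator;
  auto input_name  = session.GetInputNameAllocated(0, allocator);
  auto output_name = session.GetOutputNameAllocated(0, allocator);

  // A 1x1 float tensor; the shape is an assumption based on the model name.
  std::array<float, 1> values{3.14f};
  std::array<int64_t, 2> shape{1, 1};
  auto memory = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input = Ort::Value::CreateTensor<float>(
      memory, values.data(), values.size(), shape.data(), shape.size());

  // Run inference; for an identity model the output should equal the input.
  const char* input_names[]  = {input_name.get()};
  const char* output_names[] = {output_name.get()};
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             input_names, &input, 1, output_names, 1);

  std::cout << outputs[0].GetTensorData<float>()[0] << std::endl;
  return 0;
}
```

Since `Ort::Session` construction is comparatively expensive, an algorithm would typically create the session once at initialization and call `Run` once per event.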

@veprbl linked an issue Apr 2, 2024 that may be closed by this pull request: Demonstration of ML Integration in EICrecon
@wdconinc (Contributor Author) commented Apr 6, 2024

This latest commit now runs with

bin/eicrecon -Preco:InclusiveKinematicsML:modelPath=$PWD/calibrations/identity_gemm_w1x1_b1.onnx sim_dis_18x275_minQ2\=1000_craterlake.edm4hep.root

using the model file at https://github.com/eic/epic-data/blob/main/identity_gemm_w1x1_b1.onnx
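
(An aside inferred from the file name rather than stated in the thread: identity_gemm_w1x1_b1.onnx presumably wraps a single ONNX Gemm node whose 1x1 weight and bias are chosen so the network returns its input unchanged, which makes it a convenient smoke test. Per the ONNX operator spec, Gemm computes `Y = alpha * A' * B' + beta * C`, where `A'` and `B'` are the optionally transposed input and weight tensors; with a unit weight and zero bias this reduces to the identity.)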

@wdconinc requested review from veprbl and sly2j April 12, 2024 01:24
@ruse-traveler (Contributor) left a comment

Very nice! I only have a couple of extremely minor questions (the other one is below in this comment), but am otherwise very happy to see this!

Out of curiosity: do we want to keep all ML algorithms (or at least all that use ONNX) in a separate directory? Hopefully, we'll have lots of ML algorithms across all categories (e.g. calorimetry, PID, tracking, reco, etc.), so would it make sense to keep those with the corresponding non-ML algorithms?

Review thread on src/algorithms/onnx/InclusiveKinematicsML.cc (resolved)
@wdconinc (Contributor Author) commented

> do we want to keep all ML algorithms in a separate directory?

No, this was just an example, so I wanted to keep it self-contained. I suspect this will change as it evolves and gets integrated (e.g. the ONNX support should go into a service).
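
(As a hedged sketch of what "ONNX support as a service" might look like: EICrecon services derive from JANA2's JService, so a single shared ONNX Runtime environment could be owned by one service and handed out to algorithms. The class name ONNX_service, the method name, and the exact header path below are hypothetical, not code from this PR.)

```cpp
// Hypothetical sketch: a JANA2 service owning one shared Ort::Env,
// so individual algorithms don't each construct their own environment.
// Names and header paths here are assumptions, not code from this PR.
#include <JANA/JService.h>  // JService base class (path may vary by JANA2 version)
#include <onnxruntime_cxx_api.h>

class ONNX_service : public JService {
public:
    // Algorithms would use this shared environment when creating Ort::Session objects.
    Ort::Env& environment() { return m_env; }

private:
    Ort::Env m_env{ORT_LOGGING_LEVEL_WARNING, "eicrecon"};
};
```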

@ruse-traveler (Contributor) commented

> No, this was just an example, so I wanted to keep it self-contained. I suspect this will change as it evolves and gets integrated (e.g. the ONNX support should go into a service).

I see! Makes sense! This will work well for the time being!

@ruse-traveler (Contributor) previously approved these changes Apr 15, 2024, commenting:

I'm happy with this as-is! This is a really great example!

@wdconinc (Contributor Author) commented

@veprbl The CI pipelines here no longer download the model by hand; instead they use the one that is pulled in through the geometry calibrations distribution.

@veprbl (Member) left a comment

LGTM

@wdconinc added this pull request to the merge queue Apr 17, 2024
Merged via the queue into main with commit 850dd43 Apr 17, 2024
74 of 75 checks passed
@wdconinc deleted the onnxruntime branch April 17, 2024 03:58
ajentsch pushed a commit that referenced this pull request May 20, 2024