
Optimize Decoder Pipeline Model Execution #907

Merged 7 commits into main · Sep 24, 2024

Conversation

baijumeswani (Contributor)

#729 introduced support for executing a pipeline of ORT sessions sequentially, as defined by the config file.

This pull request builds on that work by:

  • Fixing a memory leak where the OrtValues stored in the ortvalue_pool_ were held as raw pointers and never released.
  • Clearing outputs from pipeline states that only run on the prompt. Before this pull request, those OrtValues lived in the ortvalue_store_ for the entire generation, using unnecessary memory.
  • Enabling registration of the managed inputs as pipeline model outputs.
  • Sharing an allocator across ORT sessions.

Co-contributor: @edgchen1, who helped identify the bug(s).

@baijumeswani merged commit 2348dc9 into main on Sep 24, 2024 (12 of 13 checks passed) and deleted the baijumeswani/optimize-decoder-pipeline branch on September 24, 2024 at 23:31.
@baijumeswani (Contributor, Author)

Thank you for the review. :)

4 participants