CoreML: Aggregated changes to add all required ops for priority model #21472

Merged: 11 commits into main from skottmckay/CoreML_AggregatedPR on Jul 25, 2024

Conversation

skottmckay (Contributor) commented Jul 24, 2024

Description

Combine these changes into one PR to simplify check-in.

Other changes

  • Updated the partitioning utils to support dropping constant initializers from a ComputeCapability's inputs (see the sketch after this list).
    • Noticed the list of inputs to the CoreML model was unexpectedly long because of these initializers.
    • We copy constant initializers into the CoreML model, so the originals are not needed; if they remain as inputs, ORT cannot free them because they appear to be in use.
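
For context, a minimal sketch of the idea, assuming the EP has already copied the initializers into its compiled model; `ExcludeConstantInitializers` is an illustrative name, not the actual partitioning-utils API added by this PR:

```cpp
#include <string>
#include <vector>

#include "core/graph/graph_viewer.h"

namespace onnxruntime {

// Hypothetical helper (illustrative name): filter constant initializers out of
// the input list used to build a ComputeCapability's MetaDef. The EP copies
// those initializers into its own compiled model (e.g. the CoreML model), so
// leaving them in the input list only pins the original buffers in memory.
std::vector<std::string> ExcludeConstantInitializers(
    const GraphViewer& graph_viewer,
    const std::vector<std::string>& input_names) {
  std::vector<std::string> kept;
  kept.reserve(input_names.size());
  for (const auto& name : input_names) {
    // IsConstantInitializer returns true only for initializers that cannot be
    // overridden at runtime; those are the ones that are safe to drop.
    if (!graph_viewer.IsConstantInitializer(name, /*check_outer_scope=*/true)) {
      kept.push_back(name);
    }
  }
  return kept;
}

}  // namespace onnxruntime
```

Once the initializers no longer appear as inputs of any ComputeCapability, ORT is free to release the original buffers after they have been copied into the compiled model.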

Motivation and Context

- Add Concat (#21423)
- Add DepthToSpace (#21426)
- Add LeakyRelu (#21453)
- Add test scripts (#21427)
- Add ability to set coreml flags from python (#21434); see the example below
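
The flags in question are the `COREML_FLAG_*` values from the CoreML provider factory C API, which previously could only be set from C/C++. A minimal C++ sketch of appending the CoreML EP with flags (#21434 exposes equivalent control through the Python bindings; the exact Python option format is not shown here):

```cpp
#include "onnxruntime_cxx_api.h"
#include "coreml_provider_factory.h"  // COREML_FLAG_* values; header path varies by package

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "coreml_flags");
  Ort::SessionOptions session_options;

  // Request an ML Program model and reject dynamic input shapes.
  uint32_t coreml_flags = COREML_FLAG_CREATE_MLPROGRAM |
                          COREML_FLAG_ONLY_ALLOW_STATIC_INPUT_SHAPES;
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_CoreML(session_options, coreml_flags));

  // "model.onnx" is a placeholder path.
  Ort::Session session(env, "model.onnx", session_options);
  return 0;
}
```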


Misc changes
- Fix SkipLayerNormFusion incorrectly setting `modified`, which caused unnecessary extra passes of the L2 transformer loop (see the sketch below).
- Drop the SkipLayerNormFusion change for now; the resulting test failures need investigation.
- Update the session state and allocation planner to handle ORT format models where an EP drops constant initializers.
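
For context on the `modified` flag: each graph transformer reports through this out-parameter whether it changed the graph, and the optimizer re-runs the L2 transformer set while any of them reports a change. A minimal sketch of the intended pattern, with a hypothetical fusion (this is not the SkipLayerNormFusion code):

```cpp
#include "core/optimizer/graph_transformer.h"

namespace onnxruntime {

// Hypothetical transformer illustrating correct handling of `modified`.
class ExampleFusion : public GraphTransformer {
 public:
  ExampleFusion() : GraphTransformer("ExampleFusion") {}

 private:
  // Hypothetical pattern check and rewrite; stand-ins for a real fusion.
  bool CanFuse(const Node& node) const { return false; }
  void Fuse(Graph& graph, Node& node) const {}

  Status ApplyImpl(Graph& graph, bool& modified, int graph_level,
                   const logging::Logger& logger) const override {
    for (auto& node : graph.Nodes()) {
      ORT_RETURN_IF_ERROR(Recurse(node, modified, graph_level, logger));
      if (!CanFuse(node)) continue;
      Fuse(graph, node);
      // Report a modification only when the graph actually changed.
      // Setting `modified = true` unconditionally forces the optimizer to
      // re-run the whole L2 transformer set for nothing.
      modified = true;
    }
    return Status::OK();
  }
};

}  // namespace onnxruntime
```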
skottmckay merged commit b0e1f7f into main on Jul 25, 2024
99 checks passed
skottmckay deleted the skottmckay/CoreML_AggregatedPR branch on July 25, 2024 at 22:29