
[ET][Memory planning] Improve greedy memory planning. #7926

Open · wants to merge 2 commits into base: gh/kimishpatel/151/base
Conversation

@kimishpatel (Contributor) commented Jan 24, 2025

Stack from ghstack (oldest at bottom):

This diff replaces the old greedy algorithm, which produced plans about 35%
worse than the theoretical optimum. This matters even more for long context,
where the extra overhead can amount to a few hundred MB.
For example, the theoretical optimum for llama3_2 8B, a 4-bit quantized model
with a context length of 2k, is about 1 GB of memory. This theoretical max can
be observed by looking at the peaks in the memory profile.

The current algorithm produces about 1.6 GB of planned memory; the new
algorithm reduces that to about 1.1 GB.

Differential Revision: D68448332
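
For reference, below is a minimal sketch of the greedy-by-size planning technique and of how the theoretical lower bound falls out of tensor lifetimes. This is an illustration under assumptions, not the actual ExecuTorch pass; `Spec`, `plan_greedy`, and `theoretical_peak` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """One tensor's size and live range (node indices, inclusive). Hypothetical type."""
    name: str
    size: int   # bytes
    start: int  # first node where the tensor is live
    end: int    # last node where the tensor is live

def overlaps(a: Spec, b: Spec) -> bool:
    # Two tensors conflict only if they are live at the same time.
    return a.start <= b.end and b.start <= a.end

def plan_greedy(specs: list[Spec]) -> dict[str, int]:
    """Greedy-by-size: place larger tensors first, each at the lowest offset
    that does not collide with an already-placed, lifetime-overlapping tensor
    (first fit over the conflicting ranges)."""
    placed: list[tuple[Spec, int]] = []
    offsets: dict[str, int] = {}
    for spec in sorted(specs, key=lambda s: s.size, reverse=True):
        conflicts = sorted(
            (off, off + other.size)
            for other, off in placed
            if overlaps(other, spec)
        )
        candidate = 0
        for lo, hi in conflicts:
            if candidate + spec.size <= lo:
                break                      # fits in the gap before this range
            candidate = max(candidate, hi)
        offsets[spec.name] = candidate
        placed.append((spec, candidate))
    return offsets

def theoretical_peak(specs: list[Spec]) -> int:
    """Lower bound on arena size: the largest total size of tensors that are
    ever live simultaneously (the peak seen in a memory profile)."""
    points = {t for s in specs for t in (s.start, s.end)}
    return max(
        sum(s.size for s in specs if s.start <= t <= s.end)
        for t in points
    )

# Hypothetical example: three buffers with partly disjoint lifetimes.
specs = [Spec("a", 512, 0, 3), Spec("b", 256, 2, 5), Spec("c", 512, 4, 6)]
offsets = plan_greedy(specs)              # {'a': 0, 'c': 0, 'b': 512}
arena = max(offsets[s.name] + s.size for s in specs)  # 768 bytes
assert arena == theoretical_peak(specs)   # greedy hits the bound here
```

In this toy example, placing the largest tensors first lets tensors with disjoint lifetimes share offsets, which is what closes the gap between the planned arena size and the peak observed in the memory profile.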

pytorch-bot (bot) commented Jan 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7926

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Unrelated Failure

As of commit 816efe9 with merge base f73b8cf:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jan 24, 2025
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D68448332


This PR needs a `release notes:` label

If your changes are user facing and intended to be part of the release notes, please use a label starting with `release notes:`.

If not, please add the `topic: not user facing` label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Labels: CLA Signed, fb-exported
Projects: None yet
2 participants