Disable llama_v2_7b_16h on optim benchmarks as they currently OOM (#1792)

Summary:
Fixes #1791

Pull Request resolved: #1792

Test Plan: a run of the optim benchmarks: https://github.com/pytorch/benchmark/actions/runs/5693564120

Reviewed By: xuzhao9

Differential Revision: D47873488

Pulled By: janeyx99

fbshipit-source-id: 322adb790aa739d5aff915de655641f17c6834e1
janeyx99 authored and facebook-github-bot committed Jul 28, 2023
1 parent 648a32e commit 561abe3
Showing 1 changed file with 4 additions and 0 deletions.
userbenchmark/optim/run.py (4 additions, 0 deletions)
@@ -249,6 +249,10 @@ def get_unstable_models() -> Set[str]:
     # Skip models deemed unstable by torch-nightly
     {'model': m} for m in unstable_models
 ] + [
+    # 16h currently OOMs, but once it supports train, we should remove this line
+    # See tracker https://github.com/pytorch/benchmark/issues/1793
+    {'model': 'llama_v2_7b_16h'}
+] +[
     # SparseAdam does not support dense gradients
     {'optim': 'SparseAdam', 'model': m} for m in DENSE_MODELS
 ] + [
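For context, the exclusion entries above are plain dicts whose keys describe a benchmark configuration to skip. Below is a minimal, hypothetical sketch of how such a list can be matched against a (model, optimizer) pair; the names EXCLUSIONS and should_skip are illustrative only and are not the benchmark's actual API.

from typing import Any, Dict, List

# Hypothetical exclusion list in the same shape as the entries added above:
# every key in a dict must match a benchmark configuration for it to be skipped.
EXCLUSIONS: List[Dict[str, Any]] = [
    {'model': 'llama_v2_7b_16h'},                  # skip this model for every optimizer
    {'optim': 'SparseAdam', 'model': 'resnet50'},  # skip one (optimizer, model) pair
]

def should_skip(config: Dict[str, Any]) -> bool:
    # A config is skipped if every key/value of some exclusion dict matches it.
    return any(
        all(config.get(key) == value for key, value in exclusion.items())
        for exclusion in EXCLUSIONS
    )

print(should_skip({'model': 'llama_v2_7b_16h', 'optim': 'Adam'}))  # True
print(should_skip({'model': 'resnet50', 'optim': 'Adam'}))         # False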
