Add user release benchmark so that we can run it on pull request #2489

Closed
68 changes: 68 additions & 0 deletions .github/workflows/userbenchmark-a100-release.yml
@@ -0,0 +1,68 @@
name: Release TorchBench Userbenchmark on A100
on:
  pull_request:
    paths:
      - userbenchmark/release-test/*

jobs:
  run-userbenchmark:
    runs-on: [a100-runner]
    timeout-minutes: 1440 # 24 hours
    environment: docker-s3-upload
    env:
      BASE_CONDA_ENV: "torchbench"
      CONDA_ENV: "userbenchmark-a100"
      PLATFORM_NAME: "gcp_a100"
      SETUP_SCRIPT: "/workspace/setup_instance.sh"
    steps:
      - name: Checkout TorchBench
        uses: actions/checkout@v3
        with:
          path: benchmark
      - name: Tune Nvidia GPU
        run: |
          sudo nvidia-smi -pm 1
          sudo nvidia-smi -ac 1215,1410
          nvidia-smi
      - name: Clone and setup conda env
        run: |
          CONDA_ENV=${BASE_CONDA_ENV} . "${SETUP_SCRIPT}"
          conda create --name "${CONDA_ENV}" --clone "${BASE_CONDA_ENV}"
      - name: Install TorchBench
        run: |
          set -x
          . "${SETUP_SCRIPT}"
          pushd benchmark
          python install.py
      - name: Run user benchmark
        run: |
          set -x
          . "${SETUP_SCRIPT}"
          # remove old results
          if [ -d benchmark-output ]; then rm -Rf benchmark-output; fi
          pushd benchmark
          if [ -d .userbenchmark ]; then rm -Rf .userbenchmark; fi
          MANUAL_WORKFLOW="release-test"
if [ -z "${MANUAL_WORKFLOW}" ]; then
# Figure out what userbenchmarks we should run, and run it
python ./.github/scripts/userbenchmark/schedule-benchmarks.py --platform ${PLATFORM_NAME}
if [ -d ./.userbenchmark ]; then
cp -r ./.userbenchmark ../benchmark-output
else
mkdir ../benchmark-output
fi
else
python run_benchmark.py release-test
cp -r ./.userbenchmark/release-test ../benchmark-output
fi
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: TorchBench result
path: benchmark-output/
- name: Clean up Conda env
if: always()
run: |
. "${SETUP_SCRIPT}"
conda deactivate && conda deactivate
conda remove -n "${CONDA_ENV}" --all
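
For reference, the release-test userbenchmark that this workflow invokes can also be run by hand. A minimal sketch, assuming a local checkout of the benchmark repository whose dependencies have already been installed with python install.py; the command and output location simply mirror the "Run user benchmark" step above:

# From the root of the benchmark checkout (after python install.py has completed).
python run_benchmark.py release-test
# Results are written under .userbenchmark/release-test/, which the workflow
# copies to benchmark-output/ and uploads as the "TorchBench result" artifact.
ls .userbenchmark/release-test/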