
fea(ci): reduced slow test_diffusers timing. minor fixes #1330

Open · wants to merge 2 commits into main
Conversation

imangohari1
Contributor

What does this PR do?

This PR addresses the following issues:

  • Reduces the wall-clock time of the slow diffusers tests so that CI completes faster.
  • Fixes CI errors such as AssertionError: 29.7294 not greater than or equal to 29.8925 in test_no_generation_regression and test_no_generation_regression_ldm3d.
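For context, the regression failure quoted above comes from a check of roughly the following shape. This is an illustrative sketch only; the baseline constant and function name are hypothetical, not the actual test code in this repository:

```python
# Hypothetical sketch of a generation-regression check like the one that
# produced "AssertionError: 29.7294 not greater than or equal to 29.8925".
# THROUGHPUT_BASELINE and check_no_generation_regression are assumed names.

THROUGHPUT_BASELINE = 29.8925  # hypothetical baseline for this device


def check_no_generation_regression(measured: float, baseline: float = THROUGHPUT_BASELINE) -> None:
    # Fails whenever the measured value dips slightly below the recorded
    # baseline, which is exactly the CI flakiness this PR addresses by
    # adjusting the bound.
    assert measured >= baseline, f"{measured} not greater than or equal to {baseline}"


check_no_generation_regression(30.1)  # a measurement above the baseline passes
```

A baseline pinned this tightly fails on ordinary run-to-run noise, which is why loosening it removes the spurious CI errors without hiding a real regression.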

Tests

TL;DR: these changes were tested on both 8x G1 and G2. All four changed tests pass after the changes.

G2

GAUDI2_CI=1 RUN_SLOW=true python -m pytest tests/test_diffusers.py -s -v -k test_textual_inversion 2>&1 | tee -a $logfile
Items="test_depth2img_pipeline_latency_bf16 test_no_generation_regression test_no_generation_regression_ldm3d"
for i in $Items; do
 GAUDI2_CI=1 RUN_SLOW=true python -m pytest tests/test_diffusers.py -s -v -k $i 2>&1 | tee -a $logfile
done 
GAUDI2_CI=1 RUN_SLOW=true python -m pytest tests/test_diffusers.py -s -v -k test_train_controlnet 2>&1 | tee -a $logfile
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /scratch-1/sgohari/codes/ig/optimum-habana
configfile: setup.cfg
collecting ... collected 155 items / 153 deselected / 2 selected

tests/test_diffusers.py::TrainControlNet::test_train_controlnet Running with the following model specific env vars:
MASTER_ADDR=localhost
MASTER_PORT=29500
DistributedRunner run(): command = mpirun -n 8 --bind-to core --map-by socket:PE=9 --rank-by core --report-bindings --allow-run-as-root /usr/bin/python3 /scratch-1/sgohari/codes/ig/optimum-habana/examples/stable-diffusion/training/train_controlnet.py --pretrained_model_name_or_path CompVis/stable-diffusion-v1-4 --dataset_name fusing/fill50k --resolution 256 --train_batch_size 4 --learning_rate 1e-05 --validation_steps 1000 --validation_image "/tmp/tmpxa5wxh6_/conditioning_image_1.png" "/tmp/tmpxa5wxh6_/conditioning_image_2.png" --validation_prompt "red circle with blue background" "cyan circle with brown floral background" --checkpointing_steps 1000 --throughput_warmup_steps 3 --use_hpu_graphs --bf16 --max_train_steps 10 --output_dir /tmp/tmpxa5wxh6_ --trust_remote_code
[walmart-1:00467] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]], socket 0[core 2[hwt 0-1]], socket 0[core 3[hwt 0-1]], socket 0[core 4[hwt 0-1]], socket 0[core 5[hwt 0-1]], socket 0[core 6[hwt 0-1]], socket 0[core 7[hwt 0-1]], socket 0[core 8[hwt 0-1]]: [BB/BB/BB/BB/BB/BB/BB/BB/BB/../../../../../../../../../../../../../../../../../../../../../../../../../../../../..][../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../..]
[walmart-1:00467] MCW rank 1 bound to socket 0[core 9[hwt 0-1]], socket 0[core 10[hwt 0-1]], socket 0[core 11[hwt 0-1]], socket 0[core 12[hwt 0-1]], socket 0[core 13[hwt 0-1]], socket 0[core 14[hwt 0-1]], socket 0[core 15[hwt 0-1]], socket 0[core 16[hwt 0-1]], socket 0[core 17[hwt 0-1]]: [../../../../../../../../../BB/BB/BB/BB/BB/BB/BB/BB/BB/../../../../../../../../../../../../../../../../../../../..][../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../..]
[walmart-1:00467] MCW rank 2 bound to socket 0[core 18[hwt 0-1]], socket 0[core 19[hwt 0-1]], socket 0[core 20[hwt 0-1]], socket 0[core 21[hwt 0-1]], socket 0[core 22[hwt 0-1]], socket 0[core 23[hwt 0-1]], socket 0[core 24[hwt 0-1]], socket 0[core 25[hwt 0-1]], socket 0[core 26[hwt 0-1]]: [../../../../../../../../../../../../../../../../../../BB/BB/BB/BB/BB/BB/BB/BB/BB/../../../../../../../../../../..][../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../..]
.
.
------------------------------------------------------------------------------
100%|██████████| 1/1 [00:25<00:00, 25.51s/it]
[INFO|pipeline_controlnet.py:640] 2024-09-13 00:55:20,564 >> Speed metrics: {'generation_runtime': 25.512, 'generation_samples_per_second': 1.064, 'generation_steps_per_second': 21.282}
PASSED
tests/test_diffusers.py::TrainControlNet::test_train_controlnet_script /scratch-1/sgohari/codes/ig/optimum-habana/examples/stable-diffusion/training/train_controlnet.py
PASSED

========== 2 passed, 153 deselected, 5 warnings in 269.51s (0:04:29) ===========
============================ test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /scratch-1/sgohari/codes/ig/optimum-habana
configfile: setup.cfg
collecting ... collected 155 items / 154 deselected / 1 selected

Fetching 6 files: 100%|██████████| 6/6 [00:00<00:00, 18.35it/s]
…test_textual_inversion
Running with the following model specific env vars:
MASTER_ADDR=localhost
MASTER_PORT=29500
DistributedRunner run(): command = mpirun -n 8 --bind-to core --map-by socket:PE=9 --rank-by core --report-bindings --allow-run-as-root /usr/bin/python3 /scratch-1/sgohari/codes/ig/optimum-habana/examples/stable-diffusion/training/textual_inversion.py --pretrained_model_name_or_path CompVis/stable-diffusion-v1-4 --train_data_dir /tmp/tmpi0xomu19 --learnable_property "object" --placeholder_token "<cat-toy>" --initializer_token "toy" --resolution 256 --train_batch_size 4 --max_train_steps 10 --learning_rate 5.0e-04 --scale_lr --lr_scheduler "constant" --lr_warmup_steps 0 --output_dir /tmp/tmpedjaa86f --save_as_full_pipeline --gaudi_config_name Habana/stable-diffusion --throughput_warmup_steps 3 --seed 27
[walmart-1:13569] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]], socket 0[core 2[hwt 0-1]], socket 0[core 3[hwt 0-1]], socket 0[core 4[hwt 0-1]], socket 0[core 5[hwt 0-1]], socket 0[core 6[hwt 0-1]], socket 0[core 7[hwt 0-1]], socket 0[core 8[hwt 0-1]]: [BB/BB/BB/BB/BB/BB/BB/BB/BB/../../../../../../../../../../../../../../../../../../../../../../../../../../../../..][../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../../..]
.
.
[INFO|pipeline_stable_diffusion.py:606] 2024-09-13 00:57:45,278 >> Speed metrics: {'generation_runtime': 18.2185, 'generation_samples_per_second': 0.773, 'generation_steps_per_second': 38.63}
PASSED

========== 1 passed, 154 deselected, 5 warnings in 141.82s (0:02:21) ===========
============================= HABANA PT BRIDGE CONFIGURATION ===========================
 PT_HPU_LAZY_MODE = 1
 PT_RECIPE_CACHE_PATH =
 PT_CACHE_FOLDER_DELETE = 0
 PT_HPU_RECIPE_CACHE_CONFIG =
 PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
 PT_HPU_LAZY_ACC_PAR_MODE = 1
 PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
---------------------------: System Configuration :---------------------------
Num CPU Cores : 152
CPU RAM       : 1056439544 KB
------------------------------------------------------------------------------
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.GaudiStableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion.py:411] 2024-09-13 01:00:05,291 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion.py:416] 2024-09-13 01:00:05,291 >> The first two iterations are slower so it is recommended to feed more batches.
100%|██████████| 1/1 [00:28<00:00, 28.88s/it]
[INFO|pipeline_stable_diffusion.py:606] 2024-09-13 01:00:34,191 >> Speed metrics: {'generation_runtime': 28.8818, 'generation_samples_per_second': 0.355, 'generation_steps_per_second': 17.772}
PASSED
Fetching 12 files: 100%|██████████| 12/12 [00:28<00:00,  2.39s/it]
…test_no_generation_regression_ldm3d
Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 10.60it/s]
[INFO|pipeline_utils.py:130] 2024-09-13 01:01:20,612 >> Enabled HPU graphs.
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.GaudiStableDiffusionLDM3DPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion_ldm3d.py:268] 2024-09-13 01:01:21,625 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_ldm3d.py:273] 2024-09-13 01:01:21,626 >> The first two iterations are slower so it is recommended to feed more batches.
100%|██████████| 1/1 [00:06<00:00,  6.94s/it]
[INFO|pipeline_stable_diffusion_ldm3d.py:426] 2024-09-13 01:01:28,581 >> Speed metrics: {'generation_runtime': 6.9405, 'generation_samples_per_second': 0.358, 'generation_steps_per_second': 17.882}
PASSED
Fetching 12 files: 100%|██████████| 12/12 [00:04<00:00,  2.99it/s]
…test_no_generation_regression_upscale
Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 10.08it/s]
[INFO|pipeline_utils.py:130] 2024-09-13 01:01:49,862 >> Enabled HPU graphs.
[INFO|pipeline_stable_diffusion_upscale.py:350] 2024-09-13 01:01:50,965 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_upscale.py:355] 2024-09-13 01:01:50,965 >> The first two iterations are slower so it is recommended to feed more batches.
100%|██████████| 1/1 [00:36<00:00, 36.28s/it]
[INFO|pipeline_stable_diffusion_upscale.py:547] 2024-09-13 01:02:27,280 >> Speed metrics: {'generation_runtime': 36.2849, 'generation_samples_per_second': 0.34, 'generation_steps_per_second': 25.493}
PASSED

========== 3 passed, 152 deselected, 3 warnings in 167.65s (0:02:47) ===========
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /scratch-1/sgohari/codes/ig/optimum-habana
configfile: setup.cfg
collecting ... collected 155 items / 154 deselected / 1 selected

Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 11.57it/s]
…test_no_generation_regression_ldm3d
[INFO|pipeline_utils.py:130] 2024-09-13 01:02:52,158 >> Enabled HPU graphs.
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.GaudiStableDiffusionLDM3DPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion_ldm3d.py:268] 2024-09-13 01:02:55,762 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_ldm3d.py:273] 2024-09-13 01:02:55,762 >> The first two iterations are slower so it is recommended to feed more batches.
100%|██████████| 1/1 [00:29<00:00, 29.32s/it]
[INFO|pipeline_stable_diffusion_ldm3d.py:426] 2024-09-13 01:03:25,104 >> Speed metrics: {'generation_runtime': 29.3228, 'generation_samples_per_second': 0.356, 'generation_steps_per_second': 17.799}
PASSED

================ 1 passed, 154 deselected, 3 warnings in 55.69s ================

G1

same cmd as G2 with GAUDI2_CI=0.
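For reference, the G1 invocation can be sketched as follows. This sketch only prints the command lines rather than executing them, since the real run requires Gaudi1 hardware; the `$logfile` default is an assumption:

```shell
# Sketch: echo the G1 command lines (GAUDI2_CI=0) instead of running pytest,
# since the actual run needs Gaudi1 hardware. $logfile default is assumed.
logfile="${logfile:-$(mktemp)}"
Items="test_depth2img_pipeline_latency_bf16 test_no_generation_regression test_no_generation_regression_ldm3d"
for i in $Items; do
  echo "GAUDI2_CI=0 RUN_SLOW=true python -m pytest tests/test_diffusers.py -s -v -k $i"
done | tee -a "$logfile"
```

Dropping the `echo` turns this back into the real loop used for the G2 runs above.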

Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  5.89it/s]
[INFO|pipeline_utils.py:130] 2024-09-12 22:51:48,799 >> Enabled HPU graphs.
============================= HABANA PT BRIDGE CONFIGURATION ===========================
 PT_HPU_LAZY_MODE = 1
 PT_RECIPE_CACHE_PATH =
 PT_CACHE_FOLDER_DELETE = 0
 PT_HPU_RECIPE_CACHE_CONFIG =
 PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
 PT_HPU_LAZY_ACC_PAR_MODE = 1
 PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
---------------------------: System Configuration :---------------------------
Num CPU Cores : 144
CPU RAM       : 1056455388 KB
------------------------------------------------------------------------------
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.89s/it]
[INFO|pipeline_controlnet.py:640] 2024-09-12 22:52:14,729 >> Speed metrics: {'generation_runtime': 23.8933, 'generation_samples_per_second': 0.42, 'generation_steps_per_second': 8.395}
PASSED
tests/test_diffusers.py::TrainControlNet::test_train_controlnet_script /tmp/optimum-habana/examples/stable-diffusion/training/train_controlnet.py
PASSED

====================================================================================== 2 passed, 153 deselected, 5 warnings in 234.65s (0:03:54) ======================================================================================
=========================================================================================== test session starts ============================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /tmp/optimum-habana
configfile: setup.cfg
collected 155 items / 154 deselected / 1 selected
.
.

100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:22<00:00,  2.22it/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.10k/4.10k [00:00<00:00, 47.4MB/s]
pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 599M/599M [00:19<00:00, 31.2MB/s]
preprocessor_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 316/316 [00:00<00:00, 5.00MB/s]
tokenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 16.4MB/s]
vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 961k/961k [00:00<00:00, 14.5MB/s]
merges.txt: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 28.6MB/s]
tokenizer.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.22M/2.22M [00:00<00:00, 4.22MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 7.19MB/s]
PASSED

======================================================================== 1 passed, 154 deselected, 6 warnings in 176.39s (0:02:56) =========================================================================
=========================================================================================== test session starts ============================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /tmp/optimum-habana
configfile: setup.cfg
collected 155 items / 152 deselected / 3 selected

Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 12.88it/s]
[INFO|pipeline_utils.py:130] 2024-09-12 22:37:51,280 >> Enabled HPU graphs.
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.GaudiStableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion.py:411] 2024-09-12 22:37:53,417 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion.py:416] 2024-09-12 22:37:53,417 >> The first two iterations are slower so it is recommended to feed more batches.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:35<00:00, 35.59s/it]
[INFO|pipeline_stable_diffusion.py:606] 2024-09-12 22:38:29,030 >> Speed metrics: {'generation_runtime': 35.5947, 'generation_samples_per_second': 0.096, 'generation_steps_per_second': 4.791}
PASSED
scheduler/scheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 313/313 [00:00<00:00, 4.47MB/s]
.
.
.
[INFO|pipeline_utils.py:130] 2024-09-12 22:43:26,745 >> Enabled HPU graphs.
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.GaudiStableDiffusionLDM3DPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion_ldm3d.py:268] 2024-09-12 22:43:27,786 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_ldm3d.py:273] 2024-09-12 22:43:27,786 >> The first two iterations are slower so it is recommended to feed more batches.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:14<00:00, 14.58s/it]
[INFO|pipeline_stable_diffusion_ldm3d.py:426] 2024-09-12 22:43:42,385 >> Speed metrics: {'generation_runtime': 14.5831, 'generation_samples_per_second': 0.096, 'generation_steps_per_second': 4.797}
PASSED
scheduler/scheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348/348 .
.
.
[INFO|pipeline_utils.py:130] 2024-09-12 22:44:54,515 >> Enabled HPU graphs.
[INFO|pipeline_stable_diffusion_upscale.py:350] 2024-09-12 22:44:56,011 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_upscale.py:355] 2024-09-12 22:44:56,011 >> The first two iterations are slower so it is recommended to feed more batches.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:45<00:00, 45.13s/it]
[INFO|pipeline_stable_diffusion_upscale.py:547] 2024-09-12 22:45:41,176 >> Speed metrics: {'generation_runtime': 45.1348, 'generation_samples_per_second': 0.068, 'generation_steps_per_second': 5.068}
PASSED


======================================================================== 3 passed, 152 deselected, 3 warnings in 490.96s (0:08:10) =========================================================================
=========================================================================================== test session starts ============================================================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /tmp/optimum-habana
configfile: setup.cfg
collected 155 items / 154 deselected / 1 selected

Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 10.32it/s]
[INFO|pipeline_utils.py:130] 2024-09-12 22:46:04,688 >> Enabled HPU graphs.
You have disabled the safety checker for <class 'optimum.habana.diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.GaudiStableDiffusionLDM3DPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
[INFO|pipeline_stable_diffusion_ldm3d.py:268] 2024-09-12 22:46:07,006 >> 1 prompt(s) received, 1 generation(s) per prompt, 1 sample(s) per batch, 1 total batch(es).
[WARNING|pipeline_stable_diffusion_ldm3d.py:273] 2024-09-12 22:46:07,006 >> The first two iterations are slower so it is recommended to feed more batches.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:36<00:00, 36.51s/it]
[INFO|pipeline_stable_diffusion_ldm3d.py:426] 2024-09-12 22:46:43,534 >> Speed metrics: {'generation_runtime': 36.5098, 'generation_samples_per_second': 0.096, 'generation_steps_per_second': 4.79}
PASSED

============================================================================== 1 passed, 154 deselected, 3 warnings in 57.01s ==============================================================================

Results

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Contributor

@yafshar left a comment


LGTM!

@regisss, would you please check this PR?

@imangohari1
Contributor Author

CC: @jiminha
Here are the images generated by the CI tests:
textual_inversion (here)

pr1330_textual_inversion_pic

train_controlnet (here)
pr1330_train_controlnet_pic

@libinta
Collaborator

libinta commented Sep 18, 2024

@imangohari1 we normally have the accuracy test at the end; with the reduced time, does the accuracy test still pass?

@imangohari1
Contributor Author

imangohari1 commented Sep 18, 2024

@imangohari1 we normally have the accuracy test at the end; with the reduced time, does the accuracy test still pass?

@libinta
The Stable Diffusion tests/pipelines do not output an accuracy metric at the end of the run.
That is why I checked the generated images here for output quality: #1330 (comment)

@libinta libinta added synapse1.18 run-test Run CI for PRs from external contributors and removed synapse1.18 labels Sep 24, 2024
@imangohari1
Contributor Author

@regisss
ref: #1175 (comment)
I updated the latency value for test_depth2img_pipeline_latency_bf16 since it has been failing ever since it was first added.
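One way to make such a latency bound less brittle is to derive it from a reference measurement with an explicit noise margin rather than hard-coding a single number. The sketch below illustrates the idea only; the names and values are assumptions, not the actual change in this PR:

```python
# Hypothetical helper: compute an upper bound for a latency check from a
# reference measurement plus a tolerance margin. Names/values are assumed.

def latency_upper_bound(reference_latency: float, margin: float = 0.05) -> float:
    # Allow the measured latency to exceed the reference by `margin`
    # (5% here) before failing, absorbing run-to-run noise on CI machines.
    return reference_latency * (1 + margin)


# A run measured at 0.52 s against a 0.50 s reference stays within the bound.
assert 0.52 <= latency_upper_bound(0.50)
```

With a margin baked in, a one-off slow iteration on a shared CI machine no longer fails the test, while a sustained slowdown larger than the margin still does.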

Labels
run-test Run CI for PRs from external contributors