Upgrade to Transformers 4.43 #1163

Merged: 34 commits into main from transformers_4.43 on Aug 7, 2024
Commits (34)
1390282
Update examples
regisss Jul 24, 2024
60d2a5a
Add changes for generate
regisss Jul 25, 2024
e06c663
Model changes
regisss Jul 26, 2024
ace3e10
Make style
regisss Jul 26, 2024
3b15d6d
Upgrade Accelerate
regisss Jul 26, 2024
f85ce43
Fix setup.py
regisss Jul 26, 2024
49d8774
Fixes
regisss Jul 26, 2024
7cf0f0a
Merge branch 'main' into transformers_4.43
regisss Jul 30, 2024
3a0be9f
Model fixes
regisss Jul 30, 2024
4cf3dbb
Fix FSDP
regisss Jul 31, 2024
0cbb67c
Other model fixes
regisss Jul 31, 2024
c5418b3
Merge branch 'main' into transformers_4.43
regisss Aug 1, 2024
820eb7d
Model fixes
regisss Aug 1, 2024
826b666
Merge branch 'main' into transformers_4.43
regisss Aug 2, 2024
2e8a80d
Merge branch 'main' into transformers_4.43
regisss Aug 4, 2024
4261160
Add support for contrastive search (#943)
skavulya Aug 5, 2024
c2fe976
Add sample and beam-sample to bucketting asserts
regisss Aug 5, 2024
6efef09
Merge branch 'main' into transformers_4.43
regisss Aug 5, 2024
55f9b0c
Fix max_position_embedding init for tf4.43 upgrade (#1202)
jiminha Aug 6, 2024
30918d4
Merge branch 'main' into transformers_4.43
regisss Aug 6, 2024
52ba812
Merge branch 'main' into transformers_4.43
regisss Aug 6, 2024
a004253
Hsub 443 integ (#1204)
regisss Aug 6, 2024
3fce38c
Fix LlamaConfig attr errors during test (#1206)
shepark Aug 6, 2024
66e2063
Fix max_position_embedding error for Mixtral (#1205)
jiminha Aug 6, 2024
2c1a68d
Sarkar/transformers 4.43 fixes part2 (#1207)
ssarkar2 Aug 6, 2024
7c96602
Revert "Fix max_position_embedding init for tf4.43 upgrade (#1202)" (…
jiminha Aug 6, 2024
8ebd789
Fix diffusers. From PR 1208 (#1210)
ssarkar2 Aug 6, 2024
76cfa0b
fea(): fixed pytest errors for gptneox (#1212)
imangohari1 Aug 6, 2024
74611d9
Sarkar/transformers 4.43 fixes for gpt2 test_torch_fx (#1213)
ssarkar2 Aug 6, 2024
29c6f57
fea(937): fixes related to OSError (#1214)
imangohari1 Aug 7, 2024
374369e
Starcoder2 : KVCache and flash attention (FusedSDPA) enablement (#1149)
abhatkal Aug 6, 2024
d3e3b3c
Align StarCoder2 with Transformers 4.43
regisss Aug 7, 2024
ee46bed
Merge branch 'main' into transformers_4.43
regisss Aug 7, 2024
24ba6f0
Update GPT-NeoX
regisss Aug 7, 2024
4 changes: 2 additions & 2 deletions examples/audio-classification/run_audio_classification.py
@@ -46,8 +46,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")
@Sanatan-Shrivastava Aug 12, 2024

Hi, I am trying to run this test with versions (optimum-habana==1.12.1, transformers==4.43.0) but encountered the following error:

ERROR: Cannot install optimum-habana==1.12.1 and transformers==4.43.0 because these package versions have conflicting dependencies.
The conflict is caused by:
    The user requested transformers==4.43.0
    optimum-habana 1.12.1 depends on transformers<4.41.0 and >=4.40.0

I also tried with optimum-habana version 1.12.0 and encountered the same error.

Can someone please point me to the PR that fixes this, or to any documentation on how to resolve it?
Thanks!


You should pip install optimum-habana from Git, because this branch is ahead of 1.12.1, has different dependencies, and should report 1.13.0.dev as its version number.

regisss (Collaborator Author)

Yeah, it should be check_optimum_habana_min_version("1.13.0.dev0"). I'll add a script to do that automatically.
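As a rough illustration (not the actual script from this PR; the paths and pattern below are assumptions), such an update could be a small search-and-replace over the example scripts:

```python
# Hypothetical sketch: bump the check_optimum_habana_min_version(...) argument
# in every example script. Assumes the scripts live under examples/ and call
# the helper with a literal version string, as in the diff above.
import re
from pathlib import Path

NEW_VERSION = "1.13.0.dev0"  # assumed dev version of the main branch
PATTERN = re.compile(r'check_optimum_habana_min_version\("[^"]*"\)')

for path in Path("examples").rglob("*.py"):
    text = path.read_text()
    updated = PATTERN.sub(f'check_optimum_habana_min_version("{NEW_VERSION}")', text)
    if updated != text:
        path.write_text(updated)
        print(f"Updated {path}")
```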

> You should pip install optimum-habana from Git, because this branch is ahead of 1.12.1, has different dependencies, and should report 1.13.0.dev as its version number.

Ok great, thanks for the info. Currently, I am using pip install git+https://github.com/huggingface/optimum-habana.git@{{ optimum_habana_version }}. Would using pip install git+https://github.com/huggingface/optimum-habana.git to get the latest version compatible with transformers==4.43.0 also work for future Transformers versions?

regisss (Collaborator Author)

It may not work with future Transformers versions, as there might be changes that are not compatible with what we do here in Optimum Habana.
In the coming weeks, I will open a new branch and try to maintain it so that new Transformers releases are supported, but with potential performance regressions (which will be resolved once the changes land on the main branch).

@Sanatan-Shrivastava Aug 13, 2024

Thanks @regisss, that would be helpful.
Until then, I will use transformers==4.43.0 with pip install git+https://github.com/huggingface/optimum-habana.git; it worked for me yesterday without pinning an optimum-habana version.
Can you confirm that this will keep working for 4.43.0?
For future Transformers versions, I'll keep an eye out for the new branch you mentioned.

regisss (Collaborator Author)

Yes, the main branch should work with Transformers 4.43.x until the next time we align the library with a new Transformers version.
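(Aside, not part of this PR: a minimal sketch for checking which build is installed locally, assuming standard package metadata and the third-party packaging library.)

```python
# Minimal sketch: warn if the installed optimum-habana is an older release
# (1.12.x pins transformers<4.41) instead of a Git build of this branch.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("optimum-habana"))
print(f"optimum-habana {installed}")
if installed < Version("1.13.0.dev0"):
    print("Install from Git: pip install git+https://github.com/huggingface/optimum-habana.git")
```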


require_version("datasets>=1.14.0", "To fix: pip install -r examples/pytorch/audio-classification/requirements.txt")

11 changes: 4 additions & 7 deletions examples/contrastive-image-text/clip_media_pipe.py
@@ -31,7 +31,6 @@
media_ext_reader_op_impl,
media_ext_reader_op_tensor_info,
)
from habana_frameworks.torch.hpu import get_device_name
except ImportError:
pass

@@ -47,7 +46,7 @@ class read_image_text_from_dataset(media_ext_reader_op_impl):

"""

def __init__(self, params):
def __init__(self, params, fw_params):
self.batch_size = 1
params = params["priv_params"]
self.meta_dtype = params["label_dtype"]
@@ -64,9 +63,7 @@ def __init__(self, params):
else:
self.max_file = get_max_file([img["path"] for img in self.dataset["image"]])
logger.info(f"The largest file is {self.max_file}.")

def set_params(self, params):
self.batch_size = params.batch_size
self.batch_size = fw_params.batch_size

def gen_output_info(self):
out_info = []
@@ -134,7 +131,7 @@ class ClipMediaPipe(MediaPipe):
instance_count = 0

def __init__(self, dataset=None, sampler=None, batch_size=512, drop_last=False, queue_depth=1):
self.device = get_device_name()
self.device = "legacy"
self.dataset = dataset
self.drop_last = drop_last
self.sampler = sampler
@@ -157,7 +154,7 @@ def __init__(self, dataset=None, sampler=None, batch_size=512, drop_last=False,
def_output_image_size = [self.image_size, self.image_size]
res_pp_filter = ftype.BICUBIC
self.decode = fn.ImageDecoder(
device=self.device,
device="hpu",
output_format=imgtype.RGB_P,
random_crop_type=randomCropType.CENTER_CROP,
resize=def_output_image_size,
4 changes: 2 additions & 2 deletions examples/contrastive-image-text/run_bridgetower.py
@@ -56,8 +56,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/contrastive-image-text/requirements.txt")

4 changes: 2 additions & 2 deletions examples/contrastive-image-text/run_clip.py
@@ -61,8 +61,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/contrastive-image-text/requirements.txt")

6 changes: 3 additions & 3 deletions examples/image-classification/README.md
@@ -45,7 +45,7 @@ python run_image_classification.py \
--num_train_epochs 5 \
--per_device_train_batch_size 128 \
--per_device_eval_batch_size 64 \
--evaluation_strategy epoch \
--eval_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
@@ -197,7 +197,7 @@ python ../gaudi_spawn.py \
--num_train_epochs 5 \
--per_device_train_batch_size 128 \
--per_device_eval_batch_size 64 \
--evaluation_strategy epoch \
--eval_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
@@ -237,7 +237,7 @@ python ../gaudi_spawn.py \
--num_train_epochs 5 \
--per_device_train_batch_size 128 \
--per_device_eval_batch_size 64 \
--evaluation_strategy epoch \
--eval_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
4 changes: 2 additions & 2 deletions examples/image-classification/run_image_classification.py
@@ -63,8 +63,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/image-classification/requirements.txt")

29 changes: 25 additions & 4 deletions examples/image-to-text/README.md
@@ -136,9 +136,9 @@ QUANT_CONFIG=./quantization_config/maxabs_quant.json python run_pipeline.py \

### Inference with FusedSDPA

Habana FusedSDPA is a fused and optimized implementation of torch.nn.functional.scaled_dot_product_attention() for Gaudi. For more details, refer to [Gaudi online documentation](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Optimization_in_PyTorch_Models.html?highlight=fusedsdpa#using-fused-scaled-dot-product-attention-fusedsdpa). Currently FusedSDPA works with BF16 precision for Llava models.
Habana FusedSDPA is a fused and optimized implementation of torch.nn.functional.scaled_dot_product_attention() for Gaudi. For more details, refer to [Gaudi online documentation](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Optimization_in_PyTorch_Models.html?highlight=fusedsdpa#using-fused-scaled-dot-product-attention-fusedsdpa).

Use the following commands to run Llava-1.5-7b inference with FusedSDPA
Use the following command to run Llava-1.5-7b BF16 inference with FusedSDPA
```bash
python3 run_pipeline.py \
--model_name_or_path llava-hf/llava-1.5-7b-hf \
@@ -149,12 +149,33 @@ python3 run_pipeline.py \
```


Use the following commands to run Llava-v1.6-mistral-7b inference with FusedSDPA
Use the following command to run Llava-v1.6-mistral-7b BF16 inference with FusedSDPA
```bash
python3 run_pipeline.py \
--model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
--image_path "https://llava-vl.github.io/static/images/view.jpg" \
--use_hpu_graphs \
--bf16 \
--use_flash_attention
```
```


Use the following commands to run Llava-v1.6-mistral-7b FP8 inference with FusedSDPA

Here is an example of measuring the tensor quantization statistics on Llava-v1.6-mistral-7b:
```bash
QUANT_CONFIG=./quantization_config/maxabs_measure.json python run_pipeline.py \
--model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
--image_path "https://llava-vl.github.io/static/images/view.jpg" \
--use_hpu_graphs \
--bf16 --use_flash_attention
```

Here is an example of quantizing the model based on previous measurements for Llava-v1.6-mistral-7b:
```bash
QUANT_CONFIG=./quantization_config/maxabs_quant.json python run_pipeline.py \
--model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
--image_path "https://llava-vl.github.io/static/images/view.jpg" \
--use_hpu_graphs \
--bf16 --use_flash_attention
```
18 changes: 9 additions & 9 deletions examples/language-modeling/README.md
@@ -377,7 +377,7 @@ python3 run_lora_clm.py \
--output_dir ./model_lora_llama \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 1e-4 \
--warmup_ratio 0.03 \
@@ -410,7 +410,7 @@ LOWER_LIST=ops_bf16.txt python3 run_lora_clm.py \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 3e-4 \
--max_grad_norm 0.3 \
@@ -445,7 +445,7 @@ python ../gaudi_spawn.py \
--num_train_epochs 3 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 3e-4 \
--warmup_ratio 0.03 \
@@ -480,7 +480,7 @@ LOWER_LIST=ops_bf16.txt python ../gaudi_spawn.py \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 3e-4 \
--warmup_ratio 0.03 \
@@ -518,7 +518,7 @@ python ../gaudi_spawn.py \
--num_train_epochs 5 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 1e-4 \
--logging_steps 1 \
@@ -547,7 +547,7 @@ LOWER_LIST=ops_bf16.txt python3 ../gaudi_spawn.py \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 4e-4 \
--max_grad_norm 0.3 \
@@ -589,7 +589,7 @@ python3 ../gaudi_spawn.py --use_deepspeed --world_size 8 run_lora_clm.py \
--per_device_train_batch_size 10 \
--per_device_eval_batch_size 1 \
--gradient_checkpointing \
--evaluation_strategy epoch \
--eval_strategy epoch \
--eval_delay 2 \
--save_strategy no \
--learning_rate 0.0018 \
@@ -641,7 +641,7 @@ python3 ../gaudi_spawn.py --world_size 8 --use_mpi run_lora_clm.py \
--fsdp_config fsdp_config.json \
--fsdp auto_wrap \
--num_train_epochs 2 \
--evaluation_strategy epoch \
--eval_strategy epoch \
--per_device_eval_batch_size 1 \
--eval_delay 2 \
--do_eval \
@@ -668,7 +668,7 @@ DEEPSPEED_HPU_ZERO3_SYNC_MARK_STEP_REQUIRED=1 LOWER_LIST=ops_bf16.txt python3 ..
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 4e-4 \
--max_grad_norm 0.3 \
2 changes: 1 addition & 1 deletion examples/language-modeling/requirements.txt
@@ -4,4 +4,4 @@ sentencepiece != 0.1.92
protobuf
evaluate
scikit-learn
peft == 0.10.0
peft == 0.12.0
4 changes: 2 additions & 2 deletions examples/language-modeling/run_clm.py
@@ -62,8 +62,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

4 changes: 2 additions & 2 deletions examples/language-modeling/run_mlm.py
@@ -61,8 +61,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

2 changes: 1 addition & 1 deletion examples/protein-folding/README.md
@@ -66,7 +66,7 @@ python ../gaudi_spawn.py --world_size 8 --use_mpi run_sequence_classification.py
--num_train_epochs 100 \
--lr_scheduler_type constant \
--do_eval \
--evaluation_strategy epoch \
--eval_strategy epoch \
--per_device_eval_batch_size 32 \
--logging_strategy epoch \
--save_strategy epoch \
2 changes: 1 addition & 1 deletion examples/protein-folding/run_esmfold.py
@@ -40,7 +40,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.11.0")
check_optimum_habana_min_version("1.12.0")


def convert_outputs_to_pdb(outputs):
2 changes: 1 addition & 1 deletion examples/protein-folding/run_sequence_classification.py
@@ -41,7 +41,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.11.0")
check_optimum_habana_min_version("1.12.0")

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
2 changes: 1 addition & 1 deletion examples/protein-folding/run_zero_shot_eval.py
@@ -36,7 +36,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.11.0")
check_optimum_habana_min_version("1.12.0")


logging.basicConfig(
4 changes: 2 additions & 2 deletions examples/question-answering/run_qa.py
@@ -60,8 +60,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt")

4 changes: 2 additions & 2 deletions examples/question-answering/run_seq2seq_qa.py
@@ -56,8 +56,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt")

2 changes: 1 addition & 1 deletion examples/speech-recognition/README.md
@@ -237,7 +237,7 @@ python run_speech_recognition_seq2seq.py \
--logging_steps="25" \
--learning_rate="1e-5" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--eval_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
4 changes: 2 additions & 2 deletions examples/speech-recognition/run_speech_recognition_ctc.py
@@ -59,8 +59,8 @@ def check_optimum_habana_min_version(*a, **b):
logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers and Optimum Habana are not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")

examples/speech-recognition/run_speech_recognition_seq2seq.py
@@ -55,8 +55,8 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.40.0")
check_optimum_habana_min_version("1.11.0")
check_min_version("4.43.0")
check_optimum_habana_min_version("1.12.0")

require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")

2 changes: 1 addition & 1 deletion examples/stable-diffusion/image_to_image_generation.py
@@ -40,7 +40,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.10.0")
check_optimum_habana_min_version("1.12.0")


logger = logging.getLogger(__name__)
2 changes: 1 addition & 1 deletion examples/stable-diffusion/image_to_video_generation.py
@@ -34,7 +34,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.8.1")
check_optimum_habana_min_version("1.12.0")


logger = logging.getLogger(__name__)
2 changes: 1 addition & 1 deletion examples/stable-diffusion/text_to_image_generation.py
@@ -39,7 +39,7 @@ def check_optimum_habana_min_version(*a, **b):


# Will error if the minimal version of Optimum Habana is not installed. Remove at your own risks.
check_optimum_habana_min_version("1.11.0")
check_optimum_habana_min_version("1.12.0")


logger = logging.getLogger(__name__)