Hi,
Whenever I run train.py with various parameters or paths, I get the error below. I am unable to understand the purpose of "train.txt". Please help.
Command line args:
{'--checkpoint': None,
'--checkpoint-dir': 'checkpoints',
'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--data-root': './prepro',
'--help': False,
'--hparams': '',
'--load-embedding': None,
'--log-event-path': None,
'--preset': 'presets/deepvoice3_ljspeech.json',
'--reset-optimizer': False,
'--restore-parts': None,
'--speaker-id': None,
'--train-postnet-only': False,
'--train-seq2seq-only': False}
Training whole model
Training seq2seq model
[!] Windows Detected - IF THAllocator.c 0x05 error occurs SET num_workers to 1
Hyperparameters:
adam_beta1: 0.5
adam_beta2: 0.9
adam_eps: 1e-06
allow_clipping_in_normalization: True
amsgrad: False
batch_size: 16
binary_divergence_weight: 0.1
builder: deepvoice3
checkpoint_interval: 10000
clip_thresh: 0.1
converter_channels: 256
decoder_channels: 256
downsample_step: 4
dropout: 0.050000000000000044
embedding_weight_std: 0.1
encoder_channels: 512
eval_interval: 10000
fft_size: 1024
fmax: 7600
fmin: 125
force_monotonic_attention: True
freeze_embedding: False
frontend: en
guided_attention_sigma: 0.2
hop_size: 256
ignore_recognition_level: 2
initial_learning_rate: 0.0005
kernel_size: 3
key_position_rate: 1.385
key_projection: True
lr_schedule: noam_learning_rate_decay
lr_schedule_kwargs: {}
masked_loss_weight: 0.5
max_positions: 512
min_level_db: -100
min_text: 20
n_speakers: 1
name: deepvoice3
nepochs: 2000
num_mels: 80
num_workers: 2
outputs_per_step: 1
padding_idx: 0
pin_memory: True
power: 1.4
preemphasis: 0.97
priority_freq: 3000
priority_freq_weight: 0.0
process_only_htk_aligned: False
query_position_rate: 1.0
ref_level_db: 20
replace_pronunciation_prob: 0.5
rescaling: False
rescaling_max: 0.999
sample_rate: 22050
save_optimizer_state: True
speaker_embed_dim: 16
speaker_embedding_weight_std: 0.01
text_embed_dim: 256
trainable_positional_encodings: False
use_decoder_state_for_postnet_input: True
use_guided_attention: True
use_memory_mask: True
value_projection: True
weight_decay: 0.0
window_ahead: 3
window_backward: 1
Traceback (most recent call last):
File "train.py", line 954, in
X = FileSourceDataset(TextDataSource(data_root, speaker_id))
File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets_init_.py", line 108, in init
collected_files = self.file_data_source.collect_files()
File "train.py", line 106, in collect_files
with open(meta, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: './prepro\train.txt'
(tf-gpu) C:\Windows\System32\deepvoice3_pytorch>python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=.\prepro
Command line args:
{'--checkpoint': None,
'--checkpoint-dir': 'checkpoints',
'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--data-root': '.\prepro',
'--help': False,
'--hparams': '',
'--load-embedding': None,
'--log-event-path': None,
'--preset': 'presets/deepvoice3_ljspeech.json',
'--reset-optimizer': False,
'--restore-parts': None,
'--speaker-id': None,
'--train-postnet-only': False,
'--train-seq2seq-only': False}
(same startup messages and hyperparameter dump as the first run, omitted)
Traceback (most recent call last):
File "train.py", line 954, in
X = FileSourceDataset(TextDataSource(data_root, speaker_id))
File "C:\ProgramData\Anaconda3\envs\tf-gpu\lib\site-packages\nnmnkwii\datasets_init_.py", line 108, in init
collected_files = self.file_data_source.collect_files()
File "train.py", line 106, in collect_files
with open(meta, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '.\prepro\train.txt'
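For reference, the failing code path in the traceback boils down to opening a metadata file called train.txt inside the directory given by --data-root. The snippet below is a minimal sketch of that step; only the open(meta, "rb") call comes from the traceback, while the data_root value and the pipe-separated line layout are assumptions for illustration, not the repository's actual code.

from os.path import join, exists

# Sketch of what TextDataSource.collect_files appears to do, based on the
# traceback: open <data_root>/train.txt, the metadata file written by the
# repository's preprocessing step.
data_root = "./prepro"  # same value passed via --data-root

meta = join(data_root, "train.txt")
if not exists(meta):
    raise SystemExit(
        "%s not found - the preprocessing step that writes train.txt and the "
        ".npy feature files into %s does not seem to have run yet"
        % (meta, data_root)
    )

with open(meta, "rb") as f:
    lines = f.readlines()

# Each line is expected to reference the preprocessed feature files and the
# transcript for one utterance (pipe-separated layout assumed here).
for line in lines[:3]:
    fields = line.decode("utf-8").strip().split("|")
    print(fields)

If the file is missing, the directory passed via --data-root most likely has not been populated by the preprocessing step that produces train.txt; pointing --data-root at the directory written by that step should make the FileNotFoundError go away.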