
FastSpeech-Pytorch (2019/10/23 update)

The Implementation of FastSpeech Based on Pytorch.

Update

  1. Fix bugs in alignment;
  2. Fix bugs in transformer;
  3. Fix bugs in LengthRegulator;
  4. Change the way to process audio;
  5. Use waveglow to synthesize.
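For context on item 3: FastSpeech's length regulator expands each phoneme's hidden states according to its predicted duration so the encoder output matches the mel-frame length. A minimal NumPy sketch of that idea (function and variable names are illustrative, not this repo's actual API):

```python
import numpy as np

def length_regulate(phoneme_hidden, durations):
    """Expand phoneme-level features to frame level.

    phoneme_hidden: (num_phonemes, hidden_dim) array
    durations: (num_phonemes,) integer array, frames per phoneme
    Returns a (sum(durations), hidden_dim) array.
    """
    # Repeat each phoneme's hidden vector `durations[i]` times along the time axis
    return np.repeat(phoneme_hidden, durations, axis=0)

hidden = np.arange(6, dtype=np.float32).reshape(3, 2)  # 3 phonemes, dim 2
durs = np.array([2, 1, 3])
frames = length_regulate(hidden, durs)
print(frames.shape)  # (6, 2)
```

In the real model the same operation is done on batched tensors (e.g. with torch.repeat_interleave) and the durations come from a duration predictor at inference time.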

Model

My Blog

Start

Dependencies

  • python 3.6
  • CUDA 10.0
  • pytorch 1.1.0
  • numpy 1.16.2
  • scipy 1.2.1
  • librosa 0.6.3
  • inflect 2.1.0
  • matplotlib 2.2.2

1. Pull the NGC (NVIDIA) PyTorch Docker image:

docker pull nvcr.io/nvidia/pytorch:19.06-py3

2. Run the container (requires Docker and nvidia-docker2):

NV_GPU=0,1,2,3 nvidia-docker run -it -v /raid/ryan/ryancode:/mnt -p 5771:8888 --name "pytorch_fasterVC" -p 7412:6006 nvcr.io/nvidia/pytorch:19.06-py3

  3. Install the requirements:

pip install -r requirement.txt

pip install --upgrade jupyterlab

  4. Launch Jupyter Lab:

jupyter lab --ip=0.0.0.0 --no-browser --NotebookApp.token='' --allow-root --NotebookApp.allow_origin='*' --notebook-dir='/';

Prepare Dataset

  1. Download and extract the LJSpeech dataset;
  2. Put the LJSpeech dataset in data;
  3. Put the Nvidia pretrained Tacotron2 model in Tacotron2/pre_trained_model;
  4. Put the Nvidia pretrained WaveGlow model in waveglow/pre_trained_model;
  5. Run python preprocess.py.
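The layout implied by steps 2–4 can be prepared with something like the following. The LJSpeech download URL shown is keithito.com's commonly used mirror; verify it, and the exact pretrained checkpoint filenames, before relying on this sketch:

```shell
# Create the directories the steps above expect
mkdir -p data Tacotron2/pre_trained_model waveglow/pre_trained_model

# Download and extract LJSpeech into data/ (large download; URL may change)
# wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 -P data/
# tar -xjf data/LJSpeech-1.1.tar.bz2 -C data/
```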

Training

Run python train.py.

Test

Run python synthesis.py -t "<GIVEN SENTENCE>", for example:

python synthesis.py -t "hello world, make the world a better place."

python synthesis.py -t "Roses are red, violets are blue. Whatever I am writing. Damn. I have no clue."

Pretrained Model

Notes

  • In the FastSpeech paper, the authors use a pre-trained Transformer-TTS model to provide the alignment targets. I didn't have a well-trained Transformer-TTS model, so I used Tacotron2 instead.
  • The examples of audio are in results.
  • The outputs and alignment of Tacotron2 are shown as follows (The sentence for synthesizing is "I want to go to CMU to do research on deep learning."):
  • The outputs of FastSpeech and Tacotron2 (Right one is tacotron2) are shown as follows (The sentence for synthesizing is "Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition."):
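One common way to turn a Tacotron2 attention matrix into the per-phoneme duration targets FastSpeech trains on is to count, for each decoder (mel) frame, which encoder step receives the highest attention weight. This is an illustrative sketch of that idea, not this repo's exact preprocessing code:

```python
import numpy as np

def durations_from_attention(attn):
    """attn: (mel_frames, num_phonemes) soft alignment from Tacotron2.

    Returns integer durations per phoneme; durations sum to mel_frames.
    """
    num_phonemes = attn.shape[1]
    best = attn.argmax(axis=1)  # phoneme index attended to by each frame
    # Count how many frames each phoneme "won"
    return np.bincount(best, minlength=num_phonemes)

# Toy alignment: 5 mel frames over 3 phonemes, roughly monotonic
attn = np.array([
    [0.9, 0.1, 0.0],
    [0.7, 0.3, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
    [0.0, 0.1, 0.9],
])
d = durations_from_attention(attn)
print(d, d.sum())  # [2 1 2] 5
```

Because the durations sum exactly to the number of mel frames, the length regulator can expand the phoneme sequence to match the target mel spectrogram during training.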

Reference
