This Colab notebook provides a step-by-step guide to generating a deepfake video by cloning a voice onto a video. The process involves uploading the video and voice files, renaming them, extracting audio, creating audio chunks, and finally running Wav2Lip to generate the deepfake.
Before executing this notebook, you need a folder in your Google Drive named deepfake containing at least a video file (mp4 format). It is strongly recommended to also include an audio file (mp3 format) to clone the voice from; if the speech in the video is not in English, uploading an English audio file is essential.
Caution: The text prompt should be split with '|' every one to two sentences (roughly every ~20 seconds of reading time). If you get a warning suggesting a session restart after installing a library (e.g. librosa, as shown in the figure below), click 'cancel'. On the free tier (T4 or V100 with 15GB VRAM and ~13GB RAM), the maximum audio/video duration is ~50 seconds, and the script takes ~30 minutes to produce results. A longer text prompt needs a larger GPU on the paid tier: an L4 (22.5GB VRAM, ~63GB RAM) or an A100 (40GB VRAM, ~84GB RAM); the A100 consumes more compute units per hour.
- Mount Google Drive to access files.
- Change directory to the specified path.
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/MyDrive/deepfake
Specify the base path for video and audio files.
base_path = '/content/gdrive/MyDrive/deepfake'
Install TTS, pydub, and moviepy libraries.
!pip install -q pydub==0.25.1 TTS==0.22.0 moviepy==1.0.3
Set the English text that will be read with the cloned voice.
text_to_read = ("Joining two modalities results in a surprising increase in generalization! | "
                "What would happen if we combined them all?")
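A minimal sketch of how this text could be synthesized with the installed TTS library. The XTTS v2 model name and the output/speaker file names are assumptions, and the heavy import is deferred so the prompt-splitting helper works on its own:

```python
def prompt_parts(text):
    # Split the prompt on '|' so each part covers only one to two
    # sentences (~20s of speech), as the caution above recommends.
    return [p.strip() for p in text.split('|') if p.strip()]

def synthesize(text, speaker_wav, out_path='cloned_voice.wav'):
    # Voice-cloning sketch with Coqui TTS (XTTS v2). Lazy import: only
    # needed when actually synthesizing, which requires TTS==0.22.0.
    from TTS.api import TTS
    tts = TTS('tts_models/multilingual/multi-dataset/xtts_v2')
    tts.tts_to_file(text=text, speaker_wav=speaker_wav,
                    language='en', file_path=out_path)
    return out_path
```

The speaker_wav argument would point at one of the 10-second voice chunks created later in the notebook.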
Rename the uploaded audio and video files to input_voice.mp3 and video_full.mp4, respectively.
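The renaming step can be sketched as follows; the mapping from extension to standardized name follows the text above, and looping over the folder is an assumption about how the uploads are found:

```python
import os

def target_name(filename):
    # Map an uploaded file to its standardized name based on extension.
    ext = os.path.splitext(filename)[1].lower()
    return {'.mp3': 'input_voice.mp3', '.mp4': 'video_full.mp4'}.get(ext)

# Rename any uploaded .mp3/.mp4 found in the deepfake folder:
# for f in os.listdir(base_path):
#     new = target_name(f)
#     if new and f != new:
#         os.rename(os.path.join(base_path, f), os.path.join(base_path, new))
```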
If only a video is provided, extract its audio track to use as the voice-cloning reference.
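A hedged sketch of the extraction using the installed moviepy, guarded so it only runs when a voice file is actually missing (the file names follow the renaming step above):

```python
import os

def extract_voice(video_path='video_full.mp4', audio_path='input_voice.mp3'):
    # Pull the audio track out of the video when no separate voice file
    # exists. moviepy is imported lazily so the check alone stays cheap.
    if os.path.exists(audio_path) or not os.path.exists(video_path):
        return audio_path
    from moviepy.editor import VideoFileClip
    clip = VideoFileClip(video_path)
    clip.audio.write_audiofile(audio_path)
    clip.close()
    return audio_path
```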
Create a folder with 10-second chunks of audio to be used as input in Tortoise.
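The chunking step can be sketched with pydub (installed above); the chunk and folder names are assumptions:

```python
import os

def chunk_bounds(duration_ms, chunk_ms=10_000):
    # Start/end millisecond pairs for consecutive 10-second chunks;
    # the final chunk may be shorter than 10 seconds.
    return [(s, min(s + chunk_ms, duration_ms))
            for s in range(0, duration_ms, chunk_ms)]

def export_chunks(audio_path='input_voice.mp3', out_dir='chunks'):
    # Split the voice file into 10-second mp3 chunks for voice cloning.
    from pydub import AudioSegment  # lazy import; needs pydub + ffmpeg
    audio = AudioSegment.from_file(audio_path)
    os.makedirs(out_dir, exist_ok=True)
    for i, (s, e) in enumerate(chunk_bounds(len(audio))):
        audio[s:e].export(os.path.join(out_dir, f'chunk_{i:03d}.mp3'),
                          format='mp3')
```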
Ensure audio and video have the same duration. If not, trim the longer one to match the shorter one (or cut them both to 20 seconds).
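The duration logic described above can be made explicit with a small helper; applying the trim would use moviepy's subclip for the video and pydub slicing for the audio (a sketch, not the notebook's exact code):

```python
def common_duration(video_s, audio_s, cap_s=None):
    # Duration (seconds) both streams are trimmed to: the shorter of the
    # two, optionally capped (e.g. cap_s=20 to cut both to 20 seconds).
    d = min(video_s, audio_s)
    return min(d, cap_s) if cap_s is not None else d

# Applying the trim (hedged sketch):
#   VideoFileClip('video_full.mp4').subclip(0, d)   # moviepy
#   AudioSegment.from_file('input_voice.mp3')[:int(d * 1000)]  # pydub
```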
Clone Wav2Lip GitHub repository, download pre-trained models, and install dependencies.
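In Colab, this setup step typically looks like the commands below (the checkpoint download is deliberately left out: the wav2lip_gan.pth and s3fd face-detector weights must be fetched via the links in the repo's README):

```
!git clone https://github.com/Rudrabha/Wav2Lip.git
%cd Wav2Lip
!pip install -q -r requirements.txt
# Place wav2lip_gan.pth under checkpoints/ and the s3fd detector under
# face_detection/detection/sfd/ (download links are in the repo README).
```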
Run the Wav2Lip inference script to generate the deepfake video.
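A helper that assembles the inference invocation; the flags match the Wav2Lip README, while the specific file names here are assumptions tying back to earlier steps:

```python
def wav2lip_cmd(face='../video_full.mp4', audio='../cloned_voice.wav',
                outfile='../result_voice.mp4',
                checkpoint='checkpoints/wav2lip_gan.pth'):
    # Build the Wav2Lip inference command, to be run from inside the
    # cloned Wav2Lip/ folder (in Colab: prefix with '!').
    return ['python', 'inference.py',
            '--checkpoint_path', checkpoint,
            '--face', face, '--audio', audio, '--outfile', outfile]
```

In the notebook this would be run as, e.g., `!python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face ../video_full.mp4 --audio ../cloned_voice.wav`.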
Remove temporary files and folders.
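The cleanup can be sketched as below; the default paths are hypothetical examples of intermediates produced earlier in the pipeline:

```python
import os
import shutil

def cleanup(paths=('chunks', 'cloned_voice.wav')):
    # Remove temporary files/folders left over from the pipeline,
    # returning the paths that were actually deleted.
    removed = []
    for p in paths:
        if os.path.isdir(p):
            shutil.rmtree(p)
            removed.append(p)
        elif os.path.isfile(p):
            os.remove(p)
            removed.append(p)
    return removed
```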