This code pattern is part of the series Extracting Textual Insights from Videos with IBM Watson.
As part of the series, which extracts insights from virtual meetings or classrooms, the very first step is to extract audio from video and store it in a commonly accessible storage space. In this code pattern, we take a video recording of a meeting and extract the audio from that video file using the open source library FFmpeg in a Python Flask runtime. FFmpeg is a complete, cross-platform solution to record, convert, and stream audio and video. Finally, we store the extracted audio in IBM Cloud Object Storage, a highly scalable cloud storage service designed for high durability, resiliency, and security. The stored audio files will be used for further processing to provide speaker diarization in the next code pattern of the series.
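As a rough sketch of what this extraction step looks like in Python (assuming the ffmpeg binary is installed and on the PATH; the file names are illustrative):

import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    # -vn drops the video stream; FFmpeg infers the FLAC codec
    # from the .flac extension of the output file
    subprocess.run(["ffmpeg", "-i", video_path, "-vn", audio_path], check=True)

extract_audio("earnings-call-train-data.mp4", "earnings-call-train-data.flac")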
In this code pattern, given a video recording of a virtual meeting or a virtual classroom, we extract the audio from the video and store it in IBM Cloud Object Storage.
When you have completed this code pattern, you will understand how to:
- Connect applications directly to Cloud Object Storage.
- Use other IBM Cloud Services and open-source tools with your data.
- The user uploads a video file to the application.
- The FFmpeg library extracts the audio from the video.
- The extracted audio file is stored in Cloud Object Storage.
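A minimal sketch of how this flow might be wired together in Flask (the route name and the COS upload helper are illustrative assumptions, not the pattern's actual code):

import os
import subprocess
from flask import Flask, request

app = Flask(__name__)

def upload_to_cos(path):
    """Hypothetical placeholder for the Cloud Object Storage upload
    (see the COS client sketch later in this pattern)."""

@app.route("/upload", methods=["POST"])  # hypothetical route name
def upload():
    video = request.files["file"]                       # 1. user uploads the video
    video_path = os.path.join("data", video.filename)
    video.save(video_path)

    audio_path = os.path.splitext(video_path)[0] + ".flac"
    subprocess.run(["ffmpeg", "-i", video_path, "-vn", audio_path],
                   check=True)                          # 2. FFmpeg extracts the audio

    upload_to_cos(audio_path)                           # 3. store the result in COS
    return "Audio extracted and stored", 200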
Clone the convert-video-to-audio repo locally. In a terminal, run:
$ git clone https://github.com/IBM/convert-video-to-audio
We will be using the following datasets:
data/earnings-call-train-data.mp4
data/earnings-call-test-data.mp4
data/earnings-call-Q-and-A.mp4
For the code pattern demonstration, we have considered the IBM Earnings Call Q1 2019 Webex recording. The data has 40 minutes of IBM revenue discussion and more than 20 minutes of Q&A at the end of the recording. We have split the data into three parts:
- earnings-call-train-data.mp4 (Duration - 24:40): This is the initial part of the discussion from the recording, which we will use to train the custom Watson Speech To Text model in the second code pattern of the series.
- earnings-call-test-data.mp4 (Duration - 36:08): This is the full discussion from the recording, which will be used to test the custom Speech To Text model and to get the transcript for further analysis in the third code pattern of the series.
- earnings-call-Q-and-A.mp4 (Duration - 2:40): This is a part of the Q&A asked at the end of the meeting. The purpose of this data is to demonstrate how Watson Speech To Text can detect different speakers in an audio file, which will be shown in the second code pattern of the series.
- Create a Cloud Object Storage service if you have not already done so.
- In the Cloud Object Storage dashboard, click on Service credentials.
- Click on New credential and add a service credential as shown. Once the credential is created, copy and save it in a text file for use in later steps of this code pattern.
- In the repo parent folder, open the credentials.json file, paste the credentials copied in the previous step, and save the file.
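For reference, here is a minimal sketch of how a Python application can turn these credentials into a Cloud Object Storage client using the ibm-cos-sdk library (the endpoint URL is illustrative; use the one for your service's region):

import json

import ibm_boto3
from ibm_botocore.client import Config

with open("credentials.json") as f:
    creds = json.load(f)

# S3-compatible COS client; the endpoint below is illustrative
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=creds["apikey"],
    ibm_service_instance_id=creds["resource_instance_id"],
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)

cos.create_bucket(Bucket=creds["bucket_name"])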
With Docker Installed
- Change directory to the repo parent folder:
$ cd convert-video-to-audio/
- Build the Docker image as follows:
$ docker image build -t convert-video-to-audio .
- Once the image is built, run it as follows:
$ docker run -p 8080:8080 convert-video-to-audio
- The application will be available at http://localhost:8080
Without Docker
- Install the FFmpeg library.
For Mac users, run the following command:
$ brew install ffmpeg
Users on other platforms can refer to the FFmpeg documentation to install the library.
- Install the Python libraries as follows:
- Change directory to the repo parent folder:
$ cd convert-video-to-audio/
- Use pip to install the libraries:
$ pip install -r requirements.txt
- Finally, run the application as follows:
$ python app.py
- The application will be available at http://localhost:8080
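If the application fails to start because FFmpeg cannot be found, a quick sanity check from Python (a small illustrative snippet, not part of the repo):

import shutil

# ffmpeg must be discoverable on the PATH for the extraction to work
if shutil.which("ffmpeg") is None:
    raise SystemExit("ffmpeg not found -- install it before running the app")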
- Visit http://localhost:8080 in your browser to run the application.
- You can extract the audio and store it in Cloud Object Storage in just 3 steps:
- Enter a Bucket Name to get started.
- Upload the video files earnings-call-train-data.mp4, earnings-call-test-data.mp4, and earnings-call-Q-and-A.mp4 from the data directory of the cloned repo and click on the Upload button.
- Click on the Extract Audio button to extract the audio.
- Download earnings-call-test-data.flac and earnings-call-Q-and-A.flac as shown; they will be used in the second code pattern of the series.
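The extracted files can also be fetched directly from Cloud Object Storage; a minimal sketch reusing the ibm-cos-sdk client pattern from the credentials step (the endpoint URL is illustrative):

import json

import ibm_boto3
from ibm_botocore.client import Config

with open("credentials.json") as f:
    creds = json.load(f)

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=creds["apikey"],
    ibm_service_instance_id=creds["resource_instance_id"],
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",  # illustrative
)

# Fetch the extracted audio files for use in the next code pattern
for key in ("earnings-call-test-data.flac", "earnings-call-Q-and-A.flac"):
    cos.download_file(Bucket=creds["bucket_name"], Key=key, Filename=key)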
We have seen how to extract audio from video files and store the result in Cloud Object Storage. In the next code pattern of the series, we will learn how to train a custom Speech To Text model to transcribe text from the extracted audio files.
- CLIENT ERROR: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: Container textmining exists with a different storage location than requested.
This is a common error that occurs when the specified bucket name is already in use in another storage location, since bucket names must be unique across Cloud Object Storage.
- In the repo parent folder, open the credentials.json file, delete the bucket_name entry from the JSON, and refresh the application. Use a different bucket name instead.
{
"apikey": "*****",
"cos_hmac_keys": {
"access_key_id": "*****",
"secret_access_key": "*****"
},
"endpoints": "*****",
"iam_apikey_description": "*****",
"iam_apikey_name": "*****",
"iam_role_crn": "*****",
"iam_serviceid_crn": "*****",
"resource_instance_id": "*****",
"bucket_name": "text-mining"
}
NOTE: After removing bucket_name, make sure to also delete the comma at the end of the resource_instance_id line, since trailing commas are not valid JSON.
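If you create buckets programmatically, the same failure can also be caught in code; a sketch assuming an ibm_boto3 S3 client like the one in the credentials example:

from ibm_botocore.exceptions import ClientError

def create_bucket_safely(cos, bucket_name):
    # cos is an ibm_boto3 S3 client (see the credentials sketch above)
    try:
        cos.create_bucket(Bucket=bucket_name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "BucketAlreadyExists":
            print(f"Bucket name '{bucket_name}' is taken; choose a different one")
        else:
            raise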
This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.