From acb1f05aa718c5105e0a628b2cab661a55e4face Mon Sep 17 00:00:00 2001
From: Vladimir Iashin
Date: Tue, 16 Jun 2020 15:01:04 +0300
Subject: [PATCH] readme update (#11)

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f2bae18..1603995 100644
--- a/README.md
+++ b/README.md
@@ -71,11 +71,11 @@ If you want to skip the training procedure, you may replicate the main results o
 As we mentioned in the paper, we didn't have access to the full dataset as [ActivityNet Captions](https://cs.stanford.edu/people/ranjaykrishna/densevid/) is distributed as the list of links to YouTube video. Consequently, many videos (~8.8 %) were no longer available at the time when we were downloading the dataset. In addition, some videos didn't have any speech. We filtered out such videos from the validation files and reported the results as `no missings` in the paper. We provide these filtered ground truth files in `./data`.
 
 ## Raw Data & Details on Feature Extraction
-If you are feeling brave, you may want extract features on your own. Check out our script for extraction of the I3D and VGGish features from a set of videos: [video_features on GitHub](https://github.com/v-iashin/video_features). Also see [#7](https://github.com/v-iashin/MDVC/issues/7) for more details on configuration.
+If you are feeling brave, you may want to extract features on your own. Check out our script for extraction of the I3D and VGGish features from a set of videos: [video_features on GitHub](https://github.com/v-iashin/video_features). Also see [#7](https://github.com/v-iashin/MDVC/issues/7) for more details on configuration. We also provide the script used to process the timestamps: `./utils/parse_subs.py`.
 
 ## Misc.
 We additionally provide
-- the file with subtitles with original timestamps in `./data/asr_en.csv`
+- the file with subtitles with original timestamps in `./data/asr_en.csv`.
 - the file with video categories in `./data/vid2cat.json`
 
 ## Acknowledgments