Figure: The location-encoded attention module, which utilizes the location encoding of the logical regions of a slide image.
Figure: Architecture of the proposed classroom slide segmentation network. The network consists of three modules: (i) an attention module (upper dotted region), (ii) a multi-scale feature extraction module (lower region), and (iii) a feature concatenation module. Here, ⊕ and ⊗ denote element-wise summation and multiplication of features, respectively.
This repository provides the official PyTorch implementation of the paper:
Classroom Slide Narration System
Jobin K.V., Ajoy Mondal, and C. V. Jawahar
In CVIP 2021
Abstract: Slide presentations are an effective and efficient tool used by the teaching community for classroom communication. However, this teaching model can be challenging for blind and visually impaired (VI) students, who require personal human assistance to understand the presented slides. This shortcoming motivates us to design a Classroom Slide Narration System (CSNS) that generates audio descriptions corresponding to the slide content. We pose this problem as an image-to-markup language generation task. The initial step is to extract logical regions such as title, text, equation, figure, and table from the slide image. In classroom slide images, the logical regions are distributed according to their location in the image. To utilize the location of the logical regions for slide image segmentation, we propose an architecture, the Classroom Slide Segmentation Network (CSSN), whose unique attributes differ from most other semantic segmentation networks. Publicly available benchmark datasets, WiSe and SPaSe, are used to validate the performance of our segmentation architecture, and we obtain a 9.54% improvement in segmentation accuracy on the WiSe dataset. We extract content (information) from the slide using four well-established modules: optical character recognition (OCR), figure classification, equation description, and table structure recognition. With this information, we build the Classroom Slide Narration System (CSNS) to help VI students understand the slide content. Users gave better feedback on the output quality of the proposed CSNS than on existing systems such as Facebook's Automatic Alt-Text (AAT) and Tesseract.
Click the figure to watch the YouTube video of our paper!
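The core of CSSN is an attention module driven by the location encoding of logical regions. The exact module lives in this repository's network code; below is only a minimal PyTorch sketch of the idea, where the class name, tensor shapes, and the learned per-position encoding are our assumptions: a location encoding is combined with the features (⊕), attention weights are derived from the result, and the features are modulated by those weights (⊗).

import torch
import torch.nn as nn

class LocationEncodedAttention(nn.Module):
    # Hypothetical sketch, not the paper's exact module.
    def __init__(self, channels, height, width):
        super().__init__()
        # learned encoding, one vector per spatial position (assumption)
        self.loc_encoding = nn.Parameter(torch.zeros(1, channels, height, width))
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # per-pixel, per-channel attention weights in [0, 1]
        )

    def forward(self, feats):
        # element-wise sum (⊕) with the encoding, then element-wise product (⊗)
        return feats * self.attn(feats + self.loc_encoding)

module = LocationEncodedAttention(256, 64, 64)
out = module(torch.randn(1, 256, 64, 64))  # shape preserved: (1, 256, 64, 64)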
Clone this repository.
git clone git@github.com:jobinkv/CSNS.git
cd CSNS/CSSN
Install the following packages.
conda create --name leanet python=3.6
conda activate leanet
conda install -y pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.1 -c pytorch
conda install scipy==1.4.1
conda install tqdm==4.46.0
conda install scikit-image==0.16.2
pip install tensorboardX==2.0
pip install thop
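A quick check (our addition, not part of the original setup) confirms the environment matches the pinned versions before moving on:

import torch, torchvision
print(torch.__version__)          # expect 1.4.0
print(torchvision.__version__)    # expect 0.5.0
print(torch.cuda.is_available())  # should be True if cudatoolkit 10.1 is usable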
We evaluated CSSN on the WiSe and SPaSe datasets.
Download the dataset files and arrange the directory structure as follows.
spase
|-- img
`-- labels
wise
|-- img
`-- labels
Modify the paths in "<path_to_cssn>/config.py" to match your WiSe and SPaSe dataset locations.
# Dataset directory locations
__C.DATASET.WISE_DIR = '/path/to/dataset/wise'
__C.DATASET.SPASE_DIR = '/path/to/dataset/spase'
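A short sanity check can catch path mistakes before training. This sketch assumes config.py exposes the __C node as cfg, the usual convention for this config style; adjust the import if the repo names it differently.

import os
from config import cfg  # assumption: config.py exports __C as cfg

for root in (cfg.DATASET.WISE_DIR, cfg.DATASET.SPASE_DIR):
    for sub in ('img', 'labels'):
        path = os.path.join(root, sub)
        assert os.path.isdir(path), 'missing directory: ' + path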
You can download all the models evaluated in our paper from OneDrive.
To train the ResNet-101-based model, download the ImageNet-pretrained ResNet-101 weights from this link and put them in the following directory.
<path_to_cssn>/pretrained/resnet101-imagenet.pth
This pretrained model is from the MIT CSAIL Computer Vision Group.
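To verify the download, you can load the checkpoint and inspect a few parameter names; this is a minimal sketch that assumes the file is a plain state dict, as MIT CSAIL releases typically are.

import torch

# load on CPU so no GPU is needed for the check
state = torch.load('pretrained/resnet101-imagenet.pth', map_location='cpu')
print(len(state), 'tensors')  # number of parameter/buffer entries
print(list(state)[:5])        # first few keys, e.g. conv1/bn1 weights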
Depending on the specifications of your GPU system, you may need to modify the training script.
dataset='spase' # or 'wise'
tails='_final.pth'
mode='train' # or 'trainval'
model='deepv3'
dot='.'
arch='DeepR101V3PlusD_LEANet_OS8'
model_name=$arch$tails # best-checkpoint filename (assumed; $model_name is left undefined in the original script)
cd /path/to/CSSN/
python trainslide.py --dataset $dataset \
    --arch network.$model$dot$arch \
    --city_mode $mode --lr 0.04 --poly_exp 0.9 \
    --hanet_lr 0.04 --hanet_poly_exp 0.9 \
    --crop_size 564 --color_aug 0.25 --max_iter 57000 \
    --bs_mult 2 --pos_rfactor 18 --dropout 0.1 \
    --best_model_name $model_name --jobid '001' \
    --exp 'leanet_01' --ckpt /path/to/save/trained/model/ \
    --tb_path "/path/to/tensor/flow/out" --syncbn --sgd --gblur --aux_loss \
    --template_selection_loss_contri 0.1 --backbone_lr 0.01 --multi_optim
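The --lr 0.04 / --poly_exp 0.9 pair (and likewise --hanet_lr / --hanet_poly_exp) follows the polynomial learning-rate decay used in the NVIDIA segmentation codebase this repository derives from. A small sketch of what those two flags compute:

def poly_lr(base_lr, cur_iter, max_iter, poly_exp):
    # polynomial decay: base_lr * (1 - t/T)^poly_exp
    return base_lr * (1 - cur_iter / max_iter) ** poly_exp

print(poly_lr(0.04, 0, 57000, 0.9))      # 0.04 at the first iteration
print(poly_lr(0.04, 28500, 57000, 0.9))  # ~0.0214 at the halfway point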
You can evaluate CSSN (based on ResNet-101) on the finely annotated training and validation sets with the following command.
python trainslide.py --dataset $dataset \
--arch network.$model$dot$arch \
--snapshot "/path/to/trained/model.pth"
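Segmentation quality for networks like this is typically reported as mean intersection-over-union (mIoU) across classes. For reference, a minimal NumPy sketch of the metric (not the repository's evaluation code):

import numpy as np

def mean_iou(pred, label, num_classes):
    # pred and label are flat arrays of per-pixel class indices
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))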
If you find this work useful for your research, please cite our paper:
@InProceedings{Jobin_2021_CVIP,
  author    = {Jobin K.V. and Ajoy Mondal and C. V. Jawahar},
  title     = {Classroom Slide Narration System},
  booktitle = {Conference on Computer Vision and Image Processing (CVIP)},
  month     = {December},
  year      = {2021}
}
Our PyTorch implementation is heavily derived from NVIDIA's semantic segmentation codebase. Thanks to NVIDIA for their implementation.