This is a Python implementation of an indoor scene classifier comparison, together with experimental results.
Install the dependencies using the `requirements.txt` file (e.g. `pip install -r requirements.txt`).
There are two ways of setting up the code:
- extracting features from the dataset to train and test the classifiers, or
- using the already extracted features to train and test the classifiers.
To extract the features yourself, let `ROOT` be the directory where all data/code/results will be placed.

- Download the indoor scene classification dataset using this link and place all class folders under `${ROOT}/Images/` (e.g. `${ROOT}/Images/airport_inside`).
- Download the train labels using this link and the test labels using this link, and place them at `${ROOT}/Dataset/TrainImages.txt` and `${ROOT}/Dataset/TestImages.txt`.
- Execute the feature extraction script using `python feature_extractor.py`. This will create `mnasnet1_0-features.h5` and `resnext101-features.h5` in your `${ROOT}/Dataset/` folder (see the sketch after this list).
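For context, `feature_extractor.py` computes deep features with pretrained backbones whose names match the output files. The sketch below is only an illustration of that idea, assuming torchvision's `mnasnet1_0` and `resnext101_32x8d` models (the exact ResNeXt variant is an assumption), standard ImageNet preprocessing, and a single `features` dataset per HDF5 file; the real script may differ on all of these points.

```python
import h5py
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet preprocessing for pretrained torchvision backbones.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_backbone(name):
    """Return a pretrained backbone with its classification head removed."""
    if name == "mnasnet1_0":
        net = models.mnasnet1_0(pretrained=True)
        net.classifier = torch.nn.Identity()            # yields 1280-d features
    else:
        net = models.resnext101_32x8d(pretrained=True)  # assumed variant
        net.fc = torch.nn.Identity()                    # yields 2048-d features
    return net.eval()

@torch.no_grad()
def extract(name, image_paths, out_path):
    """Extract one feature vector per image and store them in an HDF5 file."""
    net = build_backbone(name)
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(net(x).squeeze(0).numpy())
    with h5py.File(out_path, "w") as f:
        f.create_dataset("features", data=feats)        # assumed dataset key

# e.g. extract("mnasnet1_0", train_image_paths, "Dataset/mnasnet1_0-features.h5")
```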
To use the already extracted features instead, again let `ROOT` be the directory where all data/code/results will be placed.

- Download the extracted features using this link and place them at `${ROOT}/Dataset/mnasnet1_0-features.h5` and `${ROOT}/Dataset/resnext101-features.h5` (a loading sanity check is sketched after this list).
- Download the train labels using this link and the test labels using this link, and place them at `${ROOT}/Dataset/TrainImages.txt` and `${ROOT}/Dataset/TestImages.txt`.
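Once the files are in place, they can be quickly sanity-checked. The snippet below is only a sketch under assumptions not confirmed by this repository: that each HDF5 file exposes a single `features` dataset and that each line of `TrainImages.txt` has the form `class_folder/image_name`; check the scripts for the actual layout.

```python
import h5py

# Inspect a downloaded feature file; the "features" key is an assumption.
with h5py.File("Dataset/resnext101-features.h5", "r") as f:
    print(list(f.keys()))          # show the dataset names actually stored
    feats = f["features"][:]       # load the feature matrix into memory
    print(feats.shape)             # e.g. (num_images, feature_dim)

# Read the training list; labels are assumed to be the leading folder name.
with open("Dataset/TrainImages.txt") as fh:
    train_files = [line.strip() for line in fh if line.strip()]
train_labels = [p.split("/")[0] for p in train_files]
print(len(train_files), "training images,", len(set(train_labels)), "classes")
```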
Execute `python train_all.py`. Note: this may take more than 48 hours, depending on the specs of the PC.
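`train_all.py` trains and evaluates the whole set of classifiers on the extracted features, which is why it can run for so long. As a rough illustration of the kind of comparison it performs (a sketch only: the classifiers, hyperparameters and data loading in the real script may differ), one could do something like this with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder arrays; in practice these would come from the HDF5 feature
# files and the TrainImages.txt / TestImages.txt label lists.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 2048)), rng.integers(0, 67, 200)
X_test, y_test = rng.normal(size=(100, 2048)), rng.integers(0, 67, 100)

# Hypothetical subset of classifiers; train_all.py may use a different set.
classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm_rbf": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)                       # train on feature vectors
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```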
Execute `python extract_results.py > results/results.txt`. This will create a CSV file at `${ROOT}/results/results.csv` and a text file at `${ROOT}/results/results.txt` containing a LaTeX table with selected columns. You can change which columns are included by updating the column list in this line.
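The column selection and LaTeX export can be reproduced with pandas if needed; the snippet below is only an illustrative sketch, and the column names in it are assumptions rather than the actual headers of `results.csv`:

```python
import pandas as pd

df = pd.read_csv("results/results.csv")

# Hypothetical column names; replace with the headers actually in results.csv.
selected = ["classifier", "feature_set", "test_accuracy"]
print(df[selected].to_latex(index=False))
```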
Execute `python plot.py`. This will generate all the plots in the `${ROOT}/plots/` folder.
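As a rough idea of what this plotting step involves (again only a sketch; the actual figures, column names and styling come from `plot.py`):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results/results.csv")

# Hypothetical columns and output name; adjust to the real results.csv headers.
df.plot.bar(x="classifier", y="test_accuracy", legend=False)
plt.ylabel("Test accuracy")
plt.tight_layout()
plt.savefig("plots/accuracy_comparison.png")
```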
All the experiment results can be downloaded using this link.