This project uses the U2Net deep learning model to segment cars in images. After separating the car from the background, it lets you replace the background with a virtual background of your choice.
I recommend the provided Colab notebook for a full walkthrough; it is the notebook I used for data creation, training, inference, and visualization.
This section describes how to use the pretrained model for car segmentation and add a virtual background to car images.
To perform car segmentation and add virtual backgrounds, download the pretrained U2Net model from this link (168 MB). Once downloaded, place it in the saved_models/u2net folder.
I have trained two networks for 200 epochs, one with a multi BCE loss and one with a multi Dice loss: link (168 MB). Once downloaded, place them in the saved_models/u2net folder.
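As a rough illustration of what "multi" loss means here: U2Net produces a fused saliency map plus several side-output maps, and the total loss sums a per-output term (BCE or Dice) over all of them. A minimal NumPy sketch, with function names of my own choosing rather than the repo's:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy averaged over all pixels; clip to avoid log(0).
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss: 1 - 2|X ∩ Y| / (|X| + |Y|).
    inter = np.sum(pred * target)
    return float(1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def multi_output_loss(side_outputs, target, loss_fn):
    # U2Net's "multi" loss: sum the chosen per-map loss over the fused
    # output and every side output.
    return sum(loss_fn(p, target) for p in side_outputs)
```

With a perfect prediction every term goes to zero, so both multi losses vanish; during training each side output is pushed toward the same ground-truth mask.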
You can run inference by providing the following arguments:

- `--image_dir`: Path to the directory containing car images.
- `--mask_dir`: Path to the directory containing segmentation masks generated by U2Net.
- `--background_path`: Path to the virtual background image you want to use.
- `--save_dir`: Path to the directory where the output images will be saved.
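For reference, the flags above could be wired up with argparse roughly like this. This is a sketch of the expected interface; the actual `car_virtual_background.py` may define its parser differently:

```python
import argparse

def build_parser():
    # CLI matching the flags described above (hypothetical reconstruction).
    p = argparse.ArgumentParser(
        description="Replace car image backgrounds using U2Net masks.")
    p.add_argument("--image_dir", required=True,
                   help="Directory containing car images.")
    p.add_argument("--mask_dir", required=True,
                   help="Directory of segmentation masks generated by U2Net.")
    p.add_argument("--background_path", required=True,
                   help="Virtual background image to composite in.")
    p.add_argument("--save_dir", required=True,
                   help="Directory where output images are saved.")
    return p
```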
Here's the command to run inference:
python car_virtual_background.py --image_dir /path/to/car/images --mask_dir /path/to/segmentation/masks --background_path /path/to/virtual/background.jpg --save_dir /path/to/output/directory
For example:
python car_virtual_background.py --image_dir /content/U-2-Net/dataset/Image --mask_dir /content/U-2-Net/runs/u2net_muti_dice_loss_checkpoint_epoch_200_results --background_path /content/U-2-Net/saved_models/background.jpg --save_dir /content/U-2-Net/car_virtual_bg/u2net_dice_200
The output images will be saved in the `--save_dir` directory specified in the command.
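Under the hood, background replacement is a mask-based alpha composite: keep the car pixels where the mask is white and fall back to the virtual background elsewhere. A minimal NumPy sketch, assuming a single-channel mask scaled 0–255 and not necessarily the script's exact implementation:

```python
import numpy as np

def composite(image, mask, background):
    # image, background: HxWx3 uint8 arrays of the same size.
    # mask: HxW uint8 array, 255 where the car is, 0 for background.
    alpha = (mask.astype(np.float32) / 255.0)[..., None]  # HxWx1, broadcasts
    out = alpha * image.astype(np.float32) + (1 - alpha) * background.astype(np.float32)
    return out.astype(np.uint8)
```

Because U2Net masks are soft (values between 0 and 255 near edges), this blend also gives smooth transitions around the car boundary instead of hard cutouts.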
- The U2Net model is from the original U2Net repo. Thanks to Xuebin Qin for the amazing repo.
- The overall repo follows the structure of the Pix2pixHD repo.