Our work builds upon the context encoder baseline model for image outpainting proposed in *Image Outpainting and Harmonization using Generative Adversarial Networks*. This project was completed for the Deep Learning class taught by Professor Jacob Whitehill at Worcester Polytechnic Institute.
We generate a 192x192 image from a ground-truth image of the same size, masked so that only the central 128x128 region of the target is visible. We qualitatively evaluate improvements to the generator and discriminator, including super-resolution upscaling techniques.
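To illustrate the setup, here is a minimal sketch of the masking step described above: the 192x192 ground truth is zeroed out everywhere except the central 128x128 region, and the masked image is what the generator receives as input. The function name and exact preprocessing are assumptions for illustration, not the project's actual code.

```python
import numpy as np

def mask_center(img: np.ndarray, out_size: int = 192, keep: int = 128) -> np.ndarray:
    """Keep only the central keep x keep region of a ground-truth image.

    Illustrative sketch; the repository's real preprocessing may differ.
    """
    assert img.shape[0] == out_size and img.shape[1] == out_size
    pad = (out_size - keep) // 2  # 32-pixel border on each side
    masked = np.zeros_like(img)
    masked[pad:pad + keep, pad:pad + keep] = img[pad:pad + keep, pad:pad + keep]
    return masked

# The network's task is then to reconstruct the zeroed 32-pixel border.
```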
Our models live in their respective folders, but each uses the `train` and `val` folders in the repository root for training. The dataset zips linked to these folders contain images from the MIT Places365-Standard dataset.
- Run `train.py` in each model's folder to train that network.
- Evaluate a custom input image by running `forward.py input.jpg output.jpg`.