This repo contains intermediate-to-advanced level image processing code examples and a guide using OpenCV in Python.
The Jupyter notebook consists of solid, practical examples covering multiple image processing concepts.
OpenCV is a great tool for image processing and computer vision tasks. It is an open-source library that can be used for face detection, object tracking, landmark detection, and much more. It supports multiple languages, including Python, Java, and C++.
Wand is a ctypes-based ImageMagick binding for Python. It exposes the MagickWand API in Python 2.6, 2.7, 3.3+, and PyPy. The library not only helps with processing images but also interoperates with NumPy, which makes it useful in machine learning code.
Gamma correction, which is used to display an image accurately on screen, controls the brightness of an image and can be used to change the red-to-green-to-blue ratio. Two gamma correction examples are implemented: one with OpenCV and the other with the Wand library.
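A minimal sketch of both variants, assuming a placeholder input file `input.jpg` and an example gamma of 1.5 (the notebook's actual file names and values may differ); the OpenCV version applies a lookup table, the Wand version uses its built-in `gamma()` method:

```python
import cv2
import numpy as np
from wand.image import Image

def adjust_gamma(image, gamma=1.0):
    # Build a lookup table mapping each pixel value [0, 255] to its
    # gamma-corrected value: out = 255 * (in / 255) ** (1 / gamma)
    inv_gamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv_gamma) * 255 for i in range(256)]).astype("uint8")
    return cv2.LUT(image, table)

# OpenCV version (gamma > 1 brightens the image)
img = cv2.imread("input.jpg")            # placeholder file name
cv2.imwrite("gamma_opencv.jpg", adjust_gamma(img, gamma=1.5))

# Wand version (assumes a recent Wand release with the gamma() method)
with Image(filename="input.jpg") as wimg:
    wimg.gamma(1.5)
    wimg.save(filename="gamma_wand.jpg")
```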
Deconvolution is used to correct blurry images and helps restore contrast. In a blurred image, it is difficult to determine the true pixel intensities. To make this correction, we use what is called the point spread function (PSF).
We deconvolve an image using the Richardson-Lucy deconvolution algorithm.
The algorithm is based on the PSF, described as the impulse response of the optical system. The blurred image is sharpened over a number of iterations, which needs to be hand-tuned.
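A minimal sketch of Richardson-Lucy deconvolution using scikit-image's `restoration.richardson_lucy` (the notebook may implement the algorithm differently); the PSF here is assumed to be a simple 5x5 box kernel and the blur is simulated on a test image:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

# Grayscale test image and an assumed 5x5 uniform PSF
image = color.rgb2gray(data.astronaut())
psf = np.ones((5, 5)) / 25

# Simulate blur with the PSF, then add a little noise
blurred = convolve2d(image, psf, mode="same")
rng = np.random.default_rng(0)
blurred += 0.01 * blurred.std() * rng.standard_normal(blurred.shape)

# Richardson-Lucy deconvolution; the iteration count (30 here) is hand-tuned
restored = restoration.richardson_lucy(blurred, psf, 30)
```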
- Box Blur
- Gaussian Blur
- Median Blur
- Sharpening
- Emboss
- RGB to HSV
- RGB to LAB
- Blend two images
- Changing Contrast and Brightness
- Add text to images
- Smoothing images with (MedianBlur, GaussianBlur, BilateralBlur)
- Image Erosion
- Image Dilation
- Apply Image Thresholding
- Calculate Gradients
- Perform Histogram Equalization
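The operations listed above map onto standard OpenCV calls; a condensed sketch, assuming a placeholder image `input.jpg` and example kernel sizes and parameters:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")  # placeholder file name

# Blurring / smoothing
box       = cv2.blur(img, (5, 5))
gaussian  = cv2.GaussianBlur(img, (5, 5), 0)
median    = cv2.medianBlur(img, 5)
bilateral = cv2.bilateralFilter(img, 9, 75, 75)

# Sharpening and emboss with custom kernels
sharpened = cv2.filter2D(img, -1, np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]))
embossed  = cv2.filter2D(img, -1, np.array([[-2, -1, 0], [-1, 1, 1], [0, 1, 2]]))

# Colour space conversions, contrast/brightness, blending, text
hsv      = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lab      = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
adjusted = cv2.convertScaleAbs(img, alpha=1.3, beta=20)   # alpha = contrast, beta = brightness
blended  = cv2.addWeighted(img, 0.7, adjusted, 0.3, 0)
cv2.putText(blended, "sample text", (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 2)

# Morphology, thresholding, gradients, histogram equalization (on grayscale)
gray      = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel    = np.ones((3, 3), np.uint8)
eroded    = cv2.erode(gray, kernel, iterations=1)
dilated   = cv2.dilate(gray, kernel, iterations=1)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
grad_x    = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
grad_y    = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
equalized = cv2.equalizeHist(gray)
```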
- Construct a scale space to ensure scale invariance
- Compute the Difference of Gaussians (DoG)
- Find the important points present inside the image
- Remove the unimportant points to make efficient comparisons
- Provide orientation to the important points found in step 3
- Identify the key features uniquely
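The steps above describe the SIFT pipeline, which OpenCV wraps in a single call. A sketch assuming OpenCV >= 4.4 and a placeholder input file:

```python
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

sift = cv2.SIFT_create()
# detectAndCompute covers the whole pipeline: scale space, DoG,
# keypoint localization, filtering, orientation assignment, descriptors
keypoints, descriptors = sift.detectAndCompute(img, None)

out = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("sift_keypoints.jpg", out)
```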
Suppose we have two images of a single place from an aerial view. One image depicts the place as captured by a satellite, whereas the second shows part of the same area captured by a drone. Satellite images get updated on a timescale of years, whereas drone images are taken much more frequently. So, there may be a situation in which the drone image captures developments not seen in the satellite image. In this scenario, we may want to put the drone image in exactly the place where it belongs in the satellite image, while also showing the latest updates. This process of putting one image over the other, at exactly the place where it belongs, is called image registration.
RANSAC is one of the best algorithms to use for image registration; the process consists of four steps (a minimal sketch follows the list):
- Feature detection and extraction
- Feature matching
- Transformation function fitting
- Image transformation and image resampling
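A sketch of this pipeline using ORB features and `cv2.findHomography` with RANSAC; the file names `satellite.jpg` and `drone.jpg` are placeholders, and the notebook may use a different feature detector:

```python
import cv2
import numpy as np

ref   = cv2.imread("satellite.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
query = cv2.imread("drone.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Feature detection and extraction
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(query, None)
kp2, des2 = orb.detectAndCompute(ref, None)

# 2. Feature matching (keep the strongest matches)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

# 3. Transformation function fitting with RANSAC
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Image transformation and resampling: warp the drone image
#    into the satellite image's coordinate frame
h, w = ref.shape
registered = cv2.warpPerspective(query, H, (w, h))
cv2.imwrite("registered.jpg", registered)
```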
Find the lines of a hand and make them prominent on the image.
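One possible approach is sketched below with Canny edge detection; this is an assumption about the method (the notebook may do it differently), and the file name is a placeholder:

```python
import cv2

img   = cv2.imread("hand.jpg")  # placeholder file name
gray  = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Smooth first, then detect edges corresponding to the palm lines
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 40, 120)

# Darken the detected lines on top of the original image
highlighted = img.copy()
highlighted[edges > 0] = (0, 0, 0)
cv2.imwrite("hand_lines.jpg", highlighted)
```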
Detect faces in the image.
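A sketch using OpenCV's bundled Haar cascade; the notebook's exact detector, parameters, and file names may differ:

```python
import cv2

img  = cv2.imread("faces.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Load the frontal-face cascade shipped with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", img)
```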
Train on some data and see the results
Note: The datasets used are also included in the repo in case you want to test on similar data.
Access the notebook on Google Colab: https://colab.research.google.com/drive/1uYMU8Zv7TS1w5q481ajPEZ_A_73DvoPV?usp=sharing