Commit a21fc33: Update README.md
arunponnusamy authored May 12, 2020
1 parent 27f4a2c
Showing 1 changed file with 12 additions and 12 deletions: README.md
Detecting faces in an image is as simple as calling the function `detect_face()`. It will return the bounding box corners and the corresponding confidence for all the detected faces.
### Example

```python
import cvlib as cv
faces, confidences = cv.detect_face(image)
```
Seriously, that's all it takes to do face detection with `cvlib`. Underneath, it uses OpenCV's `dnn` module with a pre-trained Caffe model to detect faces.
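
To visualize the result, you can draw the returned boxes on the image with plain OpenCV. The sketch below is illustrative only; it assumes `image` is a BGR NumPy array loaded with `cv2.imread()` and that each entry of `faces` is a `[x1, y1, x2, y2]` corner list.
```python
import cv2
import cvlib as cv

# load an image (path is illustrative)
image = cv2.imread('people.jpg')

faces, confidences = cv.detect_face(image)

# draw a rectangle and the confidence score for each detected face
# (assumes each face is [x1, y1, x2, y2], i.e. top-left and bottom-right corners)
for (x1, y1, x2, y2), conf in zip(faces, confidences):
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(image, f'{conf:.2f}', (x1, y1 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite('faces_out.jpg', image)
```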

To enable GPU
```python
faces, confidences = cv.detect_face(image, enable_gpu=True)
```

Once a face is detected, it can be passed on to the `detect_gender()` function to recognize gender.

### Example

```python
label, confidence = cv.detect_gender(face)
```
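
If you are starting from a full image rather than a pre-cropped face, one way to obtain `face` is to cut it out of the frame using the corners returned by `detect_face()`. This is a minimal sketch, assuming each entry of `faces` is `[x1, y1, x2, y2]` and that the path is illustrative:
```python
import cv2
import cvlib as cv

image = cv2.imread('people.jpg')  # illustrative path
faces, confidences = cv.detect_face(image)

for x1, y1, x2, y2 in faces:
    # crop the detected face region (assumes [x1, y1, x2, y2] corners)
    face_crop = image[y1:y2, x1:x2]
    label, confidence = cv.detect_gender(face_crop)
    print(label, confidence)
```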

Underneath, `cvlib` uses an AlexNet-like model trained on the [Adience dataset](https://talhassner.github.io/home/projects/Adience/Adience-data.html#agegender) by Gil Levi and Tal Hassner for their [CVPR 2015](https://talhassner.github.io/home/publication/2015_CVPR) paper.

To enable GPU
```python
label, confidence = cv.detect_gender(face, enable_gpu=True)
```

Detecting common objects in the scene is enabled through a single function call, `detect_common_objects()`.

### Example

```python
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox

# read an input image (the path here is illustrative)
img = cv2.imread('input.jpg')

bbox, label, conf = cv.detect_common_objects(img)
output_image = draw_bbox(img, bbox, label, conf)
```
Underneath, it uses a [YOLOv3](https://pjreddie.com/darknet/yolo/) model trained on the [COCO dataset](http://cocodataset.org/), capable of detecting 80 [common objects](https://github.com/arunponnusamy/object-detection-opencv/blob/master/yolov3.txt) in context.

To enable GPU
```python
bbox, label, conf = cv.detect_common_objects(img, enable_gpu=True)
```

Check out `object_detection.py` in the `examples` directory for the complete code.
### Real-time object detection
`YOLOv3` is quite a heavy model to run on a CPU. If you are working with a real-time webcam / video feed and don't have a GPU, try `tiny yolo`, which is a smaller version of the original YOLO model. It's significantly faster but less accurate.

```python
bbox, label, conf = cv.detect_common_objects(img, confidence=0.25, model='yolov3-tiny')
```
Check out the [example](examples/object_detection_webcam_yolov3_tiny.py) to learn more.
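
A minimal webcam loop along those lines might look like the sketch below. It is illustrative only: it assumes a local camera at index 0 and reuses the same `detect_common_objects()` / `draw_bbox()` calls shown above.
```python
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox

# open the default webcam (index 0 is an assumption)
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # tiny YOLO trades accuracy for speed on a CPU
    bbox, label, conf = cv.detect_common_objects(frame, confidence=0.25,
                                                 model='yolov3-tiny')
    frame = draw_bbox(frame, bbox, label, conf)

    cv2.imshow('real-time object detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```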

### Custom trained YOLO weights
To run inference with custom-trained YOLOv3 weights, try the following:
```python
from cvlib.object_detection import YOLO

# weights, config and labels are paths to your custom-trained
# .weights file, .cfg file and class-names file
yolo = YOLO(weights, config, labels)
bbox, label, conf = yolo.detect_objects(img)
yolo.draw_bbox(img, bbox, label, conf)
```
To enable GPU
```python
bbox, label, conf = yolo.detect_objects(img, enable_gpu=True)
```

Expand All @@ -137,19 +137,19 @@ Checkout the [example](examples/yolo_custom_weights_inference.py) to learn more.
## Utils
### Video to frames
The `get_frames()` method can be helpful when you want to grab all the frames from a video. Just pass the path to the video and it will return all the frames in a list. Each frame in the list is a NumPy array.
```python
import cvlib as cv
frames = cv.get_frames('~/Downloads/demo.mp4')
```
Optionally, you can pass in a directory path to save all the frames to disk.
```python
frames = cv.get_frames('~/Downloads/demo.mp4', '~/Downloads/demo_frames/')
```

### Creating a GIF
The `animate()` method lets you create a GIF from a list of images. Just pass a list of images (or the path to a directory containing images) and the output GIF name as arguments, and it will create a GIF out of the images and save it to disk for you.

```python
cv.animate(frames, '~/Documents/frames.gif')
```
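
The two utilities combine naturally; for example, you can turn a short video straight into a GIF (paths are illustrative):
```python
import cvlib as cv

# grab all frames from the video, then write them out as a GIF
frames = cv.get_frames('~/Downloads/demo.mp4')
cv.animate(frames, '~/Downloads/demo.gif')
```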
