Add keypoint detection task page (#873)
Co-authored-by: Merve Noyan <mervenoyan@Merve-MacBook-Pro.local>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
1 parent c34499f · commit e5e4969
Showing 2 changed files with 105 additions and 0 deletions.
@@ -0,0 +1,59 @@
## Task Variants

### Pose Estimation

Pose estimation is the process of determining the position and orientation of an object or a camera in 3D space. It is a fundamental task in computer vision and is widely used in applications such as robotics, augmented reality, and 3D reconstruction.

## Use Cases for Keypoint Detection

### Facial Landmark Estimation

Keypoint detection models can be used to estimate the position of facial landmarks. Facial landmarks are points on the face such as the corners of the mouth, the outer corners of the eyes, and the tip of the nose. These landmarks can be used for a variety of applications, such as facial expression recognition, 3D face reconstruction, and cinematic animation.

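As a rough, model-agnostic illustration of how landmark coordinates get used downstream, the sketch below computes the eye aspect ratio (a common blink/drowsiness signal) from six eye landmarks. The coordinates and the p1–p6 ordering are made up for illustration and do not correspond to any particular model's output format.

```python
import numpy as np

# Hypothetical 2D eye landmarks (pixel coordinates), p1..p6 around one eye.
# The values are made up for illustration.
eye = np.array([
    [30.0, 50.0],  # p1: outer corner
    [38.0, 44.0],  # p2: upper lid
    [46.0, 44.0],  # p3: upper lid
    [54.0, 50.0],  # p4: inner corner
    [46.0, 56.0],  # p5: lower lid
    [38.0, 56.0],  # p6: lower lid
])

# Eye aspect ratio (EAR): drops toward 0 as the eye closes, so thresholding
# it over time gives a simple blink detector.
p1, p2, p3, p4, p5, p6 = eye
ear = (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (2 * np.linalg.norm(p1 - p4))
print(f"EAR: {ear:.2f}")
```
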
### Fitness Tracking

Keypoint detection models can be used to track the movement of the human body, e.g., the positions of joints in 3D space. This can be used for a variety of applications, such as fitness tracking, sports analysis, or virtual reality, as sketched below.

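For example, once a model has produced joint keypoints, quantities such as joint angles can be derived directly from the coordinates. Below is a minimal, model-agnostic sketch of computing a knee flexion angle from three 3D keypoints; the coordinates are made up for illustration.

```python
import numpy as np

# Hypothetical 3D joint positions (in meters) as a pose model might return them:
# hip, knee, and ankle of one leg. The values are made up for illustration.
hip = np.array([0.00, 1.00, 0.10])
knee = np.array([0.05, 0.55, 0.12])
ankle = np.array([0.07, 0.10, 0.15])

# The knee flexion angle is the angle between the thigh (knee -> hip)
# and shank (knee -> ankle) vectors.
thigh = hip - knee
shank = ankle - knee
cos_angle = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
print(f"Knee flexion angle: {angle_deg:.1f} degrees")
```
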
## Inference Code

Below you can find an example of how to use a keypoint detection model and how to visualize the results.

```python
from transformers import AutoImageProcessor, SuperPointForKeypointDetection
import torch
import matplotlib.pyplot as plt
from PIL import Image
import requests

url_image = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url_image, stream=True).raw)

# initialize the model and processor
processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

# infer
inputs = processor(image, return_tensors="pt").to(model.device, model.dtype)
outputs = model(**inputs)

# visualize the output
image_width, image_height = image.size
# the mask flags which entries of the padded outputs hold actual keypoints
image_mask = outputs.mask
image_indices = torch.nonzero(image_mask).squeeze()

image_scores = outputs.scores.squeeze()
image_keypoints = outputs.keypoints.squeeze()
keypoints = image_keypoints.detach().numpy()
scores = image_scores.detach().numpy()

# plot the keypoints over the image, sized by their confidence scores
plt.axis('off')
plt.imshow(image)
plt.scatter(
    keypoints[:, 0],
    keypoints[:, 1],
    s=scores * 100,
    c='cyan',
    alpha=0.4
)
plt.show()
```
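
The `outputs.mask` tensor computed above becomes useful when several images are processed in one batch: each image can yield a different number of keypoints, so the per-image outputs are padded and, as far as the SuperPoint output documents it, the mask marks the valid entries. Below is a rough sketch of this, reusing the `model` and `processor` from the example above; it is an assumption-laden illustration rather than part of the original example.

```python
# a sketch of batched inference, reusing `model` and `processor` from above
images = [image, image]  # stand-in batch; replace with your own images
inputs = processor(images, return_tensors="pt").to(model.device, model.dtype)
outputs = model(**inputs)

for i, _ in enumerate(images):
    valid = outputs.mask[i].bool()      # True where a real keypoint exists
    kpts = outputs.keypoints[i][valid]  # (num_valid_keypoints, 2) coordinates
    scrs = outputs.scores[i][valid]     # confidence per keypoint
    print(f"image {i}: {kpts.shape[0]} keypoints, top score {scrs.max().item():.3f}")
```
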
@@ -0,0 +1,46 @@
import type { TaskDataCustom } from "..";

const taskData: TaskDataCustom = {
	datasets: [
		{
			description: "A dataset of hand keypoints with over 500k examples.",
			id: "Vincent-luo/hagrid-mediapipe-hands",
		},
	],
	demo: {
		inputs: [
			{
				filename: "keypoint-detection-input.png",
				type: "img",
			},
		],
		outputs: [
			{
				filename: "keypoint-detection-output.png",
				type: "img",
			},
		],
	},
	metrics: [],
	models: [
		{
			description: "A robust keypoint detection model.",
			id: "magic-leap-community/superpoint",
		},
		{
			description: "A strong keypoint detection model used to detect human pose.",
			id: "qualcomm/MediaPipe-Pose-Estimation",
		},
	],
	spaces: [
		{
			description: "An application that detects hand keypoints in real time.",
			id: "datasciencedojo/Hand-Keypoint-Detection-Realtime",
		},
	],
	summary: "Keypoint detection is the task of identifying meaningful distinctive points or features in an image.",
	widgetModels: [],
	youtubeId: "",
};

export default taskData;