Add documentation and notebook example for Prune API (#1070)
- Ticket no. 112199
- Add documentation for prune feature
- Add Jupyter notebook example for prune feature
sooahleex authored Jul 11, 2023
1 parent c48c128 commit 3bbb425
Showing 11 changed files with 4,435 additions and 5 deletions.
2 changes: 2 additions & 0 deletions CHANGELOG.md
Expand Up @@ -29,6 +29,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
(<https://github.com/openvinotoolkit/datumaro/pull/1077>, <https://github.com/openvinotoolkit/datumaro/pull/1081>)
- Support mask annotations for CVAT data format
(<https://github.com/openvinotoolkit/datumaro/pull/1078>)
- Add documentation and notebook example for Prune API
(<https://github.com/openvinotoolkit/datumaro/pull/1070>)

### Enhancements
- Enhance import performance for built-in plugins
Binary file added docs/images/centroid.png
Binary file added docs/images/cluster_random.png
Binary file added docs/images/entropy.png
Binary file added docs/images/query_clust.png
93 changes: 93 additions & 0 deletions docs/source/docs/command-reference/context_free/prune.md
@@ -0,0 +1,93 @@
# Prune

## Prune Dataset

This command prunes a dataset to extract a representative subset of the entire dataset. It lets you handle large-scale datasets that contain redundancy effectively: the result is a representative, manageable subset.

Prune supports several kinds of methods:
- Randomized
- Hash-based
- Clustering-based

The `Randomized` approach relies on the most familiar form of randomness: items are simply selected at random from the dataset. The `Hash-based` approach operates on hashes, like [Explorer](./explorer.md). The default model for computing hashes is CLIP, which supports both the image and text modalities. The supported model format is OpenVINO IR, and the models are available in the [openvinotoolkit storage](https://storage.openvinotoolkit.org/repositories/datumaro/models/). The `Clustering-based` approach uses clustering, so it also covers unlabeled datasets. We compute hashes for the images in the dataset, or utilize label data, to perform the clustering.

By default, datasets are updated in-place. The `-o/--output-dir` option can be used to specify another output directory. When updating in-place, use the `--overwrite` parameter (in-place updates fail by default to prevent data loss), unless a project target is modified.

The current project (`-p/--project`) is also used as a context for plugins, so it can be useful for dataset paths with custom formats. When not specified, the current project's working tree is used.

The command can be applied to a dataset or to a project build target: a stage, or the combined `project` target, in which case all the project targets will be affected.

Usage:
```
datum prune [TARGET] -m METHOD [-r RATIO] [-h/--hash-type HASH_TYPE]
[-m MODEL] [-p PROJECT_DIR] [-o DST_DIR] [--overwrite]
```

Parameters:
- `<target>` (string) - Target [dataset revpath](../../user-manual/how_to_use_datumaro.md#dataset-path-concepts).
By default, the joined `project` dataset is used.
- `-m, --method` (string) - Prune method name (default: random).
- `-r, --ratio` (float) - The fraction of the dataset to retain, as a value between 0 and 1 (default: 0.5).
- `--hash-type` (string) - Hash type used for the clustering in `query_clust` (default: `img`). Both image and text hashes are supported for extracting features from dataset items. To use the text hash, pass `txt`.
- `-p, --project` (string) - Directory of the project to operate on (default: current directory).
- `-o, --output-dir` (string) - Output directory. Can be omitted for main project targets (i.e. data sources and the `project` target, but not intermediate stages) and dataset targets. If not specified, the results will be saved in-place.
- `--overwrite` - Allows overwriting existing files in the output directory, when it is specified and is not empty.

Examples:
- Prune a dataset with `cluster_random` using the image hash, keeping 80% of the items
```console
datum prune source1 -m cluster_random -h img -r 0.8
```

### Built-in prune methods
- [`random`](#random) - Select items randomly from the dataset
- [`cluster_random`](#cluster_random) - Select items randomly from each cluster
- [`centroid`](#centroid) - Select the center of each cluster
- [`query_clust`](#query_clust) - Initialize clusters with a representative query per label
- [`entropy`](#entropy) - Select items based on the label entropy within each cluster
- [`ndr`](#ndr) - Remove near-duplicated images from the dataset

#### `random`
Randomly select items from the dataset. The items are chosen from the entire dataset using the most common random method.
```console
datum prune -m random -r 0.8 -p </path/to/project/>
```
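The selection step itself is simple; a minimal sketch in plain Python (a hypothetical `prune_random` helper, not Datumaro's implementation):

```python
import random

def prune_random(items, ratio, seed=0):
    """Keep a `ratio` fraction of `items`, chosen uniformly at random."""
    rng = random.Random(seed)
    keep = max(1, int(len(items) * ratio))
    return rng.sample(items, keep)

items = [f"item_{i}" for i in range(10)]
pruned = prune_random(items, ratio=0.8)
print(len(pruned))  # 8
```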

#### `cluster_random`
Randomly select items within each cluster. The entire dataset is clustered with K-means, using the number of labels as the number of clusters, and items are selected from each cluster according to the desired ratio.
```console
datum prune -m cluster_random -r 0.8 -p </path/to/project/>
```
![cluster_random](../../../../images/cluster_random.png)
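A sketch of the per-cluster sampling, assuming cluster assignments have already been computed (e.g. by K-means over the image hashes); `prune_cluster_random` is a hypothetical helper, not the Datumaro API:

```python
import random
from collections import defaultdict

def prune_cluster_random(items, cluster_ids, ratio, seed=0):
    """Randomly keep a `ratio` fraction of the items within each cluster."""
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for item, cid in zip(items, cluster_ids):
        clusters[cid].append(item)
    kept = []
    for members in clusters.values():
        keep = max(1, round(len(members) * ratio))
        kept.extend(rng.sample(members, keep))
    return kept

items = list(range(10))
cluster_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # two clusters of five items
print(len(prune_cluster_random(items, cluster_ids, ratio=0.8)))  # 8
```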

#### `centroid`
Cluster the entire dataset, using the desired number of samples as the number of clusters. To perform the clustering, that many data points, determined by the given ratio, are selected from the dataset as initial centroids; after clustering, the item at the center of each cluster is kept.

```console
datum prune -m centroid -r 0.8 -p </path/to/project/>
```
![centroid](../../../../images/centroid.png)
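The idea can be sketched with a small NumPy k-means where the number of clusters equals the number of items to keep; this illustrates the technique, not Datumaro's implementation:

```python
import numpy as np

def prune_centroid(features, ratio, iters=10, seed=0):
    """Cluster into k = ratio * N clusters, then keep the single item
    nearest each final cluster center (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(features)
    k = max(1, int(n * ratio))
    # initialize centers from k randomly chosen items
    centers = features[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every item to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # move each center to the mean of its assigned members
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # keep the item closest to each final center (duplicates collapse)
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    return sorted(set(dists.argmin(axis=0)))

feats = np.random.default_rng(1).normal(size=(20, 4))
kept = prune_centroid(feats, ratio=0.5)  # indices of up to 10 kept items
```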

#### `query_clust`
When clustering the entire dataset, a representative query for each label is set as the center of a cluster. The representative query is computed through an image or text hash, with one query per label; one item per label is chosen at random as the representative query. Within the resulting clusters, items are then selected at random according to the desired ratio.
```console
datum prune -m query_clust -h img -r 0.8 -p </path/to/project/>
```

```console
datum prune -m query_clust -h txt -r 0.8 -p </path/to/project/>
```
![query_clust](../../../../images/query_clust.png)
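A sketch of the representative-query selection (one randomly chosen item per label); `pick_queries` is a hypothetical helper, not part of the Datumaro API:

```python
import random
from collections import defaultdict

def pick_queries(item_labels, seed=0):
    """Pick one representative item per label to seed the clusters."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in item_labels:
        by_label[label].append(item)
    return {label: rng.choice(items) for label, items in by_label.items()}

pairs = [("a", "cat"), ("b", "cat"), ("c", "dog"), ("d", "dog")]
queries = pick_queries(pairs)
print(sorted(queries))  # ['cat', 'dog']
```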

#### `entropy`
After clustering the entire dataset, items are selected within each cluster based on the desired ratio, considering the entropy of labels.
```console
datum prune -m entropy -r 0.8 -p </path/to/project/>
```
![entropy](../../../../images/entropy.png)
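The per-cluster label entropy that drives the selection can be sketched as follows (an illustration of the score, not Datumaro's code):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a cluster's label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    ent = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return ent if ent > 0 else 0.0

# a mixed cluster carries more label information than a uniform one
print(label_entropy(["cat", "cat", "dog", "dog"]))  # 1.0
print(label_entropy(["cat", "cat", "cat", "cat"]))  # 0.0
```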

#### `ndr`
Remove near-duplicated images within each subset. See [ndr](./transform.md#ndr) for the details of this method.
```console
datum prune -m ndr -p </path/to/project/>
```
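The underlying idea, greedily dropping items whose hashes are too close to an already-kept item, can be sketched as follows (a simplified illustration, not the actual `ndr` transform):

```python
def remove_near_duplicates(hashes, max_hamming=4):
    """Keep only items whose bit-hash differs from every already-kept
    item by more than `max_hamming` bits."""
    kept = []
    for idx, h in enumerate(hashes):
        if all(bin(h ^ hashes[k]).count("1") > max_hamming for k in kept):
            kept.append(idx)
    return kept

hashes = [0b10101010, 0b10101011, 0b01010101]  # first two differ by 1 bit
print(remove_near_duplicates(hashes))  # [0, 2]
```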
12 changes: 11 additions & 1 deletion docs/source/docs/jupyter_notebook_examples/refine.rst
@@ -1,7 +1,7 @@
Refine
######

We here provide the examples of dataset validation, correction and query-based filtration.
Here we provide examples of dataset validation, correction, query-based filtration, and pruning.

Datumaro's validator detects 22 anomalies such as missing or undefined label, far-from-mean outliers
and generates the validation report by categorizing anomalies into `info`, `warning`, and `error`.
Expand All @@ -17,6 +17,8 @@ For instance, with a given XML file below, we can filter a dataset by the subset
through ``/item[image/width=image/height]``, and annotation information such as id (``id``), type
(``type``), label (``label_id``), bounding box (``x, y, w, h``), etc.

Through the Prune API, you can create representative subsets of the entire dataset using various supported methods.

.. code-block::
<item>
Expand Down Expand Up @@ -61,6 +63,7 @@ datasets are updated in-place by default.
notebooks/11_validate
notebooks/12_correct_dataset
notebooks/04_filter
notebooks/17_data_pruning

.. grid:: 1 2 2 2
:gutter: 2
Expand All @@ -85,3 +88,10 @@ datasets are updated in-place by default.
:color: primary
:outline:
:expand:

.. grid-item-card::

.. button-ref:: notebooks/17_data_pruning
:color: primary
:outline:
:expand:
74 changes: 74 additions & 0 deletions docs/source/docs/level-up/advanced_skills/14_data_pruning.rst
@@ -0,0 +1,74 @@
=====================================================
Level 14: Dataset Pruning
=====================================================


Datumaro supports a prune feature to extract a representative subset of a dataset. The pruned dataset allows us to examine the trade-off between
accuracy and convergence time when training on a reduced data sample. By selecting a subset of instances that captures the essential patterns
and characteristics of the data, we aim to evaluate the impact of dataset size on model performance.

More detailed descriptions about pruning are given in :doc:`Prune <../../command-reference/context_free/prune>`.
The Python example for the usage of pruning is described :doc:`here <../../jupyter_notebook_examples/notebooks/17_data_pruning>`.


.. tab-set::

.. tab-item:: Python

With Python API, we can prune dataset as below

.. code-block:: python
from datumaro.components.dataset import Dataset
from datumaro.components.environment import Environment
from datumaro.components.prune import Prune
data_path = '/path/to/data'
env = Environment()
detected_formats = env.detect_dataset(data_path)
dataset = Dataset.import_from(data_path, detected_formats[0])
prune = Prune(dataset, cluster_method='<how/to/prune/dataset>')
result = prune.get_pruned(ratio='<how/much/to/prune/dataset>')
We can choose the desired method as ``<how/to/prune/dataset>`` among the provided ones. The default value is ``random``.
Additionally, we can specify how much of the dataset we want to retain by providing a float value between 0 and 1 for the ``<how/much/to/prune/dataset>`` parameter. The default value is 0.5.

.. tab-item:: CLI

Without the project declaration, we can simply ``prune`` a dataset by

.. code-block:: bash
datum prune <target> -m METHOD -r RATIO -h HASH_TYPE
We could use ``--overwrite`` instead of setting ``-o/--output-dir``.
We can choose the desired method as ``METHOD`` among the provided ones. The default value is ``random``.
Additionally, we can specify how much of the dataset we want to retain by providing a float value between 0 and 1 for the ``RATIO`` parameter. The default value is 0.5.


.. tab-item:: ProjectCLI

With the project-based CLI, we first need to ``create`` a project by

.. code-block:: bash
datum project create --output-dir <path/to/project>
We now ``import`` data into the project through

.. code-block:: bash
datum project import --project <path/to/project> <path/to/data>
We can then ``prune`` the dataset by

.. code-block:: bash
datum prune -m METHOD -r RATIO -h HASH_TYPE -p <path/to/project>
We can choose the desired method as ``METHOD`` among the provided ones. The default value is ``random``.
Additionally, we can specify how much of the dataset we want to retain by providing a float value between 0 and 1 for the ``RATIO`` parameter. The default value is 0.5.
14 changes: 14 additions & 0 deletions docs/source/docs/level-up/advanced_skills/index.rst
Expand Up @@ -7,6 +7,7 @@ Advanced Skills

12_project_versioning
13_pseudo_label_generation
14_data_pruning

.. grid:: 1 2 2 2
:gutter: 2
Expand All @@ -32,3 +33,16 @@ Advanced Skills
Level 13: Pseudo Label Generation

:bdg-success:`ProjectCLI`

.. grid-item-card::

.. button-ref:: 14_data_pruning
:color: primary
:outline:
:expand:

Level 14: Data Pruning

:bdg-warning:`Python`
:bdg-info:`CLI`
:bdg-success:`ProjectCLI`
4,237 changes: 4,237 additions & 0 deletions notebooks/17_data_pruning.ipynb


Expand Up @@ -160,11 +160,11 @@ def base(self, ratio, num_centers, labels, database_keys, item_list, source):

item_id_list = [item.id.split("/")[-1] for item in item_list]
  centroids = [
-     database_keys[item_id_list.index(i.id.split(":")[-1])]
-     for i in list(center_dict.values())
-     if i
+     database_keys[item_id_list.index(item.id)] for item in center_dict.values() if item
  ]
- kmeans = KMeans(n_clusters=num_centers, n_init=1, init=centroids, random_state=0)
+ kmeans = KMeans(
+     n_clusters=num_centers, n_init=1, init=np.stack(centroids, axis=0), random_state=0
+ )

clusters = kmeans.fit_predict(database_keys)
cluster_centers = kmeans.cluster_centers_
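The fix passes `init` to `KMeans` as one 2-D array instead of a Python list of 1-D rows; a small sketch of the array handling involved (the sklearn call itself is omitted):

```python
import numpy as np

# centroids collected one row at a time, as in the list comprehension above
centroids = [np.array([0.1, 0.2]), np.array([0.9, 0.8]), np.array([0.5, 0.5])]

# KMeans expects init as an (n_clusters, n_features) array, so the 1-D rows
# are stacked along a new leading axis before being passed in
init = np.stack(centroids, axis=0)
print(init.shape)  # (3, 2)
```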
