
Index Pipelines


Overview

The index pipeline is the process of ingesting data into an OpenCGA-Storage backend. We define a general pipeline that is used and extended by the multiple supported file formats. This pipeline can be extended with additional enrichment steps, which are highly dependent on the file format. At the end, the data can be filtered for visualization or used as input for analysis.

This concept is represented in Catalog, which tracks the index status of each file.

Index

The indexing pipeline consists of three steps: transforming the raw input data into an intermediate format, loading it into the selected database (depending on the implementation), and enriching the loaded data by calculating statistics or adding extra information such as annotation.

(Figure: Index pipeline)
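
The three steps can be summarized as a single interface. The following Java sketch is purely illustrative: the interface and method names are assumptions made for this page and do not correspond to the actual OpenCGA-Storage API.

```java
import java.net.URI;

// Illustrative only: names are assumptions, not the OpenCGA-Storage API.
public interface IndexPipeline {

    /** Validate the raw input and convert it into the Biodata models,
     *  serializing the result to an intermediate file (e.g. Avro). */
    URI transform(URI input) throws Exception;

    /** Load an already-validated intermediate file into the backend
     *  database as fast as possible. */
    void load(URI transformed) throws Exception;

    /** Calculate statistics and fetch external annotation for the
     *  newly loaded data. */
    void enrich() throws Exception;
}
```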

  • Transform

The first step, and one of the most important, is the transformation. At this point, the pipeline ensures that the input file is valid and can be loaded into the database. The input file is read and converted into the OpenCB models defined in Biodata. See Data Models for more info.

Depending on the input data, the process will be more or less complex. At the end, the file is serialized to disk using a serialization format such as Avro or Protobuf.

The next steps cannot start until the transformation has completely finished, which prevents loading invalid data. Only once the file has been correctly transformed do we know that the data can be loaded, so the next steps can take place.

This step is shared by all the storage engine implementations for the same bioformat, so the result is valid for any of them.
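
As a minimal, self-contained illustration of this fail-fast behaviour, the sketch below transforms a tab-separated variant file, aborting on the first malformed record so that nothing ever reaches the load step unless the whole file is valid. The Variant record and the raw binary output are stand-ins for the real Biodata models and the Avro/Protobuf serializers; none of these names come from OpenCGA.

```java
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TransformStep {

    // Hypothetical stand-in for the Biodata variant model.
    record Variant(String chromosome, int position, String reference, String alternate) {}

    public static void transform(Path rawInput, Path output) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(rawInput);
             DataOutputStream out = new DataOutputStream(Files.newOutputStream(output))) {
            String line;
            long lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                lineNumber++;
                if (line.isEmpty() || line.startsWith("#")) {
                    continue; // skip headers and blank lines
                }
                Variant variant = parse(line, lineNumber);
                // Serialize the validated record; a real pipeline would
                // write Avro or Protobuf instead of raw binary fields.
                out.writeUTF(variant.chromosome());
                out.writeInt(variant.position());
                out.writeUTF(variant.reference());
                out.writeUTF(variant.alternate());
            }
        }
    }

    // Fail fast: a single malformed record aborts the whole transform,
    // so the load step never runs against partially valid data.
    private static Variant parse(String line, long lineNumber) {
        String[] fields = line.split("\t");
        if (fields.length < 4) {
            throw new IllegalArgumentException("Malformed record at line " + lineNumber);
        }
        return new Variant(fields[0], Integer.parseInt(fields[1]), fields[2], fields[3]);
    }
}
```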

  • Load

This step is intended to be as fast as possible, to avoid unnecessary database downtime due to the workload. For this reason, all conversion and validation operations are performed in the previous step.

Most storage engines do not load the OpenCB models directly, so some engine-dependent transformations are still expected. The storage engines guarantee that any valid instance of the input data model can be transformed and loaded.
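
As an illustration of such an engine-dependent transformation, the sketch below maps a generic variant model to the key-value document shape a document store might expect. Both the Variant record and the Map-based document are hypothetical stand-ins, not OpenCGA classes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EngineConverter {

    // Same hypothetical model as in the transform sketch above.
    record Variant(String chromosome, int position, String reference, String alternate) {}

    static Map<String, Object> toDocument(Variant v) {
        Map<String, Object> doc = new LinkedHashMap<>();
        // A composite key lets the engine look variants up directly.
        doc.put("_id", v.chromosome() + ":" + v.position() + ":"
                + v.reference() + ":" + v.alternate());
        doc.put("chromosome", v.chromosome());
        doc.put("start", v.position());
        doc.put("reference", v.reference());
        doc.put("alternate", v.alternate());
        return doc;
    }
}
```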

In some scenarios the load itself may be done in two phases: first loading a batch of files into an intermediate system, and then moving all of them to the final storage system, as sketched below. This can improve loading speed at the cost of extra storage resources.
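
A minimal sketch of this two-phase idea, assuming plain files and directories stand in for the intermediate and final systems (the real pipeline would target the storage backend instead):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

public class TwoPhaseLoad {

    public static void load(List<Path> transformedFiles, Path stagingDir, Path liveDir)
            throws Exception {
        // Phase 1: stage every file. A failure here leaves the live
        // store untouched, at the cost of extra intermediate storage.
        for (Path file : transformedFiles) {
            Files.copy(file, stagingDir.resolve(file.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        }
        // Phase 2: promote the whole batch. Moves within a filesystem
        // are cheap, which keeps the busy window on the live store short.
        try (var staged = Files.list(stagingDir)) {
            for (Path file : (Iterable<Path>) staged::iterator) {
                Files.move(file, liveDir.resolve(file.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```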

  • Enrichment

Although input file formats contain a lot of interesting information, some of it is not directly available and has to be calculated or fetched from external services.

Most storage engines provide mechanisms to calculate statistics (either per record or aggregated) and store them back in the database to help the filtering process. This way, some queries can be answered much faster against the pre-calculated statistics.
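
For example, a per-record statistic such as the alternate allele frequency can be computed once at enrichment time and then filtered on directly at query time. The record types below are illustrative assumptions, not the OpenCGA stats models:

```java
import java.util.ArrayList;
import java.util.List;

public class StatsCalculation {

    // Hypothetical input record and its pre-calculated statistics.
    record VariantRecord(String id, int altAlleleCount, int totalAlleleCount) {}
    record VariantStats(String id, double altAlleleFrequency) {}

    // Pre-calculate the stats once, at enrichment time.
    static List<VariantStats> calculate(List<VariantRecord> records) {
        List<VariantStats> stats = new ArrayList<>();
        for (VariantRecord r : records) {
            double freq = r.totalAlleleCount() == 0
                    ? 0.0
                    : (double) r.altAlleleCount() / r.totalAlleleCount();
            stats.add(new VariantStats(r.id(), freq));
        }
        return stats;
    }

    // Later queries filter on the stored value, avoiding recomputation.
    static List<VariantStats> rareVariants(List<VariantStats> stats, double maxFreq) {
        return stats.stream()
                    .filter(s -> s.altAlleleFrequency() < maxFreq)
                    .toList();
    }
}
```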

Other information cannot be inferred from the input data and has to be fetched from external annotation services such as CellBase.
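
A minimal sketch of fetching annotation over REST is shown below. The endpoint URL and plain-string response are placeholders; the real pipeline uses the CellBase client and its data models, which are not reproduced here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AnnotationFetch {

    public static String fetchAnnotation(String variantId) throws Exception {
        // Hypothetical endpoint; substitute the actual annotation service.
        URI uri = URI.create("https://annotation.example.org/variant/" + variantId);
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException(
                    "Annotation service returned " + response.statusCode());
        }
        return response.body(); // e.g. a JSON document to merge into the store
    }
}
```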

Query / Export

Variant index pipeline

Index

Enrichment

  • Stats calculation
  • Annotation

Query

Metadata

Alignment index pipeline
