Scio Dataflow jobs to load the Discogs XML dumps into BigQuery tables.
Currently available jobs:
- Labels
- Artists
- Masters
- Releases
To run the jobs, download the compressed XML dumps and upload them to your own GCP bucket. Follow the Scio instructions to set up your GCP project.
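For example, uploading a dump might look like this (the file and bucket names are placeholders; `gsutil` ships with the Google Cloud SDK):

```shell
# Upload a downloaded, still-compressed dump to your bucket
# (hypothetical names -- substitute your own file and bucket):
gsutil cp releases.xml.gz gs://your-bucket/releases.xml.gz
```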
All times were measured using the default Dataflow run arguments.
```shell
sbt "runMain discogs.ReleasesJob
  --project=your-gcp-project-id
  --runner=DataflowRunner
  --region=us-central1
  --input=gs://your-bucket/releases.xml.gz
  --output=your-project.bq-dataset.releases-bq-table"
```
```shell
sbt "runMain discogs.ArtistsJob
  --project=your-gcp-project-id
  --runner=DataflowRunner
  --region=us-central1
  --input=gs://your-bucket/artists.xml.gz
  --output=your-project.bq-dataset.artists-bq-table"
```
```shell
sbt "runMain discogs.MastersJob
  --project=your-gcp-project-id
  --runner=DataflowRunner
  --region=us-central1
  --input=gs://your-bucket/masters.xml.gz
  --output=your-project.bq-dataset.masters-bq-table"
```
Because of an Apache Beam XmlIO limitation regarding nested tags with the same outer label (`<label>`), the original labels file cannot be processed as is. There is a small script in `src/main/java/utils/LabelRenamer.java` that converts the nested `<labels>` tags into `<sublabels>`. Download the original file, decompress it, and run the converter. Then compress the output file, upload it to your GCP bucket, and use that file as the input for the job.
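The conversion might look roughly like the following sketch. This is a hypothetical illustration rather than the actual `LabelRenamer` implementation; it assumes the problematic nesting is `<label>` record tags reappearing inside each label's `<sublabels>` block, and renames those inner tags to `<sublabel>`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class SublabelRenamer {
    /**
     * Rename <label> tags that are nested inside <sublabels> blocks to
     * <sublabel>, so that the outer <label> record tag is unambiguous
     * for Beam's XmlIO record matching.
     */
    static String rename(String xml) {
        StringBuilder out = new StringBuilder(xml.length());
        int depth = 0; // current nesting depth of <sublabels> blocks
        int i = 0;
        while (i < xml.length()) {
            if (xml.startsWith("<sublabels>", i)) {
                depth++;
                out.append("<sublabels>");
                i += 11;
            } else if (xml.startsWith("</sublabels>", i)) {
                depth--;
                out.append("</sublabels>");
                i += 12;
            } else if (depth > 0 && xml.startsWith("</label>", i)) {
                out.append("</sublabel>");
                i += 8;
            } else if (depth > 0 && xml.startsWith("<label", i)
                    && i + 6 < xml.length()
                    && (xml.charAt(i + 6) == ' ' || xml.charAt(i + 6) == '>')) {
                out.append("<sublabel"); // keep any attributes that follow
                i += 6;
            } else {
                out.append(xml.charAt(i));
                i++;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical usage: java SublabelRenamer labels.xml labels-renamed.xml
        String xml = new String(Files.readAllBytes(Paths.get(args[0])));
        Files.write(Paths.get(args[1]), rename(xml).getBytes());
    }
}
```

Note that this sketch reads the whole file into memory; the decompressed dump is several gigabytes, so a streaming implementation is preferable in practice.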
```shell
sbt "runMain discogs.LabelsJob
  --project=your-gcp-project-id
  --runner=DataflowRunner
  --region=us-central1
  --input=gs://your-bucket/labels-renamed.xml.gz
  --output=your-project.bq-dataset.labels-bq-table"
```