Scio



Ecclesiastical Latin IPA: /ˈʃi.o/, [ˈʃiː.o], [ˈʃi.i̯o] Verb: I can, know, understand, have knowledge.

Scio is a Scala API for Apache Beam and Google Cloud Dataflow inspired by Apache Spark and Scalding.

Features

  • Scala API close to that of Spark and Scalding core APIs (see the example below)
  • Unified batch and streaming programming model
  • Fully managed service*
  • Integration with Google Cloud products: Cloud Storage, BigQuery, Pub/Sub, Datastore, Bigtable
  • Avro, Cassandra, Elasticsearch, gRPC, JDBC, Neo4j, Parquet, Redis, TensorFlow IOs
  • Interactive mode with Scio REPL
  • Type safe BigQuery
  • Integration with Algebird and Breeze
  • Pipeline orchestration with Scala Futures
  • Distributed cache

* provided by Google Cloud Dataflow
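
To make the first feature concrete, here is a minimal sketch of a complete Scio job; the object name, arguments, and paths are illustrative rather than taken from this repository:

import com.spotify.scio._

// A word-count pipeline: SCollection transformations read much like
// Spark RDDs or Scalding TypedPipes.
object MinimalWordCount {
  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, args) = ContextAndArgs(cmdlineArgs)   // parses Beam options and custom args

    sc.textFile(args("input"))                     // local path or gs:// URI
      .flatMap(_.split("\\W+").filter(_.nonEmpty))
      .countByValue                                // SCollection[(String, Long)]
      .map { case (word, count) => s"$word: $count" }
      .saveAsTextFile(args("output"))

    sc.run().waitUntilFinish()
  }
}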

Quick Start

Download and install the Java Development Kit (JDK) version 8.

Install sbt.

Use our giter8 template to quickly create a new Scio job repository:

sbt new spotify/scio.g8

Switch to the new repo (default scio-job) and build it:

cd scio-job
sbt stage

Run the included word count example:

target/universal/stage/bin/scio-job --output=wc

List result files and inspect content:

ls -l wc
cat wc/part-00000-of-00004.txt

Documentation

Getting Started is the best place to start with Scio. If you are new to Apache Beam and distributed data processing, check out the Beam Programming Guide first for a detailed explanation of the Beam programming model and concepts. If you have experience with other Scala data processing libraries, check out this comparison between Scio, Scalding and Spark.

Example Scio pipelines and tests can be found under scio-examples. A lot of them are direct ports from Beam's Java examples. See this page for some of them with side-by-side explanation. Also see Big Data Rosetta Code for common data processing code snippets in Scio, Scalding and Spark.
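
Scio jobs can be unit-tested with the utilities in scio-test (see Artifacts below), which let you feed test input to a job's IOs and assert on its output. As a rough sketch, a test for the hypothetical MinimalWordCount job above could look like this; the file names and expected lines are illustrative:

import com.spotify.scio.io._
import com.spotify.scio.testing._

class MinimalWordCountTest extends PipelineSpec {
  "MinimalWordCount" should "count words" in {
    JobTest[MinimalWordCount.type]
      .args("--input=in.txt", "--output=out")
      .input(TextIO("in.txt"), Seq("a b c", "a b", "a"))  // fed to sc.textFile("in.txt")
      .output(TextIO("out")) { results =>
        results should containInAnyOrder(Seq("a: 3", "b: 2", "c: 1"))
      }
      .run()
  }
}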

Artifacts

Scio includes the following artifacts:

  • scio-avro: add-on for Avro, can also be used standalone
  • scio-cassandra*: add-ons for Cassandra
  • scio-core: core library
  • scio-elasticsearch*: add-ons for Elasticsearch
  • scio-extra: extra utilities for working with collections, Breeze, etc., best effort support
  • scio-google-cloud-platform: add-on for Google Cloud IOs (BigQuery, Bigtable, Pub/Sub, Datastore, Spanner)
  • scio-grpc: add-on for gRPC service calls
  • scio-jdbc: add-on for JDBC IO
  • scio-neo4j: add-on for Neo4j IO
  • scio-parquet: add-on for Parquet
  • scio-redis: add-on for Redis
  • scio-repl: extension of the Scala REPL with Scio-specific operations
  • scio-smb: add-on for Sort Merge Bucket operations
  • scio-tensorflow: add-on for TensorFlow TFRecords IO and prediction
  • scio-test: umbrella for the following test utilities; add to your project as a "test" dependency
    • scio-test-core: core test utilities
    • scio-test-google-cloud-platform: test utilities for Google Cloud IOs
    • scio-test-parquet: test utilities for Parquet
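
Artifacts are published to Maven Central under the com.spotify group ID. A typical build.sbt pulls in only the modules you need; the version below is a placeholder, so check Maven Central for the latest release:

// build.sbt (sketch)
val scioVersion = "0.14.0"  // placeholder; use the latest release

libraryDependencies ++= Seq(
  "com.spotify" %% "scio-core"                  % scioVersion,
  "com.spotify" %% "scio-google-cloud-platform" % scioVersion, // only if you use the GCP IOs
  "com.spotify" %% "scio-test"                  % scioVersion % Test
)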

License

Copyright 2024 Spotify AB.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0
