Libraries and tools for interoperability between Apache Hadoop-related open-source software and Google Cloud Platform.
The Google Cloud Storage connector for Hadoop enables running MapReduce jobs directly on data in Google Cloud Storage by implementing the Hadoop FileSystem interface. For details, see the README.
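Because the connector implements the Hadoop FileSystem interface, Cloud Storage objects can be addressed with `gs://` paths through the standard Hadoop APIs. The example below is a minimal sketch, assuming the connector JAR is on the classpath and credentials are already configured; the bucket name is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListGcsBucket {
  public static void main(String[] args) throws Exception {
    // Assumes the gcs-connector is on the classpath and authentication is
    // configured (e.g. in core-site.xml); "my-bucket" is a placeholder name.
    Configuration conf = new Configuration();
    Path bucket = new Path("gs://my-bucket/");
    FileSystem fs = bucket.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(bucket)) {
      System.out.println(status.getPath());
    }
  }
}
```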
This README may include documentation for changes that haven't been released yet. The latest release's documentation and source code can be found here:
https://github.com/GoogleCloudDataproc/hadoop-connectors/tree/master
Note that the build requires Java 11+ and fails with older Java versions.
To build the connector for a specific Hadoop version, run the following command from the main directory:

```
./mvnw clean package
```
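If you only need the connector JAR and want to skip running the tests, the standard Maven flag applies (a usage sketch, not from this repository's docs):

```
./mvnw clean package -DskipTests
```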
To verify test coverage for a specific Hadoop version, run the following command from the main directory:

```
./mvnw -P coverage clean verify
```
The Cloud Storage connector JAR can be found in the `gcs/target/` directory. The Maven group ID is `com.google.cloud.bigdataoss` and the artifact ID for the Cloud Storage connector is `gcs-connector`.
To add a dependency on the Cloud Storage connector using Maven, use the following:

```xml
<dependency>
  <groupId>com.google.cloud.bigdataoss</groupId>
  <artifactId>gcs-connector</artifactId>
  <version>${next-gcs-connector-release-tag}</version>
</dependency>
```
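Once the connector JAR is on the Hadoop classpath, Hadoop must be told which classes serve the `gs://` scheme and how to authenticate. The sketch below is a typical `core-site.xml` fragment, not taken from this repository's docs: the key-file path is a placeholder, and the property names should be checked against the connector's configuration reference for your release.

```xml
<configuration>
  <!-- Register the connector's FileSystem implementations for gs:// paths. -->
  <property>
    <name>fs.gs.impl</name>
    <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.gs.impl</name>
    <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
  </property>
  <!-- Authenticate with a service account key file (path is a placeholder). -->
  <property>
    <name>google.cloud.auth.service.account.json.keyfile</name>
    <value>/path/to/keyfile.json</value>
  </property>
</configuration>
```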
On Stack Overflow, use the tag `google-cloud-dataproc` for questions about the connectors in this repository. This tag receives responses from the Stack Overflow community and Google engineers, who monitor the tag and offer unofficial support.