Client compatibility list
This page lists applications known to be compatible with S3Proxy and any configuration needed to use them. Usually one must explicitly set the endpoint, disable bucket-in-hostname addressing (use path-style requests), and use AWS signature v2.
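All examples below assume a local S3Proxy listening on http://127.0.0.1:8080; substitute whatever identity and credential your s3proxy.properties defines. A minimal s3proxy.properties sketch for such a setup (the filesystem backend and base directory are illustrative choices, not requirements):
s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.authorization=aws-v2-or-v4
s3proxy.identity=local-identity
s3proxy.credential=local-credential
jclouds.provider=filesystem
jclouds.filesystem.basedir=/tmp/s3proxy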
aws-cli
$ aws configure
AWS Access Key ID [****]: local-identity
AWS Secret Access Key [****]: local-credential
Default region name [None]:
Default output format [None]:
$ aws s3 ls --endpoint-url http://127.0.0.1:8080
AWS SDK for Java
AmazonS3Client client = new AmazonS3Client(
new BasicAWSCredentials("identity", "credential"),
new ClientConfiguration().withSignerOverride("S3SignerType"));
client.setEndpoint("http://127.0.0.1:8080");
client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "127.0.0.1:4242")
sc.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false") // if no SSL suppport
// if user/password access
sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
sc.hadoopConfiguration.set("fs.s3a.access.key", "local-identity")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "local-credential")
// if anonymous access
sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
With the latest hadoop-aws jar, you need to set s3proxy.authorization=aws-v2-or-v4 in the s3proxy configuration.
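The same settings can be applied from PySpark; a sketch using Spark's spark.hadoop.* configuration prefix, mirroring the Scala example above (endpoint and credentials are the same illustrative values):
from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .config("spark.hadoop.fs.s3a.endpoint", "127.0.0.1:4242")
    .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
    .config("spark.hadoop.fs.s3a.access.key", "local-identity")
    .config("spark.hadoop.fs.s3a.secret.key", "local-credential")
    .getOrCreate())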
AWS SDK for JavaScript
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
endpoint: 'http://127.0.0.1:8080',
accessKeyId: 'local-identity',
secretAccessKey: 'local-credential',
s3ForcePathStyle: true
});
Also specify jclouds.regions=us-east-1 to work around aws/aws-sdk-js#919.
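If you run S3Proxy from a properties file like the sketch near the top of this page, that is one extra line in s3proxy.properties:
jclouds.regions=us-east-1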
AWS SDK for Ruby
require 'aws-sdk'
Aws.config.update({
credentials: Aws::Credentials.new('identity', 'credential'),
endpoint: 'http://localhost:8080'
})
boto
import boto
import boto.s3.connection

conn = boto.connect_s3(aws_access_key_id = 'identity',
aws_secret_access_key = 'credential',
host = '127.0.0.1', port = 8080, is_secure = False,
calling_format = boto.s3.connection.OrdinaryCallingFormat())
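A quick round trip to verify the connection (the bucket and key names are illustrative):
bucket = conn.create_bucket('test-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello world')
print([b.name for b in conn.get_all_buckets()])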
boto3
import boto3

session = boto3.session.Session(aws_access_key_id='identity',
aws_secret_access_key='credential')
config = boto3.session.Config(s3={'addressing_style': 'path'})
# low level S3 client
client = session.client('s3', endpoint_url='http://localhost:8080',
config=config)
# S3 resource object
resource = session.resource('s3', endpoint_url='http://localhost:8080',
config=config)
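Either object can then exercise the proxy, for example with the low-level client (bucket and key names are illustrative):
client.create_bucket(Bucket='test-bucket')
client.put_object(Bucket='test-bucket', Key='hello.txt', Body=b'hello world')
print(client.get_object(Bucket='test-bucket', Key='hello.txt')['Body'].read())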
fog
require 'fog'
fog = Fog::Storage.new(provider: 'aws',
                       aws_access_key_id: 'local-identity',
                       aws_secret_access_key: 'local-credential',
                       endpoint: 'http://127.0.0.1:8080',
                       path_style: true)
jclouds CLI
jclouds blobstore container-list --api s3 \
--endpoint http://127.0.0.1:8080 \
--identity local-identity \
--credential local-credential
libcloud
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
cls = get_driver(Provider.S3)
driver = cls('local-identity', 'local-credential', host='localhost', port=8080, secure=False)
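A quick sanity check against the proxy (the container name is illustrative):
container = driver.create_container('test-container')
print([c.name for c in driver.list_containers()])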
s3cmd 1.5.0 and newer
Sample .s3cfg:
[default]
access_key = local-identity
secret_key = local-credential
host_base = localhost:8080
host_bucket = localhost:8080
s3cmd before 1.5.0
Sample .s3cfg:
[default]
access_key = local-identity
secret_key = local-credential
host_base = localhost:8080
host_bucket = %(bucket)s.localhost:8080
Also include this in your s3proxy.properties:
s3proxy.virtual-host=localhost
Additionally, you will need to make sure bucket.localhost resolves to localhost, for example via /etc/hosts:
127.0.0.1 bucket.localhost
You can make *.localhost resolve to 127.0.0.1 if you use dnsmasq. For example, with NetworkManager-managed dnsmasq:
echo address=/localhost/127.0.0.1 | sudo tee /etc/NetworkManager/dnsmasq.d/localhost.conf
s3fs
$ echo local-identity:local-credential > passwd
$ chmod 600 passwd
$ s3fs bucketname dirname \
-o passwd_file=passwd \
-o sigv2 \
-o url=http://127.0.0.1:8080 \
-o use_path_request_style