Introduce a ResolvedBucketName type to avoid temporal coupling (and e… #1754

Open · wants to merge 2 commits into master · changes shown from 1 commit
@@ -0,0 +1,66 @@
/****************************************************************
* Licensed to the Apache Software Foundation (ASF) under one *
* or more contributor license agreements. See the NOTICE file *
* distributed with this work for additional information *
* regarding copyright ownership. The ASF licenses this file *
* to you under the Apache License, Version 2.0 (the *
* "License"); you may not use this file except in compliance *
* with the License. You may obtain a copy of the License at *
* *
* http://www.apache.org/licenses/LICENSE-2.0 *
* *
* Unless required by applicable law or agreed to in writing, *
* software distributed under the License is distributed on an *
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY *
* KIND, either express or implied. See the License for the *
* specific language governing permissions and limitations *
* under the License. *
****************************************************************/

package org.apache.james.blob.api;

import com.google.common.base.MoreObjects;
import com.google.common.base.Preconditions;
import org.apache.commons.lang3.StringUtils;

import java.util.Objects;

public final class ResolvedBucketName {
Contributor:
This POJO is defined in the API but only used in blob-s3.

I think introducing it is a good idea, but we should avoid polluting the API and move it into blob-s3.

Member:
Hi @chibenwa,
Do you have a plan to migrate to Java 14 (or 16)? That would spare us a lot of boilerplate like this.

Contributor:
If we upgrade, better to target an LTS version, so 17 (or even 21 would be better, but maybe too big of a jump?).

I would support such a migration, but I'm not sure the whole community is ready to upgrade their Java version :)

We could perhaps start a thread on the ML, though?

Contributor:
> Hi @chibenwa ,
> Do you have a plan to migrate to Java 14 (or 16), this will prevent from lot of code like this one.

"You" is out of place; as a community we should embrace a "we".

I would personally support a switch to Java 17 or even 21.

Do you think you could send a mail on this topic to the mailing lists?
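The boilerplate the reviewers allude to largely disappears with Java records (available since Java 16; 17 is the LTS). A minimal sketch of how this POJO could collapse under such a version, assuming the project had migrated; this is hypothetical and not part of the PR:

```java
// Hypothetical record-based rewrite (assumes Java 16+); not part of this PR.
public record ResolvedBucketName(String value) {
    public ResolvedBucketName {
        // Compact constructor: validate before the field is assigned.
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException("`value` cannot be blank");
        }
    }

    public static ResolvedBucketName of(String value) {
        return new ResolvedBucketName(value);
    }

    // Kept so callers of the hand-written class compile unchanged.
    public String asString() {
        return value;
    }
}
```

equals, hashCode, and toString are generated by the compiler, so most of the hand-written class below simply vanishes.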

    public static ResolvedBucketName of(String value) {
        return new ResolvedBucketName(value);
    }

    private final String value;

    private ResolvedBucketName(String value) {
        Preconditions.checkNotNull(value);
        Preconditions.checkArgument(StringUtils.isNotBlank(value), "`value` cannot be blank");

        this.value = value;
    }

    public String asString() {
        return value;
    }

    @Override
    public final boolean equals(Object o) {
        if (o instanceof ResolvedBucketName) {
            ResolvedBucketName that = (ResolvedBucketName) o;
            return Objects.equals(this.value, that.value);
        }
        return false;
    }

    @Override
    public final int hashCode() {
        return Objects.hash(value);
    }

    @Override
    public String toString() {
        return MoreObjects.toStringHelper(this)
            .add("value", value)
            .toString();
    }
}
@@ -24,6 +24,7 @@
import org.apache.james.blob.api.BucketName;

import com.google.common.base.Preconditions;
+import org.apache.james.blob.api.ResolvedBucketName;

public class BucketNameResolver {
static class Builder {
@@ -84,33 +85,39 @@ private BucketNameResolver(Optional<BucketName> namespace, Optional<String> prefix
this.prefix = prefix;
}

-    BucketName resolve(BucketName bucketName) {
+    ResolvedBucketName resolve(BucketName bucketName) {
Preconditions.checkNotNull(bucketName);

if (isNameSpace(bucketName)) {
-            return bucketName;
+            return ResolvedBucketName.of(bucketName.asString());
}
return prefix
-            .map(bucketPrefix -> BucketName.of(bucketPrefix + bucketName.asString()))
-            .orElse(bucketName);
+            .map(bucketPrefix -> ResolvedBucketName.of(bucketPrefix + bucketName.asString()))
+            .orElse(ResolvedBucketName.of(bucketName.asString()));
}

-    Optional<BucketName> unresolve(BucketName bucketName) {
+    Optional<BucketName> unresolve(ResolvedBucketName bucketName) {
if (isNameSpace(bucketName)) {
-            return Optional.of(bucketName);
+            return Optional.of(BucketName.of(bucketName.asString()));
}

return prefix.map(p -> {
if (bucketName.asString().startsWith(p)) {
return Optional.of(BucketName.of(bucketName.asString().substring(p.length())));
}
return Optional.<BucketName>empty();
-        }).orElse(Optional.of(bucketName));
+        }).orElse(Optional.of(BucketName.of(bucketName.asString())));
}

private boolean isNameSpace(BucketName bucketName) {
return namespace
-            .map(existingNamespace -> existingNamespace.equals(bucketName))
+            .map(existingNamespace -> existingNamespace.asString().equals(bucketName.asString()))
Contributor:
💭 thought: the namespace term is also called defaultBucketName in other places. It is called namespace in the configuration file; the configuration builder accepts a namespace setter, but when building the actual configuration object it injects the namespace into a field named defaultBucketName. A similarly named field is also exposed by the S3 DAO, yet in the bucket-name resolver it is again called namespace. It is unclear to me what this concept is supposed to represent and how it differs from bucketPrefix (whose name is fairly stable throughout the code).
I don't expect an action on this MR, but seeing the name here reminded me of this thought.

Contributor:
❔ question: why change the equality from BucketName#equals to asString().equals? I agree the result is identical, but I don't think this change makes the code clearer.

I understand why it is necessary in the overload of isNameSpace that compares a ResolvedBucketName with a BucketName.

.orElse(false);
}

+    private boolean isNameSpace(ResolvedBucketName bucketName) {
+        return namespace
+            .map(existingNamespace -> existingNamespace.asString().equals(bucketName.asString()))
+            .orElse(false);
+    }
}
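The resolve/unresolve pair above is essentially prefix arithmetic. A standalone sketch of that round trip (hypothetical helper names, not the James API; the real resolver additionally passes the namespace bucket through untouched):

```java
import java.util.Optional;

// Hypothetical standalone model of BucketNameResolver's prefixing logic.
final class PrefixRoundTrip {
    // resolve: logical name -> storage name, by prepending the prefix.
    static String resolve(String prefix, String bucketName) {
        return prefix + bucketName;
    }

    // unresolve: storage name -> logical name; names without the
    // prefix do not belong to this resolver and are filtered out.
    static Optional<String> unresolve(String prefix, String resolvedName) {
        if (resolvedName.startsWith(prefix)) {
            return Optional.of(resolvedName.substring(prefix.length()));
        }
        return Optional.empty();
    }
}
```

unresolve(resolve(x)) always yields x back, which is the invariant the test changes at the bottom of this diff exercise.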
@@ -38,11 +38,7 @@
import javax.net.ssl.TrustManagerFactory;

import org.apache.commons.io.IOUtils;
-import org.apache.james.blob.api.BlobId;
-import org.apache.james.blob.api.BlobStoreDAO;
-import org.apache.james.blob.api.BucketName;
-import org.apache.james.blob.api.ObjectNotFoundException;
-import org.apache.james.blob.api.ObjectStoreIOException;
+import org.apache.james.blob.api.*;
import org.apache.james.lifecycle.api.Startable;
import org.apache.james.util.ReactorUtils;
import org.reactivestreams.Publisher;
@@ -192,7 +188,7 @@ public void close() {

@Override
public InputStream read(BucketName bucketName, BlobId blobId) throws ObjectStoreIOException, ObjectNotFoundException {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return ReactorUtils.toInputStream(getObject(resolvedBucketName, blobId)
.onErrorMap(NoSuchBucketException.class, e -> new ObjectNotFoundException("Bucket not found " + resolvedBucketName.asString(), e))
@@ -203,7 +199,7 @@ public InputStream read(BucketName bucketName, BlobId blobId) throws ObjectStore

@Override
public Publisher<InputStream> readReactive(BucketName bucketName, BlobId blobId) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return getObject(resolvedBucketName, blobId)
.onErrorMap(NoSuchBucketException.class, e -> new ObjectNotFoundException("Bucket not found " + resolvedBucketName.asString(), e))
@@ -217,7 +213,7 @@ private static class FluxResponse {
Flux<ByteBuffer> flux;
}

-    private Mono<FluxResponse> getObject(BucketName bucketName, BlobId blobId) {
+    private Mono<FluxResponse> getObject(ResolvedBucketName bucketName, BlobId blobId) {
return Mono.fromFuture(() ->
client.getObject(
builder -> builder.bucket(bucketName.asString()).key(blobId.asString()),
@@ -253,7 +249,7 @@ public void onStream(SdkPublisher<ByteBuffer> publisher) {

@Override
public Mono<byte[]> readBytes(BucketName bucketName, BlobId blobId) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return Mono.fromFuture(() ->
client.getObject(
@@ -268,7 +264,7 @@ public Mono<byte[]> readBytes(BucketName bucketName, BlobId blobId) {

@Override
public Mono<Void> save(BucketName bucketName, BlobId blobId, byte[] data) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return Mono.fromFuture(() ->
client.putObject(
@@ -300,7 +296,7 @@ private Mono<Void> uploadUsingFile(BucketName bucketName, BlobId blobId, InputSt

@Override
public Mono<Void> save(BucketName bucketName, BlobId blobId, ByteSource content) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return Mono.fromCallable(content::size)
.flatMap(contentLength ->
@@ -314,7 +310,7 @@ public Mono<Void> save(BucketName bucketName, BlobId blobId, ByteSource content)
.then();
}

-    private Mono<PutObjectResponse> save(BucketName resolvedBucketName, BlobId blobId, InputStream stream, long contentLength) {
+    private Mono<PutObjectResponse> save(ResolvedBucketName resolvedBucketName, BlobId blobId, InputStream stream, long contentLength) {
int chunkSize = Math.min((int) contentLength, CHUNK_SIZE);

return Mono.fromFuture(() -> client.putObject(builder -> builder
@@ -333,7 +329,7 @@ private Flux<ByteBuffer> chunkStream(int chunkSize, InputStream stream) {
.subscribeOn(Schedulers.boundedElastic());
}

-    private RetryBackoffSpec createBucketOnRetry(BucketName bucketName) {
+    private RetryBackoffSpec createBucketOnRetry(ResolvedBucketName bucketName) {
return RetryBackoffSpec.backoff(MAX_RETRIES, FIRST_BACK_OFF)
.maxAttempts(MAX_RETRIES)
.doBeforeRetryAsync(retrySignal -> {
@@ -349,7 +345,7 @@ private RetryBackoffSpec createBucketOnRetry(BucketName bucketName) {

@Override
public Mono<Void> delete(BucketName bucketName, BlobId blobId) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return Mono.fromFuture(() ->
client.deleteObject(delete -> delete.bucket(resolvedBucketName.asString()).key(blobId.asString())))
@@ -360,7 +356,9 @@ public Mono<Void> delete(BucketName bucketName, BlobId blobId) {

@Override
public Publisher<Void> delete(BucketName bucketName, Collection<BlobId> blobIds) {
-        return deleteObjects(bucketName,
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+
+        return deleteObjects(resolvedBucketName,
blobIds.stream()
.map(BlobId::asString)
.map(id -> ObjectIdentifier.builder().key(id).build())
@@ -370,12 +368,12 @@ public Publisher<Void> delete(BucketName bucketName, Collection<BlobId> blobIds)

@Override
public Mono<Void> deleteBucket(BucketName bucketName) {
-        BucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);
+        ResolvedBucketName resolvedBucketName = bucketNameResolver.resolve(bucketName);

return deleteResolvedBucket(resolvedBucketName);
}

-    private Mono<Void> deleteResolvedBucket(BucketName bucketName) {
+    private Mono<Void> deleteResolvedBucket(ResolvedBucketName bucketName) {
return emptyBucket(bucketName)
.onErrorResume(t -> Mono.just(bucketName))
.flatMap(ignore -> Mono.fromFuture(() ->
@@ -385,7 +383,7 @@ private Mono<Void> deleteResolvedBucket(BucketName bucketName) {
.publishOn(Schedulers.parallel());
}

-    private Mono<BucketName> emptyBucket(BucketName bucketName) {
+    private Mono<ResolvedBucketName> emptyBucket(ResolvedBucketName bucketName) {
return Flux.from(client.listObjectsV2Paginator(builder -> builder.bucket(bucketName.asString())))
.flatMap(response -> Flux.fromIterable(response.contents())
.window(EMPTY_BUCKET_BATCH_SIZE)
@@ -401,7 +399,7 @@ private Mono<List<ObjectIdentifier>> buildListForBatch(Flux<S3Object> batch) {
.collect(ImmutableList.toImmutableList());
}

-    private Mono<DeleteObjectsResponse> deleteObjects(BucketName bucketName, List<ObjectIdentifier> identifiers) {
+    private Mono<DeleteObjectsResponse> deleteObjects(ResolvedBucketName bucketName, List<ObjectIdentifier> identifiers) {
return Mono.fromFuture(() -> client.deleteObjects(builder ->
builder.bucket(bucketName.asString()).delete(delete -> delete.objects(identifiers))));
}
@@ -411,7 +409,7 @@ public Mono<Void> deleteAllBuckets() {
return Mono.fromFuture(client::listBuckets)
.publishOn(Schedulers.parallel())
.flatMapIterable(ListBucketsResponse::buckets)
-            .flatMap(bucket -> deleteResolvedBucket(BucketName.of(bucket.name())), DEFAULT_CONCURRENCY)
+            .flatMap(bucket -> deleteResolvedBucket(ResolvedBucketName.of(bucket.name())), DEFAULT_CONCURRENCY)
.then();
}

@@ -420,7 +418,7 @@ public Publisher<BucketName> listBuckets() {
return Mono.fromFuture(client::listBuckets)
.flatMapIterable(ListBucketsResponse::buckets)
.map(Bucket::name)
-            .handle((bucket, sink) -> bucketNameResolver.unresolve(BucketName.of(bucket))
+            .handle((bucket, sink) -> bucketNameResolver.unresolve(ResolvedBucketName.of(bucket))
.ifPresent(sink::next));
}

@@ -23,6 +23,7 @@
import static org.assertj.core.api.Assertions.assertThatThrownBy;

import org.apache.james.blob.api.BucketName;
+import org.apache.james.blob.api.ResolvedBucketName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
@@ -73,7 +74,7 @@ void unresolveShouldReturnPassedValue() {
.namespace(BucketName.of("namespace"))
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketName")))
.contains(BucketName.of("bucketName"));
}

@@ -84,7 +85,7 @@ void unresolveShouldReturnValueWhenNamespace() {
.namespace(BucketName.of("namespace"))
.build();

-        assertThat(resolver.unresolve(BucketName.of("namespace")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("namespace")))
.contains(BucketName.of("namespace"));
}
}
@@ -122,7 +123,7 @@ void unresolveShouldReturnPassedValueWithPrefix() {
.noNamespace()
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketPrefix-bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketPrefix-bucketName")))
.contains(BucketName.of("bucketName"));
}

@@ -133,7 +134,7 @@ void unresolveShouldFilterValuesWithoutPrefix() {
.noNamespace()
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketName")))
.isEmpty();
}
}
@@ -171,7 +172,7 @@ void unresolveShouldReturnPassedValue() {
.noNamespace()
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketName")))
.contains(BucketName.of("bucketName"));
}
}
@@ -221,7 +222,7 @@ void unresolveShouldFilterValuesWithoutPrefix() {
.namespace(BucketName.of("namespace"))
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketName")))
.isEmpty();
}

@@ -232,7 +233,7 @@ void unresolveShouldRemovePrefix() {
.namespace(BucketName.of("namespace"))
.build();

-        assertThat(resolver.unresolve(BucketName.of("bucketPrefix-bucketName")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("bucketPrefix-bucketName")))
.contains(BucketName.of("bucketName"));
}

@@ -243,7 +244,7 @@ void unresolveShouldReturnNamespaceWhenPassingNamespace() {
.namespace(BucketName.of("namespace"))
.build();

-        assertThat(resolver.unresolve(BucketName.of("namespace")))
+        assertThat(resolver.unresolve(ResolvedBucketName.of("namespace")))
.contains(BucketName.of("namespace"));
}
}