I can see that in pghoard's transfer.py, the call to storage.store_file_object (which in turn calls multipart_upload_file_object) does not pass the size parameter. As a result, the part count falls back to "Unknown".
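To illustrate why the missing size produces "Unknown", here is a minimal standalone sketch of the part-count logic around line 297 of rohmu/object_storage/s3.py. The chunk size value and the helper function itself are illustrative (rohmu's actual chunk size comes from its multipart_chunk_size setting), but the branch structure mirrors the quoted code:

```python
import math

# Illustrative chunk size; rohmu's real value is configurable.
MULTIPART_CHUNK_SIZE = 100 * 1024 * 1024  # 100 MiB

def chunk_count(size):
    """Mirror the logic in multipart_upload_file_object:
    with no size, the part count is the string "Unknown"."""
    chunks = "Unknown"
    if size is not None:
        chunks = math.ceil(size / MULTIPART_CHUNK_SIZE)
    return chunks

print(chunk_count(None))               # size not passed -> Unknown
print(chunk_count(250 * 1024 * 1024))  # 250 MiB / 100 MiB -> 3
```

So any caller that omits size (or passes None) makes every "Uploaded part N of ..." log line read "Unknown".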
Author: Jason Pell <jason@adnuntius.com>
Date: Tue Jul 19 11:08:39 2022 +1000
make sure the s3 size is passed from pghoard
diff --git a/rohmu/object_storage/s3.py b/rohmu/object_storage/s3.py
index 675faac..c0a644f 100644
--- a/rohmu/object_storage/s3.py
+++ b/rohmu/object_storage/s3.py
@@ -297,6 +297,9 @@ class S3Transfer(BaseTransfer):
         start_of_multipart_upload = time.monotonic()
         bytes_sent = 0
+        if size is None and "Content-Length" in metadata:
+            size = metadata["Content-Length"]
+
         chunks = "Unknown"
         if size is not None:
             chunks = math.ceil(size / self.multipart_chunk_size)
But I do not understand the code well enough to know whether that is a good fix or not.
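As a standalone sketch, the patch's fallback behaves like this. Note that the defensive int() conversion is my addition on the assumption that metadata values might arrive as strings (the quoted patch uses the value as-is); the function name and chunk size here are hypothetical:

```python
import math

def resolve_chunks(size, metadata, multipart_chunk_size=100 * 1024 * 1024):
    """Hypothetical standalone version of the patched logic:
    fall back to the Content-Length metadata entry when the caller
    did not pass an explicit size."""
    if size is None and "Content-Length" in metadata:
        # Assumption: metadata values may be strings, so convert
        # before doing arithmetic on them.
        size = int(metadata["Content-Length"])
    if size is None:
        return "Unknown"
    return math.ceil(size / multipart_chunk_size)

print(resolve_chunks(None, {"Content-Length": "524288000"}))  # 500 MiB -> 5
print(resolve_chunks(None, {}))                               # no hint -> Unknown
```

The fallback only helps when a Content-Length entry is actually present in the metadata dict; otherwise the behavior is unchanged.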
What happened?
For example, the log shows:
S3Transfer Thread-30 INFO: Uploaded part 1 of Unknown
whereas in earlier versions of pghoard I would see the total number of parts to be uploaded. Is there a way to fix this?
What did you expect to happen?
In earlier versions of pghoard, I would see the total number of parts to be uploaded.
What else do we need to know?