Response exhaust report fails because files larger than 2 GB cannot be uploaded to OCI Object Storage #171
Replies: 3 comments 1 reply
-
@shridhar95 can you please share the full log?
-
Hi @kumarks1122, please find the attachment containing the full logs.
-
Thanks @kumarks1122 for your kind response.
-
Hi Team,
We are working on the response exhaust report, and report generation fails for any request whose file size exceeds 2 GB.
We analyzed the issue, tried to optimize the code, and changed the memory configuration of the big data cluster, but the job still fails. We tried different values for driver memory, driver memory overhead, executor memory, and executor memory overhead, e.g. 200 GB, 20 GB, 40 GB, and 4 GB respectively.
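As a sketch, the combination of memory settings described above corresponds to the following standard Spark configuration keys (assuming Spark 2.3+ property names; on older Spark-on-YARN versions the overhead keys are `spark.yarn.driver.memoryOverhead` / `spark.yarn.executor.memoryOverhead`):

```
# spark-defaults.conf (values as tried above; illustrative only)
spark.driver.memory            200g
spark.driver.memoryOverhead    20g
spark.executor.memory          40g
spark.executor.memoryOverhead  4g
```

These can equally be passed as `--conf key=value` flags to `spark-submit`.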
Our analysis shows that the job can write multiple partitioned files to object storage, and the CopyMerge step is able to merge these files into one; the job fails at the point where the merged file is uploaded to the object storage bucket.
Going through the logs afterwards, we observed that the connector is unable to write large files to the bucket.
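A failure right at the 2 GB mark is often a sign of a signed 32-bit length field somewhere in the client stack, since 2 GiB is exactly the `Int.MaxValue` boundary. This is an assumption about the root cause, not something confirmed from the logs; the sketch below only illustrates the boundary:

```python
# 2 GiB boundary check: the largest value a signed 32-bit integer can hold.
INT32_MAX = 2**31 - 1  # 2147483647 bytes, i.e. 2 GiB minus one byte

def fits_in_int32(size_bytes: int) -> bool:
    """Return True if a file size can be represented as a signed 32-bit int."""
    return size_bytes <= INT32_MAX

print(fits_in_int32(2 * 1024**3 - 1))  # just under 2 GiB -> True
print(fits_in_int32(2 * 1024**3))      # exactly 2 GiB   -> False
```

If the failure threshold were instead around 5 GB, the single-PUT object size limit of S3-compatible stores would be the more likely suspect.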
We are using the org.jets3t.service connector. We also tried raising the size limits in the jets3t.properties configuration up to 10 GB, but it still does not work.
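One avenue worth checking (an assumption on our side, not confirmed for this cluster): Hadoop's jets3t-backed `s3n` filesystem supports multipart uploads, which split a large file into smaller parts instead of issuing one huge PUT. If the job writes through an `s3n://`-style scheme, these `core-site.xml` properties may be relevant:

```xml
<property>
  <name>fs.s3n.multipart.uploads.enabled</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3n.multipart.uploads.block.size</name>
  <!-- part size in bytes; 64 MB shown as an illustrative value -->
  <value>67108864</value>
</property>
```

Whether these apply depends on which filesystem connector and scheme the job actually uses against OCI Object Storage.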
Requesting your help with this issue.
For your reference, a screenshot is attached.