Currently, the insertion object chunking logic splits the inserted objects into batches of 500 and closures into batches of 1000. It doesn't take into account the size of the objects in each batch.
This can result in a large (~50 MB) stringified array when the values are inserted into the DB.
Insertion objects this big, arriving in a short period of time during a big-ish (but not unreasonably big) send operation, cause the server to use excessive memory, and the garbage collection of knex's internal insertion objects causes CPU bottlenecks to the point where the whole send operation can time out because the server is unable to respond within the request timeout (60 seconds).
The batching logic should cut batches off at ~10 MB as an initial target.
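A minimal sketch of what size-aware chunking could look like, assuming the batches are built before being handed to knex. The function name, the 10 MB limit, and the retained count cap are illustrative assumptions, not the actual server implementation:

```typescript
// Hypothetical sketch: chunk insertion objects by serialized size instead of
// a fixed count only. Names and limits are assumptions for illustration.

const MAX_BATCH_SIZE_BYTES = 10 * 1024 * 1024 // ~10 MB initial target
const MAX_BATCH_OBJECT_COUNT = 500 // keep the existing count cap as a fallback

function chunkBySize<T>(objects: T[]): T[][] {
  const batches: T[][] = []
  let current: T[] = []
  let currentSize = 0

  for (const obj of objects) {
    // Byte length of the stringified object, which is what ends up in the insert
    const objSize = Buffer.byteLength(JSON.stringify(obj), 'utf8')

    const wouldExceedSize = currentSize + objSize > MAX_BATCH_SIZE_BYTES
    const wouldExceedCount = current.length >= MAX_BATCH_OBJECT_COUNT
    if (current.length > 0 && (wouldExceedSize || wouldExceedCount)) {
      batches.push(current)
      current = []
      currentSize = 0
    }

    current.push(obj)
    currentSize += objSize
  }

  if (current.length > 0) batches.push(current)
  return batches
}
```

With this shape, a single oversized object still goes out in its own batch rather than being dropped, and typical batches stay under the byte budget, which bounds both the stringified payload sent to the DB and the memory knex holds per insert.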