Problem:
When running 127 jobs in parallel on joplin for preprocessing, an error occurs in `gradunwarp` for some subjects: `gradunwarp` uses a lot of memory, as discussed here.

Solution:
I tried reducing the number of jobs running in parallel with `sct_run_batch` via the flag `-jobs 120`, and everything went smoothly. I will continue this way for future preprocessing of the rest of the UK Biobank dataset. I am not sure how to add this to the README; maybe add a warning or something like that, because this solution is specific to this case.
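For reference, a minimal sketch of the kind of invocation this implies (the `-script` name and data paths are hypothetical placeholders, not taken from this issue):

```bash
# Sketch only: paths and script name are placeholders.
# -jobs 120 caps parallelism so gradunwarp's memory use stays within the node's RAM.
sct_run_batch -jobs 120 \
  -path-data /path/to/ukbiobank \
  -path-output /path/to/results \
  -script process_data.sh
```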
Right, I also ran into some problems when maxing out the available cores. For the spine-generic pipeline, I used a maximum of 20 cores (!) to avoid running into problems with the `sct_deepseg_sc` function, which maxes out the RAM.
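If it helps, here is a rough heuristic for picking a safe `-jobs` value from available RAM (a sketch only; the per-job memory figure is an assumption you would need to measure for your own pipeline, e.g. for `sct_deepseg_sc`):

```bash
# Rough heuristic, not from this thread: cap parallel jobs by RAM, then by cores.
PER_JOB_GB=8   # assumed peak RAM per subject; measure this for your own data
TOTAL_GB=$(free -g | awk '/^Mem:/ {print $2}')
NCORES=$(nproc)
JOBS=$(( TOTAL_GB / PER_JOB_GB ))
[ "$JOBS" -gt "$NCORES" ] && JOBS=$NCORES
echo "Suggested: sct_run_batch -jobs $JOBS ..."
```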
OK, thanks for the info! I will remember that when I generate the segmentations! On Compute Canada we are creating batches of 32; should we be worried that the RAM will be maxed out there too? (I will find out quickly either way.)
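On a SLURM cluster like Compute Canada, one way to avoid silently maxing out a node is to request an explicit memory budget per batch so the scheduler fails fast instead of the node swapping. A hypothetical submission script (the account, time, and memory values are placeholders, not from this thread):

```bash
#!/bin/bash
# Hypothetical SLURM script; all #SBATCH values are placeholders to adapt.
#SBATCH --account=def-someuser
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=32
#SBATCH --mem=128G   # hard cap: the job is killed rather than degrading the node

sct_run_batch -jobs 32 \
  -path-data $SCRATCH/ukbiobank \
  -path-output $SCRATCH/results \
  -script process_data.sh
```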