Hello,

Toil version: 6.1
Python: 3.9

We use S3 URIs as inputs to our workflows. Our platform drives the workflows from a VM that mounts the shared space over NFS. The Toil leader downloads the input files, which makes this step very slow, because the downloads are not performed on the compute infrastructure (the workers), which is tuned for high network performance.

Would it be possible to implement a --runImportsOnWorkers option, analogous to --runLocalJobsOnWorkers, so that the S3 copies are made from the processing resources?

Thanks

┆Issue is synchronized with this Jira Story
┆Issue Number: TOIL-1619
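A possible interim workaround, sketched below under the assumption that the workflow is written against Toil's Python API: perform the S3 fetch inside a job function so it executes on a worker, then hand the data to downstream jobs through the job store. The bucket/key values and the function names `import_from_s3` and `consume_input` are illustrative, not part of Toil.

```python
import os

import boto3
from toil.common import Toil
from toil.job import Job


def import_from_s3(job, bucket, key):
    # Runs on a worker: download the object into the job's temp dir ...
    local_path = os.path.join(job.fileStore.getLocalTempDir(), os.path.basename(key))
    boto3.client("s3").download_file(bucket, key, local_path)
    # ... and publish it to the job store so downstream jobs can read it.
    return job.fileStore.writeGlobalFile(local_path)


def consume_input(job, file_id):
    # Downstream job: stage the imported file from the job store onto this worker.
    return job.fileStore.readGlobalFile(file_id)


if __name__ == "__main__":
    options = Job.Runner.getDefaultOptions("./jobstore")
    importer = Job.wrapJobFn(import_from_s3, "my-bucket", "inputs/sample.bam")
    importer.addFollowOnJobFn(consume_input, importer.rv())
    with Toil(options) as workflow:
        workflow.start(importer)
```

Because the download happens inside a job, the batch system schedules it on the compute nodes like any other work, keeping the leader VM out of the data path.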
Do we need a way to run a workflow with mixed inputs, where some inputs are local file paths only available on the leader filesystem while others are URLs we can fetch from the workers?
In that case, I guess we could fetch all local files on the leader and everything else on the workers.
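A minimal sketch of that dispatch rule, assuming inputs arrive as URI strings (the function name and the scheme handling are illustrative, not existing Toil code):

```python
from urllib.parse import urlparse


def import_on_worker(uri: str) -> bool:
    # Bare paths and file:// URIs only exist on the leader's filesystem, so they
    # must be fetched there; anything with a remote scheme (s3://, http://,
    # gs://, ...) can be imported from a worker.
    return urlparse(uri).scheme not in ("", "file")


# import_on_worker("s3://bucket/inputs/sample.bam")  -> True
# import_on_worker("/mnt/shared/inputs/sample.bam")  -> False
```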