I have developed a custom event handler that makes Ignite code (more specifically: MONAI code) accessible to hyperparameter tuning jobs in GCP Vertex AI. It is an Ignite-ified version of this Link. As you can see from the code in the link, the metrics are simply saved at a certain time, in a certain place, with a certain syntax. Once that is in place, the Vertex HPO orchestration kicks in: input arguments are controlled via Vertex AI custom training jobs, and output model performance is extracted from the metrics file written at the end of training.
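The reporting mechanism described above can be sketched as a small Ignite-style handler. This is a minimal illustration, not the author's actual handler: the class name is invented, and the default file path and per-line JSON schema are assumptions modeled on the cloudml-hypertune convention.

```python
import json
import os
import time


class VertexHPOReporter:
    """Illustrative Ignite-style handler (hypothetical name): on each
    call, append the tracked metric as one JSON line to the file that
    Vertex AI hyperparameter tuning reads."""

    def __init__(self, metric_tag, output_path=None):
        self.metric_tag = metric_tag
        # Assumption: default path mirrors the cloudml-hypertune
        # convention; made configurable here for testing.
        self.output_path = output_path or os.environ.get(
            "CLOUD_ML_HP_METRIC_FILE", "/tmp/hypertune/output.metrics"
        )

    def __call__(self, engine):
        # `engine` is an ignite.engine.Engine; attach with e.g.
        # evaluator.add_event_handler(Events.EPOCH_COMPLETED, reporter)
        record = {
            self.metric_tag: engine.state.metrics[self.metric_tag],
            "global_step": engine.state.epoch,
            "timestamp": time.time(),
            # Assumption: trial id is exposed via an env var in the job.
            "trial": os.environ.get("CLOUD_ML_TRIAL_ID", "0"),
        }
        os.makedirs(os.path.dirname(self.output_path), exist_ok=True)
        with open(self.output_path, "a") as f:
            f.write(json.dumps(record) + "\n")
```

With this, the only extra training-script code is constructing the reporter and attaching it to the evaluator; the Vertex AI side picks the metric up from the file.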
Why is this useful? With this handler, Ignite code can be subjected to "outsourced" hyperparameter screening by just adding the handler and a few lines of Vertex config files. I found outsourcing HPO to the cloud platform far easier than coding it myself.
If you want I can contribute my solution to the codebase via a PR. Just let me know.
> If you want I can contribute my solution to the codebase via a PR. Just let me know.
Yes, this contribution is very welcome! Technically, we split the code as follows:
- handlers that require external packages to work (e.g. TensorboardLogger, ClearMLLogger, etc.) go to ignite.contrib.handlers
- other handlers go to ignite.handlers
I expect that we would need to install and use a Python client for Vertex AI, so most probably we can put the code into the contrib module. Let me know if you need any guidance.