Deprecated sg_logger_params #1531
Conversation
"save_tensorboard_remote": True, | ||
"save_logs_remote": True, | ||
} | ||
"sg_logger": {"dagshub_sg_logger": |
Should we also introduce logger names consistent with the class names and deprecate the old snake_case names? E.g. "DagsHubSGLogger" instead of "dagshub_sg_logger"?
I was actually thinking about that too.
Should I? Since we don't want to break anything (not actually breaking, since this is a new format), would we also deprecate the old way?
Or should we just keep both options?
I think it would be great: add the class names and deprecate the old names.
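Roughly, something like this could cover the transition period (a minimal sketch; the alias table and function name are made up here, not the actual SuperGradients registry code):

```python
import warnings

# Hypothetical alias table -- illustrative names only, not the real registry.
_DEPRECATED_LOGGER_NAMES = {
    "base_sg_logger": "BaseSGLogger",
    "wandb_sg_logger": "WandBSGLogger",
    "dagshub_sg_logger": "DagsHubSGLogger",
}


def resolve_sg_logger_name(name: str) -> str:
    """Map a deprecated snake_case logger name to its class name, warning when the old spelling is used."""
    if name in _DEPRECATED_LOGGER_NAMES:
        new_name = _DEPRECATED_LOGGER_NAMES[name]
        warnings.warn(
            f"sg_logger '{name}' is deprecated, please use '{new_name}' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return new_name
    return name
```

That way both spellings keep working while the recipes and docs move over to the class names.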
```yaml
save_logs_remote: False  # upload log files to s3
monitor_system: True     # Monitor and write to tensorboard the system statistics, such as CPU usage, GPU, ...
sg_logger:
  base_sg_logger:
```
How would one "unset" the base_sg_logger once it is already in the defaults?
I feel this may be a problem if someone wants to use a non-default logger:
```yaml
defaults:
  - training_hyperparams: default_train_params

training_hyperparams:
  sg_logger:
    wandb_sg_logger:
      ...
```
In this case the resolved config would include:
```yaml
training_hyperparams:
  sg_logger:
    base_sg_logger:
      ...
    wandb_sg_logger:
      ...
```
And that would lead to an exception in the factory.
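Just to illustrate the merge behaviour, a standalone OmegaConf sketch (params are simplified, but the merge semantics are the same as what Hydra does with the defaults list):

```python
from omegaconf import OmegaConf

# What default_train_params would contribute (simplified).
defaults = OmegaConf.create({"sg_logger": {"base_sg_logger": {"monitor_system": True}}})

# A recipe that only asks for the W&B logger.
recipe = OmegaConf.create({"sg_logger": {"wandb_sg_logger": {"project_name": "my_project"}}})

merged = OmegaConf.merge(defaults, recipe)
print(OmegaConf.to_yaml(merged))
# sg_logger:
#   base_sg_logger:
#     monitor_system: true
#   wandb_sg_logger:
#     project_name: my_project
```

Both loggers end up under sg_logger, which is exactly the ambiguity the factory would choke on.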
A second use case that would not work is enabling W&B via a Makefile (to keep the YAML files intact):
```makefile
WANDB_PARAMS = training_hyperparams.sg_logger=wandb_sg_logger \
	+training_hyperparams.sg_logger_params.api_server=https://wandb.research.deci.ai \
	+training_hyperparams.sg_logger_params.entity=super-gradients \
	training_hyperparams.sg_logger_params.launch_tensorboard=false \
	training_hyperparams.sg_logger_params.monitor_system=true \
	+training_hyperparams.sg_logger_params.project_name=PoseEstimation \
	training_hyperparams.sg_logger_params.save_checkpoints_remote=true \
	training_hyperparams.sg_logger_params.save_logs_remote=true \
	training_hyperparams.sg_logger_params.save_tensorboard_remote=false \
	training_hyperparams.sg_logger_params.tb_files_user_prompt=false

coco2017_yolo_nas_s:
	python src/super_gradients/train_from_recipe.py --config-name=coco2017_yolo_nas_s $(WANDB_PARAMS)
```
Strong point right there...
I will think of something.