How do I use build_experiment with a HPC? #1105
-
Hi! You can use `experiment.workon` with the Dask backend if you set up a Dask cluster. The simplest solution is to scale your jobs the same way you would with `orion hunt`: submit one job per worker and let them share the same experiment.
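A minimal sketch of what this can look like, assuming a Dask cluster is already reachable from the node running the script (the experiment name, search space and objective function below are hypothetical):

```python
from orion.client import build_experiment


def trial_fct(x):
    # Hypothetical objective; replace with your own training/evaluation code.
    return [{"name": "loss", "type": "objective", "value": (x - 2) ** 2}]


experiment = build_experiment(
    name="hpc_demo",                 # hypothetical experiment name
    space={"x": "uniform(-5, 5)"},   # hypothetical search space
    max_trials=50,
)

# Run trials through the Dask executor instead of the default one;
# n_workers controls how many trials run concurrently.
with experiment.tmp_executor("dask", n_workers=8):
    experiment.workon(trial_fct, n_workers=8)
```

With the Dask executor, `workon` dispatches the trial evaluations through the Dask client rather than running them all in the local process.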
-
For anyone looking for an answer to this, here's my code:

```python
from typing import Any

from orion.client import build_experiment

# Demo, LoggingManager, ProgressManager, VisualizationMode, solver_context_manager,
# console_log and null_progress_manager come from my own project.


def experiment_f(demo: Demo,
                 log: LoggingManager,
                 progress_manager: ProgressManager,
                 **params: Any) -> list[dict[str, Any]]:
    solvers = demo.create_solvers(**params)
    demo_loss = demo.demo_loss(solvers, log, progress_manager)
    for solver in solvers:
        solver.delete_cache()
    # Orion expects a list of result dicts with name/type/value keys.
    return [{'name': 'demo_loss',
             'type': 'objective',
             'value': demo_loss}]


def run_demo(mode: VisualizationMode,  # noqa: C901
             demo: Demo, *,
             workers: int,
             trials: int,
             failures: int,
             enable_log: bool
             ) -> None:
    experiment = build_experiment(name=demo.name, space=demo.metaparameter_space(),
                                  max_trials=trials, max_broken=failures,
                                  algorithm={"tpe": {"n_initial_points": 5}})
    if mode == VisualizationMode.multi_node:
        # One process per HPC job: manually suggest trials, evaluate them and
        # report the results back to the shared experiment.
        with solver_context_manager(thread_limit=None):
            if workers == 0:
                workers = 1
            assert workers > 0
            for _ in range(workers):
                trial = experiment.suggest()
                params = trial.params
                console_log.info_generic(params)
                results = experiment_f(demo, log=console_log,
                                       progress_manager=null_progress_manager, **params)
                experiment.observe(trial, results)
        return
    # Single node: run several workers in parallel through the joblib executor.
    with solver_context_manager(thread_limit=1):
        assert mode == VisualizationMode.multi_task
        assert workers > 0
        with experiment.tmp_executor("joblib", n_workers=workers):
            experiment.workon(experiment_f, workers, demo=demo, log=console_log,
                              progress_manager=null_progress_manager)
    experiment.plot.regret().show()
```
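Note that `workon` does not submit HPC jobs for you: for the multi_node mode you submit one job per worker yourself, and the jobs coordinate through a shared storage backend. A minimal sketch of pointing `build_experiment` at shared storage (the name, space and database path are hypothetical, and a PickledDB file on a shared filesystem is only one option):

```python
from orion.client import build_experiment

# Every submitted HPC job builds the *same* experiment and coordinates
# through the shared database.
experiment = build_experiment(
    name="demo",
    space={"x": "uniform(0, 1)"},
    max_trials=100,
    storage={
        "type": "legacy",
        "database": {"type": "pickleddb", "host": "/shared/path/orion_db.pkl"},
    },
)

# Each job then runs its own worker loop (suggest/observe or workon); the
# trials it completes become visible to the other jobs through the database.
```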
-
The Running on HPC section only shows the `orion hunt` command, but the Parallel Workers section says that using the Python API is possible. How do I do this? Does running `experiment.workon` on an HPC automatically spawn jobs?