Releases: mlr-org/mlr3tuning
mlr3tuning 1.0.1
mlr3tuning 1.0.0
- feat: Introduce asynchronous optimization with the `TunerAsync` and `TuningInstanceAsync*` classes.
- BREAKING CHANGE: The `Tuner` class is `TunerBatch` now.
- BREAKING CHANGE: The `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit` classes are `TuningInstanceBatchSingleCrit` and `TuningInstanceBatchMultiCrit` now.
- BREAKING CHANGE: The `CallbackTuning` class is `CallbackBatchTuning` now.
- BREAKING CHANGE: The `ContextEval` class is `ContextBatchTuning` now.
- refactor: Remove hotstarting from batch optimization due to low performance.
- refactor: The option `evaluate_default` is a callback now.
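A minimal sketch of the 1.0.0 rename, assuming mlr3tuning >= 1.0.0 and the standard `tnr()` constructor: tuners are still built the same way, but batch tuners now inherit from the new `TunerBatch` class instead of plain `Tuner`.

```r
library(mlr3tuning)

# tnr() constructs tuners exactly as before; only the class hierarchy changed.
tuner = tnr("random_search", batch_size = 10)

# Under 1.0.0 the returned object should inherit from TunerBatch.
print(class(tuner))
```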
mlr3tuning 0.20.0
- compatibility: Work with new paradox version 1.0.0.
- fix: `TunerIrace` failed with logical parameters and dependencies.
mlr3tuning 0.19.2
- refactor: Change thread limits.
mlr3tuning 0.19.1
- refactor: Speed up the tuning process by minimizing the number of deep clones and parameter checks.
- fix: Set `store_benchmark_result = TRUE` if `store_models = TRUE` when creating a tuning instance.
- fix: Passing a terminator in `tune_nested()` did not work.
mlr3tuning 0.19.0
- fix: Add `$phash()` method to `AutoTuner`.
- fix: Include `Tuner` in hash of `AutoTuner`.
. - feat: Add new callback that scores the configurations on additional measures while tuning.
- feat: Add vignette about adding new tuners which was previously part of the mlr3book.
mlr3tuning 0.18.0
- BREAKING CHANGE: The `method` parameter of `tune()`, `tune_nested()` and `auto_tuner()` is renamed to `tuner`. Only `Tuner` objects are accepted now. Arguments to the tuner cannot be passed with `...` anymore.
- BREAKING CHANGE: The `tuner` parameter of `AutoTuner` is moved to the first position to achieve consistency with the other functions.
- docs: Update resources sections.
- docs: Add list of default measures.
- fix: Add `allow_hotstarting`, `keep_hotstart_stack` and `keep_models` flags to `AutoTuner` and `auto_tuner()`.
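A sketch of the renamed argument, assuming mlr3tuning >= 0.18.0 and mlr3's usual sugar functions (`tsk()`, `lrn()`, `rsmp()`, `msr()`): a constructed `Tuner` object is now passed via `tuner`, where older code passed a string via `method` and forwarded tuner settings through `...`.

```r
library(mlr3)
library(mlr3tuning)

# Pre-0.18.0 style (no longer accepted):
#   tune(method = "grid_search", ..., resolution = 5)
# 0.18.0 style: construct the Tuner first and pass it via `tuner`.
instance = tune(
  tuner = tnr("grid_search", resolution = 5),
  task = tsk("iris"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1, logscale = TRUE)),
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 10
)
instance$result
```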
mlr3tuning 0.17.2
- feat: `AutoTuner` accepts instantiated resamplings now. The `AutoTuner` checks if all row ids of the inner resampling are present in the outer resampling train set when nested resampling is performed.
- fix: Standalone `Tuner` did not create a `ContextOptimization`.
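A sketch of passing an instantiated resampling to `AutoTuner`, assuming mlr3tuning >= 0.17.2; the task and learner here are placeholders, not part of the release notes.

```r
library(mlr3)
library(mlr3tuning)

task = tsk("iris")

# Instantiate the inner resampling on the task up front;
# as of 0.17.2 the AutoTuner accepts it in this state.
inner = rsmp("cv", folds = 3)
inner$instantiate(task)

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
  resampling = inner,
  measure = msr("classif.ce"),
  term_evals = 10
)
at$train(task)
```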
mlr3tuning 0.17.1
- fix: The `ti()` function did not accept callbacks.
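A sketch of passing a callback to `ti()` after this fix. The `mlr3tuning.backup` callback id and its `path` argument are assumptions based on the package's built-in callbacks; substitute any callback available in your version.

```r
library(mlr3)
library(mlr3tuning)

# ti() builds a tuning instance; the callbacks argument now works.
instance = ti(
  task = tsk("iris"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10),
  # assumed built-in callback that saves the benchmark result to disk
  callbacks = clbk("mlr3tuning.backup", path = "backup.rds")
)
```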
mlr3tuning 0.17.0
- feat: The methods `$importance()`, `$selected_features()`, `$oob_error()` and `$loglik()` are forwarded from the final model to the `AutoTuner` now.
- refactor: The `AutoTuner` stores the instance and benchmark result if `store_models = TRUE`.
- refactor: The `AutoTuner` stores the instance if `store_benchmark_result = TRUE`.
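A sketch of the forwarded methods, assuming mlr3tuning >= 0.17.0 and a learner whose model supports the method in question (e.g. `classif.rpart` provides `$importance()`):

```r
library(mlr3)
library(mlr3tuning)

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 5
)
at$train(tsk("sonar"))

# Forwarded from the final fitted model to the AutoTuner:
at$importance()
```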