
Enabling early-stopping in cv with proper eval_set #54

Open
Paperone80 opened this issue Jul 8, 2018 · 1 comment

Comments

@Paperone80

Hi,

Is there a way to enable early stopping as part of EvolutionaryAlgorithmSearchCV when cv=KFold()?

As I understand it, this is not part of the scikit-learn API because eval_set is not passed on to the estimator's fit() function.

It would be beneficial for LightGBM and other estimators that provide early_stopping_rounds functionality based on an eval_metric.

Any suggestion for a temporary fix? All it needs is, for example, to pass

```python
fit_params = {
    'early_stopping_rounds': 1000,
    'eval_metric': 'auc',
    'eval_set': [(train_x, train_y), (valid_x, valid_y)],
}
```

to fit(), using the same (train_x, train_y) in eval_set as in fit(train_x, train_y).

Would a change work in this section?
```python
...
for train, test in cv.split(X, y):
    assert len(train) > 0 and len(test) > 0, \
        "Training and/or testing not long enough for evaluation."
    _score = _fit_and_score(estimator=individual.est, X=X, y=y, scorer=scorer,
                            train=train, test=test, verbose=verbose,
                            parameters=parameters, fit_params=fit_params,
                            error_score=error_score)[0]
...
```
Something like":
fit_params.update({'eval_set': [(X[train], y[train]),(X[test], y[test])]})
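To illustrate the idea end to end, here is a minimal, self-contained sketch of the proposed per-fold fix. DummyEstimator, kfold_indices, and fit_with_eval_set are hypothetical stand-ins (not sklearn-deap or LightGBM code); DummyEstimator only mimics a LightGBM-style fit signature so we can verify that each fold's train/test split ends up in eval_set:

```python
class DummyEstimator:
    """Hypothetical stand-in for an estimator with a LightGBM-style fit signature."""
    def fit(self, X, y, eval_set=None, eval_metric=None,
            early_stopping_rounds=None):
        # Record what was passed so the per-fold eval_set can be inspected.
        self.eval_set_ = eval_set
        return self

def kfold_indices(n, k):
    """Tiny K-fold splitter over range(n), in place of cv.split(X, y)."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

def fit_with_eval_set(estimator, X, y, train, test, fit_params):
    # The proposed change: rebuild eval_set from the current fold so early
    # stopping validates on this fold's held-out data, then call fit().
    fit_params = dict(fit_params)  # don't mutate the caller's dict
    fit_params.update({'eval_set': [
        ([X[i] for i in train], [y[i] for i in train]),
        ([X[i] for i in test], [y[i] for i in test]),
    ]})
    return estimator.fit([X[i] for i in train], [y[i] for i in train],
                         **fit_params)

X = [[v] for v in range(10)]
y = [v % 2 for v in range(10)]
base_params = {'eval_metric': 'auc', 'early_stopping_rounds': 1000}

for train, test in kfold_indices(len(X), k=5):
    est = fit_with_eval_set(DummyEstimator(), X, y, train, test, base_params)
    # Each fold's eval_set now holds that fold's train and test data.
    assert len(est.eval_set_) == 2
```

Inside sklearn-deap, the equivalent would be the fit_params.update(...) line placed just before the _fit_and_score call in the loop above.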

Thanks

@hofesh

hofesh commented Mar 11, 2019

👍 yes please
