
[ENH] Added R-Clustering clusterer to aeon #2382

Open · wants to merge 116 commits into base: main
Conversation

@Ramana-Raja commented Nov 22, 2024

Reference Issues/PRs

#2132

What does this implement/fix? Explain your changes.

Added the R-Clustering clusterer to aeon.

Does your contribution introduce a new dependency? If yes, which one?

no

Any other comments?

PR checklist

For all contributions
  • I've added myself to the list of contributors. Alternatively, you can use the @all-contributors bot to do this for you.
  • The PR title starts with either [ENH], [MNT], [DOC], [BUG], [REF], [DEP] or [GOV] indicating whether the PR topic is related to enhancement, maintenance, documentation, bugs, refactoring, deprecation or governance.
For new estimators and functions
  • I've added the estimator to the online API documentation.
  • (OPTIONAL) I've added myself as a __maintainer__ at the top of relevant files and want to be contacted regarding its maintenance. Unmaintained files may be removed. This is for the full file, and you should not add yourself if you are just making minor changes or do not want to help maintain its contents.
For developers with write access
  • (OPTIONAL) I've updated aeon's CODEOWNERS to receive notifications about future changes to these files.

@aeon-actions-bot added the clustering (Clustering package) and enhancement (New feature, improvement request or other non-bug code enhancement) labels Nov 22, 2024
@aeon-actions-bot (Contributor)

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ enhancement ].
I have added the following labels to this PR based on the changes made: [ clustering ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Slack channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • Run pre-commit checks for all files
  • Run mypy typecheck tests
  • Run all pytest tests and configurations
  • Run all notebook example tests
  • Run numba-disabled codecov tests
  • Stop automatic pre-commit fixes (always disabled for drafts)
  • Disable numba cache loading
  • Push an empty commit to re-run CI checks

@Ramana-Raja changed the title from "[ENH] Added R-Clustering clusterer to aeon for issue #2132" to "[ENH] Added R-Clustering clusterer to aeon #2132" Nov 22, 2024
@Ramana-Raja changed the title from "[ENH] Added R-Clustering clusterer to aeon #2132" to "[ENH] Added R-Clustering clusterer to aeon" Nov 22, 2024
@TonyBagnall (Contributor)

Hi, thanks for this, but if we include this clusterer we want it to use our version of the Rocket transformers, which are optimised for numba.

@Ramana-Raja (Author)

> Hi, thanks for this, but if we include this clusterer we want it to use our version of the Rocket transformers, which are optimised for numba.

Sure, I will try to reimplement it using the aeon Rocket transformers.

@Ramana-Raja (Author)

@chrisholder While reviewing the code, I identified an architectural issue. The R-Clustering implementation cannot have separate fit and predict methods because it relies on PCA for dimensionality reduction. For example, if PCA determines the optimal number of components during training to be 13, and we attempt to predict on test data with fewer than 13 dimensions, it results in an error. Even if we create a new PCA and apply fit_transform on the test data, we would need to retrain KMeans, which is what I did. However, I don't believe this is an optimal solution. Could you suggest a better approach to handle this scenario?
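
(For illustration only, not the PR's actual code: a small sketch, assuming scikit-learn's PCA, of why a PCA refitted on the test data yields a projection that the trained KMeans centroids no longer match, which is what forces the KMeans retrain described above.)

```python
# Illustrative sketch only (assumes scikit-learn's PCA; not the PR's code).
# A PCA refitted on the test data spans a different subspace than the one
# fitted during training, so KMeans centroids learned on the training
# projection no longer apply and KMeans would have to be retrained.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))  # e.g. 100 training cases, 20 features
X_test = rng.normal(size=(10, 20))    # 10 test cases

pca_train = PCA(n_components=5).fit(X_train)
pca_test = PCA(n_components=5).fit(X_test)

# The two projections of the same test data generally differ:
print(np.allclose(pca_train.transform(X_test), pca_test.transform(X_test)))  # False
```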

@TonyBagnall (Contributor)

> @chrisholder While reviewing the code, I identified an architectural issue. The R-Clustering implementation cannot have separate fit and predict methods because it relies on PCA for dimensionality reduction. For example, if PCA determines the optimal number of components during training to be 13, and we attempt to predict on test data with fewer than 13 dimensions, it results in an error. Even if we create a new PCA and apply fit_transform on the test data, we would need to retrain KMeans, which is what I did. However, I don't believe this is an optimal solution. Could you suggest a better approach to handle this scenario?

PCA has separate fit and transform steps, so you can PCA fit_transform in fit, save the fitted transform, then just PCA transform in predict?
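
For illustration, a minimal sketch of that structure (assuming scikit-learn's PCA and KMeans and aeon's MiniRocket; the class and parameter names here are illustrative, not the PR's actual implementation):

```python
# Minimal sketch of the suggested fit/predict split (illustrative only).
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from aeon.transformations.collection.convolution_based import MiniRocket


class RClusterSketch:
    """Hypothetical outline: fit everything once, only transform in predict."""

    def __init__(self, n_clusters=8, pca_components=0.9):
        self.n_clusters = n_clusters
        self.pca_components = pca_components

    def fit(self, X):
        # Fit ROCKET, PCA and KMeans on the training data and keep them.
        self._rocket = MiniRocket()
        features = self._rocket.fit_transform(X)
        self._pca = PCA(n_components=self.pca_components)
        reduced = self._pca.fit_transform(features)
        self._kmeans = KMeans(n_clusters=self.n_clusters)
        self._kmeans.fit(reduced)
        return self

    def predict(self, X):
        # Reuse the fitted transformers: transform only, no refitting.
        features = self._rocket.transform(X)
        reduced = self._pca.transform(features)
        return self._kmeans.predict(reduced)
```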

@Ramana-Raja (Author)

> @chrisholder While reviewing the code, I identified an architectural issue. The R-Clustering implementation cannot have separate fit and predict methods because it relies on PCA for dimensionality reduction. For example, if PCA determines the optimal number of components during training to be 13, and we attempt to predict on test data with fewer than 13 dimensions, it results in an error. Even if we create a new PCA and apply fit_transform on the test data, we would need to retrain KMeans, which is what I did. However, I don't believe this is an optimal solution. Could you suggest a better approach to handle this scenario?

> PCA has separate fit and transform steps, so you can PCA fit_transform in fit, save the fitted transform, then just PCA transform in predict?

If we transform the test data in predict (that is, predicting on test data without fitting it, via the _predict method) and it has fewer features than the PCA's n_components, it will cause an error. So we cannot predict on test data that has fewer features than the PCA's n_components.

@MatthewMiddlehurst (Member)

Please respond to the reviews in text instead of just resolving them unless it is a simple change. Feel free to ask for clarification. There are still issues IMO.

> If we transform the test data in predict (that is, predicting on test data without fitting it, via the _predict method) and it has fewer features than the PCA's n_components, it will cause an error. So we cannot predict on test data that has fewer features than the PCA's n_components.

Why would there be fewer features in predict?

@Ramana-Raja (Author)

> Please respond to the reviews in text instead of just resolving them unless it is a simple change. Feel free to ask for clarification. There are still issues IMO.

> If we transform the test data in predict (that is, predicting on test data without fitting it, via the _predict method) and it has fewer features than the PCA's n_components, it will cause an error. So we cannot predict on test data that has fewer features than the PCA's n_components.

> Why would there be fewer features in predict?

I was considering the case where we train the clusterer on a large dataset but only need to make predictions on a smaller dataset with fewer features.

@MatthewMiddlehurst (Member)

Bit confused on what you mean by fewer features. The algorithm explicitly does not allow unequal length series in the tags.

Also, this appears to be done after the ROCKET transform, so I don't think it would have a different number of features either way?

@Ramana-Raja (Author) commented Jan 11, 2025

> Bit confused on what you mean by fewer features. The algorithm explicitly does not allow unequal length series in the tags.

> Also, this appears to be done after the ROCKET transform, so I don't think it would have a different number of features either way?

Yes, I was referring to the data after it has been transformed by ROCKET. If the number of features ends up being lower than the PCA's n_components, it can cause issues. I fixed it by training a new KMeans model, but do you have any suggestions for a better method? (It was one of the errors raised in testing when I did not add this exception handling.)

@MatthewMiddlehurst (Member)

ROCKET should not be producing a different number of features. No fitting should be going on in predict here, I feel.

@Ramana-Raja (Author) commented Jan 12, 2025

> ROCKET should not be producing a different number of features. No fitting should be going on in predict here, I feel.

Removing the fitting step in the predict function results in the following error: ValueError: n_components=9 must be between 0 and min(n_samples, n_features)=5 with svd_solver='full' (in testing, as you can see below). I thought this could be resolved by fitting a new KMeans model. However, if you have a better approach, could you please share it?
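
(For context, that ValueError comes from refitting PCA, not from transforming: a fitted PCA can transform any number of test cases, but fitting requires n_components <= min(n_samples, n_features). A small illustration, assuming scikit-learn's PCA and made-up shapes, not the PR's test data:)

```python
# Illustrative sketch of where the ValueError comes from (assumed shapes).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 50))  # 100 training cases, 50 ROCKET features
X_test = rng.normal(size=(5, 50))     # only 5 test cases

pca = PCA(n_components=9, svd_solver="full").fit(X_train)
pca.transform(X_test)  # fine: transform works for any number of samples

# Refitting on the small test set is what raises the error, because during
# fit n_components must not exceed min(n_samples, n_features) = 5 here:
# PCA(n_components=9, svd_solver="full").fit(X_test)  # -> ValueError
```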
