diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml
index a02fcf22..97087f53 100644
--- a/.github/workflows/python-package.yml
+++ b/.github/workflows/python-package.yml
@@ -27,7 +27,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         pip install flake8 pytest pytest-cov scikit-learn black isort>=5.0.6
-        if [ -f requirements.txt ]; then pip install .["dev"]; fi
+        if [ -f requirements.txt ]; then pip install ."[dev, all]"; fi
     - name: Lint with flake8
       run: |
         # stop the build if there are Python syntax errors or undefined names
diff --git a/CHANGELOG b/CHANGELOG
index 9fa8efdb..27b26a15 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -5,6 +5,30 @@ File for tracking changes in SysIdentPy
 Changes in SysIdentPy
 =====================
+v0.3.1
+------
+
+CONTRIBUTORS
+~~~~~~~~~~~~
+
+- wilsonrljr
+
+CHANGES
+~~~~~~~
+
+- The update **v0.3.1** has been released with API changes and fixes.
+
+- API Change:
+  - MetaMSS was returning the max lag of the final model instead of the maximum lag related to the xlag and ylag. This is not wrong (it's related to issue #55), but this change will be made for all methods at the same time. For now, I reverted it to return the maximum lag of the xlag and ylag.
+
+- API Change: Added the build_matrix method in BaseMSS. This change improved overall code readability by rewriting if/elif/else clauses in every model structure selection algorithm.
+
+- API Change: Added bic, aic, fpe, and lilc methods in FROLS. Now the method is selected using a predefined dictionary with the available options. This change improved overall code readability by rewriting if/elif/else clauses in the FROLS algorithm.
+
+- TESTS: Added tests for the Neural NARX class. The issue with pytorch was fixed and now we have tests for every model class.
+
+- Remove unused code and comments.
+
 v0.3.0
 ------
diff --git a/docs/changelog/changelog.md b/docs/changelog/changelog.md
index ffd29481..7aa4ef65 100644
--- a/docs/changelog/changelog.md
+++ b/docs/changelog/changelog.md
@@ -5,6 +5,28 @@ template: overrides/main.html
 # Changes in SysIdentPy
+## v0.3.1
+
+### CONTRIBUTORS
+
+- wilsonrljr
+
+### CHANGES
+
+- The update **v0.3.1** has been released with API changes and fixes.
+
+- API Change:
+  - MetaMSS was returning the max lag of the final model instead of the maximum lag related to the xlag and ylag. This is not wrong (it's related to issue #55), but this change will be made for all methods at the same time. For now, I reverted it to return the maximum lag of the xlag and ylag.
+
+- API Change: Added the build_matrix method in BaseMSS. This change improved overall code readability by rewriting if/elif/else clauses in every model structure selection algorithm.
+
+- API Change: Added bic, aic, fpe, and lilc methods in FROLS. Now the method is selected using a predefined dictionary with the available options. This change improved overall code readability by rewriting if/elif/else clauses in the FROLS algorithm.
+
+- TESTS: Added tests for the Neural NARX class. The issue with pytorch was fixed and now we have tests for every model class.
+
+- Remove unused code and comments.
+
+
 ## v0.3.0
 ### CONTRIBUTORS
diff --git a/docs/changelog/changelog/changelog.md b/docs/changelog/changelog/changelog.md
index ffd29481..7aa4ef65 100644
--- a/docs/changelog/changelog/changelog.md
+++ b/docs/changelog/changelog/changelog.md
@@ -5,6 +5,28 @@ template: overrides/main.html
 # Changes in SysIdentPy
+## v0.3.1
+
+### CONTRIBUTORS
+
+- wilsonrljr
+
+### CHANGES
+
+- The update **v0.3.1** has been released with API changes and fixes.
+
+- API Change:
+  - MetaMSS was returning the max lag of the final model instead of the maximum lag related to the xlag and ylag. This is not wrong (it's related to issue #55), but this change will be made for all methods at the same time. For now, I reverted it to return the maximum lag of the xlag and ylag.
+
+- API Change: Added the build_matrix method in BaseMSS. This change improved overall code readability by rewriting if/elif/else clauses in every model structure selection algorithm.
+
+- API Change: Added bic, aic, fpe, and lilc methods in FROLS. Now the method is selected using a predefined dictionary with the available options. This change improved overall code readability by rewriting if/elif/else clauses in the FROLS algorithm.
+
+- TESTS: Added tests for the Neural NARX class. The issue with pytorch was fixed and now we have tests for every model class.
+
+- Remove unused code and comments.
+
+
 ## v0.3.0
 ### CONTRIBUTORS
diff --git a/docs/changelog/changelog/index.html b/docs/changelog/changelog/index.html
index 88f6243f..7eac223a 100644
--- a/docs/changelog/changelog/index.html
+++ b/docs/changelog/changelog/index.html
@@ -10,4 +10,4 @@
 body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);}
 body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);}
 body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}
-

Changes in SysIdentPy

v0.3.0

CONTRIBUTORS

  • wilsonrljr
  • gamcorn
  • Gabo-Tor

CHANGES

  • The update v0.3.0 has been released with additional features, API changes and fixes.

  • MAJOR: Estimators support in AOLS

    • Now you can use any SysIdentPy estimator in AOLS model structure selection.
  • API Change:

    • Refactored base class for model structure selection. A refactored base class for model structure selection has been introduced in SysIdentPy. This update aims to enhance the system identification process by preparing the package for new features that are currently in development, like multiobjective parameter estimation, new basis functions and more.

    Several methods within the base class have undergone significant restructuring to improve their functionality and optimize their performance. This reorganization will facilitate the incorporation of advanced model selection techniques in the future, which will enable users to obtain dynamic models with robust dynamic and static performance.

      • Avoid unnecessary inheritance in every MSS method and improve the readability with better structured classes.
      • Rewritten methods to avoid code duplication.
      • Improve overall code readability by rewriting if/elif/else clauses.

  • Breaking Change: X_train and y_train were replaced respectively by X and y in fit method in MetaMSS model structure selection algorithm. X_test and y_test were replaced by X and y in predict method in MetaMSS.

  • API Change: Added BaseBasisFunction class, an abstract base class for implementing basis functions.

  • Enhancement: Added support for python 3.11.

  • Future Deprecation Warning: The user will have to define the estimator and pass it to every model structure selection algorithm instead of using a string to define the estimator. Currently the estimator is defined like "estimator='least_squares'". In version 0.4.0 the definition will be like "estimator=LeastSquares()".
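The upcoming estimator-object style can be illustrated with a minimal sketch. The LeastSquares stand-in below is hypothetical; it only mimics the idea of passing an object instead of a string, and the real sysidentpy class may have a different interface:

```python
import numpy as np

class LeastSquares:
    """Hypothetical stand-in for an estimator object (not the real
    sysidentpy class; illustrative only)."""

    def optimize(self, psi, y):
        # Ordinary least squares on the information matrix psi.
        theta, *_ = np.linalg.lstsq(psi, y, rcond=None)
        return theta

# Old style (up to v0.3.x): the estimator is selected by name, e.g.
#   model = FROLS(estimator="least_squares", ...)
# New style (v0.4.0 onwards): the estimator object is passed directly, e.g.
#   model = FROLS(estimator=LeastSquares(), ...)

# The object style also works on its own:
psi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])  # y = 1 + 2*x
theta = LeastSquares().optimize(psi, y)
```

Passing an object instead of a string makes the estimator's own hyperparameters explicit at construction time.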

  • FIX: Issue #96. Fix issue with numpy 1.24.* version. Thanks for the contribution @gamcorn.

  • FIX: Issue #91. Fix r2_score metric issue with 2 dimensional arrays.

  • FIX: Issue #90.

  • FIX: Issue #88. Fix one-step-ahead prediction error in the SimulateNARMAX class (thanks for pointing it out, Lalith).

  • FIX: Fix error in selecting the correct regressors in AOLS.

  • FIX: Fix the n-steps-ahead prediction method not returning all values of the defined steps-ahead horizon when passing only the initial condition.

  • FIX: Fix Visible Deprecation Warning raised in get_max_lag method.

  • FIX: Fix deprecation warning in Extended Least Squares Example

  • DATASET: Added air passengers dataset to SysIdentPy repository.

  • DATASET: Added San Francisco Hospital Load dataset to SysIdentPy repository.

  • DATASET: Added San Francisco PV GHI dataset to SysIdentPy repository.

  • DOC: Improved documentation in the Setting Specific Lags page. It now includes an example of how to set specific lags for MISO models.

  • DOC: Minor additions and grammar fixes.

  • DOC: Improve image visualization using mkdocs-glightbox.

  • Updated dev package versions.

v0.2.1

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.2.1 has been released with additional features, minor API changes and fixes.

  • MAJOR: Neural NARX now support CUDA

    • Now the user can build Neural NARX models with CUDA support. Just add device='cuda' to take advantage of the GPU.
    • Updated docs to show how to use the new feature.
  • MAJOR: New documentation website

    • The documentation is now entirely based on Markdown (no rst anymore).
    • We use MkDocs and Material for MkDocs theme now.
    • Dark theme option.
    • The Contribute page has more details to help those who want to contribute to SysIdentPy.
    • New sections (e.g., Blog, Sponsors, etc.)
    • Many improvements under the hood.
  • MAJOR: Github Sponsor

  • Tests:

    • Now there are tests for almost every function.
    • Neural NARX tests are raising numpy issues. They will be fixed by the next update.
  • FIX: NFIR models in General Estimators

    • Fix support for NFIR models using sklearn estimators.
  • The setup is now handled by the pyproject.toml file.

  • Remove unused code.

  • Fix docstring variables.

  • Fix code format issues.

  • Fix minor grammatical and spelling mistakes.

  • Fix issues related to html on Jupyter notebooks examples on documentation.

  • Updated Readme.

v0.2.0

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.2.0 has been released with additional features, minor API changes and fixes.

  • MAJOR: Many new features for General Estimators

    • Now the user can build General NARX models with Fourier basis function.
    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Now it is possible to build General NAR models. The user just needs to pass model_type="NAR" to build NAR models.
    • Now it is possible to build General NFIR models. The user just needs to pass model_type="NFIR" to build NFIR models.
    • Now it is possible to run n-steps-ahead prediction using General Estimators. Until now only infinity-steps-ahead prediction was allowed. Now users can set any number of steps.
    • Polynomial and Fourier are supported for now. New basis functions will be added in next releases.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • Many under-the-hood changes.
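The Fourier-basis idea behind these General Estimator features can be sketched without sysidentpy: expand the input into Fourier features and fit a linear model on top. The function below is illustrative only, not the sysidentpy API:

```python
import numpy as np

def fourier_features(x, n_frequencies=2):
    """Illustrative Fourier basis expansion: map each input sample to
    [cos(pi*x), sin(pi*x), cos(2*pi*x), sin(2*pi*x), ...]."""
    cols = []
    for k in range(1, n_frequencies + 1):
        cols.append(np.cos(k * np.pi * x))
        cols.append(np.sin(k * np.pi * x))
    return np.column_stack(cols)

x = np.linspace(0, 1, 50)
psi = fourier_features(x, n_frequencies=2)   # 50 x 4 feature matrix
y = np.sin(np.pi * x)                        # target expressible in this basis
theta, *_ = np.linalg.lstsq(psi, y, rcond=None)
y_hat = psi @ theta                          # linear model on Fourier features
```

In sysidentpy the expansion is done by the basis function object imported from sysidentpy.basis_function, and any estimator can play the role of the linear model.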
  • MAJOR: Many new features for NARX Neural Network

    • Now the user can build Neural NARX models with Fourier basis function.
    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Now it is possible to build Neural NAR models. The user just needs to pass model_type="NAR" to build NAR models.
    • Now it is possible to build Neural NFIR models. The user just needs to pass model_type="NFIR" to build NFIR models.
    • Now it is possible to run n-steps-ahead prediction using Neural NARX. Until now only infinity-steps-ahead prediction was allowed. Now users can set any number of steps.
    • Polynomial and Fourier are supported for now. New basis functions will be added in next releases.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • Many under-the-hood changes.
  • Major: Support for old methods removed.

    • Now the old sysidentpy.PolynomialNarmax is no longer available. All the old features are included in the new API, along with many new features and performance improvements.
  • API Change (new): sysidentpy.general_estimators.ModelPrediction

    • ModelPrediction class was adapted to support General Estimators as a stand-alone class.
    • predict: base method for prediction. Supports infinity-steps-ahead, one-step-ahead and n-steps-ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change (new): sysidentpy.neural_network.ModelPrediction

    • ModelPrediction class was adapted to support Neural NARX as a stand-alone class.
    • predict: base method for prediction. Supports infinity-steps-ahead, one-step-ahead and n-steps-ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change: Fit method for Neural NARX revamped.

    • No need to convert the data to tensor before calling Fit method anymore.

  • API Change: Keyword and positional arguments. Now users have to provide parameters by name, as keyword arguments, instead of positional arguments. This is valid for every model class now.
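A toy class (not the sysidentpy API) shows how keyword-only parameters are enforced in Python, via the bare * in the signature:

```python
class ExampleModel:
    """Toy class, not the sysidentpy API: the bare * makes every
    parameter after it keyword-only."""

    def __init__(self, *, ylag=2, xlag=2):
        self.ylag = ylag
        self.xlag = xlag

# Works: parameters given by name.
model = ExampleModel(ylag=3, xlag=3)

# Raises TypeError: positional arguments are rejected.
try:
    ExampleModel(3, 3)
except TypeError:
    positional_rejected = True
```

Keyword-only arguments make calls self-documenting and prevent silent bugs when a signature's parameter order changes.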

  • API Change (new): sysidentpy.utils.narmax_tools

    • New functions to help users get useful information for building models. We now have the regressor_code helper function to help build neural NARX models.
  • DOC: Improved Basic Steps notebook with new details about the prediction function.

  • DOC: NARX Neural Network notebook was updated following the new api and showing new features.
  • DOC: General Estimators notebook was updated following the new api and showing new features.
  • DOC: Fixed minor grammatical and spelling mistakes, including Issues #77 and #78.
  • DOC: Fix issues related to html on Jupyter notebooks examples on documentation.

v0.1.9

CONTRIBUTORS

  • wilsonrljr
  • samirmartins

CHANGES

  • The update v0.1.9 has been released with additional features, minor API changes and fixes of the new features added in v0.1.7.

  • MAJOR: Entropic Regression Algorithm

    • Added the new class ER to build NARX models using the Entropic Regression algorithm.
    • Only the Mutual Information KNN is implemented in this version, and it may take too long to run on a high number of regressors, so the user should be careful regarding the number of candidates to include in the model.
  • API: save_load

    • Added a function to save and load models from file.
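Conceptually, saving and loading a model is a serialization round-trip. A minimal stand-in using pickle is sketched below; the real helpers live in sysidentpy.utils.save_load and may differ in name and behavior:

```python
import os
import pickle
import tempfile

def save_model(model, file_name):
    """Persist a fitted model to disk (concept sketch only; sysidentpy
    ships its own helpers in sysidentpy.utils.save_load)."""
    with open(file_name, "wb") as fp:
        pickle.dump(model, fp)

def load_model(file_name):
    """Restore a previously saved model from disk (concept sketch)."""
    with open(file_name, "rb") as fp:
        return pickle.load(fp)

# Round-trip any picklable model object.
path = os.path.join(tempfile.mkdtemp(), "model.syspy")
save_model({"theta": [0.2, 0.9]}, path)
restored = load_model(path)
```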
  • API: Added tests for python 3.9

  • FIX: Changed the condition for n_info_values in FROLS. Now the value defined by the user is compared against the X matrix shape instead of the regressor space shape. This fixes the Fourier basis function usage with more than 15 regressors in FROLS.

  • DOC: Save and Load models

    • Added a notebook showing how to use the save_load method.
  • DOC: Entropic Regression example

    • Added notebook with a simple example of how to use Entropic Regression.
  • DOC: Fourier Basis Function Example

    • Added notebook with a simple example of how to use Fourier Basis Function
  • DOC: PV forecasting benchmark

    • FIX AOLS prediction. The example was using the meta_mss model in prediction, so the results for AOLS were wrong.
  • DOC: Fixed minor grammatical and spelling mistakes.

  • DOC: Fix issues related to html on Jupyter notebooks examples on documentation.

v0.1.8

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.1.8 has been released with additional features, minor API changes and fixes of the new features added in v0.1.7.

  • MAJOR: Ensemble Basis Functions

    • Now you can use different basis functions together. For now, Fourier can be combined with Polynomial basis functions of different degrees.
  • API change: Added the "ensemble" parameter in basis functions to combine the features of different basis functions.
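The ensemble idea can be sketched in plain numpy: each basis function produces a block of columns, and the blocks are concatenated into a single information matrix. This is illustrative code, not the sysidentpy implementation:

```python
import numpy as np

def polynomial_features(x, degree=2):
    # Powers x, x^2, ..., x^degree (no constant term).
    return np.column_stack([x ** d for d in range(1, degree + 1)])

def fourier_features(x, n_frequencies=1):
    # cos(k*pi*x), sin(k*pi*x) pairs for k = 1..n_frequencies.
    return np.column_stack(
        [f(k * np.pi * x)
         for k in range(1, n_frequencies + 1)
         for f in (np.cos, np.sin)]
    )

def ensemble_features(x):
    """Sketch of the 'ensemble' idea: stack the columns produced by
    each basis function side by side."""
    return np.hstack([polynomial_features(x, degree=2),
                      fourier_features(x, n_frequencies=1)])

x = np.linspace(0, 1, 30)
psi = ensemble_features(x)  # 2 polynomial + 2 Fourier columns
```

The structure selection algorithm then picks regressors from the combined pool, regardless of which basis each column came from.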

  • Fix: n-steps-ahead prediction for model_type="NAR" now works properly with different forecast horizons.

  • DOC: Air passenger benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.
  • DOC: Load forecasting benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.
  • DOC: PV forecasting benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.

v0.1.7

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.1.7 has been released with major changes and additional features. There are several API modifications and you will need to change your code to have the new (and upcoming) features. All modifications are meant to make future expansion easier.

  • On the user's side, the changes are not that disruptive, but in the background there are many changes that allowed the inclusion of new features and bug fixes that would be complex to solve without the changes. Check the documentation page: http://sysidentpy.org/notebooks.html

  • Many classes were basically rebuilt from scratch, so I suggest looking at the new examples of how to use the new version.

  • I will present the main updates below in order to highlight features and usability and then all API changes will be reported.

  • MAJOR: NARX models with Fourier basis function. Issue #63 <https://github.com/wilsonrljr/sysidentpy/issues/63>, Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Polynomial and Fourier are supported for now. New basis functions will be added in next releases.
  • MAJOR: NAR models. Issue #58 <https://github.com/wilsonrljr/sysidentpy/issues/58>

    • It was already possible to build Polynomial NAR models, but with some hacks. Now the user just needs to pass model_type="NAR" to build NAR models.
    • The user doesn't need to pass a vector of zeros as input anymore.
    • Works for any model structure selection algorithm (FROLS, AOLS, MetaMSS)
  • Major: NFIR models. Issue #59 <https://github.com/wilsonrljr/sysidentpy/issues/59>

    • NFIR models are models where the output depends only on past inputs. It was already possible to build Polynomial NFIR models, but with a lot of code on the user's side (much more than NAR, by the way). Now the user just needs to pass model_type="NFIR" to build NFIR models.
    • Works for any model structure selection algorithm (FROLS, AOLS, MetaMSS)
  • Major: Select the order for the residues lags to use in Extended Least Squares - elag

    • The user can select the maximum lag of the residues to be used in the Extended Least Squares algorithm. In previous versions sysidentpy used a predefined subset of residual lags.
    • The degree of the lags follows the degree of the basis function
  • Major: Residual analysis methods. Issue #60 <https://github.com/wilsonrljr/sysidentpy/issues/60>

    • There are now specific functions to calculate the autocorrelation of the residuals and the cross-correlation for residual analysis. In previous versions the calculation was limited to just two inputs, for example, which limited usability.
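What such a residual-analysis function computes can be sketched as a normalized autocorrelation. This is illustrative, not the sysidentpy implementation:

```python
import numpy as np

def residual_autocorrelation(residuals, max_lag=10):
    """Normalized autocorrelation of the residuals up to max_lag
    (sketch of what residual-analysis helpers compute)."""
    e = residuals - residuals.mean()
    denom = np.sum(e ** 2)
    return np.array(
        [np.sum(e[k:] * e[: len(e) - k]) / denom for k in range(max_lag + 1)]
    )

rng = np.random.default_rng(42)
e = rng.normal(size=2000)          # white residuals from a well-fitted model
acf = residual_autocorrelation(e)  # acf[0] == 1; other lags near zero
```

For a well-identified model the residuals should be white, so all lags beyond zero should fall inside the confidence band around zero.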
  • Major: Plotting methods. Issue #61 <https://github.com/wilsonrljr/sysidentpy/issues/61>

    • The plotting functions are now separated from the model objects, so there is more flexibility regarding what to plot.
    • Residual plots were separated from the forecast plot
  • API Change: sysidentpy.polynomial_basis.PolynomialNarmax is deprecated. Use sysidentpy.model_structure_selection.FROLS instead. Issue #62 <https://github.com/wilsonrljr/sysidentpy/issues/62>

    • Now the user doesn't need to pass the number of inputs as a parameter.
    • Added the elag parameter for unbiased_estimator. Now the user can define the number of lags of the residues for parameter estimation using the Extended Least Squares algorithm.
    • model_type parameter: now the user can select the model type to be built. The options are "NARMAX", "NAR" and "NFIR". "NARMAX" is the default. If you want to build a NAR model without any "hack", just set model_type="NAR". The same for "NFIR" models.
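The effect of model_type on the information matrix can be sketched in plain numpy; NAR uses only past outputs, NFIR only past inputs, and NARMAX both. This is illustrative, not the sysidentpy implementation:

```python
import numpy as np

def lagged_matrix(y, x, model_type="NARMAX", ylag=2, xlag=2):
    """Sketch of how model_type changes the information matrix."""
    p = max(ylag, xlag)
    rows = []
    for k in range(p, len(y)):
        y_terms = [y[k - i] for i in range(1, ylag + 1)]  # past outputs
        x_terms = [x[k - i] for i in range(1, xlag + 1)]  # past inputs
        if model_type == "NAR":
            rows.append(y_terms)
        elif model_type == "NFIR":
            rows.append(x_terms)
        else:  # NARMAX
            rows.append(y_terms + x_terms)
    return np.array(rows)

y = np.arange(10.0)
x = np.arange(10.0) * 0.5
psi_narmax = lagged_matrix(y, x, "NARMAX")  # both input and output lags
```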
  • API Change: sysidentpy.polynomial_basis.MetaMSS is deprecated. Use sysidentpy.model_structure_selection.MetaMSS instead. Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • Now the user doesn't need to pass the number of inputs as a parameter.
    • Added the elag parameter for unbiased_estimator. Now the user can define the number of lags of the residues for parameter estimation using the Extended Least Squares algorithm.
  • API Change: sysidentpy.polynomial_basis.AOLS is deprecated. Use sysidentpy.model_structure_selection.AOLS instead. Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

  • API Change: sysidentpy.polynomial_basis.SimulatePolynomialNarmax is deprecated. Use sysidentpy.simulation.SimulateNARMAX instead.

  • API Change: Introducing sysidentpy.basis_function. Because NARMAX models can be built on different basis functions, a new module was added to make it easier to implement new basis functions in future updates. Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • Each basis function class must have a fit and a predict method, used in training and prediction respectively.
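A hypothetical basis function class following this fit/predict contract might look like the sketch below; the real sysidentpy base class has additional arguments and behavior:

```python
import numpy as np

class BasisFunctionSketch:
    """Hypothetical shape of a basis function class: fit builds the
    feature matrix for training, predict builds it for new data.
    Illustrative only; not the real sysidentpy interface."""

    def __init__(self, degree=2):
        self.degree = degree

    def _transform(self, data):
        # Columns data, data^2, ..., data^degree.
        return np.column_stack([data ** d for d in range(1, self.degree + 1)])

    def fit(self, data):
        return self._transform(data)

    def predict(self, data):
        return self._transform(data)

basis = BasisFunctionSketch(degree=3)
psi_train = basis.fit(np.linspace(0, 1, 20))
psi_new = basis.predict(np.array([0.5]))
```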
  • API Change: unbiased_estimator method moved to Estimators class.

    • added elag option
    • changed the build_information_matrix method to build_output_matrix
  • API Change (new): sysidentpy.narmax_base

    • This is the new base for building NARMAX models. The classes have been rewritten to make it easier to expand functionality.
  • API Change (new): sysidentpy.narmax_base.GenerateRegressors

    • create_narmax_code: Creates the base coding that allows representation for the NARMAX, NAR, and NFIR models.
    • regressor_space: Creates the encoding representation for the NARMAX, NAR, and NFIR models.
  • API Change (new): sysidentpy.narmax_base.ModelInformation

    • _get_index_from_regressor_code: Get the index of the model code representation in regressor space.
    • _list_output_regressor_code: Create a flattened array of output regressors.
    • _list_input_regressor_code: Create a flattened array of input regressors.
    • _get_lag_from_regressor_code: Get the maximum lag from array of regressors.
    • _get_max_lag_from_model_code: the name says it all.
    • _get_max_lag: Get the maximum lag from ylag and xlag.
  • API Change (new): sysidentpy.narmax_base.InformationMatrix

    • _create_lagged_X: Create a lagged matrix of inputs without combinations.
    • _create_lagged_y: Create a lagged matrix of the output without combinations.
    • build_output_matrix: Build the information matrix of output values.
    • build_input_matrix: Build the information matrix of input values.
    • build_input_output_matrix: Build the information matrix of input and output values.
  • API Change (new): sysidentpy.narmax_base.ModelPrediction

    • predict: base method for prediction. Supports infinity-steps-ahead, one-step-ahead and n-steps-ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change (new): sysidentpy.model_structure_selection.FROLS. Issue #62 <https://github.com/wilsonrljr/sysidentpy/issues/62>, Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • Based on the old sysidentpy.polynomial_basis.PolynomialNarmax. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • Add support for new basis functions.
    • The user can choose the residual lags.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • Many under-the-hood changes.
  • API Change (new): sysidentpy.model_structure_selection.MetaMSS. Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • Based on the old sysidentpy.polynomial_basis.MetaMSS. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • The user can choose the residual lags.
    • Extended Least Squares support.
    • Add support for new basis functions.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • Many under-the-hood changes.
  • API Change (new): sysidentpy.model_structure_selection.AOLS. Issue #64 <https://github.com/wilsonrljr/sysidentpy/issues/64>

    • Based on the old sysidentpy.polynomial_basis.AOLS. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • Add support for new basis functions.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Changed the "l" parameter to "L".
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • Many under-the-hood changes.
  • API Change (new): sysidentpy.simulation.SimulateNARMAX

    • Based on the old sysidentpy.polynomial_basis.SimulatePolynomialNarmax. The class has been rebuilt with new functions and optimized code.
    • Fix the Extended Least Squares support.
    • Fix n-steps ahead prediction and 1-step ahead prediction.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • The user can choose the residual lags.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • Do not inherit from the structure selection algorithm anymore, only from narmax_base. Avoid circular import and other issues.
    • Many under-the-hood changes.
  • API Change (new): sysidentpy.residues

    • compute_residues_autocorrelation: the name says it all.
    • calculate_residues: get the residues from y and yhat.
    • get_unnormalized_e_acf: compute the unnormalized autocorrelation of the residues.
    • compute_cross_correlation: compute cross correlation between two signals.
    • _input_ccf
    • _normalized_correlation: compute the normalized correlation between two signals.
  • API Change (new): sysidentpy.utils.plotting

    • plot_results: plot the forecast
    • plot_residues_correlation: the name says it all.
  • API Change (new): sysidentpy.utils.display_results

    • results: return the model regressors, estimated parameter and ERR index of the fitted model in a table.
  • DOC: Air passenger benchmark. Issue #65 <https://github.com/wilsonrljr/sysidentpy/issues/65>

    • Added notebook with Air passenger forecasting benchmark.
    • We compare SysIdentPy against prophet, neuralprophet, autoarima, tbats and many more.
  • DOC: Load forecasting benchmark. Issue #65 <https://github.com/wilsonrljr/sysidentpy/issues/65>

    • Added notebook with load forecasting benchmark.
  • DOC: PV forecasting benchmark. Issue #65 <https://github.com/wilsonrljr/sysidentpy/issues/65>

    • Added notebook with PV forecasting benchmark.
  • DOC: Presenting main functionality

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Multiple Inputs usage

    • Example rewritten following the new api
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Information Criteria - Examples

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Important notes and examples of how to use Extended Least Squares

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Setting specific lags

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Parameter Estimation

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Using the Meta-Model Structure Selection (MetaMSS) algorithm for building Polynomial NARX models

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Using the Accelerated Orthogonal Least-Squares algorithm for building Polynomial NARX models

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Example: F-16 Ground Vibration Test benchmark

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Building NARX Neural Network using Sysidentpy

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Building NARX models using general estimators

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Simulate a Predefined Model

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: System Identification Using Adaptive Filters

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Identification of an electromechanical system

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Example: N-steps-ahead prediction - F-16 Ground Vibration Test benchmark

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Introduction to NARMAX models

    • Fixed grammatical and spelling mistakes.

v0.1.6

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: Meta-Model Structure Selection Algorithm (Meta-MSS).

    • A new method for building NARMAX models based on metaheuristics. The algorithm uses a binary hybrid Particle Swarm Optimization and Gravitational Search Algorithm with a new cost function to build parsimonious models.
    • New class for the BPSOGSA algorithm. New algorithms can be adapted in the Meta-MSS framework.
    • Future updates will add NARX models for classification and multiobjective model structure selection.
  • MAJOR: Accelerated Orthogonal Least-Squares algorithm.

    • Added the new class AOLS to build NARX models using the Accelerated Orthogonal Least-Squares algorithm.
    • To the best of my knowledge, this is the first time this algorithm has been used in the NARMAX framework. The tests I've made are promising, but use it with caution until the results are formalized in a research paper.
  • Added notebook with a simple example of how to use MetaMSS and a simple model comparison of the Electromechanical system.

  • Added notebook with a simple example of how to use AOLS

  • Added ModelInformation class. This class has methods that return model information, such as the max_lag of a model code.

    • added _list_output_regressor_code
    • added _list_input_regressor_code
    • added _get_lag_from_regressor_code
    • added _get_max_lag_from_model_code
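For illustration, the regressor codes seen in the docs pack variable and lag into one integer (e.g., 1001 for y(k-1), 2003 for x1(k-3), 0 for the constant term). A toy version of the max-lag extraction under that assumed encoding — a sketch, not the library's exact implementation:

```python
import numpy as np

def get_max_lag_from_model_code(model_code):
    """Toy decoder: each nonzero code is assumed to be variable*1000 + lag
    (1001 -> y(k-1), 2003 -> x1(k-3), 0 -> constant), so the maximum lag
    is the largest 'code % 1000' over the nonzero codes."""
    codes = np.asarray(model_code).ravel()
    return int((codes[codes > 0] % 1000).max())

# model: y(k) = theta_1 * y(k-1) * x1(k-2) + theta_2 * x1(k-1)
model_code = np.array([[1001, 2002], [2001, 0]])
print(get_max_lag_from_model_code(model_code))  # 2
```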
  • Minor performance improvement: added the argument "predefined_regressors" in the build_information_matrix function on base.py to improve the performance of the Simulation method.

  • Pytorch is now an optional dependency. Use pip install sysidentpy['full']

  • Fix code format issues.

  • Fixed minor grammatical and spelling mistakes.

  • Fix issues related to html on Jupyter notebooks examples on documentation.

  • Updated Readme with examples of how to use.

  • Improved descriptions and comments in methods.

  • metaheuristics.bpsogsa (detailed description on code docstring)

    • added evaluate_objective_function
    • added optimize
    • added generate_random_population
    • added mass_calculation
    • added calculate_gravitational_constant
    • added calculate_acceleration
    • added update_velocity_position
  • FIX issue #52

v0.1.5

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: n-steps-ahead prediction.

    • Now you can define the number of steps ahead in the predict function.
    • Only for Polynomial models for now. The next update will bring this functionality to Neural NARX and General Estimators.
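Conceptually, n-steps-ahead prediction feeds each prediction back as a regressor for the next step instead of using measured outputs. A toy free run of a first-order model — the model and the function name are made up for illustration:

```python
def n_steps_ahead(y_init, u, theta_y, theta_u, steps):
    """Toy n-steps-ahead prediction for y(k) = theta_y*y(k-1) + theta_u*u(k-1):
    the previous *prediction* (not the measurement) is fed back each step."""
    y_hat = [y_init]
    for k in range(steps):
        y_hat.append(theta_y * y_hat[-1] + theta_u * u[k])
    return y_hat[1:]

# start from y(0) = 1.0 with a constant input
preds = n_steps_ahead(y_init=1.0, u=[0.5, 0.5, 0.5], theta_y=0.5, theta_u=1.0, steps=3)
print(preds)  # [1.0, 1.0, 1.0]
```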
  • MAJOR: Simulating predefined models.

    • Added the new class SimulatePolynomialNarmax to handle the simulation of known model structures.
    • Now you can simulate predefined models by just passing the model structure codification. Check the notebook examples.
  • Added 4 new notebooks in the example section.

  • Added iterative notebooks. Now you can run the notebooks in Jupyter notebook section of the documentation in Colab.

  • Fix code format issues.

  • Added new tests for SimulatePolynomialNarmax and generate_data.

  • Started changes related to the numpy 1.19.4 update. There are still some deprecation warnings that will be fixed in the next update.

  • Fix issues related to html on Jupyter notebooks examples on documentation.

  • Updated Readme with examples of how to use.

v0.1.4

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: Introducing NARX Neural Network in SysIdentPy.

    • Now you can build NARX Neural Network on SysIdentPy.
    • This feature is built on top of Pytorch. See the docs for more details and examples of how to use.
  • MAJOR: Introducing general estimators in SysIdentPy.

    • Now you are able to use any estimator that has fit/predict methods (estimators from scikit-learn and CatBoost, for example) and build NARX models based on those estimators.
    • We use the core functions of SysIdentPy and keep the fit/predict approach from those estimators to keep the process easy to use.
    • More estimators, like XGBoost, are coming soon.
  • Added notebooks to show how to build NARX neural Network.

  • Added notebooks to show how to build NARX models using general estimators.

  • Changed the default parameters of the plot_results function.

  • NOTE: We will keep improving the Polynomial NARX models (new model structure selection algorithms and multiobjective identification are on our roadmap). These recent modifications will allow us to introduce new NARX models, like PWARX models, very soon.

  • New template for the documentation site.

  • Fix issues related to html on Jupyter notebooks examples on documentation.

  • Updated Readme with examples of how to use.

v0.1.3

CONTRIBUTORS

  • wilsonrljr
  • renard162

CHANGES

  • Fixed a bug concerning the xlag and ylag in multiple input scenarios.
  • Refactored predict function. Improved performance up to 87% depending on the number of regressors.
  • You can set lags of different sizes for each input.
  • Added a new function to get the max value of xlag and ylag. Works with int, list, and nested lists.
  • Fixed tests for information criteria.
  • Added SysIdentPy logo.
  • Refactored code of all classes following PEP 8 guidelines to improve readability.
  • Added Citation information on Readme.
  • Changes on information Criteria tests.
  • Added workflow to run the tests when merge branch into master.
  • Added new site domain.
  • Updated docs.
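The v0.1.3 helper that gets the max value of xlag and ylag must accept an int, a flat list of lags, or nested lists (one list per input). A minimal recursive sketch of that logic — an illustration, not SysIdentPy's actual code:

```python
def get_max_lag(lag_config):
    """Toy max-lag helper: accepts an int (single lag bound), a flat
    list of lags, or nested lists (one list of lags per input)."""
    if isinstance(lag_config, int):
        return lag_config
    # recurse so flat lists and nested lists are handled the same way
    return max(get_max_lag(item) for item in lag_config)

print(get_max_lag(3))                    # 3
print(get_max_lag([1, 2, 5]))            # 5
print(get_max_lag([[1, 2], [1, 2, 7]]))  # 7
```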

v0.3.1

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.3.1 has been released with API changes and fixes.

  • API Change:

    • MetaMSS was returning the max lag of the final model instead of the maximum lag related to xlag and ylag. This is not wrong (it's related to issue #55), but this change will be made for all methods at the same time. Accordingly, I reverted it to return the maximum lag of xlag and ylag.
  • API Change: Added build_matrix method in BaseMSS. This change improved overall code readability by rewriting if/elif/else clauses in every model structure selection algorithm.

  • API Change: Added bic, aic, fpe, and lilc methods in FROLS. Now the method is selected by using a predefined dictionary with the available options. This change improved overall code readability by rewriting if/elif/else clauses in the FROLS algorithm.
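Dictionary-based selection replaces an if/elif/else chain with a lookup. A generic sketch of the pattern, using the standard textbook AIC/BIC forms (not necessarily SysIdentPy's exact expressions):

```python
import numpy as np

def aic(n, p, sse):
    # standard textbook form: n*log(SSE/n) + 2*p
    return n * np.log(sse / n) + 2 * p

def bic(n, p, sse):
    # BIC penalizes model size harder than AIC once log(n) > 2
    return n * np.log(sse / n) + p * np.log(n)

# dispatch table: the criterion is chosen by key instead of if/elif/else
INFO_CRITERIA = {"aic": aic, "bic": bic}

def information_criterion(method, n, p, sse):
    return INFO_CRITERIA[method](n, p, sse)

print(information_criterion("aic", 100, 3, 1.0))
print(information_criterion("bic", 100, 3, 1.0))
```

Adding a new criterion then means registering one entry instead of editing branching logic in the algorithm.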

  • TESTS: Added tests for the Neural NARX class. The issue with PyTorch was fixed, and now we have tests for every model class.

  • Remove unused code and comments.

v0.3.0

CONTRIBUTORS

  • wilsonrljr
  • gamcorn
  • Gabo-Tor

CHANGES

  • The update v0.3.0 has been released with additional features, API changes and fixes.

  • MAJOR: Estimators support in AOLS

    • Now you can use any SysIdentPy estimator in AOLS model structure selection.
  • API Change:

    • Refactored base class for model structure selection. A refactored base class for model structure selection has been introduced in SysIdentPy. This update aims to enhance the system identification process by preparing the package for new features that are currently in development, like multiobjective parameter estimation, new basis functions and more.

    Several methods within the base class have undergone significant restructuring to improve their functionality and optimize their performance. This reorganization will facilitate the incorporation of advanced model selection techniques in the future, which will enable users to obtain dynamic models with robust dynamic and static performance.

    • Avoid unnecessary inheritance in every MSS method and improve readability with better structured classes.
    • Rewritten methods to avoid code duplication.
    • Improve overall code readability by rewriting if/elif/else clauses.

  • Breaking Change: X_train and y_train were replaced respectively by X and y in fit method in MetaMSS model structure selection algorithm. X_test and y_test were replaced by X and y in predict method in MetaMSS.

  • API Change: Added BaseBasisFunction class, an abstract base class for implementing basis functions.

  • Enhancement: Added support for python 3.11.

  • Future Deprecation Warning: The user will have to define the estimator and pass it to every model structure selection algorithm instead of using a string to define the Estimator. Currently the estimator is defined like "estimator='least_squares'". In version 0.4.0 the definition will be like "estimator=LeastSquares()"
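The difference between the two styles can be sketched generically — the class and function names below are illustrative stand-ins, not SysIdentPy's actual signatures:

```python
import numpy as np

class LeastSquares:
    """Stand-in estimator with a plain least-squares solve."""
    def optimize(self, psi, y):
        theta, *_ = np.linalg.lstsq(psi, y, rcond=None)
        return theta

# current style: the algorithm resolves a string key internally
ESTIMATORS = {"least_squares": LeastSquares}

def fit_with_string(estimator="least_squares"):
    return ESTIMATORS[estimator]()  # string -> instance lookup

# planned style: the user passes a configured instance directly
def fit_with_instance(estimator):
    return estimator

psi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
theta = fit_with_instance(LeastSquares()).optimize(psi, y)
print(theta)
```

Passing an instance lets the user configure the estimator (hyperparameters, etc.) before handing it to the structure selection algorithm.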

  • FIX: Issue #96. Fix issue with numpy 1.24.* version. Thanks for the contribution @gamcorn.

  • FIX: Issue #91. Fix r2_score metric issue with 2 dimensional arrays.

  • FIX: Issue #90.

  • FIX: Issue #88. Fix one step ahead prediction error in SimulateNARMAX class (thanks for pointing out, Lalith).

  • FIX: Fix error in selecting the correct regressors in AOLS.

  • FIX: Fix the n-steps-ahead prediction method not returning all values of the defined steps-ahead horizon when passing only the initial condition.

  • FIX: Fix Visible Deprecation Warning raised in get_max_lag method.

  • FIX: Fix deprecation warning in Extended Least Squares Example

  • DATASET: Added air passengers dataset to SysIdentPy repository.

  • DATASET: Added San Francisco Hospital Load dataset to SysIdentPy repository.

  • DATASET: Added San Francisco PV GHI dataset to SysIdentPy repository.

  • DOC: Improved documentation in the Setting Specific Lags page. Now we bring an example of how to set specific lags for MISO models.

  • DOC: Minor additions and grammar fixes.

  • DOC: Improve image visualization using mkdocs-glightbox.

  • Updated dev package versions.

v0.2.1

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.2.1 has been released with additional features, minor API changes and fixes.

  • MAJOR: Neural NARX now support CUDA

    • Now the user can build Neural NARX models with CUDA support. Just add device='cuda' to use the GPU benefits.
    • Updated docs to show how to use the new feature.
  • MAJOR: New documentation website

    • The documentation is now entirely based on Markdown (no rst anymore).
    • We use MkDocs and Material for MkDocs theme now.
    • Dark theme option.
    • The Contribute page has more details to help those who want to contribute to SysIdentPy.
    • New sections (e.g., Blog, Sponsors, etc.)
    • Many improvements under the hood.
  • MAJOR: Github Sponsor

  • Tests:

    • Now there are tests for almost every function.
    • Neural NARX tests are raising numpy issues. They'll be fixed by the next update.
  • FIX: NFIR models in General Estimators

    • Fix support for NFIR models using sklearn estimators.
  • The setup is now handled by the pyproject.toml file.

  • Remove unused code.

  • Fix docstring variables.

  • Fix code format issues.

  • Fix minor grammatical and spelling mistakes.

  • Fix issues related to html on Jupyter notebooks examples on documentation.

  • Updated Readme.

v0.2.0

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.2.0 has been released with additional features, minor API changes and fixes.

  • MAJOR: Many new features for General Estimators

    • Now the user can build General NARX models with the Fourier basis function.
    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Now it is possible to build General NAR models. The user just needs to pass model_type="NAR" to build NAR models.
    • Now it is possible to build General NFIR models. The user just needs to pass model_type="NFIR" to build NFIR models.
    • Now it is possible to run n-steps-ahead prediction using General Estimators. Until now only infinity-steps-ahead prediction was allowed. Now users can set any number of steps they want.
    • Polynomial and Fourier are supported for now. New basis functions will be added in future releases.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • Many under-the-hood changes.
  • MAJOR: Many new features for NARX Neural Network

    • Now the user can build Neural NARX models with the Fourier basis function.
    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Now it is possible to build Neural NAR models. The user just needs to pass model_type="NAR" to build NAR models.
    • Now it is possible to build Neural NFIR models. The user just needs to pass model_type="NFIR" to build NFIR models.
    • Now it is possible to run n-steps-ahead prediction using Neural NARX. Until now only infinity-steps-ahead prediction was allowed. Now users can set any number of steps they want.
    • Polynomial and Fourier are supported for now. New basis functions will be added in future releases.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • Many under-the-hood changes.
  • Major: Support for old methods removed.

    • Now the old sysidentpy.PolynomialNarmax is not available anymore. All the old features are included in the new API with a lot of new features and performance improvements.
  • API Change (new): sysidentpy.general_estimators.ModelPrediction

    • ModelPrediction class was adapted to support General Estimators as a stand-alone class.
    • predict: base method for prediction. Support infinity_steps ahead, one-step ahead and n-steps ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change (new): sysidentpy.neural_network.ModelPrediction

    • ModelPrediction class was adapted to support Neural NARX as a stand-alone class.
    • predict: base method for prediction. Support infinity_steps ahead, one-step ahead and n-steps ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change: Fit method for Neural NARX revamped.

    • No need to convert the data to tensor before calling Fit method anymore.

  • API Change: Keyword and positional arguments. Now users have to provide parameters with their names, as keyword arguments, instead of positional arguments. This is valid for every model class now.
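Python enforces keyword-only parameters with a bare `*` in the signature; a minimal illustration of the pattern (the signature here is illustrative, not the library's):

```python
def fit(*, X, y):
    """The bare * makes X and y keyword-only: fit(data, target) raises
    TypeError, while fit(X=data, y=target) is unambiguous."""
    return len(X), len(y)

print(fit(X=[1, 2, 3], y=[4, 5, 6]))  # (3, 3)

try:
    fit([1, 2, 3], [4, 5, 6])  # positional call is rejected
except TypeError as err:
    print("rejected:", type(err).__name__)
```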

  • API Change (new): sysidentpy.utils.narmax_tools

    • New functions to help the user get useful information for building models. Now we have the regressor_code helper function to help build neural NARX models.
  • DOC: Improved Basic Steps notebook with new details about the prediction function.

  • DOC: NARX Neural Network notebook was updated following the new api and showing new features.
  • DOC: General Estimators notebook was updated following the new api and showing new features.
  • DOC: Fixed minor grammatical and spelling mistakes, including Issues #77 and #78.
  • DOC: Fix issues related to html on Jupyter notebooks examples on documentation.

v0.1.9

CONTRIBUTORS

  • wilsonrljr
  • samirmartins

CHANGES

  • The update v0.1.9 has been released with additional features, minor API changes and fixes of the new features added in v0.1.7.

  • MAJOR: Entropic Regression Algorithm

    • Added the new class ER to build NARX models using the Entropic Regression algorithm.
    • Only the Mutual Information KNN is implemented in this version, and it may take too long to run with a high number of regressors, so the user should be careful regarding the number of candidates to include in the model.
  • API: save_load

    • Added a function to save and load models from file.
  • API: Added tests for python 3.9

  • FIX: Changed condition for n_info_values in FROLS. Now the value defined by the user is compared against the X matrix shape instead of the regressor space shape. This fixes the Fourier basis function usage with more than 15 regressors in FROLS.

  • DOC: Save and Load models

    • Added a notebook showing how to use the save_load method.
  • DOC: Entropic Regression example

    • Added notebook with a simple example of how to use Entropic Regression.
  • DOC: Fourier Basis Function Example

    • Added notebook with a simple example of how to use Fourier Basis Function
  • DOC: PV forecasting benchmark

    • FIX AOLS prediction. The example was using the meta_mss model in prediction, so the results for AOLS were wrong.
  • DOC: Fixed minor grammatical and spelling mistakes.

  • DOC: Fix issues related to html on Jupyter notebooks examples on documentation.

v0.1.8

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.1.8 has been released with additional features, minor API changes and fixes of the new features added in v0.1.7.

  • MAJOR: Ensemble Basis Functions

    • Now you can use different basis functions together. For now, we allow Fourier combined with Polynomial of different degrees.
  • API change: Added the "ensemble" parameter in basis functions to combine the features of different basis functions.
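Combining basis functions amounts to concatenating their feature columns. A toy sketch with a polynomial and a Fourier expansion of a single signal — the function names and the stacking are illustrative, not SysIdentPy's implementation:

```python
import numpy as np

def polynomial_features(x, degree):
    # columns x, x**2, ..., x**degree
    return np.column_stack([x ** d for d in range(1, degree + 1)])

def fourier_features(x, n_harmonics):
    # cos/sin pairs of the (assumed normalized) signal
    return np.column_stack(
        [f(2 * np.pi * k * x) for k in range(1, n_harmonics + 1)
         for f in (np.cos, np.sin)]
    )

def ensemble_features(x, degree, n_harmonics):
    """The 'ensemble' idea: stack both feature sets side by side."""
    return np.hstack([polynomial_features(x, degree),
                      fourier_features(x, n_harmonics)])

x = np.linspace(0.0, 1.0, 50)
features = ensemble_features(x, degree=2, n_harmonics=1)
print(features.shape)  # (50, 4): 2 polynomial + 2 Fourier columns
```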

  • Fix: N-steps-ahead prediction for model_type="NAR" now works properly with different forecast horizons.

  • DOC: Air passenger benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.
  • DOC: Load forecasting benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.
  • DOC: PV forecasting benchmark

    • Remove unused code.
    • Use default hyperparameter in SysIdentPy models.

v0.1.7

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • The update v0.1.7 has been released with major changes and additional features. There are several API modifications and you will need to change your code to have the new (and upcoming) features. All modifications are meant to make future expansion easier.

  • On the user's side, the changes are not that disruptive, but in the background there are many changes that allowed the inclusion of new features and bug fixes that would be complex to solve without them. Check the documentation page: http://sysidentpy.org/notebooks.html

  • Many classes were basically rebuilt from scratch, so I suggest looking at the new examples of how to use the new version.

  • I will present the main updates below in order to highlight features and usability and then all API changes will be reported.

  • MAJOR: NARX models with Fourier basis function (Issue #63: https://github.com/wilsonrljr/sysidentpy/issues/63, Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

    • The user can choose which basis they want by importing it from sysidentpy.basis_function. Check the notebooks with examples of how to use it.
    • Polynomial and Fourier are supported for now. New basis functions will be added in next releases.
  • MAJOR: NAR models (Issue #58: https://github.com/wilsonrljr/sysidentpy/issues/58)

    • It was already possible to build Polynomial NAR models, but with some hacks. Now the user just needs to pass model_type="NAR" to build NAR models.
    • The user doesn't need to pass a vector of zeros as input anymore.
    • Works for any model structure selection algorithm (FROLS, AOLS, MetaMSS)
  • Major: NFIR models (Issue #59: https://github.com/wilsonrljr/sysidentpy/issues/59)

    • NFIR models are models where the output depends only on past inputs. It was already possible to build Polynomial NFIR models, but with a lot of code on the user's side (much more than NAR, btw). Now the user just needs to pass model_type="NFIR" to build NFIR models.
    • Works for any model structure selection algorithm (FROLS, AOLS, MetaMSS)
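The model types differ only in which lagged signals enter the regressor matrix: NARMAX uses past outputs and inputs, NAR only past outputs, NFIR only past inputs. A toy construction of that matrix (an illustration, not the library's code):

```python
import numpy as np

def build_regressors(y, u, ylag, xlag, model_type):
    """Toy regressor matrix for k = max_lag..N-1: NARMAX uses past y and u,
    NAR uses past y only, NFIR uses past u only."""
    max_lag = max(ylag, xlag)
    rows = []
    for k in range(max_lag, len(y)):
        row = []
        if model_type in ("NARMAX", "NAR"):
            row += [y[k - i] for i in range(1, ylag + 1)]
        if model_type in ("NARMAX", "NFIR"):
            row += [u[k - i] for i in range(1, xlag + 1)]
        rows.append(row)
    return np.array(rows)

y = np.arange(10.0)
u = np.arange(10.0, 20.0)
print(build_regressors(y, u, 2, 2, "NFIR").shape)    # (8, 2): inputs only
print(build_regressors(y, u, 2, 2, "NARMAX").shape)  # (8, 4)
```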
  • Major: Select the order for the residues lags to use in Extended Least Squares - elag

    • The user can select the maximum lag of the residues to be used in the Extended Least Squares algorithm. In previous versions sysidentpy used a predefined subset of residual lags.
    • The degree of the lags follows the degree of the basis function
  • Major: Residual analysis methods (Issue #60: https://github.com/wilsonrljr/sysidentpy/issues/60)

    • There are now specific functions to calculate the autocorrelation of the residuals and the cross-correlation for residual analysis. In previous versions the calculation was limited to just two inputs, limiting usability.
  • Major: Plotting methods (Issue #61: https://github.com/wilsonrljr/sysidentpy/issues/61)

    • The plotting functions are now separated from the model objects, so there is more flexibility regarding what to plot.
    • Residual plots were separated from the forecast plot
  • API Change: sysidentpy.polynomial_basis.PolynomialNarmax is deprecated. Use sysidentpy.model_structure_selection.FROLS instead (Issue #62: https://github.com/wilsonrljr/sysidentpy/issues/62)

    • Now the user doesn't need to pass the number of inputs as a parameter.
    • Added the elag parameter for unbiased_estimator. Now the user can define the number of lags of the residues for parameter estimation using the Extended Least Squares algorithm.
    • model_type parameter: now the user can select the model type to be built. The options are "NARMAX", "NAR" and "NFIR". "NARMAX" is the default. If you want to build a NAR model without any "hack", just set model_type="NAR". The same for "NFIR" models.
  • API Change: sysidentpy.polynomial_basis.MetaMSS is deprecated. Use sysidentpy.model_structure_selection.MetaMSS instead (Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

    • Now the user doesn't need to pass the number of inputs as a parameter.
    • Added the elag parameter for unbiased_estimator. Now the user can define the number of lags of the residues for parameter estimation using the Extended Least Squares algorithm.
  • API Change: sysidentpy.polynomial_basis.AOLS is deprecated. Use sysidentpy.model_structure_selection.AOLS instead (Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

  • API Change: sysidentpy.polynomial_basis.SimulatePolynomialNarmax is deprecated. Use sysidentpy.simulation.SimulateNARMAX instead.

  • API Change: Introducing sysidentpy.basis_function. Because NARMAX models can be built with different basis functions, a new module was added to make it easier to implement new basis functions in future updates (Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64).

    • Each basis function class must have a fit and predict method to be used in training and prediction respectively.
  • API Change: unbiased_estimator method moved to Estimators class.

    • added elag option
    • change the build_information_matrix method to build_output_matrix
  • API Change (new): sysidentpy.narmax_base

    • This is the new base for building NARMAX models. The classes have been rewritten to make it easier to expand functionality.
  • API Change (new): sysidentpy.narmax_base.GenerateRegressors

    • create_narmax_code: Creates the base coding that allows representation for the NARMAX, NAR, and NFIR models.
    • regressor_space: Creates the encoding representation for the NARMAX, NAR, and NFIR models.
  • API Change (new): sysidentpy.narmax_base.ModelInformation

    • _get_index_from_regressor_code: Get the index of the model code representation in regressor space.
    • _list_output_regressor_code: Create a flattened array of output regressors.
    • _list_input_regressor_code: Create a flattened array of input regressors.
    • _get_lag_from_regressor_code: Get the maximum lag from array of regressors.
    • _get_max_lag_from_model_code: the name says it all.
    • _get_max_lag: Get the maximum lag from ylag and xlag.
  • API Change (new): sysidentpy.narmax_base.InformationMatrix

    • _create_lagged_X: Create a lagged matrix of inputs without combinations.
    • _create_lagged_y: Create a lagged matrix of the output without combinations.
    • build_output_matrix: Build the information matrix of output values.
    • build_input_matrix: Build the information matrix of input values.
    • build_input_output_matrix: Build the information matrix of input and output values.
  • API Change (new): sysidentpy.narmax_base.ModelPrediction

    • predict: base method for prediction. Support infinity_steps ahead, one-step ahead and n-steps ahead prediction and any basis function.
    • _one_step_ahead_prediction: Perform the 1-step-ahead prediction for any basis function.
    • _n_step_ahead_prediction: Perform the n-step-ahead prediction for polynomial basis.
    • _model_prediction: Perform the infinity-step-ahead prediction for polynomial basis.
    • _narmax_predict: wrapper for NARMAX and NAR models.
    • _nfir_predict: wrapper for NFIR models.
    • _basis_function_predict: Perform the infinity-step-ahead prediction for basis functions other than polynomial.
    • basis_function_n_step_prediction: Perform the n-step-ahead prediction for basis functions other than polynomial.
  • API Change (new): sysidentpy.model_structure_selection.FROLS (Issue #62: https://github.com/wilsonrljr/sysidentpy/issues/62, Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

    • Based on the old sysidentpy.polynomial_basis.PolynomialNarmax. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • Add support for new basis functions.
    • The user can choose the residual lags.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • many under the hood changes.
  • API Change (new): sysidentpy.model_structure_selection.MetaMSS (Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

    • Based on the old sysidentpy.polynomial_basis.MetaMSS. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • The user can choose the residual lags.
    • Extended Least Squares support.
    • Add support for new basis functions.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • many under the hood changes.
  • API Change (new): sysidentpy.model_structure_selection.AOLS (Issue #64: https://github.com/wilsonrljr/sysidentpy/issues/64)

    • Based on the old sysidentpy.polynomial_basis.AOLS. The class has been rebuilt with new functions and optimized code.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • Add support for new basis functions.
    • No need to pass the number of inputs anymore.
    • Improved docstring.
    • Change "l" parameter to "L".
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • many under the hood changes.
  • API Change (new): sysidentpy.simulation.SimulateNARMAX

    • Based on the old sysidentpy.polynomial_basis.SimulatePolynomialNarmax. The class has been rebuilt with new functions and optimized code.
    • Fix the Extended Least Squares support.
    • Fix n-steps ahead prediction and 1-step ahead prediction.
    • Enforcing keyword-only arguments. This is an effort to promote clear and non-ambiguous use of the library.
    • The user can choose the residual lags.
    • Improved docstring.
    • Fixed minor grammatical and spelling mistakes.
    • New prediction method.
    • Do not inherit from the structure selection algorithm anymore, only from narmax_base. Avoid circular import and other issues.
    • many under the hood changes.
  • API Change (new): sysidentpy.residues

    • compute_residues_autocorrelation: the name says it all.
    • calculate_residues: get the residues from y and yhat.
    • get_unnormalized_e_acf: compute the unnormalized autocorrelation of the residues.
    • compute_cross_correlation: compute cross correlation between two signals.
    • _input_ccf
    • _normalized_correlation: compute the normalized correlation between two signals.
  • API Change (new): sysidentpy.utils.plotting

    • plot_results: plot the forecast
    • plot_residues_correlation: the name says it all.
  • API Change (new): sysidentpy.utils.display_results

    • results: return the model regressors, estimated parameters, and ERR index of the fitted model in a table.
  • DOC: Air passenger benchmark (Issue #65: https://github.com/wilsonrljr/sysidentpy/issues/65)

    • Added notebook with Air passenger forecasting benchmark.
    • We compare SysIdentPy against prophet, neuralprophet, autoarima, tbats and many more.
  • DOC: Load forecasting benchmark (Issue #65: https://github.com/wilsonrljr/sysidentpy/issues/65)

    • Added notebook with load forecasting benchmark.
  • DOC: PV forecasting benchmark (Issue #65: https://github.com/wilsonrljr/sysidentpy/issues/65)

    • Added notebook with PV forecasting benchmark.
  • DOC: Presenting main functionality

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Multiple Inputs usage

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Information Criteria - Examples

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Important notes and examples of how to use Extended Least Squares

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Setting specific lags

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Parameter Estimation

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Using the Meta-Model Structure Selection (MetaMSS) algorithm for building Polynomial NARX models

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Using the Accelerated Orthogonal Least-Squares algorithm for building Polynomial NARX models

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Example: F-16 Ground Vibration Test benchmark

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Building NARX Neural Network using Sysidentpy

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Building NARX models using general estimators

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Simulate a Predefined Model

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: System Identification Using Adaptive Filters

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Identification of an electromechanical system

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Example: N-steps-ahead prediction - F-16 Ground Vibration Test benchmark

    • Example rewritten following the new api.
    • Fixed minor grammatical and spelling mistakes.
  • DOC: Introduction to NARMAX models

    • Fixed grammatical and spelling mistakes.

v0.1.6

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: Meta-Model Structure Selection Algorithm (Meta-MSS).

    • A new method for build NARMAX models based on metaheuristics. The algorithm uses a Binary hybrid Particle Swarm Optimization and Gravitational Search Algorithm with a new cost function to build parsimonious models.
    • New class for the BPSOGSA algorithm. New algorithms can be adapted in the Meta-MSS framework.
    • Future updates will add NARX models for classification and multiobjective model structure selection.
  • MAJOR: Accelerated Orthogonal Least-Squares algorithm.

    • Added the new class AOLS to build NARX models using the Accelerated Orthogonal Least-Squares algorithm.
    • To the best of my knowledge, this is the first time this algorithm has been used in the NARMAX framework. The tests I've run are promising, but use it with caution until the results are formalized in a research paper.
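The core of AOLS can be illustrated with a plain greedy orthogonal least-squares loop (a simplified sketch with L = 1, not the library's AOLS class): at each step, pick the candidate regressor whose projection explains the most of the current residual, then deflate the residual by that projection.

```python
# Simplified greedy orthogonal least-squares selection (illustrative sketch,
# not SysIdentPy's AOLS implementation). psi is a list of candidate regressor
# columns, y is the target signal, k is the number of terms to select.

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def greedy_ols(psi, y, k):
    residual = [float(v) for v in y]
    selected = []
    for _ in range(k):
        # score each unselected candidate by its squared projection
        # onto the current residual
        best, best_score = None, 0.0
        for j, col in enumerate(psi):
            if j in selected:
                continue
            score = dot(col, residual) ** 2 / dot(col, col)
            if score > best_score:
                best, best_score = j, score
        if best is None:
            break
        selected.append(best)
        col = psi[best]
        coef = dot(col, residual) / dot(col, col)
        residual = [r - coef * c for r, c in zip(residual, col)]
    return selected, residual

# y is exactly 2*psi[0] + 3*psi[1], so two steps recover both terms
print(greedy_ols([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [2, 3, 0], 2))
```

The real algorithm accelerates this by selecting L candidates per iteration and orthogonalizing the candidate matrix in place, but the selection criterion is the same.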
  • Added a notebook with a simple example of how to use MetaMSS and a simple model comparison on the electromechanical system.

  • Added a notebook with a simple example of how to use AOLS.

  • Added the ModelInformation class. This class has methods to return model information, such as the max_lag of a model code.

    • added _list_output_regressor_code
    • added _list_input_regressor_code
    • added _get_lag_from_regressor_code
    • added _get_max_lag_from_model_code
  • Minor performance improvement: added the "predefined_regressors" argument to the build_information_matrix function in base.py to improve the performance of the Simulation method.

  • PyTorch is now an optional dependency. Use pip install sysidentpy['full']

  • Fixed code formatting issues.

  • Fixed minor grammatical and spelling mistakes.

  • Fixed issues related to HTML in the Jupyter Notebook examples in the documentation.

  • Updated the README with usage examples.

  • Improved descriptions and comments in methods.

  • metaheuristics.bpsogsa (detailed description in the code docstring)

    • added evaluate_objective_function
    • added optimize
    • added generate_random_population
    • added mass_calculation
    • added calculate_gravitational_constant
    • added calculate_acceleration
    • added update_velocity_position
  • Fixed issue #52.

v0.1.5

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: n-steps-ahead prediction.

    • Now you can define the number of steps ahead in the predict function.
    • Available only for polynomial models for now. The next update will bring this functionality to Neural NARX and general estimators.
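The idea behind n-steps-ahead prediction is that the model runs free for `steps_ahead` samples and is then re-initialized with measured output before predicting again. A toy illustration (a hypothetical AR(1) stand-in and helper function, not SysIdentPy code):

```python
# n-steps-ahead prediction sketch: free-run the model for `steps_ahead`
# samples, then re-anchor it on the measured output (hypothetical helper,
# not part of SysIdentPy's API).

def n_steps_ahead(model, y_measured, steps_ahead):
    yhat = [y_measured[0]]  # initial condition: first measured sample
    i = 1
    while i < len(y_measured):
        y_prev = y_measured[i - 1]  # re-anchor on measured data
        for _ in range(steps_ahead):
            if i >= len(y_measured):
                break
            y_prev = model(y_prev)  # free-run within the window
            yhat.append(y_prev)
            i += 1
    return yhat

model = lambda y_prev: 0.5 * y_prev  # toy AR(1): y[k] = 0.5 * y[k-1]
y = [1.0, 0.5, 0.25, 0.125, 0.0625]
print(n_steps_ahead(model, y, steps_ahead=2))
```

Since the measured data here was generated by the same model, the 2-steps-ahead prediction reproduces it exactly; with a mismatched model, the error would grow inside each window and reset at each re-anchoring.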
  • MAJOR: Simulating predefined models.

    • Added the new class SimulatePolynomialNarmax to handle the simulation of known model structures.
    • Now you can simulate predefined models by just passing the model structure codification. Check the notebook examples.
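The codification follows the convention used throughout SysIdentPy's docs: 0 encodes the constant term, and a code such as 1001 or 2002 packs variable and lag together (1yyy for y(k-lag), 2yyy for x1(k-lag), and so on). A small decoder sketch (hypothetical helper names, for illustration only):

```python
# Decoding the regressor codification (sketch based on the documented
# convention: 0 = constant, 1yyy = y(k-lag), 2yyy = x1(k-lag), ...).
# decode_regressor and decode_term are illustrative helpers, not part
# of SysIdentPy's API.

def decode_regressor(code):
    if code == 0:
        return "1"  # constant term
    var, lag = divmod(code, 1000)
    name = "y" if var == 1 else f"x{var - 1}"
    return f"{name}(k-{lag})"

def decode_term(codes):
    # a model term is a product of regressor codes; 0 acts as padding
    parts = [decode_regressor(c) for c in codes if c != 0]
    return "*".join(parts) if parts else "1"

print(decode_term([2001, 1001]))  # x1(k-1)*y(k-1)
print(decode_term([1001, 0]))     # y(k-1)
```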
  • Added 4 new notebooks in the example section.

  • Added interactive notebooks. Now you can run the notebooks from the Jupyter Notebook section of the documentation in Colab.

  • Fixed code formatting issues.

  • Added new tests for SimulatePolynomialNarmax and generate_data.

  • Started changes related to the NumPy 1.19.4 update. There are still some deprecation warnings that will be fixed in the next update.

  • Fixed issues related to HTML in the Jupyter Notebook examples in the documentation.

  • Updated the README with usage examples.

v0.1.4

CONTRIBUTORS

  • wilsonrljr

CHANGES

  • MAJOR: Introducing NARX Neural Network in SysIdentPy.

    • Now you can build NARX neural networks in SysIdentPy.
    • This feature is built on top of PyTorch. See the docs for more details and examples of how to use it.
  • MAJOR: Introducing general estimators in SysIdentPy.

    • Now you can use any estimator that has fit/predict methods (estimators from scikit-learn and CatBoost, for example) and build NARX models based on those estimators.
    • We use the core functions of SysIdentPy and keep the fit/predict approach of those estimators to make the process easy to use.
    • More estimators, such as XGBoost, are coming soon.
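The wrapper idea is straightforward: build a lagged regressor matrix from past inputs and outputs, then delegate to any object exposing fit/predict. A minimal sketch (TinyNARX and MeanEstimator are hypothetical names for illustration, not SysIdentPy classes):

```python
# Minimal sketch of a NARX wrapper around a generic fit/predict estimator
# (illustrative only, not SysIdentPy's implementation).

class TinyNARX:
    def __init__(self, estimator, ylag=1, xlag=1):
        self.estimator = estimator  # any object with fit/predict
        self.ylag = ylag
        self.xlag = xlag
        self.max_lag = max(ylag, xlag)

    def _lagged(self, x, y):
        # one row per sample: [y(k-1)...y(k-ylag), x(k-1)...x(k-xlag)]
        rows = []
        for k in range(self.max_lag, len(y)):
            row = [y[k - j] for j in range(1, self.ylag + 1)]
            row += [x[k - j] for j in range(1, self.xlag + 1)]
            rows.append(row)
        return rows

    def fit(self, x, y):
        self.estimator.fit(self._lagged(x, y), y[self.max_lag:])
        return self

    def predict(self, x, y):
        return self.estimator.predict(self._lagged(x, y))

class MeanEstimator:
    # trivial stand-in for any scikit-learn-style estimator
    def fit(self, X, y):
        self.mean = sum(y) / len(y)
    def predict(self, X):
        return [self.mean] * len(X)

model = TinyNARX(MeanEstimator()).fit([0, 1, 2, 3], [0, 2, 4, 6])
print(model.predict([0, 1, 2, 3], [0, 2, 4, 6]))  # [4.0, 4.0, 4.0]
```

Swapping MeanEstimator for a scikit-learn or CatBoost regressor is what the real feature does, with SysIdentPy handling the lag construction and free-run simulation.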
  • Added notebooks to show how to build NARX neural networks.

  • Added notebooks to show how to build NARX models using general estimators.

  • Changed the default parameters of the plot_results function.

  • NOTE: We will keep improving the polynomial NARX models (new model structure selection algorithms and multiobjective identification are on our roadmap). These recent modifications will allow us to introduce new NARX models, such as PWARX models, very soon.

  • New template for the documentation site.

  • Fixed issues related to HTML in the Jupyter Notebook examples in the documentation.

  • Updated the README with usage examples.

v0.1.3

CONTRIBUTORS

  • wilsonrljr
  • renard162

CHANGES

  • Fixed a bug concerning the xlag and ylag in multiple input scenarios.
  • Refactored the predict function, improving performance by up to 87% depending on the number of regressors.
  • You can now set lags of different sizes for each input.
  • Added a new function to get the maximum value of xlag and ylag. It works with int, list, and nested lists.
  • Fixed tests for information criteria.
  • Added SysIdentPy logo.
  • Refactored code of all classes following PEP 8 guidelines to improve readability.
  • Added Citation information on Readme.
  • Changes to the information criteria tests.
  • Added workflow to run the tests when merge branch into master.
  • Added new site domain.
  • Updated docs.
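The xlag/ylag max helper mentioned above can be sketched with a short recursion over int, list, or nested lists (illustrative, not the exact library function):

```python
# Maximum lag from an int, a list, or arbitrarily nested lists
# (hypothetical helper mirroring the behavior described above).

def get_max_lag(lag):
    if isinstance(lag, int):
        return lag
    return max(get_max_lag(item) for item in lag)

print(get_max_lag(3))                 # 3
print(get_max_lag([1, 4, 2]))         # 4
print(get_max_lag([[1, 2], [5, 3]]))  # 5
```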
\ No newline at end of file diff --git a/docs/code/aols/index.html b/docs/code/aols/index.html index 413398ad..635f2d80 100644 --- a/docs/code/aols/index.html +++ b/docs/code/aols/index.html @@ -578,22 +578,7 @@ 552 553 554 -555 -556 -557 -558 -559 -560 -561 -562 -563 -564 -565 -566 -567 -568 -569 -570
@deprecated(
+555
@deprecated(
     version="v0.3.0",
     future_version="v0.4.0",
     message=(
@@ -707,447 +692,431 @@
         self.basis_function = basis_function
         self.non_degree = basis_function.degree
         self.model_type = model_type
-        self.xlag = xlag
-        self.ylag = ylag
-        self.max_lag = self._get_max_lag()
-        self.k = k
-        self.L = L
-        self.estimator = estimator
-        self.threshold = threshold
-        super().__init__(
-            lam=lam,
-            delta=delta,
-            offset_covariance=offset_covariance,
-            mu=mu,
-            eps=eps,
-            gama=gama,
-            weight=weight,
-            basis_function=basis_function,
-        )
-        self.ensemble = None
-        self.res = None
-        self.n_inputs = None
-        self.theta = None
-        self.regressor_code = None
-        self.pivv = None
-        self.final_model = None
-        self.n_terms = None
-        self.err = None
-        self._validate_params()
-
-    def _validate_params(self):
-        """Validate input params."""
-        if isinstance(self.ylag, int) and self.ylag < 1:
-            raise ValueError("ylag must be integer and > zero. Got %f" % self.ylag)
-
-        if isinstance(self.xlag, int) and self.xlag < 1:
-            raise ValueError("xlag must be integer and > zero. Got %f" % self.xlag)
-
-        if not isinstance(self.xlag, (int, list)):
-            raise ValueError("xlag must be integer and > zero. Got %f" % self.xlag)
-
-        if not isinstance(self.ylag, (int, list)):
-            raise ValueError("ylag must be integer and > zero. Got %f" % self.ylag)
-
-        if not isinstance(self.k, int) or self.k < 1:
-            raise ValueError("k must be integer and > zero. Got %f" % self.k)
-
-        if not isinstance(self.L, int) or self.L < 1:
-            raise ValueError("k must be integer and > zero. Got %f" % self.L)
-
-        if not isinstance(self.threshold, (int, float)) or self.threshold < 0:
-            raise ValueError(
-                "threshold must be integer and > zero. Got %f" % self.threshold
-            )
-
-    def aols(
-        self, psi: np.ndarray, y: np.ndarray
-    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-        """Perform the Accelerated Orthogonal Least-Squares algorithm.
-
-        Parameters
-        ----------
-        y : array-like of shape = n_samples
-            The target data used in the identification process.
-        psi : ndarray of floats
-            The information matrix of the model.
-
-        Returns
-        -------
-        theta : array-like of shape = number_of_model_elements
-            The respective ERR calculated for each regressor.
-        piv : array-like of shape = number_of_model_elements
-            Contains the index to put the regressors in the correct order
-            based on err values.
-        residual_norm : float
-            The final residual norm.
-
-        References
-        ----------
-        - Manuscript: Accelerated Orthogonal Least-Squares for Large-Scale
-           Sparse Reconstruction
-           https://www.sciencedirect.com/science/article/abs/pii/S1051200418305311
-
-        """
-        n, m = psi.shape
-        theta = np.zeros([m, 1])
-        r = y[self.max_lag :].reshape(-1, 1).copy()
-        it = 0
-        max_iter = int(min(self.k, np.floor(n / self.L)))
-        aols_index = np.zeros(max_iter * self.L)
-        U = np.zeros([n, max_iter * self.L])
-        T = psi.copy()
-        while LA.norm(r) > self.threshold and it < max_iter:
-            it = it + 1
-            temp_in = (it - 1) * self.L
-            if it > 1:
-                T = T - U[:, temp_in].reshape(-1, 1) @ (
-                    U[:, temp_in].reshape(-1, 1).T @ psi
-                )
-
-            q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
-            TT = np.sum(T**2, axis=0) * (q**2)
-            sub_ind = list(aols_index[:temp_in].astype(int))
-            TT[sub_ind] = 0
-            sorting_indices = np.argsort(TT)[::-1].ravel()
-            aols_index[temp_in : temp_in + self.L] = sorting_indices[: self.L]
-            for i in range(self.L):
-                TEMP = T[:, sorting_indices[i]].reshape(-1, 1) * q[sorting_indices[i]]
-                U[:, temp_in + i] = (TEMP / np.linalg.norm(TEMP, axis=0)).ravel()
-                r = r - TEMP
-                if i == self.L:
-                    break
-
-                T = T - U[:, temp_in + i].reshape(-1, 1) @ (
-                    U[:, temp_in + i].reshape(-1, 1).T @ psi
-                )
-                q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
-
-        aols_index = aols_index[aols_index > 0].ravel().astype(int)
-        residual_norm = LA.norm(r)
-        theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
-        if self.L > 1:
-            sorting_indices = np.argsort(np.abs(theta))[::-1]
-            aols_index = sorting_indices[: self.k].ravel().astype(int)
-            theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
-            residual_norm = LA.norm(
-                y[self.max_lag :].reshape(-1, 1)
-                - psi[:, aols_index] @ theta[aols_index]
-            )
-
-        pivv = np.argwhere(theta.ravel() != 0).ravel()
-        theta = theta[theta != 0]
-        return theta.reshape(-1, 1), pivv, residual_norm
-
-    def fit(self, *, X=None, y=None):
-        """Fit polynomial NARMAX model using AOLS algorithm.
-
-        The 'fit' function allows a friendly usage by the user.
-        Given two arguments, X and y, fit training data.
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the training process.
-        y : ndarray of floats
-            The output data to be used in the training process.
-
-        Returns
-        -------
-        model : ndarray of int
-            The model code representation.
-        piv : array-like of shape = number_of_model_elements
-            Contains the index to put the regressors in the correct order
-            based on err values.
-        theta : array-like of shape = number_of_model_elements
-            The estimated parameters of the model.
-        err : array-like of shape = number_of_model_elements
-            The respective ERR calculated for each regressor.
-        info_values : array-like of shape = n_regressor
-            Vector with values of akaike's information criterion
-            for models with N terms (where N is the
-            vector position + 1).
-
-        """
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NARMAX":
-            check_X_y(X, y)
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            reg_matrix = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-        else:
-            reg_matrix, self.ensemble = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-
-        if X is not None:
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
-
-        self.regressor_code = self.regressor_space(self.n_inputs)
-
-        (self.theta, self.pivv, self.res) = self.aols(reg_matrix, y)
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            self.final_model = self.regressor_code[self.pivv, :].copy()
-        elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-            basis_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-            self.final_model = self.regressor_code[self.pivv, :].copy()
-        else:
-            self.regressor_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.final_model = self.regressor_code[self.pivv, :].copy()
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.xlag = xlag
+        self.ylag = ylag
+        self.max_lag = self._get_max_lag()
+        self.k = k
+        self.L = L
+        self.estimator = estimator
+        self.threshold = threshold
+        super().__init__(
+            lam=lam,
+            delta=delta,
+            offset_covariance=offset_covariance,
+            mu=mu,
+            eps=eps,
+            gama=gama,
+            weight=weight,
+            basis_function=basis_function,
+        )
+        self.ensemble = None
+        self.res = None
+        self.n_inputs = None
+        self.theta = None
+        self.regressor_code = None
+        self.pivv = None
+        self.final_model = None
+        self.n_terms = None
+        self.err = None
+        self._validate_params()
+
+    def _validate_params(self):
+        """Validate input params."""
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if not isinstance(self.k, int) or self.k < 1:
+            raise ValueError(f"k must be integer and > zero. Got {self.k}")
+
+        if not isinstance(self.L, int) or self.L < 1:
+            raise ValueError(f"L must be integer and > zero. Got {self.L}")
+
+        if not isinstance(self.threshold, (int, float)) or self.threshold < 0:
+            raise ValueError(
+                f"threshold must be int or float and >= zero. Got {self.threshold}"
+            )
+
+    def aols(
+        self, psi: np.ndarray, y: np.ndarray
+    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+        """Perform the Accelerated Orthogonal Least-Squares algorithm.
+
+        Parameters
+        ----------
+        y : array-like of shape = n_samples
+            The target data used in the identification process.
+        psi : ndarray of floats
+            The information matrix of the model.
+
+        Returns
+        -------
+        theta : array-like of shape = number_of_model_elements
+            The respective ERR calculated for each regressor.
+        piv : array-like of shape = number_of_model_elements
+            Contains the index to put the regressors in the correct order
+            based on err values.
+        residual_norm : float
+            The final residual norm.
+
+        References
+        ----------
+        - Manuscript: Accelerated Orthogonal Least-Squares for Large-Scale
+           Sparse Reconstruction
+           https://www.sciencedirect.com/science/article/abs/pii/S1051200418305311
+
+        """
+        n, m = psi.shape
+        theta = np.zeros([m, 1])
+        r = y[self.max_lag :].reshape(-1, 1).copy()
+        it = 0
+        max_iter = int(min(self.k, np.floor(n / self.L)))
+        aols_index = np.zeros(max_iter * self.L)
+        U = np.zeros([n, max_iter * self.L])
+        T = psi.copy()
+        while LA.norm(r) > self.threshold and it < max_iter:
+            it = it + 1
+            temp_in = (it - 1) * self.L
+            if it > 1:
+                T = T - U[:, temp_in].reshape(-1, 1) @ (
+                    U[:, temp_in].reshape(-1, 1).T @ psi
+                )
+
+            q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
+            TT = np.sum(T**2, axis=0) * (q**2)
+            sub_ind = list(aols_index[:temp_in].astype(int))
+            TT[sub_ind] = 0
+            sorting_indices = np.argsort(TT)[::-1].ravel()
+            aols_index[temp_in : temp_in + self.L] = sorting_indices[: self.L]
+            for i in range(self.L):
+                TEMP = T[:, sorting_indices[i]].reshape(-1, 1) * q[sorting_indices[i]]
+                U[:, temp_in + i] = (TEMP / np.linalg.norm(TEMP, axis=0)).ravel()
+                r = r - TEMP
+                if i == self.L:
+                    break
+
+                T = T - U[:, temp_in + i].reshape(-1, 1) @ (
+                    U[:, temp_in + i].reshape(-1, 1).T @ psi
+                )
+                q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
+
+        aols_index = aols_index[aols_index > 0].ravel().astype(int)
+        residual_norm = LA.norm(r)
+        theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
+        if self.L > 1:
+            sorting_indices = np.argsort(np.abs(theta))[::-1]
+            aols_index = sorting_indices[: self.k].ravel().astype(int)
+            theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
+            residual_norm = LA.norm(
+                y[self.max_lag :].reshape(-1, 1)
+                - psi[:, aols_index] @ theta[aols_index]
+            )
+
+        pivv = np.argwhere(theta.ravel() != 0).ravel()
+        theta = theta[theta != 0]
+        return theta.reshape(-1, 1), pivv, residual_norm
+
+    def fit(self, *, X=None, y=None):
+        """Fit polynomial NARMAX model using AOLS algorithm.
+
+        The 'fit' function allows a friendly usage by the user.
+        Given two arguments, X and y, fit training data.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the training process.
+        y : ndarray of floats
+            The output data to be used in the training process.
+
+        Returns
+        -------
+        model : ndarray of int
+            The model code representation.
+        piv : array-like of shape = number_of_model_elements
+            Contains the index to put the regressors in the correct order
+            based on err values.
+        theta : array-like of shape = number_of_model_elements
+            The estimated parameters of the model.
+        err : array-like of shape = number_of_model_elements
+            The respective ERR calculated for each regressor.
+        info_values : array-like of shape = n_regressor
+            Vector with values of akaike's information criterion
+            for models with N terms (where N is the
+            vector position + 1).
+
+        """
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X, y)
+
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            reg_matrix = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+        else:
+            reg_matrix, self.ensemble = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+
+        if X is not None:
+            self.n_inputs = _num_features(X)
+        else:
+            self.n_inputs = 1  # just to create the regressor space base
+
+        self.regressor_code = self.regressor_space(self.n_inputs)
+
+        (self.theta, self.pivv, self.res) = self.aols(reg_matrix, y)
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            self.final_model = self.regressor_code[self.pivv, :].copy()
+        elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
+            basis_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+            self.final_model = self.regressor_code[self.pivv, :].copy()
+        else:
+            self.regressor_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.final_model = self.regressor_code[self.pivv, :].copy()
+
+        self.n_terms = len(
+            self.theta
+        )  # the number of terms we selected (necessary in the 'results' methods)
+        self.err = self.n_terms * [
+            0
+        ]  # just to use the `results` method. Will be changed in next update.
+        return self
+
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted values given an input.
 
-        self.n_terms = len(
-            self.theta
-        )  # the number of terms we selected (necessary in the 'results' methods)
-        self.err = self.n_terms * [
-            0
-        ]  # just to use the `results` method. Will be changed in next update.
-        return self
-
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted values given an input.
-
-        The predict function allows a friendly usage by the user.
-        Given a previously trained model, predict values given
-        a new set of data.
-
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future)
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-        steps_ahead : int (default = None)
-            The user can use free run simulation, one-step ahead prediction
-            and n-step ahead prediction.
-        forecast_horizon : int, default=None
-            The number of predictions over the time.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
+        The predict function allows a friendly usage by the user.
+        Given a previously trained model, predict values given
+        a new set of data.
+
+        This method accepts y values mainly for n-steps-ahead prediction
+        (to be implemented in the future)
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int (default = None)
+            The user can use free run simulation, one-step ahead prediction
+            and n-step ahead prediction.
+        forecast_horizon : int, default=None
+            The number of predictions over the time.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
 
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        if steps_ahead is None:
-            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        yhat = self._basis_function_n_step_prediction(
-            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-        )
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        if steps_ahead is None:
+            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        yhat = self._basis_function_n_step_prediction(
+            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+        )
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
 
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            X_base = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-        else:
-            X_base, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-
-        yhat = super()._one_step_ahead_prediction(X_base)
-        return yhat.reshape(-1, 1)
-
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
-        return yhat
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
-
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        if self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-
-        raise Exception(
-            "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-        )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
-        return y_output
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = super()._nfir_predict(X, y_initial)
-        return y_output
-
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        yhat = super()._basis_function_n_step_prediction(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        yhat = super()._basis_function_n_steps_horizon(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-

+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        lagged_data = self.build_matrix(X, y)
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            X_base = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+        else:
+            X_base, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+
+        yhat = super()._one_step_ahead_prediction(X_base)
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial condition values of the model
+            used to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
+        return yhat
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinite steps-ahead simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Initial condition values of the output used to
+            start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
+
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
+        return y_output
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = super()._nfir_predict(X, y_initial)
+        return y_output
+
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
+        return yhat.reshape(-1, 1)
+
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial condition values of the model
+            used to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        yhat = super()._basis_function_n_step_prediction(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
+
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        yhat = super()._basis_function_n_steps_horizon(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
+
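The prediction helpers above distinguish one-step-ahead prediction, where every prediction is computed from measured past outputs, from free-run (infinite steps-ahead) simulation, where predictions are fed back as future regressors seeded by `max_lag` initial conditions. A minimal NumPy sketch of the difference, using a hypothetical linear AR(2) model (coefficients and data are illustrative, not part of SysIdentPy):

```python
import numpy as np

# Hypothetical AR(2) model: y[k] = 0.5*y[k-1] - 0.2*y[k-2]
theta = np.array([0.5, -0.2])
max_lag = 2

rng = np.random.default_rng(0)
y = rng.standard_normal(10)  # "measured" output series

# One-step-ahead: each prediction uses the *measured* past outputs.
one_step = np.array(
    [theta @ np.array([y[k - 1], y[k - 2]]) for k in range(max_lag, len(y))]
)

# Free run: predictions are fed back as past values, seeded with
# max_lag measured initial conditions (as in _model_prediction).
free_run = list(y[:max_lag])
for k in range(max_lag, len(y)):
    free_run.append(theta @ np.array([free_run[k - 1], free_run[k - 2]]))
free_run = np.array(free_run)
```

Both series agree at the first predicted sample, where only measured data is available, and then diverge as prediction errors accumulate in the free run.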

aols(psi, y)

Perform the Accelerated Orthogonal Least-Squares algorithm.

Parameters:

    y : array-like of shape = n_samples (required)
        The target data used in the identification process.
    psi : ndarray of floats (required)
        The information matrix of the model.

Returns:

    theta : array-like of shape = number_of_model_elements
        The estimated parameters of the model.
    piv : array-like of shape = number_of_model_elements
        Contains the index to put the regressors in the correct order based on err values.
    residual_norm : float
        The final residual norm.

References
Source code in sysidentpy\model_structure_selection\accelerated_orthogonal_least_squares.py
@@ -1223,86 +1192,86 @@
def aols(
-    self, psi: np.ndarray, y: np.ndarray
-) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-    """Perform the Accelerated Orthogonal Least-Squares algorithm.
-
-    Parameters
-    ----------
-    y : array-like of shape = n_samples
-        The target data used in the identification process.
-    psi : ndarray of floats
-        The information matrix of the model.
-
-    Returns
-    -------
-    theta : array-like of shape = number_of_model_elements
-        The respective ERR calculated for each regressor.
-    piv : array-like of shape = number_of_model_elements
-        Contains the index to put the regressors in the correct order
-        based on err values.
-    residual_norm : float
-        The final residual norm.
-
-    References
-    ----------
-    - Manuscript: Accelerated Orthogonal Least-Squares for Large-Scale
-       Sparse Reconstruction
-       https://www.sciencedirect.com/science/article/abs/pii/S1051200418305311
-
-    """
-    n, m = psi.shape
-    theta = np.zeros([m, 1])
-    r = y[self.max_lag :].reshape(-1, 1).copy()
-    it = 0
-    max_iter = int(min(self.k, np.floor(n / self.L)))
-    aols_index = np.zeros(max_iter * self.L)
-    U = np.zeros([n, max_iter * self.L])
-    T = psi.copy()
-    while LA.norm(r) > self.threshold and it < max_iter:
-        it = it + 1
-        temp_in = (it - 1) * self.L
-        if it > 1:
-            T = T - U[:, temp_in].reshape(-1, 1) @ (
-                U[:, temp_in].reshape(-1, 1).T @ psi
-            )
-
-        q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
-        TT = np.sum(T**2, axis=0) * (q**2)
-        sub_ind = list(aols_index[:temp_in].astype(int))
-        TT[sub_ind] = 0
-        sorting_indices = np.argsort(TT)[::-1].ravel()
-        aols_index[temp_in : temp_in + self.L] = sorting_indices[: self.L]
-        for i in range(self.L):
-            TEMP = T[:, sorting_indices[i]].reshape(-1, 1) * q[sorting_indices[i]]
-            U[:, temp_in + i] = (TEMP / np.linalg.norm(TEMP, axis=0)).ravel()
-            r = r - TEMP
-            if i == self.L:
-                break
-
-            T = T - U[:, temp_in + i].reshape(-1, 1) @ (
-                U[:, temp_in + i].reshape(-1, 1).T @ psi
-            )
-            q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
-
-    aols_index = aols_index[aols_index > 0].ravel().astype(int)
-    residual_norm = LA.norm(r)
-    theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
-    if self.L > 1:
-        sorting_indices = np.argsort(np.abs(theta))[::-1]
-        aols_index = sorting_indices[: self.k].ravel().astype(int)
-        theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
-        residual_norm = LA.norm(
-            y[self.max_lag :].reshape(-1, 1)
-            - psi[:, aols_index] @ theta[aols_index]
-        )
-
-    pivv = np.argwhere(theta.ravel() != 0).ravel()
-    theta = theta[theta != 0]
-    return theta.reshape(-1, 1), pivv, residual_norm
-

def aols(
+    self, psi: np.ndarray, y: np.ndarray
+) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Perform the Accelerated Orthogonal Least-Squares algorithm.
+
+    Parameters
+    ----------
+    y : array-like of shape = n_samples
+        The target data used in the identification process.
+    psi : ndarray of floats
+        The information matrix of the model.
+
+    Returns
+    -------
+    theta : array-like of shape = number_of_model_elements
+        The estimated parameters of the model.
+    piv : array-like of shape = number_of_model_elements
+        Contains the index to put the regressors in the correct order
+        based on err values.
+    residual_norm : float
+        The final residual norm.
+
+    References
+    ----------
+    - Manuscript: Accelerated Orthogonal Least-Squares for Large-Scale
+       Sparse Reconstruction
+       https://www.sciencedirect.com/science/article/abs/pii/S1051200418305311
+
+    """
+    n, m = psi.shape
+    theta = np.zeros([m, 1])
+    r = y[self.max_lag :].reshape(-1, 1).copy()
+    it = 0
+    max_iter = int(min(self.k, np.floor(n / self.L)))
+    aols_index = np.zeros(max_iter * self.L)
+    U = np.zeros([n, max_iter * self.L])
+    T = psi.copy()
+    while LA.norm(r) > self.threshold and it < max_iter:
+        it = it + 1
+        temp_in = (it - 1) * self.L
+        if it > 1:
+            T = T - U[:, temp_in].reshape(-1, 1) @ (
+                U[:, temp_in].reshape(-1, 1).T @ psi
+            )
+
+        q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
+        TT = np.sum(T**2, axis=0) * (q**2)
+        sub_ind = list(aols_index[:temp_in].astype(int))
+        TT[sub_ind] = 0
+        sorting_indices = np.argsort(TT)[::-1].ravel()
+        aols_index[temp_in : temp_in + self.L] = sorting_indices[: self.L]
+        for i in range(self.L):
+            TEMP = T[:, sorting_indices[i]].reshape(-1, 1) * q[sorting_indices[i]]
+            U[:, temp_in + i] = (TEMP / np.linalg.norm(TEMP, axis=0)).ravel()
+            r = r - TEMP
+            if i == self.L:
+                break
+
+            T = T - U[:, temp_in + i].reshape(-1, 1) @ (
+                U[:, temp_in + i].reshape(-1, 1).T @ psi
+            )
+            q = ((r.T @ psi) / np.sum(psi * T, axis=0)).ravel()
+
+    aols_index = aols_index[aols_index > 0].ravel().astype(int)
+    residual_norm = LA.norm(r)
+    theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
+    if self.L > 1:
+        sorting_indices = np.argsort(np.abs(theta))[::-1]
+        aols_index = sorting_indices[: self.k].ravel().astype(int)
+        theta[aols_index] = getattr(self, self.estimator)(psi[:, aols_index], y)
+        residual_norm = LA.norm(
+            y[self.max_lag :].reshape(-1, 1)
+            - psi[:, aols_index] @ theta[aols_index]
+        )
+
+    pivv = np.argwhere(theta.ravel() != 0).ravel()
+    theta = theta[theta != 0]
+    return theta.reshape(-1, 1), pivv, residual_norm
+
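The core idea of the routine above is greedy residual reduction: at every pass, pick the candidate regressors whose normalized correlation with the current residual is largest, re-estimate the parameters, and update the residual until the residual norm falls below the threshold. A compact, orthogonal-matching-pursuit-style sketch of that selection principle on synthetic data (an illustration of the idea only, not the accelerated recursion implemented in `aols`):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 200, 8
psi = rng.standard_normal((n, m))         # candidate regressor matrix
y = 2.0 * psi[:, 1] - 1.0 * psi[:, 4]     # output built from two true regressors

r = y.copy()                              # current residual
selected = []
for _ in range(2):                        # select k = 2 regressors
    # squared correlation of the residual with each normalized column
    scores = (r @ psi) ** 2 / np.sum(psi**2, axis=0)
    scores[selected] = 0.0                # never re-pick a column
    selected.append(int(np.argmax(scores)))
    # re-estimate parameters on the selected set and update the residual
    theta, *_ = np.linalg.lstsq(psi[:, selected], y, rcond=None)
    r = y - psi[:, selected] @ theta
```

With exact (noise-free) data the two true columns are recovered and the residual norm drops to numerical zero, which mirrors the stopping logic (`LA.norm(r) > self.threshold`) in the listing.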

fit(*, X=None, y=None)

Fit polynomial NARMAX model using AOLS algorithm.

The fit method provides a friendly interface: given two arguments, X and y, it fits the model to the training data.

Parameters:

    X : ndarray of floats, default=None
        The input data to be used in the training process.
    y : ndarray of floats, default=None
        The output data to be used in the training process.

Returns:

    model : ndarray of int
        The model code representation.
    piv : array-like of shape = number_of_model_elements
        Contains the index to put the regressors in the correct order based on err values.
    theta : array-like of shape = number_of_model_elements
        The estimated parameters of the model.
    err : array-like of shape = number_of_model_elements
        The respective ERR calculated for each regressor.
    info_values : array-like of shape = n_regressor
        Vector with values of Akaike's information criterion for models with N terms (where N is the vector position + 1).

Source code in sysidentpy\model_structure_selection\accelerated_orthogonal_least_squares.py
@@ -1381,9 +1350,87 @@
def fit(self, *, X=None, y=None):
+    """Fit polynomial NARMAX model using AOLS algorithm.
+
+    The fit method provides a friendly interface:
+    given two arguments, X and y, it fits the model to the training data.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the training process.
+    y : ndarray of floats
+        The output data to be used in the training process.
+
+    Returns
+    -------
+    model : ndarray of int
+        The model code representation.
+    piv : array-like of shape = number_of_model_elements
+        Contains the index to put the regressors in the correct order
+        based on err values.
+    theta : array-like of shape = number_of_model_elements
+        The estimated parameters of the model.
+    err : array-like of shape = number_of_model_elements
+        The respective ERR calculated for each regressor.
+    info_values : array-like of shape = n_regressor
+        Vector with values of Akaike's information criterion
+        for models with N terms (where N is the
+        vector position + 1).
+
+    """
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    self.max_lag = self._get_max_lag()
+    lagged_data = self.build_matrix(X, y)
+
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        reg_matrix = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+    else:
+        reg_matrix, self.ensemble = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+
+    if X is not None:
+        self.n_inputs = _num_features(X)
+    else:
+        self.n_inputs = 1  # just to create the regressor space base
+
+    self.regressor_code = self.regressor_space(self.n_inputs)
+
+    (self.theta, self.pivv, self.res) = self.aols(reg_matrix, y)
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        self.final_model = self.regressor_code[self.pivv, :].copy()
+    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
+        basis_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+        self.final_model = self.regressor_code[self.pivv, :].copy()
+    else:
+        self.regressor_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.final_model = self.regressor_code[self.pivv, :].copy()
+
+    self.n_terms = len(
+        self.theta
+    )  # the number of terms we selected (necessary in the 'results' methods)
+    self.err = self.n_terms * [
+        0
+    ]  # just to use the `results` method. Will be changed in next update.
+    return self
+
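Before running the selection, `fit` assembles a lagged information matrix from the data (via `build_matrix` and the basis function). A hand-rolled sketch of what such a lagged matrix looks like for `ylag = xlag = 2`; the column ordering here is an assumption for illustration, not SysIdentPy's exact `build_matrix` layout — the point is only the row/lag structure:

```python
import numpy as np

def lagged_matrix(x, y, xlag=2, ylag=2):
    """Stack rows [y(k-1), ..., y(k-ylag), x(k-1), ..., x(k-xlag)]
    for every k >= max_lag (assumed column order, for illustration)."""
    max_lag = max(xlag, ylag)
    rows = [
        [y[k - i] for i in range(1, ylag + 1)]
        + [x[k - i] for i in range(1, xlag + 1)]
        for k in range(max_lag, len(y))
    ]
    return np.array(rows)

x = np.arange(10.0)
y = 2.0 * x
M = lagged_matrix(x, y)
# first usable sample is k = max_lag, so M has len(y) - max_lag rows
```

This is also why the first `max_lag` samples carry no regressor row of their own: they only serve as initial conditions for the lagged terms.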

@@ -1392,99 +1439,9 @@
def fit(self, *, X=None, y=None):
-    """Fit polynomial NARMAX model using AOLS algorithm.
-
-    The 'fit' function allows a friendly usage by the user.
-    Given two arguments, X and y, fit training data.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the training process.
-    y : ndarray of floats
-        The output data to be used in the training process.
-
-    Returns
-    -------
-    model : ndarray of int
-        The model code representation.
-    piv : array-like of shape = number_of_model_elements
-        Contains the index to put the regressors in the correct order
-        based on err values.
-    theta : array-like of shape = number_of_model_elements
-        The estimated parameters of the model.
-    err : array-like of shape = number_of_model_elements
-        The respective ERR calculated for each regressor.
-    info_values : array-like of shape = n_regressor
-        Vector with values of akaike's information criterion
-        for models with N terms (where N is the
-        vector position + 1).
-
-    """
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    if self.model_type == "NAR":
-        lagged_data = self.build_output_matrix(y)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NFIR":
-        lagged_data = self.build_input_matrix(X)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NARMAX":
-        check_X_y(X, y)
-        self.max_lag = self._get_max_lag()
-        lagged_data = self.build_input_output_matrix(X, y)
-    else:
-        raise ValueError(
-            "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-        )
-
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        reg_matrix = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-    else:
-        reg_matrix, self.ensemble = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-
-    if X is not None:
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    self.regressor_code = self.regressor_space(self.n_inputs)
-
-    (self.theta, self.pivv, self.res) = self.aols(reg_matrix, y)
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        self.final_model = self.regressor_code[self.pivv, :].copy()
-    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-        basis_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-        self.final_model = self.regressor_code[self.pivv, :].copy()
-    else:
-        self.regressor_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.final_model = self.regressor_code[self.pivv, :].copy()
-
-    self.n_terms = len(
-        self.theta
-    )  # the number of terms we selected (necessary in the 'results' methods)
-    self.err = self.n_terms * [
-        0
-    ]  # just to use the `results` method. Will be changed in next update.
-    return self
-

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict method provides a friendly interface: given a previously trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Parameters:

    X : ndarray of floats, default=None
        The input data to be used in the prediction process.
    y : ndarray of floats, default=None
        The output data to be used in the prediction process.
    steps_ahead : int, default=None
        The user can choose free-run simulation, one-step-ahead prediction or n-steps-ahead prediction.
    forecast_horizon : int, default=None
        The number of predictions over the time.

Returns:

    yhat : ndarray of floats
        The predicted values of the model.

Source code in sysidentpy\model_structure_selection\accelerated_orthogonal_least_squares.py
@@ -1529,72 +1486,61 @@
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted values given an input.
-
-    The predict function allows a friendly usage by the user.
-    Given a previously trained model, predict values given
-    a new set of data.
-
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future)
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-    steps_ahead : int (default = None)
-        The user can use free run simulation, one-step ahead prediction
-        and n-step ahead prediction.
-    forecast_horizon : int, default=None
-        The number of predictions over the time.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input.
+
+    The predict function allows a friendly usage by the user.
+    Given a previously trained model, predict values given
+    a new set of data.
+
+    This method accepts y values mainly for prediction n-steps ahead
+    (to be implemented in the future)
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+    steps_ahead : int (default = None)
+        The user can use free run simulation, one-step ahead prediction
+        and n-step ahead prediction.
+    forecast_horizon : int, default=None
+        The number of predictions over the time.
+
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
 
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    if steps_ahead is None:
-        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-    if steps_ahead == 1:
-        yhat = self._one_step_ahead_prediction(X, y)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    yhat = self._basis_function_n_step_prediction(
-        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-    )
-    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-    return yhat
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    if steps_ahead is None:
+        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+    if steps_ahead == 1:
+        yhat = self._one_step_ahead_prediction(X, y)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    yhat = self._basis_function_n_step_prediction(
+        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+    )
+    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+    return yhat
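Every branch of `predict` above ends the same way: the first `max_lag` measured samples are prepended to the predictions so the returned series is aligned with, and as long as, the input series. In miniature, with a hypothetical prediction array:

```python
import numpy as np

max_lag = 2
y = np.arange(6.0).reshape(-1, 1)    # measured output, column vector
yhat = np.full((4, 1), 10.0)         # hypothetical predictions for k >= max_lag

# same concatenation used by predict(): initial conditions + predictions
full = np.concatenate([y[:max_lag], yhat], axis=0)
```

The first `max_lag` entries of the result are therefore always the measured initial conditions, not model output.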
 
\ No newline at end of file
diff --git a/docs/code/entropic-regression/index.html b/docs/code/entropic-regression/index.html
index 682380b1..dae2b5ba 100644
--- a/docs/code/entropic-regression/index.html
+++ b/docs/code/entropic-regression/index.html
@@ -41,7 +41,12 @@
 0 x1(k-2) 0.9000 0.0
 1 y(k-1) 0.1999 0.0
 2 x1(k-1)y(k-1) 0.1000 0.0
-

+

References

  • Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic Regression Beats the Outliers Problem in Nonlinear System Identification. Chaos 30, 013107 (2020).
  • Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
Source code in sysidentpy\model_structure_selection\entropic_regression.py
@@ -900,905 +905,876 @@
@deprecated(
-    version="v0.3.0",
-    future_version="v0.4.0",
-    message=(
-        "Passing a string to define the estimator will rise an error in v0.4.0."
-        " \n You'll have to use ER(estimator=LeastSquares()) instead. \n The"
-        " only change is that you'll have to define the estimator first instead"
-        " of passing a string like 'least_squares'. \n This change will make"
-        " easier to implement new estimators and it'll improve code"
-        " readability."
-    ),
-)
-class ER(Estimators, BaseMSS):
-    r"""Entropic Regression Algorithm
-
-    Build Polynomial NARMAX model using the Entropic Regression Algorithm ([1]_).
-    This algorithm is based on the Matlab package available on:
-    https://github.com/almomaa/ERFit-Package
-
-    The NARMAX model is described as:
@deprecated(
+    version="v0.3.0",
+    future_version="v0.4.0",
+    message=(
+        "Passing a string to define the estimator will raise an error in v0.4.0."
+        " \n You'll have to use ER(estimator=LeastSquares()) instead. \n The"
+        " only change is that you'll have to define the estimator first instead"
+        " of passing a string like 'least_squares'. \n This change will make"
+        " easier to implement new estimators and it'll improve code"
+        " readability."
+    ),
+)
+class ER(Estimators, BaseMSS):
+    r"""Entropic Regression Algorithm
+
+    Build Polynomial NARMAX model using the Entropic Regression Algorithm ([1]_).
+    This algorithm is based on the Matlab package available on:
+    https://github.com/almomaa/ERFit-Package
+
+    The NARMAX model is described as:
+
+    $$
+        y_k= F^\ell[y_{k-1}, \dotsc, y_{k-n_y},x_{k-d}, x_{k-d-1}, \dotsc, x_{k-d-n_x},
+        e_{k-1}, \dotsc, e_{k-n_e}] + e_k
+    $$
 
-    $$
-        y_k= F^\ell[y_{k-1}, \dotsc, y_{k-n_y},x_{k-d}, x_{k-d-1}, \dotsc, x_{k-d-n_x},
-        e_{k-1}, \dotsc, e_{k-n_e}] + e_k
-    $$
-
-    where $n_y\in \mathbb{N}^*$, $n_x \in \mathbb{N}$, $n_e \in \mathbb{N}$,
-    are the maximum lags for the system output and input respectively;
-    $x_k \in \mathbb{R}^{n_x}$ is the system input and $y_k \in \mathbb{R}^{n_y}$
-    is the system output at discrete time $k \in \mathbb{N}^n$;
-    $e_k \in \mathbb{R}^{n_e}$ stands for uncertainties and possible noise
-    at discrete time $k$. In this case, $\mathcal{F}^\ell$ is some nonlinear function
-    of the input and output regressors with nonlinearity degree $\ell \in \mathbb{N}$
-    and $d$ is a time delay typically set to $d=1$.
-
-    Parameters
-    ----------
-    ylag : int, default=2
-        The maximum lag of the output.
-    xlag : int, default=2
-        The maximum lag of the input.
-    k : int, default=2
-        The kth nearest neighbor to be used in estimation.
-    q : float, default=0.99
-        Quantile to compute, which must be between 0 and 1 inclusive.
-    p : default=inf,
-        Lp Measure of the distance in Knn estimator.
-    n_perm: int, default=200
-        Number of permutation to be used in shuffle test
-    estimator : str, default="least_squares"
-        The parameter estimation method.
-    skip_forward = bool, default=False
-        To be used for difficult and highly uncertain problems.
-        Skipping the forward selection results in more accurate solution,
-        but comes with higher computational cost.
-    lam : float, default=0.98
-        Forgetting factor of the Recursive Least Squares method.
-    delta : float, default=0.01
-        Normalization factor of the P matrix.
-    offset_covariance : float, default=0.2
-        The offset covariance factor of the affine least mean squares
-        filter.
-    mu : float, default=0.01
-        The convergence coefficient (learning rate) of the filter.
-    eps : float
-        Normalization factor of the normalized filters.
-    gama : float, default=0.2
-        The leakage factor of the Leaky LMS method.
-    weight : float, default=0.02
-        Weight factor to control the proportions of the error norms
-        and offers an extra degree of freedom within the adaptation
-        of the LMS mixed norm method.
-    model_type: str, default="NARMAX"
-        The user can choose "NARMAX", "NAR" and "NFIR" models
-
-    Examples
-    --------
-    >>> import numpy as np
-    >>> import matplotlib.pyplot as plt
-    >>> from sysidentpy.model_structure_selection import ER
-    >>> from sysidentpy.basis_function._basis_function import Polynomial
-    >>> from sysidentpy.utils.display_results import results
-    >>> from sysidentpy.metrics import root_relative_squared_error
-    >>> from sysidentpy.utils.generate_data import get_miso_data, get_siso_data
-    >>> x_train, x_valid, y_train, y_valid = get_siso_data(n=1000,
-    ...                                                    colored_noise=True,
-    ...                                                    sigma=0.2,
-    ...                                                    train_percentage=90)
-    >>> basis_function = Polynomial(degree=2)
-    >>> model = ER(basis_function=basis_function,
-    ...              ylag=2, xlag=2
-    ...              )
-    >>> model.fit(x_train, y_train)
-    >>> yhat = model.predict(x_valid, y_valid)
-    >>> rrse = root_relative_squared_error(y_valid, yhat)
-    >>> print(rrse)
-    0.001993603325328823
-    >>> r = pd.DataFrame(
-    ...     results(
-    ...         model.final_model, model.theta, model.err,
-    ...         model.n_terms, err_precision=8, dtype='sci'
-    ...         ),
-    ...     columns=['Regressors', 'Parameters', 'ERR'])
-    >>> print(r)
-        Regressors Parameters         ERR
-    0        x1(k-2)     0.9000       0.0
-    1         y(k-1)     0.1999       0.0
-    2  x1(k-1)y(k-1)     0.1000       0.0
-
-    References
-    ----------
-    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-        Regression Beats the Outliers Problem in Nonlinear System
-        Identification. Chaos 30, 013107 (2020).
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-
-    """
-
-    def __init__(
-        self,
-        *,
-        ylag: Union[int, list] = 1,
-        xlag: Union[int, list] = 1,
-        q: float = 0.99,
-        estimator: str = "least_squares",
-        extended_least_squares: bool = False,
-        h: float = 0.01,
-        k: int = 2,
-        mutual_information_estimator: str = "mutual_information_knn",
-        n_perm: int = 200,
-        p: Union[float, int] = np.inf,
-        skip_forward: bool = False,
-        lam: float = 0.98,
-        delta: float = 0.01,
-        offset_covariance: float = 0.2,
-        mu: float = 0.01,
-        eps: float = np.finfo(np.float64).eps,
-        gama: float = 0.2,
-        weight: float = 0.02,
-        model_type: str = "NARMAX",
-        basis_function: Union[Polynomial, Fourier] = Polynomial(),
-        random_state: Union[int, None] = None,
-    ):
-        self.basis_function = basis_function
-        self.model_type = model_type
-        self.xlag = xlag
-        self.ylag = ylag
-        self.non_degree = basis_function.degree
-        self.max_lag = self._get_max_lag()
-        self.k = k
-        self.estimator = estimator
-        self.extended_least_squares = extended_least_squares
-        self.q = q
-        self.h = h
-        self.mutual_information_estimator = mutual_information_estimator
-        self.n_perm = n_perm
-        self.p = p
-        self.skip_forward = skip_forward
-        self.random_state = random_state
-        self.rng = check_random_state(random_state)
-        self.tol = None
-        self.ensemble = None
-        self.n_inputs = None
-        self.estimated_tolerance = None
-        self.regressor_code = None
-        self.final_model = None
-        self.theta = None
-        self.n_terms = None
-        self.err = None
-        self.pivv = None
-        self._validate_params()
-        super().__init__(
-            lam=lam,
-            delta=delta,
-            offset_covariance=offset_covariance,
-            mu=mu,
-            eps=eps,
-            gama=gama,
-            weight=weight,
-            basis_function=basis_function,
-        )
-
-    def _validate_params(self):
-        """Validate input params."""
-        if isinstance(self.ylag, int) and self.ylag < 1:
-            raise ValueError("ylag must be integer and > zero. Got %f" % self.ylag)
-
-        if isinstance(self.xlag, int) and self.xlag < 1:
-            raise ValueError("xlag must be integer and > zero. Got %f" % self.xlag)
-
-        if not isinstance(self.xlag, (int, list)):
-            raise ValueError("xlag must be integer and > zero. Got %f" % self.xlag)
-
-        if not isinstance(self.ylag, (int, list)):
-            raise ValueError("ylag must be integer and > zero. Got %f" % self.ylag)
-
-        if not isinstance(self.k, int) or self.k < 1:
-            raise ValueError("k must be integer and > zero. Got %f" % self.k)
-
-        if not isinstance(self.n_perm, int) or self.n_perm < 1:
-            raise ValueError("n_perm must be integer and > zero. Got %f" % self.n_perm)
-
-        if not isinstance(self.q, float) or self.q > 1 or self.q <= 0:
-            raise ValueError(
-                "q must be float and must be between 0 and 1 inclusive. Got %f" % self.q
-            )
-
-        if not isinstance(self.skip_forward, bool):
-            raise TypeError(
-                "skip_forward must be False or True. Got %f" % self.skip_forward
-            )
-
-        if not isinstance(self.extended_least_squares, bool):
-            raise TypeError(
-                "extended_least_squares must be False or True. Got %f"
-                % self.extended_least_squares
-            )
-
-        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
-            raise ValueError(
-                "model_type must be NARMAX, NAR or NFIR. Got %s" % self.model_type
-            )
-
-    def mutual_information_knn(self, y, y_perm):
-        """Finds the mutual information.
+    where $n_y\in \mathbb{N}^*$, $n_x \in \mathbb{N}$, and $n_e \in \mathbb{N}$
+    are the maximum lags for the system output, input, and noise, respectively;
+    $x_k \in \mathbb{R}^{n_x}$ is the system input and $y_k \in \mathbb{R}^{n_y}$
+    is the system output at discrete time $k \in \mathbb{N}^n$;
+    $e_k \in \mathbb{R}^{n_e}$ stands for uncertainties and possible noise
+    at discrete time $k$. In this case, $\mathcal{F}^\ell$ is some nonlinear function
+    of the input and output regressors with nonlinearity degree $\ell \in \mathbb{N}$
+    and $d$ is a time delay typically set to $d=1$.
+
+    Parameters
+    ----------
+    ylag : int, default=1
+        The maximum lag of the output.
+    xlag : int, default=1
+        The maximum lag of the input.
+    k : int, default=2
+        The kth nearest neighbor to be used in estimation.
+    q : float, default=0.99
+        Quantile to compute, which must be in the interval (0, 1].
+    p : float, default=inf
+        The Lp measure of distance in the KNN estimator.
+    n_perm : int, default=200
+        Number of permutations used in the shuffle test.
+    estimator : str, default="least_squares"
+        The parameter estimation method.
+    skip_forward : bool, default=False
+        To be used for difficult and highly uncertain problems.
+        Skipping the forward selection results in a more accurate solution,
+        but comes with a higher computational cost.
+    lam : float, default=0.98
+        Forgetting factor of the Recursive Least Squares method.
+    delta : float, default=0.01
+        Normalization factor of the P matrix.
+    offset_covariance : float, default=0.2
+        The offset covariance factor of the affine least mean squares
+        filter.
+    mu : float, default=0.01
+        The convergence coefficient (learning rate) of the filter.
+    eps : float, default=np.finfo(np.float64).eps
+        Normalization factor of the normalized filters.
+    gama : float, default=0.2
+        The leakage factor of the Leaky LMS method.
+    weight : float, default=0.02
+        Weight factor to control the proportions of the error norms
+        and offers an extra degree of freedom within the adaptation
+        of the LMS mixed norm method.
+    model_type: str, default="NARMAX"
+        The user can choose between "NARMAX", "NAR", and "NFIR" models.
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> import pandas as pd
+    >>> import matplotlib.pyplot as plt
+    >>> from sysidentpy.model_structure_selection import ER
+    >>> from sysidentpy.basis_function._basis_function import Polynomial
+    >>> from sysidentpy.utils.display_results import results
+    >>> from sysidentpy.metrics import root_relative_squared_error
+    >>> from sysidentpy.utils.generate_data import get_miso_data, get_siso_data
+    >>> x_train, x_valid, y_train, y_valid = get_siso_data(n=1000,
+    ...                                                    colored_noise=True,
+    ...                                                    sigma=0.2,
+    ...                                                    train_percentage=90)
+    >>> basis_function = Polynomial(degree=2)
+    >>> model = ER(basis_function=basis_function,
+    ...              ylag=2, xlag=2
+    ...              )
+    >>> model.fit(x_train, y_train)
+    >>> yhat = model.predict(x_valid, y_valid)
+    >>> rrse = root_relative_squared_error(y_valid, yhat)
+    >>> print(rrse)
+    0.001993603325328823
+    >>> r = pd.DataFrame(
+    ...     results(
+    ...         model.final_model, model.theta, model.err,
+    ...         model.n_terms, err_precision=8, dtype='sci'
+    ...         ),
+    ...     columns=['Regressors', 'Parameters', 'ERR'])
+    >>> print(r)
+        Regressors Parameters         ERR
+    0        x1(k-2)     0.9000       0.0
+    1         y(k-1)     0.1999       0.0
+    2  x1(k-1)y(k-1)     0.1000       0.0
+
+    References
+    ----------
+    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+        Regression Beats the Outliers Problem in Nonlinear System
+        Identification. Chaos 30, 013107 (2020).
+    - Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
+        Estimating mutual information. Physical Review E, 69:066138, 2004.
+
+    """
+
+    def __init__(
+        self,
+        *,
+        ylag: Union[int, list] = 1,
+        xlag: Union[int, list] = 1,
+        q: float = 0.99,
+        estimator: str = "least_squares",
+        extended_least_squares: bool = False,
+        h: float = 0.01,
+        k: int = 2,
+        mutual_information_estimator: str = "mutual_information_knn",
+        n_perm: int = 200,
+        p: Union[float, int] = np.inf,
+        skip_forward: bool = False,
+        lam: float = 0.98,
+        delta: float = 0.01,
+        offset_covariance: float = 0.2,
+        mu: float = 0.01,
+        eps: float = np.finfo(np.float64).eps,
+        gama: float = 0.2,
+        weight: float = 0.02,
+        model_type: str = "NARMAX",
+        basis_function: Union[Polynomial, Fourier] = Polynomial(),
+        random_state: Union[int, None] = None,
+    ):
+        self.basis_function = basis_function
+        self.model_type = model_type
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.xlag = xlag
+        self.ylag = ylag
+        self.non_degree = basis_function.degree
+        self.max_lag = self._get_max_lag()
+        self.k = k
+        self.estimator = estimator
+        self.extended_least_squares = extended_least_squares
+        self.q = q
+        self.h = h
+        self.mutual_information_estimator = mutual_information_estimator
+        self.n_perm = n_perm
+        self.p = p
+        self.skip_forward = skip_forward
+        self.random_state = random_state
+        self.rng = check_random_state(random_state)
+        self.tol = None
+        self.ensemble = None
+        self.n_inputs = None
+        self.estimated_tolerance = None
+        self.regressor_code = None
+        self.final_model = None
+        self.theta = None
+        self.n_terms = None
+        self.err = None
+        self.pivv = None
+        self._validate_params()
+        super().__init__(
+            lam=lam,
+            delta=delta,
+            offset_covariance=offset_covariance,
+            mu=mu,
+            eps=eps,
+            gama=gama,
+            weight=weight,
+            basis_function=basis_function,
+        )
+
+    def _validate_params(self):
+        """Validate input params."""
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if not isinstance(self.k, int) or self.k < 1:
+            raise ValueError(f"k must be integer and > zero. Got {self.k}")
+
+        if not isinstance(self.n_perm, int) or self.n_perm < 1:
+            raise ValueError(f"n_perm must be integer and > zero. Got {self.n_perm}")
+
+        if not isinstance(self.q, float) or self.q > 1 or self.q <= 0:
+            raise ValueError(
+                f"q must be float and in the interval (0, 1]. Got {self.q}"
+            )
+
+        if not isinstance(self.skip_forward, bool):
+            raise TypeError(
+                f"skip_forward must be False or True. Got {self.skip_forward}"
+            )
+
+        if not isinstance(self.extended_least_squares, bool):
+            raise TypeError(
+                "extended_least_squares must be False or True. Got"
+                f" {self.extended_least_squares}"
+            )
+
+        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+            )
+
+    def mutual_information_knn(self, y, y_perm):
+        """Finds the mutual information.
+
+        Finds the mutual information between $x$ and $y$ given $z$.
+
+        This code is based on Matlab Entropic Regression package.
 
-        Finds the mutual information between $x$ and $y$ given $z$.
-
-        This code is based on Matlab Entropic Regression package.
-
-        Parameters
-        ----------
-        y : ndarray of floats
-            The source signal.
-        y_perm : ndarray of floats
-            The destination signal.
-
-        Returns
-        -------
-        ksg_estimation : float
-            The conditioned mutual information.
-
-        References
-        ----------
-        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-            Regression Beats the Outliers Problem in Nonlinear System
-            Identification. Chaos 30, 013107 (2020).
+        Parameters
+        ----------
+        y : ndarray of floats
+            The source signal.
+        y_perm : ndarray of floats
+            The destination signal.
+
+        Returns
+        -------
+        ksg_estimation : float
+            The conditioned mutual information.
+
+        References
+        ----------
+        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+            Regression Beats the Outliers Problem in Nonlinear System
+            Identification. Chaos 30, 013107 (2020).
         - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
             Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-
-        """
-        joint_space = np.concatenate([y, y_perm], axis=1)
-        smallest_distance = np.sort(
-            cdist(joint_space, joint_space, "minkowski", p=self.p).T
-        )
-        idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
-        smallest_distance = smallest_distance[:, idx]
-        epsilon = smallest_distance[:, -1].reshape(-1, 1)
-        smallest_distance_y = cdist(y, y, "minkowski", p=self.p)
-        less_than_array_nx = np.array((smallest_distance_y < epsilon)).astype(int)
-        nx = (np.sum(less_than_array_nx, axis=1) - 1).reshape(-1, 1)
-        smallest_distance_y_perm = cdist(y_perm, y_perm, "minkowski", p=self.p)
-        less_than_array_ny = np.array((smallest_distance_y_perm < epsilon)).astype(int)
-        ny = (np.sum(less_than_array_ny, axis=1) - 1).reshape(-1, 1)
-        arr = psi(nx + 1) + psi(ny + 1)
-        ksg_estimation = (
-            psi(self.k) + psi(y.shape[0]) - np.nanmean(arr[np.isfinite(arr)])
-        )
-        return ksg_estimation
-
-    def entropic_regression_backward(self, reg_matrix, y, piv):
-        """Entropic Regression Backward Greedy Feature Elimination.
-
-        This algorithm is based on the Matlab package available on:
-        https://github.com/almomaa/ERFit-Package
-
-        Parameters
-        ----------
-        reg_matrix : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-        piv : ndarray of ints
-            The set of indices to investigate
-
-        Returns
-        -------
-        piv : ndarray of ints
-            The set of remaining indices after the
-            Backward Greedy Feature Elimination.
-
-        """
-        min_value = -np.inf
-        piv = np.array(piv)
-        ix = []
-        while (min_value <= self.tol) and (len(piv) > 1):
-            initial_array = np.full((1, len(piv)), np.inf)
-            for i in range(initial_array.shape[1]):
-                if piv[i] not in []:  # if you want to keep any regressor
-                    rem = np.setdiff1d(piv, piv[i])
-                    f1 = reg_matrix[:, piv] @ LA.pinv(reg_matrix[:, piv]) @ y
-                    f2 = reg_matrix[:, rem] @ LA.pinv(reg_matrix[:, rem]) @ y
-                    initial_array[0, i] = self.conditional_mutual_information(y, f1, f2)
+
+        """
+        joint_space = np.concatenate([y, y_perm], axis=1)
+        smallest_distance = np.sort(
+            cdist(joint_space, joint_space, "minkowski", p=self.p).T
+        )
+        idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
+        smallest_distance = smallest_distance[:, idx]
+        epsilon = smallest_distance[:, -1].reshape(-1, 1)
+        smallest_distance_y = cdist(y, y, "minkowski", p=self.p)
+        less_than_array_nx = np.array((smallest_distance_y < epsilon)).astype(int)
+        nx = (np.sum(less_than_array_nx, axis=1) - 1).reshape(-1, 1)
+        smallest_distance_y_perm = cdist(y_perm, y_perm, "minkowski", p=self.p)
+        less_than_array_ny = np.array((smallest_distance_y_perm < epsilon)).astype(int)
+        ny = (np.sum(less_than_array_ny, axis=1) - 1).reshape(-1, 1)
+        arr = psi(nx + 1) + psi(ny + 1)
+        ksg_estimation = (
+            psi(self.k) + psi(y.shape[0]) - np.nanmean(arr[np.isfinite(arr)])
+        )
+        return ksg_estimation
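The method above can be hard to follow inside the class. The following is a standalone sketch of the same KSG mutual-information estimator (Kraskov et al., 2004), not part of SysIdentPy's API; it uses the Chebyshev metric, which equals the Minkowski distance with `p = inf` used by the default `ER` configuration, and the function name `ksg_mutual_information` is illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import psi

def ksg_mutual_information(x, y, k=2):
    """KSG estimate of I(x; y) from paired samples (max-norm variant)."""
    n = x.shape[0]
    joint = np.concatenate([x, y], axis=1)
    # distance from each point to its k-th nearest neighbor in the joint space
    epsilon = np.sort(cdist(joint, joint, "chebyshev"), axis=1)[:, k].reshape(-1, 1)
    # count marginal neighbors strictly inside that radius (minus the point itself)
    nx = (cdist(x, x, "chebyshev") < epsilon).sum(axis=1) - 1
    ny = (cdist(y, y, "chebyshev") < epsilon).sum(axis=1) - 1
    return psi(k) + psi(n) - np.mean(psi(nx + 1) + psi(ny + 1))

rng = np.random.default_rng(42)
x = rng.normal(size=(500, 1))
mi_dep = ksg_mutual_information(x, x + 0.1 * rng.normal(size=(500, 1)))
mi_ind = ksg_mutual_information(x, rng.normal(size=(500, 1)))
```

Strongly dependent signals should score far higher than independent ones, which is the property the shuffle test and the forward/backward selection below rely on.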
+
+    def entropic_regression_backward(self, reg_matrix, y, piv):
+        """Entropic Regression Backward Greedy Feature Elimination.
+
+        This algorithm is based on the Matlab package available on:
+        https://github.com/almomaa/ERFit-Package
+
+        Parameters
+        ----------
+        reg_matrix : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        piv : ndarray of ints
+            The set of indices to investigate
+
+        Returns
+        -------
+        piv : ndarray of ints
+            The set of remaining indices after the
+            Backward Greedy Feature Elimination.
+
+        """
+        min_value = -np.inf
+        piv = np.array(piv)
+        ix = []
+        while (min_value <= self.tol) and (len(piv) > 1):
+            initial_array = np.full((1, len(piv)), np.inf)
+            for i in range(initial_array.shape[1]):
+                if piv[i] not in []:  # placeholder keep-list: add indices here to protect regressors
+                    rem = np.setdiff1d(piv, piv[i])
+                    f1 = reg_matrix[:, piv] @ LA.pinv(reg_matrix[:, piv]) @ y
+                    f2 = reg_matrix[:, rem] @ LA.pinv(reg_matrix[:, rem]) @ y
+                    initial_array[0, i] = self.conditional_mutual_information(y, f1, f2)
+
+            ix = np.argmin(initial_array)
+            min_value = initial_array[0, ix]
+            piv = np.delete(piv, ix)
 
-            ix = np.argmin(initial_array)
-            min_value = initial_array[0, ix]
-            piv = np.delete(piv, ix)
-
-        return piv
-
-    def entropic_regression_forward(self, reg_matrix, y):
-        """Entropic Regression Forward Greedy Feature Selection.
-
-        This algorithm is based on the Matlab package available on:
-        https://github.com/almomaa/ERFit-Package
-
-        Parameters
-        ----------
-        reg_matrix : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-
-        Returns
-        -------
-        selected_terms : ndarray of ints
-            The set of selected regressors after the
-            Forward Greedy Feature Selection.
-        success : boolean
-            Indicate if the forward selection succeed.
-            If high degree of uncertainty is detected, and many parameters are
-            selected, the success flag will be set to false. Then, the
-            backward elimination will be applied for all indices.
-
-        """
-        success = True
-        ix = []
-        selected_terms = []
-        reg_matrix_columns = np.array(list(range(reg_matrix.shape[1])))
-        self.tol = self.tolerance_estimator(y)
-        ksg_max = getattr(self, self.mutual_information_estimator)(
-            y, reg_matrix @ LA.pinv(reg_matrix) @ y
-        )
-        stop_criteria = False
-        while stop_criteria is False:
-            selected_terms = np.ravel(
-                [*selected_terms, *np.array([reg_matrix_columns[ix]])]
-            )
-            if len(selected_terms) != 0:
-                ksg_local = getattr(self, self.mutual_information_estimator)(
-                    y,
-                    reg_matrix[:, selected_terms]
-                    @ LA.pinv(reg_matrix[:, selected_terms])
-                    @ y,
+        return piv
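The `f1`/`f2` terms in the elimination loop are least-squares predictions of `y` built from subsets of regressor columns via the pseudo-inverse. A minimal sketch of that projection, with an illustrative (not SysIdentPy) regressor matrix `phi` whose informative columns are known by construction:

```python
import numpy as np
from numpy import linalg as LA

rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 5))                 # candidate regressor matrix
y = phi[:, [0, 3]] @ np.array([[2.0], [-1.0]])  # only columns 0 and 3 matter

def ls_prediction(cols):
    # least-squares prediction of y from the chosen columns only:
    # the phi @ pinv(phi) @ y projection used for f1 and f2 above
    return phi[:, cols] @ LA.pinv(phi[:, cols]) @ y

kept = ls_prediction([0, 3])       # retains the informative regressors
pruned = ls_prediction([1, 2, 4])  # drops them, so the fit degrades
```

Comparing how much information each candidate subset's prediction carries about `y` is what decides which index gets deleted from `piv`.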
+
+    def entropic_regression_forward(self, reg_matrix, y):
+        """Entropic Regression Forward Greedy Feature Selection.
+
+        This algorithm is based on the Matlab package available on:
+        https://github.com/almomaa/ERFit-Package
+
+        Parameters
+        ----------
+        reg_matrix : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+
+        Returns
+        -------
+        selected_terms : ndarray of ints
+            The set of selected regressors after the
+            Forward Greedy Feature Selection.
+        success : boolean
+            Indicate if the forward selection succeed.
+            If high degree of uncertainty is detected, and many parameters are
+            selected, the success flag will be set to false. Then, the
+            backward elimination will be applied for all indices.
+
+        """
+        success = True
+        ix = []
+        selected_terms = []
+        reg_matrix_columns = np.array(list(range(reg_matrix.shape[1])))
+        self.tol = self.tolerance_estimator(y)
+        ksg_max = getattr(self, self.mutual_information_estimator)(
+            y, reg_matrix @ LA.pinv(reg_matrix) @ y
+        )
+        stop_criteria = False
+        while stop_criteria is False:
+            selected_terms = np.ravel(
+                [*selected_terms, *np.array([reg_matrix_columns[ix]])]
+            )
+            if len(selected_terms) != 0:
+                ksg_local = getattr(self, self.mutual_information_estimator)(
+                    y,
+                    reg_matrix[:, selected_terms]
+                    @ LA.pinv(reg_matrix[:, selected_terms])
+                    @ y,
+                )
+            else:
+                ksg_local = getattr(self, self.mutual_information_estimator)(
+                    y, np.zeros_like(y)
                 )
-            else:
-                ksg_local = getattr(self, self.mutual_information_estimator)(
-                    y, np.zeros_like(y)
-                )
-
-            initial_vector = np.full((1, reg_matrix.shape[1]), -np.inf)
-            for i in range(reg_matrix.shape[1]):
-                if reg_matrix_columns[i] not in selected_terms:
-                    f1 = (
-                        reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
-                        @ LA.pinv(
-                            reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
-                        )
-                        @ y
-                    )
-                    if len(selected_terms) != 0:
-                        f2 = (
-                            reg_matrix[:, selected_terms]
-                            @ LA.pinv(reg_matrix[:, selected_terms])
-                            @ y
-                        )
-                    else:
-                        f2 = np.zeros_like(y)
-                    vp_estimation = self.conditional_mutual_information(y, f1, f2)
-                    initial_vector[0, i] = vp_estimation
-                else:
-                    continue
-
-            ix = np.nanargmax(initial_vector)
-            max_value = initial_vector[0, ix]
-
-            if (ksg_max - ksg_local <= self.tol) or (max_value <= self.tol):
-                stop_criteria = True
-            elif len(selected_terms) > np.max([8, reg_matrix.shape[1] / 2]):
-                success = False
-                stop_criteria = True
-
-        return selected_terms, success
+
+            initial_vector = np.full((1, reg_matrix.shape[1]), -np.inf)
+            for i in range(reg_matrix.shape[1]):
+                if reg_matrix_columns[i] not in selected_terms:
+                    f1 = (
+                        reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
+                        @ LA.pinv(
+                            reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
+                        )
+                        @ y
+                    )
+                    if len(selected_terms) != 0:
+                        f2 = (
+                            reg_matrix[:, selected_terms]
+                            @ LA.pinv(reg_matrix[:, selected_terms])
+                            @ y
+                        )
+                    else:
+                        f2 = np.zeros_like(y)
+                    vp_estimation = self.conditional_mutual_information(y, f1, f2)
+                    initial_vector[0, i] = vp_estimation
+                else:
+                    continue
+
+            ix = np.nanargmax(initial_vector)
+            max_value = initial_vector[0, ix]
+
+            if (ksg_max - ksg_local <= self.tol) or (max_value <= self.tol):
+                stop_criteria = True
+            elif len(selected_terms) > np.max([8, reg_matrix.shape[1] / 2]):
+                success = False
+                stop_criteria = True
+
+        return selected_terms, success
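The forward pass follows the classic greedy-selection pattern: repeatedly add the candidate that most improves the current model, and stop once the improvement falls below a tolerance. A simplified analogue, using squared prediction error instead of the KSG-based conditional mutual information (the function `forward_select` is illustrative, not part of SysIdentPy):

```python
import numpy as np

def forward_select(phi, y, tol=1e-8):
    """Greedy forward selection by residual-error reduction."""
    remaining = list(range(phi.shape[1]))
    selected = []
    best_err = float(np.sum(y**2))
    while remaining:
        errs = []
        for j in remaining:
            cols = selected + [j]
            theta, *_ = np.linalg.lstsq(phi[:, cols], y, rcond=None)
            errs.append(float(np.sum((y - phi[:, cols] @ theta) ** 2)))
        i = int(np.argmin(errs))
        if best_err - errs[i] <= tol:  # stop: no meaningful improvement left
            break
        best_err = errs[i]
        selected.append(remaining.pop(i))
    return selected

rng = np.random.default_rng(1)
phi = rng.normal(size=(300, 6))
y = 1.5 * phi[:, [2]] - 0.5 * phi[:, [4]]
chosen = forward_select(phi, y)
```

The entropic-regression version swaps the error criterion for conditional mutual information and adds the `success` escape hatch for highly uncertain problems, but the control flow is the same.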
+
+    def conditional_mutual_information(self, y, f1, f2):
+        """Finds the conditional mutual information.
+
+        Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
 
-    def conditional_mutual_information(self, y, f1, f2):
-        """Finds the conditional mutual information.
-        Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
-
-        This code is based on Matlab Entropic Regression package.
-        https://github.com/almomaa/ERFit-Package
-
-        Parameters
-        ----------
-        y : ndarray of floats
-            The source signal.
-        f1 : ndarray of floats
-            The destination signal.
-        f2 : ndarray of floats
-            The condition set.
-
-        Returns
-        -------
-        vp_estimation : float
-            The conditioned mutual information.
-
-        References
-        ----------
-        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-            Regression Beats the Outliers Problem in Nonlinear System
-            Identification. Chaos 30, 013107 (2020).
+        This code is based on Matlab Entropic Regression package.
+        https://github.com/almomaa/ERFit-Package
+
+        Parameters
+        ----------
+        y : ndarray of floats
+            The source signal.
+        f1 : ndarray of floats
+            The destination signal.
+        f2 : ndarray of floats
+            The condition set.
+
+        Returns
+        -------
+        vp_estimation : float
+            The conditional mutual information.
+
+        References
+        ----------
+        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+            Regression Beats the Outliers Problem in Nonlinear System
+            Identification. Chaos 30, 013107 (2020).
         - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
             Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-
-        """
-        joint_space = np.concatenate([y, f1, f2], axis=1)
-        smallest_distance = np.sort(
-            cdist(joint_space, joint_space, "minkowski", p=self.p).T
-        )
-        idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
-        smallest_distance = smallest_distance[:, idx]
-        epsilon = smallest_distance[:, -1].reshape(-1, 1)
-        # Find number of points from (y,f2), (f1,f2), and (f2,f2) that lies withing the
-        # k^{th} nearest neighbor distance from each point of themselves.
-        smallest_distance_y_f2 = cdist(
-            np.concatenate([y, f2], axis=1),
-            np.concatenate([y, f2], axis=1),
-            "minkowski",
-            p=self.p,
-        )
-        less_than_array_y_f2 = np.array((smallest_distance_y_f2 < epsilon)).astype(int)
-        y_f2 = (np.sum(less_than_array_y_f2, axis=1) - 1).reshape(-1, 1)
-
-        smallest_distance_f1_f2 = cdist(
-            np.concatenate([f1, f2], axis=1),
-            np.concatenate([f1, f2], axis=1),
-            "minkowski",
-            p=self.p,
-        )
-        less_than_array_f1_f2 = np.array((smallest_distance_f1_f2 < epsilon)).astype(
-            int
-        )
-        f1_f2 = (np.sum(less_than_array_f1_f2, axis=1) - 1).reshape(-1, 1)
-
-        smallest_distance_f2 = cdist(f2, f2, "minkowski", p=self.p)
-        less_than_array_f2 = np.array((smallest_distance_f2 < epsilon)).astype(int)
-        f2_f2 = (np.sum(less_than_array_f2, axis=1) - 1).reshape(-1, 1)
-        arr = psi(y_f2 + 1) + psi(f1_f2 + 1) - psi(f2_f2 + 1)
-        vp_estimation = psi(self.k) - np.nanmean(arr[np.isfinite(arr)])
-        return vp_estimation
+
+        """
+        joint_space = np.concatenate([y, f1, f2], axis=1)
+        smallest_distance = np.sort(
+            cdist(joint_space, joint_space, "minkowski", p=self.p).T
+        )
+        idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
+        smallest_distance = smallest_distance[:, idx]
+        epsilon = smallest_distance[:, -1].reshape(-1, 1)
+        # Count the points from (y, f2), (f1, f2), and (f2, f2) that lie within
+        # the k-th nearest-neighbor distance of each point in the respective set.
+        smallest_distance_y_f2 = cdist(
+            np.concatenate([y, f2], axis=1),
+            np.concatenate([y, f2], axis=1),
+            "minkowski",
+            p=self.p,
+        )
+        less_than_array_y_f2 = np.array((smallest_distance_y_f2 < epsilon)).astype(int)
+        y_f2 = (np.sum(less_than_array_y_f2, axis=1) - 1).reshape(-1, 1)
+
+        smallest_distance_f1_f2 = cdist(
+            np.concatenate([f1, f2], axis=1),
+            np.concatenate([f1, f2], axis=1),
+            "minkowski",
+            p=self.p,
+        )
+        less_than_array_f1_f2 = np.array((smallest_distance_f1_f2 < epsilon)).astype(
+            int
+        )
+        f1_f2 = (np.sum(less_than_array_f1_f2, axis=1) - 1).reshape(-1, 1)
+
+        smallest_distance_f2 = cdist(f2, f2, "minkowski", p=self.p)
+        less_than_array_f2 = np.array((smallest_distance_f2 < epsilon)).astype(int)
+        f2_f2 = (np.sum(less_than_array_f2, axis=1) - 1).reshape(-1, 1)
+        arr = psi(y_f2 + 1) + psi(f1_f2 + 1) - psi(f2_f2 + 1)
+        vp_estimation = psi(self.k) - np.nanmean(arr[np.isfinite(arr)])
+        return vp_estimation
+
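The neighbor-counting step above (find each point's k-th nearest-neighbor distance, then count the points strictly inside that radius) is the core of KSG-style estimators. A minimal NumPy-only sketch of that step, using the Chebyshev (max-norm) metric and a hypothetical `knn_radius_counts` helper rather than the library's `cdist`-based code:

```python
import numpy as np

def knn_radius_counts(points, k):
    """For each point, find the distance to its k-th nearest neighbor and
    count how many other points fall strictly inside that radius.

    Illustrative sketch of the counting step used by KSG-style mutual
    information estimators; not the library implementation.
    """
    # Pairwise Chebyshev (max-norm) distances via broadcasting.
    diff = np.abs(points[:, None, :] - points[None, :, :])
    dist = diff.max(axis=-1)
    # k-th nearest-neighbor distance (column 0 of the sort is the point itself).
    eps = np.sort(dist, axis=1)[:, k]
    # Strictly-inside counts, excluding the point itself.
    counts = (dist < eps[:, None]).sum(axis=1) - 1
    return eps, counts

pts = np.array([[0.0], [1.0], [3.0], [10.0]])
eps, counts = knn_radius_counts(pts, k=2)
```

The strict inequality mirrors the `smallest_distance < epsilon` comparison in the method above, so the k-th neighbor itself is never counted.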
+    def tolerance_estimator(self, y):
+        """Tolerance estimation for the mutual independence test.
+        Estimates the tolerance as the q-quantile of the mutual information
+        between $y$ and random permutations of itself.
 
-    def tolerance_estimator(self, y):
-        """Tolerance Estimation for mutual independence test.
-        Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
-
-        This code is based on Matlab Entropic Regression package.
-        https://github.com/almomaa/ERFit-Package
-
-        Parameters
-        ----------
-        y : ndarray of floats
-            The source signal.
-
-        Returns
-        -------
-        tol : float
-            The tolerance value given q.
-
-        References
-        ----------
-        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-            Regression Beats the Outliers Problem in Nonlinear System
-            Identification. Chaos 30, 013107 (2020).
+        This code is based on Matlab Entropic Regression package.
+        https://github.com/almomaa/ERFit-Package
+
+        Parameters
+        ----------
+        y : ndarray of floats
+            The source signal.
+
+        Returns
+        -------
+        tol : float
+            The tolerance value given q.
+
+        References
+        ----------
+        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+            Regression Beats the Outliers Problem in Nonlinear System
+            Identification. Chaos 30, 013107 (2020).
         - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
             Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-
-        """
-        ksg_estimation = []
-        # ksg_estimation = [
-        #     getattr(self, self.mutual_information_estimator)(y,
-        # self.rng.permutation(y))
-        #     for i in range(self.n_perm)
-        # ]
-
-        for _ in range(self.n_perm):
-            mutual_information_output = getattr(
-                self, self.mutual_information_estimator
-            )(y, self.rng.permutation(y))
-
-            ksg_estimation.append(mutual_information_output)
+
+        """
+        ksg_estimation = []
+
+        for _ in range(self.n_perm):
+            mutual_information_output = getattr(
+                self, self.mutual_information_estimator
+            )(y, self.rng.permutation(y))
+
+            ksg_estimation.append(mutual_information_output)
+
+        ksg_estimation = np.array(ksg_estimation)
+        tol = np.quantile(ksg_estimation, self.q)
+        return tol
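The permutation scheme above generalizes beyond mutual information: shuffle `y` to destroy any dependence, evaluate the statistic on each shuffled copy, and take the q-quantile of the resulting null distribution as the tolerance. A sketch with the absolute Pearson correlation standing in for the KSG estimator (the helper name `permutation_tolerance` is hypothetical):

```python
import numpy as np

def permutation_tolerance(y, statistic, n_perm=200, q=0.95, seed=0):
    """Estimate a significance tolerance as the q-quantile of `statistic`
    evaluated between y and random permutations of itself.

    Sketch of the idea behind tolerance_estimator; the library uses a
    KSG mutual-information estimator as the statistic.
    """
    rng = np.random.default_rng(seed)
    null = [statistic(y, rng.permutation(y)) for _ in range(n_perm)]
    return np.quantile(null, q)

abs_corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
y = np.sin(np.linspace(0, 6 * np.pi, 300))
tol = permutation_tolerance(y, abs_corr)
# A genuine dependence (y with itself) should clear the tolerance.
```

Statistics computed during the forward/backward search are then compared against `tol`: values below it are indistinguishable from the shuffled (independent) case.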
 
-        ksg_estimation = np.array(ksg_estimation)
-        tol = np.quantile(ksg_estimation, self.q)
-        return tol
-
-    def fit(self, *, X=None, y=None):
-        """Fit polynomial NARMAX model using AOLS algorithm.
-
-        The 'fit' function allows a friendly usage by the user.
-        Given two arguments, X and y, fit training data.
-
-        The Entropic Regression algorithm is based on the Matlab package available on:
-        https://github.com/almomaa/ERFit-Package
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the training process.
-        y : ndarray of floats
-            The output data to be used in the training process.
-
-        Returns
-        -------
-        model : ndarray of int
-            The model code representation.
-        theta : array-like of shape = number_of_model_elements
-            The estimated parameters of the model.
-
-        References
-        ----------
-        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-            Regression Beats the Outliers Problem in Nonlinear System
-            Identification. Chaos 30, 013107 (2020).
+    def fit(self, *, X=None, y=None):
+        """Fit polynomial NARMAX model using the Entropic Regression algorithm.
+
+        Given the input data X and the output data y, fit the model
+        to the training data.
+
+        The Entropic Regression algorithm is based on the Matlab package available on:
+        https://github.com/almomaa/ERFit-Package
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the training process.
+        y : ndarray of floats
+            The output data to be used in the training process.
+
+        Returns
+        -------
+        model : ndarray of int
+            The model code representation.
+        theta : array-like of shape = number_of_model_elements
+            The estimated parameters of the model.
+
+        References
+        ----------
+        - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+            Regression Beats the Outliers Problem in Nonlinear System
+            Identification. Chaos 30, 013107 (2020).
         - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
             Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
-        - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-            Estimating mutual information. Physical Review E, 69:066-138,2004
+
+        """
+        if y is None:
+            raise ValueError("y cannot be None")
 
-        """
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NARMAX":
-            check_X_y(X, y)
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X, y)
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X, y)
+
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            reg_matrix = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+        else:
+            reg_matrix, self.ensemble = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+
+        if X is not None:
+            self.n_inputs = _num_features(X)
         else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
+            self.n_inputs = 1  # just to create the regressor space base
+
+        self.regressor_code = self.regressor_space(self.n_inputs)
 
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            reg_matrix = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-        else:
-            reg_matrix, self.ensemble = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-
-        if X is not None:
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
-
-        self.regressor_code = self.regressor_space(self.n_inputs)
-
-        if self.regressor_code.shape[0] > 90:
-            warnings.warn(
-                (
-                    "Given the higher number of possible regressors"
-                    f" ({self.regressor_code.shape[0]}), the Entropic Regression"
-                    " algorithm may take long time to run. Consider reducing the"
-                    " number of regressors "
-                ),
-                stacklevel=2,
-            )
-
-        y_full = y.copy()
-        y = y[self.max_lag :].reshape(-1, 1)
-        self.tol = 0
-        ksg_estimation = []
-        for _ in range(self.n_perm):
-            mutual_information_output = getattr(
-                self, self.mutual_information_estimator
-            )(y, self.rng.permutation(y))
-            ksg_estimation.append(mutual_information_output)
-
-        ksg_estimation = np.array(ksg_estimation).reshape(-1, 1)
-        self.tol = np.quantile(ksg_estimation, self.q)
-        self.estimated_tolerance = self.tol
-        success = False
-        if not self.skip_forward:
-            selected_terms, success = self.entropic_regression_forward(reg_matrix, y)
-
-        if not success or self.skip_forward:
-            selected_terms = np.array(list(range(reg_matrix.shape[1])))
-
-        selected_terms_backward = self.entropic_regression_backward(
-            reg_matrix[:, selected_terms], y, list(range(len(selected_terms)))
-        )
-
-        final_model = selected_terms[selected_terms_backward]
-        # re-check for the constant term (add it to the estimated indices)
-        if 0 not in final_model:
-            final_model = np.array([0, *final_model])
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            self.final_model = self.regressor_code[final_model, :].copy()
-        elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-            basis_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-            self.final_model = self.regressor_code[final_model, :].copy()
-        else:
-            self.regressor_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.final_model = self.regressor_code[final_model, :].copy()
-
-        self.theta = getattr(self, self.estimator)(reg_matrix[:, final_model], y_full)
-        if (np.abs(self.theta[0]) < self.h) and (
-            np.sum((self.theta != 0).astype(int)) > 1
-        ):
-            self.theta = self.theta[1:].reshape(-1, 1)
-            self.final_model = self.final_model[1:, :]
-            final_model = final_model[1:]
+        if self.regressor_code.shape[0] > 90:
+            warnings.warn(
+                (
+                    "Given the large number of candidate regressors"
+                    f" ({self.regressor_code.shape[0]}), the Entropic Regression"
+                    " algorithm may take a long time to run. Consider reducing"
+                    " the number of regressors."
+                ),
+                stacklevel=2,
+            )
+
+        y_full = y.copy()
+        y = y[self.max_lag :].reshape(-1, 1)
+        self.tol = 0
+        ksg_estimation = []
+        for _ in range(self.n_perm):
+            mutual_information_output = getattr(
+                self, self.mutual_information_estimator
+            )(y, self.rng.permutation(y))
+            ksg_estimation.append(mutual_information_output)
+
+        ksg_estimation = np.array(ksg_estimation).reshape(-1, 1)
+        self.tol = np.quantile(ksg_estimation, self.q)
+        self.estimated_tolerance = self.tol
+        success = False
+        if not self.skip_forward:
+            selected_terms, success = self.entropic_regression_forward(reg_matrix, y)
+
+        if not success or self.skip_forward:
+            selected_terms = np.array(list(range(reg_matrix.shape[1])))
+
+        selected_terms_backward = self.entropic_regression_backward(
+            reg_matrix[:, selected_terms], y, list(range(len(selected_terms)))
+        )
+
+        final_model = selected_terms[selected_terms_backward]
+        # re-check for the constant term (add it to the estimated indices)
+        if 0 not in final_model:
+            final_model = np.array([0, *final_model])
+
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            self.final_model = self.regressor_code[final_model, :].copy()
+        elif self.ensemble:
+            basis_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+            self.final_model = self.regressor_code[final_model, :].copy()
+        else:
+            self.regressor_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.final_model = self.regressor_code[final_model, :].copy()
+
+        self.theta = getattr(self, self.estimator)(reg_matrix[:, final_model], y_full)
+        if (np.abs(self.theta[0]) < self.h) and (
+            np.sum((self.theta != 0).astype(int)) > 1
+        ):
+            self.theta = self.theta[1:].reshape(-1, 1)
+            self.final_model = self.final_model[1:, :]
+            final_model = final_model[1:]
+
+        self.n_terms = len(
+            self.theta
+        )  # the number of terms we selected (necessary in the 'results' methods)
+        self.err = self.n_terms * [
+            0
+        ]  # just to use the `results` method. Will be changed in next update.
+        self.pivv = final_model
+        return self
+
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted values given an input.
+
+        Given a previously trained model, predict the output values
+        for a new set of data.
 
-        self.n_terms = len(
-            self.theta
-        )  # the number of terms we selected (necessary in the 'results' methods)
-        self.err = self.n_terms * [
-            0
-        ]  # just to use the `results` method. Will be changed in next update.
-        self.pivv = final_model
-        return self
-
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted values given an input.
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int, default=None
+            The number of steps ahead. Use None for free-run simulation,
+            1 for one-step-ahead prediction, or n for n-steps-ahead prediction.
+        forecast_horizon : int, default=None
+            The number of predictions over time.
 
-        The predict function allows a friendly usage by the user.
-        Given a previously trained model, predict values given
-        a new set of data.
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-        steps_ahead : int (default = None)
-            The user can use free run simulation, one-step ahead prediction
-            and n-step ahead prediction.
-        forecast_horizon : int, default=None
-            The number of predictions over the time.
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
 
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
 
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        if steps_ahead is None:
-            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        yhat = self._basis_function_n_step_prediction(
-            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-        )
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
+        if steps_ahead is None:
+            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        yhat = self._basis_function_n_step_prediction(
+            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+        )
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
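Every branch of `predict` ends with the same concatenation: the first `max_lag` samples of `y` (the initial conditions) are prepended to the simulated values so that `yhat` aligns sample-by-sample with `y`. A toy illustration, assuming a hypothetical `max_lag` of 2 and a placeholder array standing in for the model's simulated output:

```python
import numpy as np

max_lag = 2
y = np.arange(6, dtype=float).reshape(-1, 1)

# Stand-in for the model's simulated output, which covers only the
# samples after the initial conditions.
simulated = np.full((len(y) - max_lag, 1), -1.0)

# predict() prepends the initial conditions so yhat lines up with y.
yhat = np.concatenate([y[:max_lag], simulated], axis=0)
```

This is why `yhat` always has the same length as `y` in free-run simulation, and why the first `max_lag` "predictions" reproduce the provided initial conditions exactly.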
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
 
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
+        """
+        lagged_data = self.build_matrix(X, y)
 
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            X_base = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+        else:
+            X_base, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
 
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            X_base = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-        else:
-            X_base, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-
-        yhat = super()._one_step_ahead_prediction(X_base)
-        return yhat.reshape(-1, 1)
-
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        yhat = super()._one_step_ahead_prediction(X_base)
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
+        return yhat
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinity steps-ahead simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Number of initial conditions values of output
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
-        return yhat
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+        elif self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+        else:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+            )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
 
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        elif self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-        else:
-            raise Exception(
-                "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-            )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
+        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
+        return y_output
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = super()._nfir_predict(X, y_initial)
+        return y_output
+
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
+        return yhat.reshape(-1, 1)
 
-        if self.model_type == "NAR":
-            self.n_inputs = 0
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """Perform the n-steps-ahead prediction of a model.
 
-        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
-        return y_output
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = super()._nfir_predict(X, y_initial)
-        return y_output
-
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
 
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        """
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        yhat = super()._basis_function_n_step_prediction(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        yhat = super()._basis_function_n_step_prediction(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        yhat = super()._basis_function_n_steps_horizon(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-

conditional_mutual_information(y, f1, f2)

Finds the conditional mutual information between \(y\) and \(f1\) given \(f2\).

This code is based on the Matlab Entropic Regression package: https://github.com/almomaa/ERFit-Package

Parameters:

- y (ndarray of floats, required): The source signal.
- f1 (ndarray of floats, required): The destination signal.
- f2 (ndarray of floats, required): The condition set.

Returns:

- vp_estimation (float): The conditioned mutual information.

References
  • Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic Regression Beats the Outliers Problem in Nonlinear System Identification. Chaos 30, 013107 (2020).
  • Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
Source code in sysidentpy\model_structure_selection\entropic_regression.py
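To build intuition for what `conditional_mutual_information` estimates, here is a hedged numeric sketch that is not the library's KSG estimator: for jointly Gaussian data, \(I(y; f1 \mid f2)\) has the closed form \(-\tfrac{1}{2}\ln(1 - r^2)\), where \(r\) is the partial correlation of \(y\) and \(f1\) given \(f2\).

```python
import numpy as np

def gaussian_cmi(y, f1, f2):
    """I(y; f1 | f2) for (approximately) Gaussian data, computed from the
    partial correlation of least-squares regression residuals."""
    def residual(a, b):
        # Residual of regressing a on [1, b].
        A = np.column_stack([np.ones_like(b), b])
        coef, *_ = np.linalg.lstsq(A, a, rcond=None)
        return a - A @ coef
    ry, rf1 = residual(y, f2), residual(f1, f2)
    r = np.corrcoef(ry, rf1)[0, 1]
    return -0.5 * np.log(1.0 - r**2)

rng = np.random.default_rng(0)
f1 = rng.normal(size=5000)
f2 = rng.normal(size=5000)
y = f1 + 0.5 * rng.normal(size=5000)

# f1 stays informative about y after conditioning on the irrelevant f2 ...
assert gaussian_cmi(y, f1, f2) > 0.5
# ... while f2 carries (almost) no information about y given f1.
assert gaussian_cmi(y, f2, f1) < 0.05
```

The KSG estimator used in the library makes no Gaussian assumption, but both quantities agree in sign: near-zero means the candidate regressor adds no information beyond the condition set.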
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        yhat = super()._basis_function_n_steps_horizon(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
+

@@ -1862,80 +1838,80 @@
def conditional_mutual_information(self, y, f1, f2):
-    """Finds the conditional mutual information.
-    Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
-
-    This code is based on Matlab Entropic Regression package.
-    https://github.com/almomaa/ERFit-Package
-
-    Parameters
-    ----------
-    y : ndarray of floats
-        The source signal.
-    f1 : ndarray of floats
-        The destination signal.
-    f2 : ndarray of floats
-        The condition set.
-
-    Returns
-    -------
-    vp_estimation : float
-        The conditioned mutual information.
-
-    References
-    ----------
-    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-        Regression Beats the Outliers Problem in Nonlinear System
-        Identification. Chaos 30, 013107 (2020).
def conditional_mutual_information(self, y, f1, f2):
+    """Finds the conditional mutual information.
+    Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
+
+    This code is based on Matlab Entropic Regression package.
+    https://github.com/almomaa/ERFit-Package
+
+    Parameters
+    ----------
+    y : ndarray of floats
+        The source signal.
+    f1 : ndarray of floats
+        The destination signal.
+    f2 : ndarray of floats
+        The condition set.
+
+    Returns
+    -------
+    vp_estimation : float
+        The conditioned mutual information.
+
+    References
+    ----------
+    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+        Regression Beats the Outliers Problem in Nonlinear System
+        Identification. Chaos 30, 013107 (2020).
    - Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
        Estimating mutual information. Physical Review E, 69:066138, 2004.
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-
-    """
-    joint_space = np.concatenate([y, f1, f2], axis=1)
-    smallest_distance = np.sort(
-        cdist(joint_space, joint_space, "minkowski", p=self.p).T
-    )
-    idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
-    smallest_distance = smallest_distance[:, idx]
-    epsilon = smallest_distance[:, -1].reshape(-1, 1)
-    # Find number of points from (y,f2), (f1,f2), and (f2,f2) that lies withing the
-    # k^{th} nearest neighbor distance from each point of themselves.
-    smallest_distance_y_f2 = cdist(
-        np.concatenate([y, f2], axis=1),
-        np.concatenate([y, f2], axis=1),
-        "minkowski",
-        p=self.p,
-    )
-    less_than_array_y_f2 = np.array((smallest_distance_y_f2 < epsilon)).astype(int)
-    y_f2 = (np.sum(less_than_array_y_f2, axis=1) - 1).reshape(-1, 1)
-
-    smallest_distance_f1_f2 = cdist(
-        np.concatenate([f1, f2], axis=1),
-        np.concatenate([f1, f2], axis=1),
-        "minkowski",
-        p=self.p,
-    )
-    less_than_array_f1_f2 = np.array((smallest_distance_f1_f2 < epsilon)).astype(
-        int
-    )
-    f1_f2 = (np.sum(less_than_array_f1_f2, axis=1) - 1).reshape(-1, 1)
-
-    smallest_distance_f2 = cdist(f2, f2, "minkowski", p=self.p)
-    less_than_array_f2 = np.array((smallest_distance_f2 < epsilon)).astype(int)
-    f2_f2 = (np.sum(less_than_array_f2, axis=1) - 1).reshape(-1, 1)
-    arr = psi(y_f2 + 1) + psi(f1_f2 + 1) - psi(f2_f2 + 1)
-    vp_estimation = psi(self.k) - np.nanmean(arr[np.isfinite(arr)])
-    return vp_estimation
-

entropic_regression_backward(reg_matrix, y, piv)

Entropic Regression Backward Greedy Feature Elimination.

This algorithm is based on the Matlab package available at: https://github.com/almomaa/ERFit-Package

Parameters:

- reg_matrix (ndarray of floats, required): The input data to be used in the prediction process.
- y (ndarray of floats, required): The output data to be used in the prediction process.
- piv (ndarray of ints, required): The set of indices to investigate.

Returns:

- piv (ndarray of ints): The set of remaining indices after the Backward Greedy Feature Elimination.

Source code in sysidentpy\model_structure_selection\entropic_regression.py
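A minimal sketch of the backward greedy loop described above, with the conditional-mutual-information score swapped for a simple R² drop (purely illustrative; the `backward_eliminate` name, scoring choice, and data are assumptions, not the library's implementation):

```python
import numpy as np

def backward_eliminate(reg_matrix, y, piv, tol=1e-3):
    """Repeatedly drop the regressor whose removal costs the least fit
    quality, until the cheapest removal costs more than tol."""
    def r2(cols):
        yhat = reg_matrix[:, cols] @ np.linalg.pinv(reg_matrix[:, cols]) @ y
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    piv = list(piv)
    while len(piv) > 1:
        losses = [r2(piv) - r2([c for c in piv if c != p]) for p in piv]
        i = int(np.argmin(losses))
        if losses[i] > tol:  # every remaining regressor is needed
            break
        piv.pop(i)
    return piv

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.01 * rng.normal(size=200)
kept = backward_eliminate(X, y, piv=[0, 1, 2, 3, 4])
assert set(kept) == {0, 3}  # only the truly active columns survive
```

The library's version plays the same game but scores each removal with the conditional mutual information between the full-model prediction and the reduced-model prediction, so it can detect nonlinear relevance that R² misses.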
+
+    """
+    joint_space = np.concatenate([y, f1, f2], axis=1)
+    smallest_distance = np.sort(
+        cdist(joint_space, joint_space, "minkowski", p=self.p).T
+    )
+    idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
+    smallest_distance = smallest_distance[:, idx]
+    epsilon = smallest_distance[:, -1].reshape(-1, 1)
+    # Find the number of points from (y,f2), (f1,f2), and (f2,f2) that lie within
+    # the k-th nearest neighbor distance from each point of themselves.
+    smallest_distance_y_f2 = cdist(
+        np.concatenate([y, f2], axis=1),
+        np.concatenate([y, f2], axis=1),
+        "minkowski",
+        p=self.p,
+    )
+    less_than_array_y_f2 = np.array((smallest_distance_y_f2 < epsilon)).astype(int)
+    y_f2 = (np.sum(less_than_array_y_f2, axis=1) - 1).reshape(-1, 1)
+
+    smallest_distance_f1_f2 = cdist(
+        np.concatenate([f1, f2], axis=1),
+        np.concatenate([f1, f2], axis=1),
+        "minkowski",
+        p=self.p,
+    )
+    less_than_array_f1_f2 = np.array((smallest_distance_f1_f2 < epsilon)).astype(
+        int
+    )
+    f1_f2 = (np.sum(less_than_array_f1_f2, axis=1) - 1).reshape(-1, 1)
+
+    smallest_distance_f2 = cdist(f2, f2, "minkowski", p=self.p)
+    less_than_array_f2 = np.array((smallest_distance_f2 < epsilon)).astype(int)
+    f2_f2 = (np.sum(less_than_array_f2, axis=1) - 1).reshape(-1, 1)
+    arr = psi(y_f2 + 1) + psi(f1_f2 + 1) - psi(f2_f2 + 1)
+    vp_estimation = psi(self.k) - np.nanmean(arr[np.isfinite(arr)])
+    return vp_estimation
+

@@ -1969,50 +1945,50 @@
def entropic_regression_backward(self, reg_matrix, y, piv):
-    """Entropic Regression Backward Greedy Feature Elimination.
-
-    This algorithm is based on the Matlab package available on:
-    https://github.com/almomaa/ERFit-Package
-
-    Parameters
-    ----------
-    reg_matrix : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-    piv : ndarray of ints
-        The set of indices to investigate
-
-    Returns
-    -------
-    piv : ndarray of ints
-        The set of remaining indices after the
-        Backward Greedy Feature Elimination.
-
-    """
-    min_value = -np.inf
-    piv = np.array(piv)
-    ix = []
-    while (min_value <= self.tol) and (len(piv) > 1):
-        initial_array = np.full((1, len(piv)), np.inf)
-        for i in range(initial_array.shape[1]):
-            if piv[i] not in []:  # if you want to keep any regressor
-                rem = np.setdiff1d(piv, piv[i])
-                f1 = reg_matrix[:, piv] @ LA.pinv(reg_matrix[:, piv]) @ y
-                f2 = reg_matrix[:, rem] @ LA.pinv(reg_matrix[:, rem]) @ y
-                initial_array[0, i] = self.conditional_mutual_information(y, f1, f2)
def entropic_regression_backward(self, reg_matrix, y, piv):
+    """Entropic Regression Backward Greedy Feature Elimination.
+
+    This algorithm is based on the Matlab package available on:
+    https://github.com/almomaa/ERFit-Package
+
+    Parameters
+    ----------
+    reg_matrix : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+    piv : ndarray of ints
+        The set of indices to investigate
+
+    Returns
+    -------
+    piv : ndarray of ints
+        The set of remaining indices after the
+        Backward Greedy Feature Elimination.
+
+    """
+    min_value = -np.inf
+    piv = np.array(piv)
+    ix = []
+    while (min_value <= self.tol) and (len(piv) > 1):
+        initial_array = np.full((1, len(piv)), np.inf)
+        for i in range(initial_array.shape[1]):
+            if piv[i] not in []:  # if you want to keep any regressor
+                rem = np.setdiff1d(piv, piv[i])
+                f1 = reg_matrix[:, piv] @ LA.pinv(reg_matrix[:, piv]) @ y
+                f2 = reg_matrix[:, rem] @ LA.pinv(reg_matrix[:, rem]) @ y
+                initial_array[0, i] = self.conditional_mutual_information(y, f1, f2)
+
+        ix = np.argmin(initial_array)
+        min_value = initial_array[0, ix]
+        piv = np.delete(piv, ix)
 
-        ix = np.argmin(initial_array)
-        min_value = initial_array[0, ix]
-        piv = np.delete(piv, ix)
-
-    return piv
-

entropic_regression_forward(reg_matrix, y)

Entropic Regression Forward Greedy Feature Selection.

This algorithm is based on the Matlab package available at: https://github.com/almomaa/ERFit-Package

Parameters:

- reg_matrix (ndarray of floats, required): The input data to be used in the prediction process.
- y (ndarray of floats, required): The output data to be used in the prediction process.

Returns:

- selected_terms (ndarray of ints): The set of selected regressors after the Forward Greedy Feature Selection.
- success (boolean): Indicates whether the forward selection succeeded. If a high degree of uncertainty is detected and many parameters are selected, the success flag is set to False and backward elimination is applied to all indices.

Source code in sysidentpy\model_structure_selection\entropic_regression.py
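The forward pass described above adds, at each step, the candidate with the largest marginal gain until the gain falls below a tolerance. A hedged sketch with an R² gain standing in for the conditional mutual information (the `forward_select` name and the data are illustrative assumptions):

```python
import numpy as np

def forward_select(reg_matrix, y, tol=1e-3):
    """Greedy forward selection: add the column with the best marginal
    gain in fit quality until no candidate improves by more than tol."""
    def r2(cols):
        if not cols:
            return 0.0
        yhat = reg_matrix[:, cols] @ np.linalg.pinv(reg_matrix[:, cols]) @ y
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    selected = []
    remaining = list(range(reg_matrix.shape[1]))
    while remaining:
        gains = [r2(selected + [c]) - r2(selected) for c in remaining]
        i = int(np.argmax(gains))
        if gains[i] <= tol:  # no candidate is worth adding
            break
        selected.append(remaining.pop(i))
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 1] + 1.5 * X[:, 4] + 0.01 * rng.normal(size=200)
assert set(forward_select(X, y)) == {1, 4}
```

In the library, the stopping rule also compares against the information captured by the full regressor matrix, and a forward pass that selects suspiciously many terms is flagged as unsuccessful so that backward elimination can take over.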
+    return piv
+

@@ -2090,94 +2066,94 @@
def entropic_regression_forward(self, reg_matrix, y):
-    """Entropic Regression Forward Greedy Feature Selection.
-
-    This algorithm is based on the Matlab package available on:
-    https://github.com/almomaa/ERFit-Package
-
-    Parameters
-    ----------
-    reg_matrix : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-
-    Returns
-    -------
-    selected_terms : ndarray of ints
-        The set of selected regressors after the
-        Forward Greedy Feature Selection.
-    success : boolean
-        Indicate if the forward selection succeed.
-        If high degree of uncertainty is detected, and many parameters are
-        selected, the success flag will be set to false. Then, the
-        backward elimination will be applied for all indices.
-
-    """
-    success = True
-    ix = []
-    selected_terms = []
-    reg_matrix_columns = np.array(list(range(reg_matrix.shape[1])))
-    self.tol = self.tolerance_estimator(y)
-    ksg_max = getattr(self, self.mutual_information_estimator)(
-        y, reg_matrix @ LA.pinv(reg_matrix) @ y
-    )
-    stop_criteria = False
-    while stop_criteria is False:
-        selected_terms = np.ravel(
-            [*selected_terms, *np.array([reg_matrix_columns[ix]])]
-        )
-        if len(selected_terms) != 0:
-            ksg_local = getattr(self, self.mutual_information_estimator)(
-                y,
-                reg_matrix[:, selected_terms]
-                @ LA.pinv(reg_matrix[:, selected_terms])
-                @ y,
def entropic_regression_forward(self, reg_matrix, y):
+    """Entropic Regression Forward Greedy Feature Selection.
+
+    This algorithm is based on the Matlab package available on:
+    https://github.com/almomaa/ERFit-Package
+
+    Parameters
+    ----------
+    reg_matrix : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+
+    Returns
+    -------
+    selected_terms : ndarray of ints
+        The set of selected regressors after the
+        Forward Greedy Feature Selection.
+    success : boolean
+        Indicates whether the forward selection succeeded.
+        If a high degree of uncertainty is detected and many parameters
+        are selected, the success flag is set to False and backward
+        elimination is applied to all indices.
+
+    """
+    success = True
+    ix = []
+    selected_terms = []
+    reg_matrix_columns = np.array(list(range(reg_matrix.shape[1])))
+    self.tol = self.tolerance_estimator(y)
+    ksg_max = getattr(self, self.mutual_information_estimator)(
+        y, reg_matrix @ LA.pinv(reg_matrix) @ y
+    )
+    stop_criteria = False
+    while stop_criteria is False:
+        selected_terms = np.ravel(
+            [*selected_terms, *np.array([reg_matrix_columns[ix]])]
+        )
+        if len(selected_terms) != 0:
+            ksg_local = getattr(self, self.mutual_information_estimator)(
+                y,
+                reg_matrix[:, selected_terms]
+                @ LA.pinv(reg_matrix[:, selected_terms])
+                @ y,
+            )
+        else:
+            ksg_local = getattr(self, self.mutual_information_estimator)(
+                y, np.zeros_like(y)
             )
-        else:
-            ksg_local = getattr(self, self.mutual_information_estimator)(
-                y, np.zeros_like(y)
-            )
-
-        initial_vector = np.full((1, reg_matrix.shape[1]), -np.inf)
-        for i in range(reg_matrix.shape[1]):
-            if reg_matrix_columns[i] not in selected_terms:
-                f1 = (
-                    reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
-                    @ LA.pinv(
-                        reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
-                    )
-                    @ y
-                )
-                if len(selected_terms) != 0:
-                    f2 = (
-                        reg_matrix[:, selected_terms]
-                        @ LA.pinv(reg_matrix[:, selected_terms])
-                        @ y
-                    )
-                else:
-                    f2 = np.zeros_like(y)
-                vp_estimation = self.conditional_mutual_information(y, f1, f2)
-                initial_vector[0, i] = vp_estimation
-            else:
-                continue
-
-        ix = np.nanargmax(initial_vector)
-        max_value = initial_vector[0, ix]
-
-        if (ksg_max - ksg_local <= self.tol) or (max_value <= self.tol):
-            stop_criteria = True
-        elif len(selected_terms) > np.max([8, reg_matrix.shape[1] / 2]):
-            success = False
-            stop_criteria = True
-
-    return selected_terms, success
-

fit(*, X=None, y=None)

Fit a polynomial NARMAX model using the Entropic Regression algorithm.

The 'fit' method provides a user-friendly interface: given the two arguments X and y, it fits the model to the training data.

The Entropic Regression algorithm is based on the Matlab package available at: https://github.com/almomaa/ERFit-Package

Parameters:

- X (ndarray of floats, default None): The input data to be used in the training process.
- y (ndarray of floats, default None): The output data to be used in the training process.

Returns:

- model (ndarray of int): The model code representation.
- theta (array-like): The estimated parameters of the model.

References
  • Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic Regression Beats the Outliers Problem in Nonlinear System Identification. Chaos 30, 013107 (2020).
  • Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
Source code in sysidentpy\model_structure_selection\entropic_regression.py
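Before selecting anything, `fit` calibrates its tolerance from a permutation null: the dependence score between y and shuffled copies of itself. A sketch of that idea, using a simple correlation score in place of the KSG mutual-information estimator (the `corr_score` stand-in is an assumption for illustration):

```python
import numpy as np

def permutation_tolerance(y, score, n_perm=100, q=0.95, seed=0):
    """q-quantile of the score between y and permuted copies of y:
    any score below this threshold is indistinguishable from chance."""
    rng = np.random.default_rng(seed)
    null = [score(y, rng.permutation(y)) for _ in range(n_perm)]
    return np.quantile(null, q)

def corr_score(a, b):
    # Stand-in dependence measure (the library uses a KSG MI estimator).
    return abs(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(3)
y = rng.normal(size=1000)
tol = permutation_tolerance(y, corr_score)
assert tol < 0.1               # chance-level dependence is small
assert corr_score(y, y) > tol  # real dependence clears the threshold
```

This is why the constructor exposes `n_perm` and a quantile `q`: more permutations and a higher quantile give a stricter, less noisy threshold for admitting regressors.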
+
+        initial_vector = np.full((1, reg_matrix.shape[1]), -np.inf)
+        for i in range(reg_matrix.shape[1]):
+            if reg_matrix_columns[i] not in selected_terms:
+                f1 = (
+                    reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
+                    @ LA.pinv(
+                        reg_matrix[:, [*selected_terms, reg_matrix_columns[i]]]
+                    )
+                    @ y
+                )
+                if len(selected_terms) != 0:
+                    f2 = (
+                        reg_matrix[:, selected_terms]
+                        @ LA.pinv(reg_matrix[:, selected_terms])
+                        @ y
+                    )
+                else:
+                    f2 = np.zeros_like(y)
+                vp_estimation = self.conditional_mutual_information(y, f1, f2)
+                initial_vector[0, i] = vp_estimation
+            else:
+                continue
+
+        ix = np.nanargmax(initial_vector)
+        max_value = initial_vector[0, ix]
+
+        if (ksg_max - ksg_local <= self.tol) or (max_value <= self.tol):
+            stop_criteria = True
+        elif len(selected_terms) > np.max([8, reg_matrix.shape[1] / 2]):
+            success = False
+            stop_criteria = True
+
+    return selected_terms, success
+

@@ -2306,169 +2282,145 @@
def fit(self, *, X=None, y=None):
-    """Fit polynomial NARMAX model using AOLS algorithm.
-
-    The 'fit' function allows a friendly usage by the user.
-    Given two arguments, X and y, fit training data.
-
-    The Entropic Regression algorithm is based on the Matlab package available on:
-    https://github.com/almomaa/ERFit-Package
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the training process.
-    y : ndarray of floats
-        The output data to be used in the training process.
-
-    Returns
-    -------
-    model : ndarray of int
-        The model code representation.
-    theta : array-like of shape = number_of_model_elements
-        The estimated parameters of the model.
-
-    References
-    ----------
-    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-        Regression Beats the Outliers Problem in Nonlinear System
-        Identification. Chaos 30, 013107 (2020).
def fit(self, *, X=None, y=None):
+    """Fit polynomial NARMAX model using the Entropic Regression algorithm.
+
+    The 'fit' function allows a friendly usage by the user.
+    Given two arguments, X and y, fit training data.
+
+    The Entropic Regression algorithm is based on the Matlab package available on:
+    https://github.com/almomaa/ERFit-Package
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the training process.
+    y : ndarray of floats
+        The output data to be used in the training process.
+
+    Returns
+    -------
+    model : ndarray of int
+        The model code representation.
+    theta : array-like of shape = number_of_model_elements
+        The estimated parameters of the model.
+
+    References
+    ----------
+    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+        Regression Beats the Outliers Problem in Nonlinear System
+        Identification. Chaos 30, 013107 (2020).
    - Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
        Estimating mutual information. Physical Review E, 69:066138, 2004.
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
+
+    """
+    if y is None:
+        raise ValueError("y cannot be None")
 
-    """
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    if self.model_type == "NAR":
-        lagged_data = self.build_output_matrix(y)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NFIR":
-        lagged_data = self.build_input_matrix(X)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NARMAX":
-        check_X_y(X, y)
-        self.max_lag = self._get_max_lag()
-        lagged_data = self.build_input_output_matrix(X, y)
+    self.max_lag = self._get_max_lag()
+    lagged_data = self.build_matrix(X, y)
+
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        reg_matrix = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+    else:
+        reg_matrix, self.ensemble = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+
+    if X is not None:
+        self.n_inputs = _num_features(X)
     else:
-        raise ValueError(
-            "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-        )
+        self.n_inputs = 1  # just to create the regressor space base
+
+    self.regressor_code = self.regressor_space(self.n_inputs)
 
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        reg_matrix = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-    else:
-        reg_matrix, self.ensemble = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-
-    if X is not None:
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    self.regressor_code = self.regressor_space(self.n_inputs)
-
-    if self.regressor_code.shape[0] > 90:
-        warnings.warn(
-            (
-                "Given the higher number of possible regressors"
-                f" ({self.regressor_code.shape[0]}), the Entropic Regression"
-                " algorithm may take long time to run. Consider reducing the"
-                " number of regressors "
-            ),
-            stacklevel=2,
-        )
-
-    y_full = y.copy()
-    y = y[self.max_lag :].reshape(-1, 1)
-    self.tol = 0
-    ksg_estimation = []
-    for _ in range(self.n_perm):
-        mutual_information_output = getattr(
-            self, self.mutual_information_estimator
-        )(y, self.rng.permutation(y))
-        ksg_estimation.append(mutual_information_output)
-
-    ksg_estimation = np.array(ksg_estimation).reshape(-1, 1)
-    self.tol = np.quantile(ksg_estimation, self.q)
-    self.estimated_tolerance = self.tol
-    success = False
-    if not self.skip_forward:
-        selected_terms, success = self.entropic_regression_forward(reg_matrix, y)
-
-    if not success or self.skip_forward:
-        selected_terms = np.array(list(range(reg_matrix.shape[1])))
-
-    selected_terms_backward = self.entropic_regression_backward(
-        reg_matrix[:, selected_terms], y, list(range(len(selected_terms)))
-    )
-
-    final_model = selected_terms[selected_terms_backward]
-    # re-check for the constant term (add it to the estimated indices)
-    if 0 not in final_model:
-        final_model = np.array([0, *final_model])
-
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        self.final_model = self.regressor_code[final_model, :].copy()
-    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-        basis_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-        self.final_model = self.regressor_code[final_model, :].copy()
-    else:
-        self.regressor_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.final_model = self.regressor_code[final_model, :].copy()
-
-    self.theta = getattr(self, self.estimator)(reg_matrix[:, final_model], y_full)
-    if (np.abs(self.theta[0]) < self.h) and (
-        np.sum((self.theta != 0).astype(int)) > 1
-    ):
-        self.theta = self.theta[1:].reshape(-1, 1)
-        self.final_model = self.final_model[1:, :]
-        final_model = final_model[1:]
-
-    self.n_terms = len(
-        self.theta
-    )  # the number of terms we selected (necessary in the 'results' methods)
-    self.err = self.n_terms * [
-        0
-    ]  # just to use the `results` method. Will be changed in next update.
-    self.pivv = final_model
-    return self
-

+    if self.regressor_code.shape[0] > 90:
+        warnings.warn(
+            (
+                "Given the higher number of possible regressors"
+                f" ({self.regressor_code.shape[0]}), the Entropic Regression"
+                " algorithm may take a long time to run. Consider reducing the"
+                " number of regressors "
+            ),
+            stacklevel=2,
+        )
+
+    y_full = y.copy()
+    y = y[self.max_lag :].reshape(-1, 1)
+    self.tol = 0
+    ksg_estimation = []
+    for _ in range(self.n_perm):
+        mutual_information_output = getattr(
+            self, self.mutual_information_estimator
+        )(y, self.rng.permutation(y))
+        ksg_estimation.append(mutual_information_output)
+
+    ksg_estimation = np.array(ksg_estimation).reshape(-1, 1)
+    self.tol = np.quantile(ksg_estimation, self.q)
+    self.estimated_tolerance = self.tol
+    success = False
+    if not self.skip_forward:
+        selected_terms, success = self.entropic_regression_forward(reg_matrix, y)
+
+    if not success or self.skip_forward:
+        selected_terms = np.array(list(range(reg_matrix.shape[1])))
+
+    selected_terms_backward = self.entropic_regression_backward(
+        reg_matrix[:, selected_terms], y, list(range(len(selected_terms)))
+    )
+
+    final_model = selected_terms[selected_terms_backward]
+    # re-check for the constant term (add it to the estimated indices)
+    if 0 not in final_model:
+        final_model = np.array([0, *final_model])
+
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        self.final_model = self.regressor_code[final_model, :].copy()
+    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
+        basis_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+        self.final_model = self.regressor_code[final_model, :].copy()
+    else:
+        self.regressor_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.final_model = self.regressor_code[final_model, :].copy()
+
+    self.theta = getattr(self, self.estimator)(reg_matrix[:, final_model], y_full)
+    if (np.abs(self.theta[0]) < self.h) and (
+        np.sum((self.theta != 0).astype(int)) > 1
+    ):
+        self.theta = self.theta[1:].reshape(-1, 1)
+        self.final_model = self.final_model[1:, :]
+        final_model = final_model[1:]
+
+    self.n_terms = len(
+        self.theta
+    )  # the number of terms we selected (necessary in the 'results' methods)
+    self.err = self.n_terms * [
+        0
+    ]  # just to use the `results` method. Will be changed in next update.
+    self.pivv = final_model
+    return self
+
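The warning above fires when the candidate regressor space exceeds 90 terms. As a rough guide (an illustrative helper, not part of SysIdentPy), the number of polynomial regressors grows combinatorially with the number of lagged terms and the nonlinearity degree:

```python
from math import comb

def n_poly_regressors(n_lagged_terms: int, degree: int) -> int:
    # Count monomials of total degree <= `degree` built from
    # `n_lagged_terms` lagged variables (constant term included):
    # C(n + d, d) multisets.
    return comb(n_lagged_terms + degree, degree)

# With ylag = xlag = 4 there are 8 lagged terms; a degree-3 polynomial
# basis already yields 165 candidates, past the warning threshold of 90.
print(n_poly_regressors(8, 3))  # → 165
```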

mutual_information_knn(y, y_perm)

Finds the mutual information.

Finds the mutual information between \(x\) and \(y\) given \(z\).

This code is based on Matlab Entropic Regression package.

Parameters:

y : ndarray of floats (required)
    The source signal.
y_perm : ndarray of floats (required)
    The destination signal.

Returns:

ksg_estimation : float
    The conditioned mutual information.

References
  • Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic Regression Beats the Outliers Problem in Nonlinear System Identification. Chaos 30, 013107 (2020).
  • Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
Source code in sysidentpy\model_structure_selection\entropic_regression.py
def mutual_information_knn(self, y, y_perm):
-    """Finds the mutual information.
def mutual_information_knn(self, y, y_perm):
+    """Finds the mutual information.
+
+    Finds the mutual information between $x$ and $y$ given $z$.
+
+    This code is based on Matlab Entropic Regression package.
 
-    Finds the mutual information between $x$ and $y$ given $z$.
-
-    This code is based on Matlab Entropic Regression package.
-
-    Parameters
-    ----------
-    y : ndarray of floats
-        The source signal.
-    y_perm : ndarray of floats
-        The destination signal.
-
-    Returns
-    -------
-    ksg_estimation : float
-        The conditioned mutual information.
-
-    References
-    ----------
-    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-        Regression Beats the Outliers Problem in Nonlinear System
-        Identification. Chaos 30, 013107 (2020).
+    Parameters
+    ----------
+    y : ndarray of floats
+        The source signal.
+    y_perm : ndarray of floats
+        The destination signal.
+
+    Returns
+    -------
+    ksg_estimation : float
+        The conditioned mutual information.
+
+    References
+    ----------
+    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+        Regression Beats the Outliers Problem in Nonlinear System
+        Identification. Chaos 30, 013107 (2020).
+    - Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
+        Estimating mutual information. Physical Review E, 69:066138, 2004.
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-
-    """
-    joint_space = np.concatenate([y, y_perm], axis=1)
-    smallest_distance = np.sort(
-        cdist(joint_space, joint_space, "minkowski", p=self.p).T
-    )
-    idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
-    smallest_distance = smallest_distance[:, idx]
-    epsilon = smallest_distance[:, -1].reshape(-1, 1)
-    smallest_distance_y = cdist(y, y, "minkowski", p=self.p)
-    less_than_array_nx = np.array((smallest_distance_y < epsilon)).astype(int)
-    nx = (np.sum(less_than_array_nx, axis=1) - 1).reshape(-1, 1)
-    smallest_distance_y_perm = cdist(y_perm, y_perm, "minkowski", p=self.p)
-    less_than_array_ny = np.array((smallest_distance_y_perm < epsilon)).astype(int)
-    ny = (np.sum(less_than_array_ny, axis=1) - 1).reshape(-1, 1)
-    arr = psi(nx + 1) + psi(ny + 1)
-    ksg_estimation = (
-        psi(self.k) + psi(y.shape[0]) - np.nanmean(arr[np.isfinite(arr)])
-    )
-    return ksg_estimation
-

+
+    """
+    joint_space = np.concatenate([y, y_perm], axis=1)
+    smallest_distance = np.sort(
+        cdist(joint_space, joint_space, "minkowski", p=self.p).T
+    )
+    idx = np.argpartition(smallest_distance[-1, :], self.k + 1)[: self.k + 1]
+    smallest_distance = smallest_distance[:, idx]
+    epsilon = smallest_distance[:, -1].reshape(-1, 1)
+    smallest_distance_y = cdist(y, y, "minkowski", p=self.p)
+    less_than_array_nx = np.array((smallest_distance_y < epsilon)).astype(int)
+    nx = (np.sum(less_than_array_nx, axis=1) - 1).reshape(-1, 1)
+    smallest_distance_y_perm = cdist(y_perm, y_perm, "minkowski", p=self.p)
+    less_than_array_ny = np.array((smallest_distance_y_perm < epsilon)).astype(int)
+    ny = (np.sum(less_than_array_ny, axis=1) - 1).reshape(-1, 1)
+    arr = psi(nx + 1) + psi(ny + 1)
+    ksg_estimation = (
+        psi(self.k) + psi(y.shape[0]) - np.nanmean(arr[np.isfinite(arr)])
+    )
+    return ksg_estimation
+
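The estimator above can be sketched as a standalone function. This is the standard KSG construction under assumed defaults (k nearest neighbors, Chebyshev metric); the in-class version reads k and the Minkowski order p from the instance, so treat this as an illustration rather than SysIdentPy's exact implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import psi

def ksg_mutual_information(y, y_perm, k=3):
    # Distance from each sample to its k-th nearest neighbor in the
    # joint (y, y_perm) space, Chebyshev metric.
    joint = np.concatenate([y, y_perm], axis=1)
    d_joint = np.sort(cdist(joint, joint, "chebyshev"), axis=1)
    epsilon = d_joint[:, k].reshape(-1, 1)
    # Neighbor counts strictly inside epsilon in each marginal space
    # (the -1 removes the point itself).
    nx = (cdist(y, y, "chebyshev") < epsilon).sum(axis=1) - 1
    ny = (cdist(y_perm, y_perm, "chebyshev") < epsilon).sum(axis=1) - 1
    return psi(k) + psi(len(y)) - np.mean(psi(nx + 1) + psi(ny + 1))

rng = np.random.default_rng(42)
y = rng.standard_normal((200, 1))
# y and an independent shuffle of y carry (near) zero mutual information,
# which is exactly how fit() builds its surrogate tolerance values.
mi = ksg_mutual_information(y, rng.permutation(y))
```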

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict function allows a friendly usage by the user. Given a previously trained model, predict values given a new set of data.

Parameters:

X : ndarray of floats, default=None
    The input data to be used in the prediction process.
y : ndarray of floats, default=None
    The output data to be used in the prediction process.
steps_ahead : int, default=None
    The user can choose free run simulation, one-step ahead prediction, or n-steps ahead prediction.
forecast_horizon : int, default=None
    The number of predictions over time.

Returns:

yhat : ndarray of floats
    The predicted values of the model.

Source code in sysidentpy\model_structure_selection\entropic_regression.py
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted values given an input.
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input.
+
+    The predict function allows a friendly usage by the user.
+    Given a previously trained model, predict values given
+    a new set of data.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+    steps_ahead : int (default = None)
+        The user can use free run simulation, one-step ahead prediction
+        and n-step ahead prediction.
+    forecast_horizon : int, default=None
+        The number of predictions over the time.
 
-    The predict function allows a friendly usage by the user.
-    Given a previously trained model, predict values given
-    a new set of data.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-    steps_ahead : int (default = None)
-        The user can use free run simulation, one-step ahead prediction
-        and n-step ahead prediction.
-    forecast_horizon : int, default=None
-        The number of predictions over the time.
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
 
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
 
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    if steps_ahead is None:
-        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-    if steps_ahead == 1:
-        yhat = self._one_step_ahead_prediction(X, y)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    yhat = self._basis_function_n_step_prediction(
-        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-    )
-    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-    return yhat
-

+    if steps_ahead is None:
+        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+    if steps_ahead == 1:
+        yhat = self._one_step_ahead_prediction(X, y)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    yhat = self._basis_function_n_step_prediction(
+        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+    )
+    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+    return yhat
+
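Every branch above prepends the first max_lag measured samples to the forecast, so yhat stays aligned with y. A toy illustration in plain NumPy, independent of any fitted model (the forecast values here are stand-ins):

```python
import numpy as np

max_lag = 2
y = np.arange(6.0).reshape(-1, 1)      # measured output
yhat_core = np.full((4, 1), 9.0)       # stand-in model forecast
# predict() returns the initial conditions followed by the forecast:
yhat = np.concatenate([y[:max_lag], yhat_core], axis=0)
print(yhat.ravel())  # → [0. 1. 9. 9. 9. 9.]
```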

tolerance_estimator(y)

Tolerance Estimation for mutual independence test. Finds the conditioned mutual information between \(y\) and \(f1\) given \(f2\).

This code is based on Matlab Entropic Regression package. https://github.com/almomaa/ERFit-Package

Parameters:

y : ndarray of floats (required)
    The source signal.

Returns:

tol : float
    The tolerance value given q.

References
  • Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic Regression Beats the Outliers Problem in Nonlinear System Identification. Chaos 30, 013107 (2020).
  • Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
Source code in sysidentpy\model_structure_selection\entropic_regression.py
def tolerance_estimator(self, y):
-    """Tolerance Estimation for mutual independence test.
-    Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
-
-    This code is based on Matlab Entropic Regression package.
-    https://github.com/almomaa/ERFit-Package
-
-    Parameters
-    ----------
-    y : ndarray of floats
-        The source signal.
-
-    Returns
-    -------
-    tol : float
-        The tolerance value given q.
-
-    References
-    ----------
-    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
-        Regression Beats the Outliers Problem in Nonlinear System
-        Identification. Chaos 30, 013107 (2020).
def tolerance_estimator(self, y):
+    """Tolerance Estimation for mutual independence test.
+    Finds the conditioned mutual information between $y$ and $f1$ given $f2$.
+
+    This code is based on Matlab Entropic Regression package.
+    https://github.com/almomaa/ERFit-Package
+
+    Parameters
+    ----------
+    y : ndarray of floats
+        The source signal.
+
+    Returns
+    -------
+    tol : float
+        The tolerance value given q.
+
+    References
+    ----------
+    - Abd AlRahman R. AlMomani, Jie Sun, and Erik Bollt. How Entropic
+        Regression Beats the Outliers Problem in Nonlinear System
+        Identification. Chaos 30, 013107 (2020).
+    - Alexander Kraskov, Harald Stögbauer, and Peter Grassberger.
+        Estimating mutual information. Physical Review E, 69:066138, 2004.
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-    - Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger.
-        Estimating mutual information. Physical Review E, 69:066-138,2004
-
-    """
-    ksg_estimation = []
-    # ksg_estimation = [
-    #     getattr(self, self.mutual_information_estimator)(y,
-    # self.rng.permutation(y))
-    #     for i in range(self.n_perm)
-    # ]
-
-    for _ in range(self.n_perm):
-        mutual_information_output = getattr(
-            self, self.mutual_information_estimator
-        )(y, self.rng.permutation(y))
-
-        ksg_estimation.append(mutual_information_output)
-
-    ksg_estimation = np.array(ksg_estimation)
-    tol = np.quantile(ksg_estimation, self.q)
-    return tol
+
+    """
+    ksg_estimation = []
+
+    for _ in range(self.n_perm):
+        mutual_information_output = getattr(
+            self, self.mutual_information_estimator
+        )(y, self.rng.permutation(y))
+
+        ksg_estimation.append(mutual_information_output)
+
+    ksg_estimation = np.array(ksg_estimation)
+    tol = np.quantile(ksg_estimation, self.q)
+    return tol
 
\ No newline at end of file
diff --git a/docs/code/frols/index.html b/docs/code/frols/index.html
index d1d7f84b..50e8360a 100644
--- a/docs/code/frols/index.html
+++ b/docs/code/frols/index.html
@@ -10,7 +10,7 @@
 body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);}
 body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);}
 body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}
-

Documentation for FROLS

Build Polynomial NARMAX Models using FROLS algorithm

FROLS

Bases: Estimators, BaseMSS

Forward Regression Orthogonal Least Squares algorithm.

This class uses the FROLS algorithm ([1], [2]) to build NARMAX models. The NARMAX model is described as:

\[ y_k= F^\ell[y_{k-1}, \dotsc, y_{k-n_y},x_{k-d}, x_{k-d-1}, \dotsc, x_{k-d-n_x}, e_{k-1}, \dotsc, e_{k-n_e}] + e_k \]

where \(n_y\in \mathbb{N}^*\), \(n_x \in \mathbb{N}\), \(n_e \in \mathbb{N}\) are the maximum lags for the system output, input, and noise terms, respectively; \(x_k \in \mathbb{R}^{n_x}\) is the system input and \(y_k \in \mathbb{R}^{n_y}\) is the system output at discrete time \(k \in \mathbb{N}\); \(e_k \in \mathbb{R}^{n_e}\) stands for uncertainties and possible noise at discrete time \(k\). In this case, \(\mathcal{F}^\ell\) is some nonlinear function of the input and output regressors with nonlinearity degree \(\ell \in \mathbb{N}\) and \(d\) is a time delay typically set to \(d=1\).
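For instance (an illustrative model, not taken from the documentation), with \(\ell=2\), \(n_y=n_x=1\) and \(d=1\), one admissible polynomial NARMAX model is:

\[ y_k = \theta_1 y_{k-1} + \theta_2 x_{k-1} + \theta_3 y_{k-1}x_{k-1} + \theta_4 x_{k-1}^2 + e_k \]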

Parameters:

ylag : int, default=2
    The maximum lag of the output.
xlag : int, default=2
    The maximum lag of the input.
elag : int, default=2
    The maximum lag of the residues.
order_selection : bool, default=False
    Whether to use information criteria for order selection.
info_criteria : str, default='aic'
    The information criteria method to be used.
n_terms : int, default=None
    The number of model terms to be selected. Note that n_terms overwrites the information criteria values.
n_info_values : int, default=10
    The number of iterations of the information criteria method.
estimator : str, default='recursive_least_squares'
    The parameter estimation method.
extended_least_squares : bool, default=False
    Whether to use the extended least squares method for parameter estimation. Note that we define a specific set of noise regressors.
lam : float, default=0.98
    Forgetting factor of the Recursive Least Squares method.
delta : float, default=0.01
    Normalization factor of the P matrix.
offset_covariance : float, default=0.2
    The offset covariance factor of the affine least mean squares filter.
mu : float, default=0.01
    The convergence coefficient (learning rate) of the filter.
eps : float, default=np.finfo(np.float64).eps
    Normalization factor of the normalized filters.
gama : float, default=0.2
    The leakage factor of the Leaky LMS method.
weight : float, default=0.02
    Weight factor to control the proportions of the error norms, offering an extra degree of freedom within the adaptation of the LMS mixed-norm method.
model_type : str, default='NARMAX'
    The user can choose "NARMAX", "NAR" and "NFIR" models.
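When order_selection is enabled, each candidate model size is scored by the chosen information criterion. A sketch of the AIC trade-off (an illustrative textbook formula over n samples, p terms, and the residual variance of the fitted model; not SysIdentPy's internal code):

```python
import numpy as np

def aic(n: int, p: int, residual_variance: float) -> float:
    # Akaike information criterion: goodness of fit plus a 2*p
    # complexity penalty; lower is better.
    return n * np.log(residual_variance) + 2 * p

# Extra terms must cut the residual variance enough to repay the penalty:
same_fit_small = aic(100, 3, 0.5)
same_fit_large = aic(100, 10, 0.5)   # same fit, more terms -> worse score
```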

Examples:

>>> import numpy as np
 >>> import matplotlib.pyplot as plt
 >>> from sysidentpy.model_structure_selection import FROLS
 >>> from sysidentpy.basis_function._basis_function import Polynomial
@deprecated(
     version="v0.3.0",
     future_version="v0.4.0",
     message=(
@@ -916,581 +975,674 @@
         self.xlag = xlag
         self.max_lag = self._get_max_lag()
         self.info_criteria = info_criteria
-        self.n_info_values = n_info_values
-        self.n_terms = n_terms
-        self.estimator = estimator
-        self.extended_least_squares = extended_least_squares
-        self.elag = elag
-        self.model_type = model_type
-        self._validate_params()
-        self.basis_function = basis_function
-        super().__init__(
-            lam=lam,
-            delta=delta,
-            offset_covariance=offset_covariance,
-            mu=mu,
-            eps=eps,
-            gama=gama,
-            weight=weight,
-            basis_function=basis_function,
-        )
-        self.ensemble = None
-        self.n_inputs = None
-        self.regressor_code = None
-        self.info_values = None
-        self.err = None
-        self.final_model = None
-        self.theta = None
-        self.pivv = None
-
-    def _validate_params(self):
-        """Validate input params."""
-        if not isinstance(self.n_info_values, int) or self.n_info_values < 1:
-            raise ValueError(
-                f"n_info_values must be integer and > zero. Got {self.n_info_values}"
-            )
-
-        if isinstance(self.ylag, int) and self.ylag < 1:
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-        if isinstance(self.xlag, int) and self.xlag < 1:
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.xlag, (int, list)):
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.ylag, (int, list)):
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-        if not isinstance(self.order_selection, bool):
-            raise TypeError(
-                f"order_selection must be False or True. Got {self.order_selection}"
-            )
-
-        if not isinstance(self.extended_least_squares, bool):
-            raise TypeError(
-                "extended_least_squares must be False or True. Got"
-                f" {self.extended_least_squares}"
-            )
-
-        if self.info_criteria not in ["aic", "bic", "fpe", "lilc"]:
-            raise ValueError(
-                f"info_criteria must be aic, bic, fpe or lilc. Got {self.info_criteria}"
-            )
-
-        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
-            raise ValueError(
-                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
-            )
-
-        if (
-            not isinstance(self.n_terms, int) or self.n_terms < 1
-        ) and self.n_terms is not None:
-            raise ValueError(f"n_terms must be integer and > zero. Got {self.n_terms}")
-
-    def error_reduction_ratio(self, psi, y, process_term_number):
-        """Perform the Error Reduction Ration algorithm.
-
-        Parameters
-        ----------
-        y : array-like of shape = n_samples
-            The target data used in the identification process.
-        psi : ndarray of floats
-            The information matrix of the model.
-        process_term_number : int
-            Number of Process Terms defined by the user.
-
-        Returns
-        -------
-        err : array-like of shape = number_of_model_elements
-            The respective ERR calculated for each regressor.
-        piv : array-like of shape = number_of_model_elements
-            Contains the index to put the regressors in the correct order
-            based on err values.
-        psi_orthogonal : ndarray of floats
-            The updated and orthogonal information matrix.
-
-        References
-        ----------
-        - Manuscript: Orthogonal least squares methods and their application
-           to non-linear system identification
-           https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
-        - Manuscript (portuguese): Identificação de Sistemas não Lineares
-           Utilizando Modelos NARMAX Polinomiais – Uma Revisão
-           e Novos Resultados
-
-        """
-        squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
-        tmp_psi = psi.copy()
-        y = y[self.max_lag :, 0].reshape(-1, 1)
-        tmp_y = y.copy()
-        dimension = tmp_psi.shape[1]
-        piv = np.arange(dimension)
-        tmp_err = np.zeros(dimension)
-        err = np.zeros(dimension)
-
-        for i in np.arange(0, dimension):
-            for j in np.arange(i, dimension):
-                # Add `eps` in the denominator to omit division by zero if
-                # denominator is zero
-                tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
-                    np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
-                )
-
-            if i == process_term_number:
-                break
-
-            piv_index = np.argmax(tmp_err[i:]) + i
-            err[i] = tmp_err[piv_index]
-            tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
-            piv[[piv_index, i]] = piv[[i, piv_index]]
-
-            v = Orthogonalization().house(tmp_psi[i:, i])
+        self.info_criteria_function = self.get_info_criteria(info_criteria)
+        self.n_info_values = n_info_values
+        self.n_terms = n_terms
+        self.estimator = estimator
+        self.extended_least_squares = extended_least_squares
+        self.elag = elag
+        self.model_type = model_type
+        self.build_matrix = self.get_build_io_method(model_type)
+        self._validate_params()
+        self.basis_function = basis_function
+        super().__init__(
+            lam=lam,
+            delta=delta,
+            offset_covariance=offset_covariance,
+            mu=mu,
+            eps=eps,
+            gama=gama,
+            weight=weight,
+            basis_function=basis_function,
+        )
+        self.ensemble = None
+        self.n_inputs = None
+        self.regressor_code = None
+        self.info_values = None
+        self.err = None
+        self.final_model = None
+        self.theta = None
+        self.pivv = None
+
+    def _validate_params(self):
+        """Validate input params."""
+        if not isinstance(self.n_info_values, int) or self.n_info_values < 1:
+            raise ValueError(
+                f"n_info_values must be integer and > zero. Got {self.n_info_values}"
+            )
+
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if not isinstance(self.order_selection, bool):
+            raise TypeError(
+                f"order_selection must be False or True. Got {self.order_selection}"
+            )
+
+        if not isinstance(self.extended_least_squares, bool):
+            raise TypeError(
+                "extended_least_squares must be False or True. Got"
+                f" {self.extended_least_squares}"
+            )
+
+        if self.info_criteria not in ["aic", "bic", "fpe", "lilc"]:
+            raise ValueError(
+                f"info_criteria must be aic, bic, fpe or lilc. Got {self.info_criteria}"
+            )
+
+        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+            )
+
+        if (
+            not isinstance(self.n_terms, int) or self.n_terms < 1
+        ) and self.n_terms is not None:
+            raise ValueError(f"n_terms must be integer and > zero. Got {self.n_terms}")
+
+    def error_reduction_ratio(self, psi, y, process_term_number):
+        """Perform the Error Reduction Ratio algorithm.
+
+        Parameters
+        ----------
+        y : array-like of shape = n_samples
+            The target data used in the identification process.
+        psi : ndarray of floats
+            The information matrix of the model.
+        process_term_number : int
+            Number of Process Terms defined by the user.
+
+        Returns
+        -------
+        err : array-like of shape = number_of_model_elements
+            The respective ERR calculated for each regressor.
+        piv : array-like of shape = number_of_model_elements
+            Contains the index to put the regressors in the correct order
+            based on err values.
+        psi_orthogonal : ndarray of floats
+            The updated and orthogonal information matrix.
+
+        References
+        ----------
+        - Manuscript: Orthogonal least squares methods and their application
+           to non-linear system identification
+           https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
+        - Manuscript (portuguese): Identificação de Sistemas não Lineares
+           Utilizando Modelos NARMAX Polinomiais – Uma Revisão
+           e Novos Resultados
+
+        """
+        squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
+        tmp_psi = psi.copy()
+        y = y[self.max_lag :, 0].reshape(-1, 1)
+        tmp_y = y.copy()
+        dimension = tmp_psi.shape[1]
+        piv = np.arange(dimension)
+        tmp_err = np.zeros(dimension)
+        err = np.zeros(dimension)
+
+        for i in np.arange(0, dimension):
+            for j in np.arange(i, dimension):
+                # Add `eps` to the denominator to avoid division by zero
+                # when the denominator would otherwise be zero
+                tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
+                    np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
+                )
+
+            if i == process_term_number:
+                break
+
+            piv_index = np.argmax(tmp_err[i:]) + i
+            err[i] = tmp_err[piv_index]
+            tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
+            piv[[piv_index, i]] = piv[[i, piv_index]]
 
-            row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
+            v = Orthogonalization().house(tmp_psi[i:, i])
 
-            tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
+            row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
 
-            tmp_psi[i:, i:] = np.copy(row_result)
+            tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
 
-        tmp_piv = piv[0:process_term_number]
-        psi_orthogonal = psi[:, tmp_piv]
-        return err, piv, psi_orthogonal
-
-    def information_criterion(self, X_base, y):
-        """Determine the model order.
-
-        This function uses a information criterion to determine the model size.
-        'Akaike'-  Akaike's Information Criterion with
-                   critical value 2 (AIC) (default).
-        'Bayes' -  Bayes Information Criterion (BIC).
-        'FPE'   -  Final Prediction Error (FPE).
-        'LILC'  -  Khundrin’s law ofiterated logarithm criterion (LILC).
-
-        Parameters
-        ----------
-        y : array-like of shape = n_samples
-            Target values of the system.
-        X_base : array-like of shape = n_samples
-            Input system values measured by the user.
-
-        Returns
-        -------
-        output_vector : array-like of shape = n_regressor
-            Vector with values of akaike's information criterion
-            for models with N terms (where N is the
-            vector position + 1).
-
-        """
-        if self.n_info_values is not None and self.n_info_values > X_base.shape[1]:
-            self.n_info_values = X_base.shape[1]
-            warnings.warn(
-                (
-                    "n_info_values is greater than the maximum number of all"
-                    " regressors space considering the chosen y_lag, u_lag, and"
-                    f" non_degree. We set as {X_base.shape[1]}"
-                ),
-                stacklevel=2,
-            )
-
-        output_vector = np.zeros(self.n_info_values)
-        output_vector[:] = np.nan
-
-        n_samples = len(y) - self.max_lag
+            tmp_psi[i:, i:] = np.copy(row_result)
+
+        tmp_piv = piv[0:process_term_number]
+        psi_orthogonal = psi[:, tmp_piv]
+        return err, piv, psi_orthogonal
+
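The routine above pairs Householder reflections with greedy pivoting; the ratio it maximizes at each step can be illustrated with a plain classical Gram-Schmidt sketch (a simplified standalone illustration, not the library's implementation; the name `err_ratios` is hypothetical):

```python
import numpy as np

def err_ratios(psi, y, eps=np.finfo(np.float64).eps):
    """Error Reduction Ratio of each column of psi, orthogonalized in order."""
    w = psi.astype(float).copy()
    y = np.asarray(y, dtype=float).ravel()
    err = np.zeros(w.shape[1])
    squared_y = y @ y
    for i in range(w.shape[1]):
        # classical Gram-Schmidt: remove the components already explained
        for j in range(i):
            w[:, i] -= (w[:, j] @ w[:, i]) / (w[:, j] @ w[:, j] + eps) * w[:, j]
        # fraction of the output energy explained by the i-th regressor
        err[i] = (w[:, i] @ y) ** 2 / ((w[:, i] @ w[:, i]) * squared_y + eps)
    return err

# two orthogonal candidate regressors; y lies entirely along the first one
psi = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
y = 2.0 * psi[:, 0]
```

With this input the first regressor explains all of the output energy (ERR close to 1) and the second explains none, which is the signal the pivoting step uses to order regressors.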
+    def information_criterion(self, X_base, y):
+        """Determine the model order.
+
+        This function uses an information criterion to determine the model size.
+        'Akaike'-  Akaike's Information Criterion with
+                   critical value 2 (AIC) (default).
+        'Bayes' -  Bayes Information Criterion (BIC).
+        'FPE'   -  Final Prediction Error (FPE).
+        'LILC'  -  Khinchin's law of iterated logarithm criterion (LILC).
+
+        Parameters
+        ----------
+        y : array-like of shape = n_samples
+            Target values of the system.
+        X_base : array-like of shape = n_samples
+            Input system values measured by the user.
+
+        Returns
+        -------
+        output_vector : array-like of shape = n_regressor
+            Vector with values of the selected information criterion
+            for models with N terms (where N is the
+            vector position + 1).
+
+        """
+        if self.n_info_values is not None and self.n_info_values > X_base.shape[1]:
+            self.n_info_values = X_base.shape[1]
+            warnings.warn(
+                (
+                    "n_info_values is greater than the size of the regressor"
+                    " space considering the chosen ylag, xlag, and"
+                    f" non_degree. Setting n_info_values to {X_base.shape[1]}"
+                ),
+                stacklevel=2,
+            )
+
+        output_vector = np.zeros(self.n_info_values)
+        output_vector[:] = np.nan
 
-        for i in range(0, self.n_info_values):
-            n_theta = i + 1
-            regressor_matrix = self.error_reduction_ratio(X_base, y, n_theta)[2]
-
-            tmp_theta = getattr(self, self.estimator)(regressor_matrix, y)
+        n_samples = len(y) - self.max_lag
+
+        for i in range(0, self.n_info_values):
+            n_theta = i + 1
+            regressor_matrix = self.error_reduction_ratio(X_base, y, n_theta)[2]
 
-            tmp_yhat = np.dot(regressor_matrix, tmp_theta)
-            tmp_residual = y[self.max_lag :] - tmp_yhat
-            e_var = np.var(tmp_residual, ddof=1)
-
-            output_vector[i] = self.compute_info_value(n_theta, n_samples, e_var)
+            tmp_theta = getattr(self, self.estimator)(regressor_matrix, y)
+
+            tmp_yhat = np.dot(regressor_matrix, tmp_theta)
+            tmp_residual = y[self.max_lag :] - tmp_yhat
+            e_var = np.var(tmp_residual, ddof=1)
 
-        return output_vector
-
-    def compute_info_value(self, n_theta, n_samples, e_var):
-        """Compute the information criteria value.
+            output_vector[i] = self.info_criteria_function(n_theta, n_samples, e_var)
+
+        return output_vector
 
-        This function returns the information criteria concerning each
-        number of regressor. The information criteria can be AIC, BIC,
-        LILC and FPE.
-
-        Parameters
-        ----------
-        n_theta : int
-            Number of parameters of the model.
-        n_samples : int
-            Number of samples given the maximum lag.
-        e_var : float
-            Variance of the residues
+    def get_info_criteria(self, info_criteria):
+        """Return the method that computes the chosen information criterion."""
+        info_criteria_options = {
+            "aic": self.aic,
+            "bic": self.bic,
+            "fpe": self.fpe,
+            "lilc": self.lilc,
+        }
+        return info_criteria_options.get(info_criteria)
+
+    def bic(self, n_theta, n_samples, e_var):
+        """Compute the Bayesian information criterion value.
 
-        Returns
-        -------
-        info_criteria_value : float
-            The computed value given the information criteria selected by the
-            user.
-
-        """
-        if self.info_criteria == "bic":
-            model_factor = n_theta * np.log(n_samples)
-        elif self.info_criteria == "fpe":
-            model_factor = n_samples * np.log(
-                (n_samples + n_theta) / (n_samples - n_theta)
-            )
-        elif self.info_criteria == "lilc":
-            model_factor = 2 * n_theta * np.log(np.log(n_samples))
-        else:  # AIC
-            model_factor = +2 * n_theta
-
-        e_factor = n_samples * np.log(e_var)
-        info_criteria_value = e_factor + model_factor
-
-        return info_criteria_value
-
-    def fit(self, *, X=None, y=None):
-        """Fit polynomial NARMAX model.
-
-        This is an 'alpha' version of the 'fit' function which allows
-        a friendly usage by the user. Given two arguments, X and y, fit
-        training data.
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the training process.
-        y : ndarray of floats
-            The output data to be used in the training process.
-
-        Returns
-        -------
-        model : ndarray of int
-            The model code representation.
-        piv : array-like of shape = number_of_model_elements
-            Contains the index to put the regressors in the correct order
-            based on err values.
-        theta : array-like of shape = number_of_model_elements
-            The estimated parameters of the model.
-        err : array-like of shape = number_of_model_elements
-            The respective ERR calculated for each regressor.
-        info_values : array-like of shape = n_regressor
-            Vector with values of akaike's information criterion
-            for models with N terms (where N is the
-            vector position + 1).
-
-        """
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        if self.model_type == "NARMAX":
-            check_X_y(X, y)
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X, y)
-        elif self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-            self.max_lag = self._get_max_lag()
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
+        Parameters
+        ----------
+        n_theta : int
+            Number of parameters of the model.
+        n_samples : int
+            Number of samples given the maximum lag.
+        e_var : float
+            Variance of the residuals.
+
+        Returns
+        -------
+        info_criteria_value : float
+            The computed BIC value.
+
+        """
+        model_factor = n_theta * np.log(n_samples)
+        e_factor = n_samples * np.log(e_var)
+        info_criteria_value = e_factor + model_factor
+
+        return info_criteria_value
+
+    def aic(self, n_theta, n_samples, e_var):
+        """Compute the Akaike information criterion value.
+
+        Parameters
+        ----------
+        n_theta : int
+            Number of parameters of the model.
+        n_samples : int
+            Number of samples given the maximum lag.
+        e_var : float
+            Variance of the residuals.
+
+        Returns
+        -------
+        info_criteria_value : float
+            The computed AIC value.
+
+        """
+        model_factor = 2 * n_theta
+        e_factor = n_samples * np.log(e_var)
+        info_criteria_value = e_factor + model_factor
+
+        return info_criteria_value
+
+    def fpe(self, n_theta, n_samples, e_var):
+        """Compute the Final Prediction Error value.
+
+        Parameters
+        ----------
+        n_theta : int
+            Number of parameters of the model.
+        n_samples : int
+            Number of samples given the maximum lag.
+        e_var : float
+            Variance of the residuals.
+
+        Returns
+        -------
+        info_criteria_value : float
+            The computed FPE value.
+
+        """
+        model_factor = n_samples * np.log((n_samples + n_theta) / (n_samples - n_theta))
+        e_factor = n_samples * np.log(e_var)
+        info_criteria_value = e_factor + model_factor
+
+        return info_criteria_value
 
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            reg_matrix = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-        else:
-            reg_matrix, self.ensemble = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-
-        if X is not None:
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
-
-        self.regressor_code = self.regressor_space(self.n_inputs)
-
-        if self.order_selection is True:
-            self.info_values = self.information_criterion(reg_matrix, y)
-
-        if self.n_terms is None and self.order_selection is True:
-            model_length = np.where(self.info_values == np.amin(self.info_values))
-            model_length = int(model_length[0] + 1)
-            self.n_terms = model_length
-        elif self.n_terms is None and self.order_selection is not True:
-            raise ValueError(
-                "If order_selection is False, you must define n_terms value."
-            )
-        else:
-            model_length = self.n_terms
-
-        (self.err, self.pivv, psi) = self.error_reduction_ratio(
-            reg_matrix, y, model_length
-        )
-
-        tmp_piv = self.pivv[0:model_length]
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            self.final_model = self.regressor_code[tmp_piv, :].copy()
-        elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-            basis_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-            self.final_model = self.regressor_code[tmp_piv, :].copy()
-        else:
-            self.regressor_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.final_model = self.regressor_code[tmp_piv, :].copy()
+    def lilc(self, n_theta, n_samples, e_var):
+        """Compute the LILC information criterion value.
+
+        Parameters
+        ----------
+        n_theta : int
+            Number of parameters of the model.
+        n_samples : int
+            Number of samples given the maximum lag.
+        e_var : float
+            Variance of the residuals.
+
+        Returns
+        -------
+        info_criteria_value : float
+            The computed LILC value.
+
+        """
+        model_factor = 2 * n_theta * np.log(np.log(n_samples))
+        e_factor = n_samples * np.log(e_var)
+        info_criteria_value = e_factor + model_factor
+
+        return info_criteria_value
+
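All four criterion methods added above share the residual term `n_samples * log(e_var)` and differ only in the complexity penalty, which is why the dictionary dispatch in `get_info_criteria` replaces the old if/elif chain cleanly. A standalone sketch of that pattern (illustrative only, showing two of the four criteria):

```python
import numpy as np

def aic(n_theta, n_samples, e_var):
    # penalty of 2 per estimated parameter
    return n_samples * np.log(e_var) + 2 * n_theta

def bic(n_theta, n_samples, e_var):
    # penalty of log(n_samples) per estimated parameter
    return n_samples * np.log(e_var) + n_theta * np.log(n_samples)

# dispatch table: select the criterion once, call it many times
info_criteria_options = {"aic": aic, "bic": bic}

# with unit residual variance the residual term vanishes (log 1 = 0),
# leaving only the complexity penalty
value = info_criteria_options["bic"](n_theta=3, n_samples=100, e_var=1.0)
```

Because the selected function is resolved once at construction time, the per-model-size loop in `information_criterion` no longer needs to branch on the criterion name.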
+    def fit(self, *, X=None, y=None):
+        """Fit polynomial NARMAX model.
+
+        Given two arguments, X and y, fit the model to the
+        training data.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the training process.
+        y : ndarray of floats
+            The output data to be used in the training process.
+
+        Returns
+        -------
+        model : ndarray of int
+            The model code representation.
+        piv : array-like of shape = number_of_model_elements
+            Contains the index to put the regressors in the correct order
+            based on err values.
+        theta : array-like of shape = number_of_model_elements
+            The estimated parameters of the model.
+        err : array-like of shape = number_of_model_elements
+            The respective ERR calculated for each regressor.
+        info_values : array-like of shape = n_regressor
+            Vector with values of the selected information criterion
+            for models with N terms (where N is the
+            vector position + 1).
 
-        self.theta = getattr(self, self.estimator)(psi, y)
-        if self.extended_least_squares is True:
-            self.theta = self._unbiased_estimator(
-                psi, y, self.theta, self.elag, self.max_lag, self.estimator
-            )
-        return self
+        """
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X, y)
 
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted values given an input.
-
-        The predict function allows a friendly usage by the user.
-        Given a previously trained model, predict values given
-        a new set of data.
-
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future)
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-        steps_ahead : int (default = None)
-            The user can use free run simulation, one-step ahead prediction
-            and n-step ahead prediction.
-        forecast_horizon : int, default=None
-            The number of predictions over the time.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
-
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        if steps_ahead is None:
-            yhat = self._basis_function_predict(X, y, forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        yhat = self._basis_function_n_step_prediction(
-            X, y, steps_ahead, forecast_horizon
-        )
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            reg_matrix = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+        else:
+            reg_matrix, self.ensemble = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+
+        if X is not None:
+            self.n_inputs = _num_features(X)
+        else:
+            self.n_inputs = 1  # just to create the regressor space base
+
+        self.regressor_code = self.regressor_space(self.n_inputs)
+
+        if self.order_selection is True:
+            self.info_values = self.information_criterion(reg_matrix, y)
+
+        if self.n_terms is None and self.order_selection is True:
+            model_length = np.where(self.info_values == np.amin(self.info_values))
+            model_length = int(model_length[0] + 1)
+            self.n_terms = model_length
+        elif self.n_terms is None and self.order_selection is not True:
+            raise ValueError(
+                "If order_selection is False, you must define n_terms value."
+            )
+        else:
+            model_length = self.n_terms
+
+        (self.err, self.pivv, psi) = self.error_reduction_ratio(
+            reg_matrix, y, model_length
+        )
+
+        tmp_piv = self.pivv[0:model_length]
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            self.final_model = self.regressor_code[tmp_piv, :].copy()
+        elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
+            basis_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+            self.final_model = self.regressor_code[tmp_piv, :].copy()
+        else:
+            self.regressor_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.final_model = self.regressor_code[tmp_piv, :].copy()
+
+        self.theta = getattr(self, self.estimator)(psi, y)
+        if self.extended_least_squares is True:
+            self.theta = self._unbiased_estimator(
+                psi, y, self.theta, self.elag, self.max_lag, self.estimator
+            )
+        return self
+
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted values given an input.
+
+        Given a previously trained model, predict the output values for
+        a new set of data.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            X_base = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-        else:
-            X_base, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-
-        yhat = super()._one_step_ahead_prediction(X_base)
-        return yhat.reshape(-1, 1)
-
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
+        This method accepts y values mainly for n-steps-ahead prediction
+        (to be implemented in the future).
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int, default=None
+            If None, perform a free-run simulation; if 1, perform
+            one-step-ahead prediction; if greater than 1, perform
+            n-steps-ahead prediction.
+        forecast_horizon : int, default=None
+            The number of predictions over the time.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
 
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
-        return yhat
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=0):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
-
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-
-        if self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-
-        raise Exception(
-            "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-        )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon=0):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
+        if steps_ahead is None:
+            yhat = self._basis_function_predict(X, y, forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        yhat = self._basis_function_n_step_prediction(
+            X, y, steps_ahead, forecast_horizon
+        )
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
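In every branch above, the returned series is built by prepending the first `max_lag` measured outputs to the predictions, so `yhat` stays sample-aligned with `y`. A minimal NumPy sketch with made-up values (not SysIdentPy internals):

```python
import numpy as np

max_lag = 2  # hypothetical maximum lag of the fitted model
y = np.array([[0.10], [0.20], [0.30], [0.40]])  # measured output
yhat_free = np.array([[0.29], [0.41]])          # predictions start at index max_lag

# Prepend the initial conditions so yhat lines up sample-by-sample with y
yhat = np.concatenate([y[:max_lag], yhat_free], axis=0)
print(yhat.ravel())  # first max_lag entries are the measured initial conditions
```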
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        lagged_data = self.build_matrix(X, y)
+
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            X_base = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+        else:
+            X_base, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+
+        yhat = super()._one_step_ahead_prediction(X_base)
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
 
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
-        return y_output
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = super()._nfir_predict(X, y_initial)
-        return y_output
-
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
+        return yhat
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=0):
+        """Perform the infinity steps-ahead simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Number of initial conditions values of output
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        yhat = super()._basis_function_n_step_prediction(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        yhat = super()._basis_function_n_steps_horizon(
-            X, y, steps_ahead, forecast_horizon
-        )
-        return yhat.reshape(-1, 1)
-

compute_info_value(n_theta, n_samples, e_var)

Compute the information criteria value.

This function returns the information criterion value for each number of regressors. The information criterion can be AIC, BIC, LILC, or FPE.

Parameters:

- n_theta (int, required): Number of parameters of the model.
- n_samples (int, required): Number of samples given the maximum lag.
- e_var (float, required): Variance of the residues.

Returns:

- info_criteria_value (float): The computed value given the information criteria selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon=0):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
+        return y_output
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = super()._nfir_predict(X, y_initial)
+        return y_output
+
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        yhat = super()._basis_function_predict(X, y_initial, forecast_horizon)
+        return yhat.reshape(-1, 1)
+
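`_narmax_predict` and `_basis_function_predict` resolve the effective horizon the same way: when an input `X` is supplied, its length wins; otherwise the requested free-run horizon is extended by `max_lag` to cover the initial conditions. A standalone sketch of that rule (`resolve_horizon` is a hypothetical helper, not part of the SysIdentPy API):

```python
def resolve_horizon(n_input_samples, forecast_horizon, max_lag):
    """Mirror the horizon rule: the input length wins; otherwise pad by max_lag."""
    if n_input_samples is not None:
        return n_input_samples            # X is given: predict over X.shape[0] samples
    return forecast_horizon + max_lag     # NAR free run: requested steps + initial conditions

print(resolve_horizon(100, None, 3))  # with inputs: 100
print(resolve_horizon(None, 10, 3))   # NAR free run: 13
```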
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        yhat = super()._basis_function_n_step_prediction(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
+
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        yhat = super()._basis_function_n_steps_horizon(
+            X, y, steps_ahead, forecast_horizon
+        )
+        return yhat.reshape(-1, 1)
+

aic(n_theta, n_samples, e_var)

Compute the Akaike information criteria value.

Parameters:

- n_theta (int, required): Number of parameters of the model.
- n_samples (int, required): Number of samples given the maximum lag.
- e_var (float, required): Variance of the residues.

Returns:

- info_criteria_value (float): The computed value given the information criteria selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def aic(self, n_theta, n_samples, e_var):
+    """Compute the Akaike information criteria value.
+
+    Parameters
+    ----------
+    n_theta : int
+        Number of parameters of the model.
+    n_samples : int
+        Number of samples given the maximum lag.
+    e_var : float
+        Variance of the residues
+
+    Returns
+    -------
+    info_criteria_value : float
+        The computed value given the information criteria selected by the
+        user.
+
+    """
+    model_factor = 2 * n_theta
+    e_factor = n_samples * np.log(e_var)
+    info_criteria_value = e_factor + model_factor
+
+    return info_criteria_value
+
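The `aic` method above implements AIC = n_samples * log(e_var) + 2 * n_theta: the residual-variance term rewards fit while the 2 * n_theta term penalizes model size. A standalone sketch of the same formula with made-up numbers:

```python
import numpy as np

def aic(n_theta, n_samples, e_var):
    # AIC = n_samples * log(residual variance) + 2 * number of parameters
    return n_samples * np.log(e_var) + 2 * n_theta

# Adding terms only pays off if the residual variance drops enough to
# offset the +2 penalty per extra parameter.
print(aic(n_theta=3, n_samples=100, e_var=0.50))
print(aic(n_theta=6, n_samples=100, e_var=0.45))
```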

bic(n_theta, n_samples, e_var)

Compute the Bayesian information criteria value.

Parameters:

- n_theta (int, required): Number of parameters of the model.
- n_samples (int, required): Number of samples given the maximum lag.
- e_var (float, required): Variance of the residues.

Returns:

- info_criteria_value (float): The computed value given the information criteria selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
@@ -1513,48 +1665,31 @@
def compute_info_value(self, n_theta, n_samples, e_var):
-    """Compute the information criteria value.
-
-    This function returns the information criteria concerning each
-    number of regressor. The information criteria can be AIC, BIC,
-    LILC and FPE.
-
-    Parameters
-    ----------
-    n_theta : int
-        Number of parameters of the model.
-    n_samples : int
-        Number of samples given the maximum lag.
-    e_var : float
-        Variance of the residues
def bic(self, n_theta, n_samples, e_var):
+    """Compute the Bayesian information criteria value.
 
-    Returns
-    -------
-    info_criteria_value : float
-        The computed value given the information criteria selected by the
-        user.
-
-    """
-    if self.info_criteria == "bic":
-        model_factor = n_theta * np.log(n_samples)
-    elif self.info_criteria == "fpe":
-        model_factor = n_samples * np.log(
-            (n_samples + n_theta) / (n_samples - n_theta)
-        )
-    elif self.info_criteria == "lilc":
-        model_factor = 2 * n_theta * np.log(np.log(n_samples))
-    else:  # AIC
-        model_factor = +2 * n_theta
-
-    e_factor = n_samples * np.log(e_var)
-    info_criteria_value = e_factor + model_factor
-
-    return info_criteria_value
-

error_reduction_ratio(psi, y, process_term_number)

Perform the Error Reduction Ratio (ERR) algorithm.

Parameters:

- y (array-like of shape = n_samples, required): The target data used in the identification process.
- psi (ndarray of floats, required): The information matrix of the model.
- process_term_number (int, required): Number of Process Terms defined by the user.

Returns:

- err (array-like of shape = number_of_model_elements): The respective ERR calculated for each regressor.
- piv (array-like of shape = number_of_model_elements): Contains the index to put the regressors in the correct order based on err values.
- psi_orthogonal (ndarray of floats): The updated and orthogonal information matrix.

References
  • Manuscript: Orthogonal least squares methods and their application to non-linear system identification https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
  • Manuscript (portuguese): Identificação de Sistemas não Lineares Utilizando Modelos NARMAX Polinomiais – Uma Revisão e Novos Resultados
Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
+    Parameters
+    ----------
+    n_theta : int
+        Number of parameters of the model.
+    n_samples : int
+        Number of samples given the maximum lag.
+    e_var : float
+        Variance of the residues
+
+    Returns
+    -------
+    info_criteria_value : float
+        The computed value given the information criteria selected by the
+        user.
+
+    """
+    model_factor = n_theta * np.log(n_samples)
+    e_factor = n_samples * np.log(e_var)
+    info_criteria_value = e_factor + model_factor
+
+    return info_criteria_value
+
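The `bic` method applies BIC = n_samples * log(e_var) + n_theta * log(n_samples). Since log(n) > 2 whenever n > e^2 (about 7.4 samples), BIC penalizes each extra term more heavily than AIC and tends to select sparser models. A standalone sketch:

```python
import numpy as np

def bic(n_theta, n_samples, e_var):
    # BIC = n_samples * log(residual variance) + n_theta * log(n_samples)
    return n_samples * np.log(e_var) + n_theta * np.log(n_samples)

# With 100 samples, each extra term costs log(100) ~ 4.6 instead of AIC's 2.
print(bic(n_theta=3, n_samples=100, e_var=0.50))
```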

error_reduction_ratio(psi, y, process_term_number)

Perform the Error Reduction Ratio (ERR) algorithm.

Parameters:

- y (array-like of shape = n_samples, required): The target data used in the identification process.
- psi (ndarray of floats, required): The information matrix of the model.
- process_term_number (int, required): Number of Process Terms defined by the user.

Returns:

- err (array-like of shape = number_of_model_elements): The respective ERR calculated for each regressor.
- piv (array-like of shape = number_of_model_elements): Contains the index to put the regressors in the correct order based on err values.
- psi_orthogonal (ndarray of floats): The updated and orthogonal information matrix.

References
  • Manuscript: Orthogonal least squares methods and their application to non-linear system identification https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
  • Manuscript (portuguese): Identificação de Sistemas não Lineares Utilizando Modelos NARMAX Polinomiais – Uma Revisão e Novos Resultados
Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
@@ -1619,149 +1754,77 @@
def error_reduction_ratio(self, psi, y, process_term_number):
-    """Perform the Error Reduction Ration algorithm.
-
-    Parameters
-    ----------
-    y : array-like of shape = n_samples
-        The target data used in the identification process.
-    psi : ndarray of floats
-        The information matrix of the model.
-    process_term_number : int
-        Number of Process Terms defined by the user.
-
-    Returns
-    -------
-    err : array-like of shape = number_of_model_elements
-        The respective ERR calculated for each regressor.
-    piv : array-like of shape = number_of_model_elements
-        Contains the index to put the regressors in the correct order
-        based on err values.
-    psi_orthogonal : ndarray of floats
-        The updated and orthogonal information matrix.
-
-    References
-    ----------
-    - Manuscript: Orthogonal least squares methods and their application
-       to non-linear system identification
-       https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
-    - Manuscript (portuguese): Identificação de Sistemas não Lineares
-       Utilizando Modelos NARMAX Polinomiais – Uma Revisão
-       e Novos Resultados
-
-    """
-    squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
-    tmp_psi = psi.copy()
-    y = y[self.max_lag :, 0].reshape(-1, 1)
-    tmp_y = y.copy()
-    dimension = tmp_psi.shape[1]
-    piv = np.arange(dimension)
-    tmp_err = np.zeros(dimension)
-    err = np.zeros(dimension)
-
-    for i in np.arange(0, dimension):
-        for j in np.arange(i, dimension):
-            # Add `eps` in the denominator to omit division by zero if
-            # denominator is zero
-            tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
-                np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
-            )
-
-        if i == process_term_number:
-            break
-
-        piv_index = np.argmax(tmp_err[i:]) + i
-        err[i] = tmp_err[piv_index]
-        tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
-        piv[[piv_index, i]] = piv[[i, piv_index]]
-
-        v = Orthogonalization().house(tmp_psi[i:, i])
def error_reduction_ratio(self, psi, y, process_term_number):
+    """Perform the Error Reduction Ration algorithm.
+
+    Parameters
+    ----------
+    y : array-like of shape = n_samples
+        The target data used in the identification process.
+    psi : ndarray of floats
+        The information matrix of the model.
+    process_term_number : int
+        Number of Process Terms defined by the user.
+
+    Returns
+    -------
+    err : array-like of shape = number_of_model_elements
+        The respective ERR calculated for each regressor.
+    piv : array-like of shape = number_of_model_elements
+        Contains the index to put the regressors in the correct order
+        based on err values.
+    psi_orthogonal : ndarray of floats
+        The updated and orthogonal information matrix.
+
+    References
+    ----------
+    - Manuscript: Orthogonal least squares methods and their application
+       to non-linear system identification
+       https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
+    - Manuscript (portuguese): Identificação de Sistemas não Lineares
+       Utilizando Modelos NARMAX Polinomiais – Uma Revisão
+       e Novos Resultados
+
+    """
+    squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
+    tmp_psi = psi.copy()
+    y = y[self.max_lag :, 0].reshape(-1, 1)
+    tmp_y = y.copy()
+    dimension = tmp_psi.shape[1]
+    piv = np.arange(dimension)
+    tmp_err = np.zeros(dimension)
+    err = np.zeros(dimension)
+
+    for i in np.arange(0, dimension):
+        for j in np.arange(i, dimension):
+            # Add `eps` to the denominator to avoid division by zero when
+            # the denominator is zero
+            tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
+                np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
+            )
+
+        if i == process_term_number:
+            break
+
+        piv_index = np.argmax(tmp_err[i:]) + i
+        err[i] = tmp_err[piv_index]
+        tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
+        piv[[piv_index, i]] = piv[[i, piv_index]]
 
-        row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
+        v = Orthogonalization().house(tmp_psi[i:, i])
 
-        tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
+        row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
 
-        tmp_psi[i:, i:] = np.copy(row_result)
+        tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
 
-    tmp_piv = piv[0:process_term_number]
-    psi_orthogonal = psi[:, tmp_piv]
-    return err, piv, psi_orthogonal
-

fit(*, X=None, y=None)

Fit polynomial NARMAX model.

This is an 'alpha' version of the 'fit' function which allows a friendly usage by the user. Given two arguments, X and y, fit training data.

Parameters:

- X (ndarray of floats, default None): The input data to be used in the training process.
- y (ndarray of floats, default None): The output data to be used in the training process.

Returns:

- model (ndarray of int): The model code representation.
- piv (array-like of shape = number_of_model_elements): Contains the index to put the regressors in the correct order based on err values.
- theta (array-like of shape = number_of_model_elements): The estimated parameters of the model.
- err (array-like of shape = number_of_model_elements): The respective ERR calculated for each regressor.
- info_values (array-like of shape = n_regressor): Vector with values of Akaike's information criterion for models with N terms (where N is the vector position + 1).

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
+        tmp_psi[i:, i:] = np.copy(row_result)
+
+    tmp_piv = piv[0:process_term_number]
+    psi_orthogonal = psi[:, tmp_piv]
+    return err, piv, psi_orthogonal
+
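The inner loop's scoring step divides the squared projection of each candidate column of `psi` onto the output by the column's energy times the output energy (plus `eps`), i.e. the fraction of output energy that column explains. A standalone NumPy sketch of that first-pass score with synthetic data (only the scoring, not the full Householder recursion):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(50, 1))
# Hypothetical candidate regressors: one correlated with y, one pure noise
psi = np.column_stack([
    y[:, 0] + 0.1 * rng.normal(size=50),
    rng.normal(size=50),
])
eps = np.finfo(np.float64).eps

squared_y = float(y.T @ y)
err = np.array([
    (psi[:, j] @ y[:, 0]) ** 2 / ((psi[:, j] @ psi[:, j]) * squared_y + eps)
    for j in range(psi.shape[1])
])
print(err.argmax())  # the correlated column explains the most output energy
```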

fit(*, X=None, y=None)

Fit polynomial NARMAX model.

This is an 'alpha' version of the 'fit' function which allows a friendly usage by the user. Given two arguments, X and y, fit training data.

Parameters:

- X (ndarray of floats, default None): The input data to be used in the training process.
- y (ndarray of floats, default None): The output data to be used in the training process.

Returns:

- model (ndarray of int): The model code representation.
- piv (array-like of shape = number_of_model_elements): Contains the index to put the regressors in the correct order based on err values.
- theta (array-like of shape = number_of_model_elements): The estimated parameters of the model.
- err (array-like of shape = number_of_model_elements): The respective ERR calculated for each regressor.
- info_values (array-like of shape = n_regressor): Vector with values of Akaike's information criterion for models with N terms (where N is the vector position + 1).

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
@@ -1796,224 +1859,9 @@
def fit(self, *, X=None, y=None):
-    """Fit polynomial NARMAX model.
-
-    This is an 'alpha' version of the 'fit' function which allows
-    a friendly usage by the user. Given two arguments, X and y, fit
-    training data.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the training process.
-    y : ndarray of floats
-        The output data to be used in the training process.
-
-    Returns
-    -------
-    model : ndarray of int
-        The model code representation.
-    piv : array-like of shape = number_of_model_elements
-        Contains the index to put the regressors in the correct order
-        based on err values.
-    theta : array-like of shape = number_of_model_elements
-        The estimated parameters of the model.
-    err : array-like of shape = number_of_model_elements
-        The respective ERR calculated for each regressor.
-    info_values : array-like of shape = n_regressor
-        Vector with values of akaike's information criterion
-        for models with N terms (where N is the
-        vector position + 1).
-
-    """
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    if self.model_type == "NARMAX":
-        check_X_y(X, y)
-        self.max_lag = self._get_max_lag()
-        lagged_data = self.build_input_output_matrix(X, y)
-    elif self.model_type == "NAR":
-        lagged_data = self.build_output_matrix(y)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NFIR":
-        lagged_data = self.build_input_matrix(X)
-        self.max_lag = self._get_max_lag()
-    else:
-        raise ValueError(
-            "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-        )
-
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        reg_matrix = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-    else:
-        reg_matrix, self.ensemble = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-
-    if X is not None:
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    self.regressor_code = self.regressor_space(self.n_inputs)
-
-    if self.order_selection is True:
-        self.info_values = self.information_criterion(reg_matrix, y)
-
-    if self.n_terms is None and self.order_selection is True:
-        model_length = np.where(self.info_values == np.amin(self.info_values))
-        model_length = int(model_length[0] + 1)
-        self.n_terms = model_length
-    elif self.n_terms is None and self.order_selection is not True:
-        raise ValueError(
-            "If order_selection is False, you must define n_terms value."
-        )
-    else:
-        model_length = self.n_terms
-
-    (self.err, self.pivv, psi) = self.error_reduction_ratio(
-        reg_matrix, y, model_length
-    )
-
-    tmp_piv = self.pivv[0:model_length]
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        self.final_model = self.regressor_code[tmp_piv, :].copy()
-    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
-        basis_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-        self.final_model = self.regressor_code[tmp_piv, :].copy()
-    else:
-        self.regressor_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.final_model = self.regressor_code[tmp_piv, :].copy()
-
-    self.theta = getattr(self, self.estimator)(psi, y)
-    if self.extended_least_squares is True:
-        self.theta = self._unbiased_estimator(
-            psi, y, self.theta, self.elag, self.max_lag, self.estimator
-        )
-    return self
-

information_criterion(X_base, y)

Determine the model order.

This function uses an information criterion to determine the model size. 'Akaike': Akaike's Information Criterion with critical value 2 (AIC, default). 'Bayes': Bayes Information Criterion (BIC). 'FPE': Final Prediction Error (FPE). 'LILC': Khundrin's law of iterated logarithm criterion (LILC).

Parameters:

- y (array-like of shape = n_samples, required): Target values of the system.
- X_base (array-like of shape = n_samples, required): Input system values measured by the user.

Returns:

- output_vector (array-like of shape = n_regressor): Vector with values of Akaike's information criterion for models with N terms (where N is the vector position + 1).

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def information_criterion(self, X_base, y):
-    """Determine the model order.
-
-    This function uses a information criterion to determine the model size.
-    'Akaike'-  Akaike's Information Criterion with
-               critical value 2 (AIC) (default).
-    'Bayes' -  Bayes Information Criterion (BIC).
-    'FPE'   -  Final Prediction Error (FPE).
-    'LILC'  -  Khundrin’s law ofiterated logarithm criterion (LILC).
-
-    Parameters
-    ----------
-    y : array-like of shape = n_samples
-        Target values of the system.
-    X_base : array-like of shape = n_samples
-        Input system values measured by the user.
-
-    Returns
-    -------
-    output_vector : array-like of shape = n_regressor
-        Vector with values of akaike's information criterion
-        for models with N terms (where N is the
-        vector position + 1).
-
-    """
-    if self.n_info_values is not None and self.n_info_values > X_base.shape[1]:
-        self.n_info_values = X_base.shape[1]
-        warnings.warn(
-            (
-                "n_info_values is greater than the maximum number of all"
-                " regressors space considering the chosen y_lag, u_lag, and"
-                f" non_degree. We set as {X_base.shape[1]}"
-            ),
-            stacklevel=2,
-        )
-
-    output_vector = np.zeros(self.n_info_values)
-    output_vector[:] = np.nan
-
-    n_samples = len(y) - self.max_lag
-
-    for i in range(0, self.n_info_values):
-        n_theta = i + 1
-        regressor_matrix = self.error_reduction_ratio(X_base, y, n_theta)[2]
-
-        tmp_theta = getattr(self, self.estimator)(regressor_matrix, y)
-
-        tmp_yhat = np.dot(regressor_matrix, tmp_theta)
-        tmp_residual = y[self.max_lag :] - tmp_yhat
-        e_var = np.var(tmp_residual, ddof=1)
-
-        output_vector[i] = self.compute_info_value(n_theta, n_samples, e_var)
-
-    return output_vector
-
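`information_criterion` fills `output_vector` with one criterion value per candidate model size, and `fit` then chooses `n_terms` as the position of the minimum plus one. A sketch of that selection step with hypothetical criterion values:

```python
import numpy as np

# Hypothetical information-criterion values for models with 1..6 terms
info_values = np.array([12.0, 4.5, 1.2, 1.5, 1.9, 2.4])

# The selected model size is the (position + 1) of the minimum value
model_length = int(np.where(info_values == np.amin(info_values))[0] + 1)
print(model_length)  # 3
```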

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict function allows a friendly usage by the user. Given a previously trained model, predict values given a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future)

Parameters:

- X (ndarray of floats, default None): The input data to be used in the prediction process.
- y (ndarray of floats, default None): The output data to be used in the prediction process.
- steps_ahead (int, default None): The user can choose free run simulation, one-step-ahead prediction, or n-step-ahead prediction.
- forecast_horizon (int, default None): The number of predictions over time.

Returns:

- yhat (ndarray of floats): The predicted values of the model.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
@@ -2069,61 +1917,439 @@
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted values given an input.
-
-    The predict function allows a friendly usage by the user.
-    Given a previously trained model, predict values given
-    a new set of data.
-
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future)
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-    steps_ahead : int (default = None)
-        The user can use free run simulation, one-step ahead prediction
-        and n-step ahead prediction.
-    forecast_horizon : int, default=None
-        The number of predictions over the time.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
-
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    if steps_ahead is None:
-        yhat = self._basis_function_predict(X, y, forecast_horizon)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-    if steps_ahead == 1:
-        yhat = self._one_step_ahead_prediction(X, y)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    yhat = self._basis_function_n_step_prediction(
-        X, y, steps_ahead, forecast_horizon
-    )
-    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-    return yhat
def fit(self, *, X=None, y=None):
+    """Fit polynomial NARMAX model.
+
+    This is an 'alpha' version of the 'fit' function, which allows
+    a friendly usage by the user. Given two arguments, X and y, it
+    fits the model to the training data.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the training process.
+    y : ndarray of floats
+        The output data to be used in the training process.
+
+    Returns
+    -------
+    model : ndarray of int
+        The model code representation.
+    piv : array-like of shape = number_of_model_elements
+        Contains the index to put the regressors in the correct order
+        based on err values.
+    theta : array-like of shape = number_of_model_elements
+        The estimated parameters of the model.
+    err : array-like of shape = number_of_model_elements
+        The respective ERR calculated for each regressor.
+    info_values : array-like of shape = n_regressor
+        Vector with the values of Akaike's information criterion
+        for models with N terms (where N is the
+        vector position + 1).
+
+    """
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    self.max_lag = self._get_max_lag()
+    lagged_data = self.build_matrix(X, y)
+
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        reg_matrix = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+    else:
+        reg_matrix, self.ensemble = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+
+    if X is not None:
+        self.n_inputs = _num_features(X)
+    else:
+        self.n_inputs = 1  # just to create the regressor space base
+
+    self.regressor_code = self.regressor_space(self.n_inputs)
+
+    if self.order_selection is True:
+        self.info_values = self.information_criterion(reg_matrix, y)
+
+    if self.n_terms is None and self.order_selection is True:
+        model_length = np.where(self.info_values == np.amin(self.info_values))
+        model_length = int(model_length[0] + 1)
+        self.n_terms = model_length
+    elif self.n_terms is None and self.order_selection is not True:
+        raise ValueError(
+            "If order_selection is False, you must define n_terms value."
+        )
+    else:
+        model_length = self.n_terms
+
+    (self.err, self.pivv, psi) = self.error_reduction_ratio(
+        reg_matrix, y, model_length
+    )
+
+    tmp_piv = self.pivv[0:model_length]
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        self.final_model = self.regressor_code[tmp_piv, :].copy()
+    elif self.basis_function.__class__.__name__ != "Polynomial" and self.ensemble:
+        basis_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+        self.final_model = self.regressor_code[tmp_piv, :].copy()
+    else:
+        self.regressor_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.final_model = self.regressor_code[tmp_piv, :].copy()
+
+    self.theta = getattr(self, self.estimator)(psi, y)
+    if self.extended_least_squares is True:
+        self.theta = self._unbiased_estimator(
+            psi, y, self.theta, self.elag, self.max_lag, self.estimator
+        )
+    return self
+
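As the `fit` docstring above describes, when `order_selection` is enabled the number of terms is taken as the position of the smallest information-criterion value plus one. A minimal standalone sketch of that rule (the `info_values` below are made up for illustration):

```python
# Hypothetical criterion values for models with 1..6 terms; in FROLS
# these come from self.information_criterion(reg_matrix, y).
info_values = [120.4, 95.1, 80.3, 78.9, 79.5, 81.2]

# n_terms is the 1-based position of the minimum, mirroring
# model_length = np.where(info_values == np.amin(info_values))[0] + 1
n_terms = min(range(len(info_values)), key=info_values.__getitem__) + 1
print(n_terms)  # -> 4
```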

fpe(n_theta, n_samples, e_var)

Compute the Final Prediction Error (FPE) value.

Parameters:

n_theta : int (required)
    Number of parameters of the model.
n_samples : int (required)
    Number of samples given the maximum lag.
e_var : float (required)
    Variance of the residuals.

Returns:

info_criteria_value : float
    The computed value given the information criterion selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def fpe(self, n_theta, n_samples, e_var):
+    """Compute the Final Error Prediction value.
+
+    Parameters
+    ----------
+    n_theta : int
+        Number of parameters of the model.
+    n_samples : int
+        Number of samples given the maximum lag.
+    e_var : float
+        Variance of the residuals
+
+    Returns
+    -------
+    info_criteria_value : float
+        The computed value given the information criterion selected by the
+        user.
+
+    """
+    model_factor = n_samples * np.log((n_samples + n_theta) / (n_samples - n_theta))
+    e_factor = n_samples * np.log(e_var)
+    info_criteria_value = e_factor + model_factor
+
+    return info_criteria_value
+
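The FPE formula above can be checked with a few lines of plain Python (stdlib `math` only); the numbers are illustrative:

```python
import math

def fpe(n_theta, n_samples, e_var):
    # n*ln(e_var) + n*ln((n + p) / (n - p)), as implemented above
    model_factor = n_samples * math.log((n_samples + n_theta) / (n_samples - n_theta))
    e_factor = n_samples * math.log(e_var)
    return e_factor + model_factor

# With the same residual variance, a model with more parameters
# gets a larger (worse) FPE value:
print(fpe(2, 100, 0.5) < fpe(5, 100, 0.5))  # -> True
```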

get_info_criteria(info_criteria)

Return the information criterion function selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def get_info_criteria(self, info_criteria):
+    """get info criteria"""
+    info_criteria_options = {
+        "aic": self.aic,
+        "bic": self.bic,
+        "fpe": self.fpe,
+        "lilc": self.lilc,
+    }
+    return info_criteria_options.get(info_criteria)
+
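`get_info_criteria` uses the dictionary-dispatch pattern that replaced the earlier if/elif/else chains. A standalone sketch with placeholder callables (the lambdas stand in for the real `aic`/`bic`/`fpe`/`lilc` methods):

```python
# Placeholder callables stand in for the real criterion methods.
info_criteria_options = {
    "aic": lambda: "aic selected",
    "bic": lambda: "bic selected",
}

chosen = info_criteria_options.get("bic")
print(chosen())  # -> bic selected

# dict.get returns None for an unknown key instead of raising KeyError
print(info_criteria_options.get("unknown"))  # -> None
```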

information_criterion(X_base, y)

Determine the model order.

This function uses an information criterion to determine the model size: 'Akaike' - Akaike's Information Criterion with critical value 2 (AIC, the default); 'Bayes' - Bayes Information Criterion (BIC); 'FPE' - Final Prediction Error (FPE); 'LILC' - Khinchin's law of iterated logarithm criterion (LILC).

Parameters:

y : array-like of shape = n_samples (required)
    Target values of the system.
X_base : array-like of shape = n_samples (required)
    Input system values measured by the user.

Returns:

output_vector : array-like of shape = n_regressor
    Vector with the values of Akaike's information criterion for models with N terms (where N is the vector position + 1).

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def information_criterion(self, X_base, y):
+    """Determine the model order.
+
+    This function uses an information criterion to determine the model size.
+    'Akaike'-  Akaike's Information Criterion with
+               critical value 2 (AIC) (default).
+    'Bayes' -  Bayes Information Criterion (BIC).
+    'FPE'   -  Final Prediction Error (FPE).
+    'LILC'  -  Khinchin's law of iterated logarithm criterion (LILC).
+
+    Parameters
+    ----------
+    y : array-like of shape = n_samples
+        Target values of the system.
+    X_base : array-like of shape = n_samples
+        Input system values measured by the user.
+
+    Returns
+    -------
+    output_vector : array-like of shape = n_regressor
+        Vector with the values of Akaike's information criterion
+        for models with N terms (where N is the
+        vector position + 1).
+
+    """
+    if self.n_info_values is not None and self.n_info_values > X_base.shape[1]:
+        self.n_info_values = X_base.shape[1]
+        warnings.warn(
+            (
+                "n_info_values is greater than the maximum number of all"
+                " regressors space considering the chosen y_lag, u_lag, and"
+                f" non_degree. We set as {X_base.shape[1]}"
+            ),
+            stacklevel=2,
+        )
+
+    output_vector = np.zeros(self.n_info_values)
+    output_vector[:] = np.nan
+
+    n_samples = len(y) - self.max_lag
+
+    for i in range(0, self.n_info_values):
+        n_theta = i + 1
+        regressor_matrix = self.error_reduction_ratio(X_base, y, n_theta)[2]
+
+        tmp_theta = getattr(self, self.estimator)(regressor_matrix, y)
+
+        tmp_yhat = np.dot(regressor_matrix, tmp_theta)
+        tmp_residual = y[self.max_lag :] - tmp_yhat
+        e_var = np.var(tmp_residual, ddof=1)
+
+        # output_vector[i] = self.compute_info_value(n_theta, n_samples, e_var)
+        output_vector[i] = self.info_criteria_function(n_theta, n_samples, e_var)
+
+    return output_vector
+
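The loop above evaluates the chosen criterion for models of growing size, and the smallest value wins. A toy sketch with an AIC-style penalty of 2 per parameter (an assumed form consistent with "critical value 2" above; the residual variances are made up, whereas the real method obtains them by refitting each sub-model):

```python
import math

def aic(n_theta, n_samples, e_var):
    # AIC with critical value 2: n*ln(e_var) + 2*p (assumed form)
    return n_samples * math.log(e_var) + 2 * n_theta

# Hypothetical residual variances for models with 1..4 regressors:
e_vars = [1.00, 0.40, 0.35, 0.349]
n_samples = 200
output_vector = [aic(i + 1, n_samples, v) for i, v in enumerate(e_vars)]

# The 4th regressor barely improves the fit, so the penalty wins:
best_size = min(range(len(output_vector)), key=output_vector.__getitem__) + 1
print(best_size)  # -> 3
```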

lilc(n_theta, n_samples, e_var)

Compute the LILC information criterion value.

Parameters:

n_theta : int (required)
    Number of parameters of the model.
n_samples : int (required)
    Number of samples given the maximum lag.
e_var : float (required)
    Variance of the residuals.

Returns:

info_criteria_value : float
    The computed value given the information criterion selected by the user.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def lilc(self, n_theta, n_samples, e_var):
+    """Compute the Lilc information criteria value.
+
+    Parameters
+    ----------
+    n_theta : int
+        Number of parameters of the model.
+    n_samples : int
+        Number of samples given the maximum lag.
+    e_var : float
+        Variance of the residuals
+
+    Returns
+    -------
+    info_criteria_value : float
+        The computed value given the information criterion selected by the
+        user.
+
+    """
+    model_factor = 2 * n_theta * np.log(np.log(n_samples))
+    e_factor = n_samples * np.log(e_var)
+    info_criteria_value = e_factor + model_factor
+
+    return info_criteria_value
+
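A quick sanity check of the LILC formula above (plain Python, illustrative numbers): the 2·p·ln(ln(n)) term penalizes extra parameters when the residual variance is unchanged.

```python
import math

def lilc(n_theta, n_samples, e_var):
    # n*ln(e_var) + 2*p*ln(ln(n)), as implemented above
    return n_samples * math.log(e_var) + 2 * n_theta * math.log(math.log(n_samples))

# Same fit quality, more parameters -> larger (worse) LILC value:
print(lilc(5, 100, 0.5) > lilc(2, 100, 0.5))  # -> True
```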

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict function allows a friendly usage by the user: given a previously trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Parameters:

X : ndarray of floats, default = None
    The input data to be used in the prediction process.
y : ndarray of floats, default = None
    The output data to be used in the prediction process.
steps_ahead : int, default = None
    The user can use free run simulation, one-step ahead prediction
    and n-step ahead prediction.
forecast_horizon : int, default = None
    The number of predictions over time.

Returns:

yhat : ndarray of floats
    The predicted values of the model.

Source code in sysidentpy\model_structure_selection\forward_regression_orthogonal_least_squares.py
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input.
+
+    The predict function allows a friendly usage by the user.
+    Given a previously trained model, predict values given
+    a new set of data.
+
+    This method accepts y values mainly for prediction n-steps ahead
+    (to be implemented in the future)
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+    steps_ahead : int (default = None)
+        The user can use free run simulation, one-step ahead prediction
+        and n-step ahead prediction.
+    forecast_horizon : int, default=None
+        The number of predictions over the time.
+
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    if steps_ahead is None:
+        yhat = self._basis_function_predict(X, y, forecast_horizon)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+    if steps_ahead == 1:
+        yhat = self._one_step_ahead_prediction(X, y)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    yhat = self._basis_function_n_step_prediction(
+        X, y, steps_ahead, forecast_horizon
+    )
+    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+    return yhat
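Every branch of `predict` above ends with `np.concatenate([y[: self.max_lag], yhat], axis=0)`: the first `max_lag` measured samples are copied in front of the forecast so `yhat` lines up with `y`. A list-based sketch (the forecast values are made up):

```python
max_lag = 2
y = [1.0, 2.0, 3.0, 4.0, 5.0]      # measured output
forecast = [2.9, 4.1, 4.8]         # hypothetical predictions for t >= max_lag

# Mirror of np.concatenate([y[:max_lag], yhat]) using plain lists:
yhat = y[:max_lag] + forecast
print(yhat)                 # -> [1.0, 2.0, 2.9, 4.1, 4.8]
print(len(yhat) == len(y))  # free-run prediction keeps the original length
```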
 
\ No newline at end of file
diff --git a/docs/code/general-estimators/index.html b/docs/code/general-estimators/index.html index e494da0e..78910c01 100644 --- a/docs/code/general-estimators/index.html +++ b/docs/code/general-estimators/index.html @@ -614,50 +614,7 @@
class NARX(BaseMSS):
     """NARX model build on top of general estimators
 
     Currently it is possible to use any estimator that has a fit/predict
@@ -731,550 +688,501 @@
     ):
         self.basis_function = basis_function
         self.model_type = model_type
-        self.non_degree = basis_function.degree
-        self.ylag = ylag
-        self.xlag = xlag
-        self.max_lag = self._get_max_lag()
-        self.base_estimator = base_estimator
-        if fit_params is None:
-            fit_params = {}
-
-        self.fit_params = fit_params
-        self.ensemble = None
-        self.n_inputs = None
-        self.regressor_code = None
-        self._validate_params()
-
-    def _validate_params(self):
-        """Validate input params."""
-        if isinstance(self.ylag, int) and self.ylag < 1:
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-        if isinstance(self.xlag, int) and self.xlag < 1:
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.xlag, (int, list)):
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.ylag, (int, list)):
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-    def fit(self, *, X=None, y=None):
-        """Train a NARX Neural Network model.
-
-        This is an training pipeline that allows a friendly usage
-        by the user. All the lagged features are built using the
-        SysIdentPy classes and we use the fit method of the base
-        estimator of the sklearn to fit the model.
-
-        Parameters
-        ----------
-        X : ndarrays of floats
-            The input data to be used in the training process.
-        y : ndarrays of floats
-            The output data to be used in the training process.
-
-        Returns
-        -------
-        base_estimator : sklearn estimator
-            The model fitted.
-
-        """
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NARMAX":
-            check_X_y(X, y)
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            reg_matrix = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.non_degree = basis_function.degree
+        self.ylag = ylag
+        self.xlag = xlag
+        self.max_lag = self._get_max_lag()
+        self.base_estimator = base_estimator
+        if fit_params is None:
+            fit_params = {}
+
+        self.fit_params = fit_params
+        self.ensemble = None
+        self.n_inputs = None
+        self.regressor_code = None
+        self._validate_params()
+
+    def _validate_params(self):
+        """Validate input params."""
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+            )
+
+    def fit(self, *, X=None, y=None):
+        """Train a NARX Neural Network model.
+
+        This is a training pipeline that allows a friendly usage
+        by the user. All the lagged features are built using the
+        SysIdentPy classes and we use the fit method of the base
+        estimator of the sklearn to fit the model.
+
+        Parameters
+        ----------
+        X : ndarrays of floats
+            The input data to be used in the training process.
+        y : ndarrays of floats
+            The output data to be used in the training process.
+
+        Returns
+        -------
+        base_estimator : sklearn estimator
+            The model fitted.
+
+        """
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X, y)
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            reg_matrix = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+        else:
+            reg_matrix, self.ensemble = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+
+        if X is not None:
+            self.n_inputs = _num_features(X)
         else:
-            reg_matrix, self.ensemble = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-
-        if X is not None:
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
+            self.n_inputs = 1  # just to create the regressor space base
+
+        self.regressor_code = self.regressor_space(self.n_inputs)
+        self.final_model = self.regressor_code
+        y = y[self.max_lag :].ravel()
+
+        self.base_estimator.fit(reg_matrix, y, **self.fit_params)
+        return self
 
-        self.regressor_code = self.regressor_space(self.n_inputs)
-        self.final_model = self.regressor_code
-        y = y[self.max_lag :].ravel()
-
-        self.base_estimator.fit(reg_matrix, y, **self.fit_params)
-        return self
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted given an input and initial values.
+
+        The predict function allows a friendly usage by the user.
+        Given a trained model, predict values given
+        a new set of data.
 
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted given an input and initial values.
+        This method accepts y values mainly for prediction n-steps ahead
+        (to be implemented in the future).
 
-        The predict function allows a friendly usage by the user.
-        Given a trained model, predict values given
-        a new set of data.
-
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future).
-
-        Currently we only support infinity-steps-ahead prediction,
-        but run 1-step-ahead prediction manually is straightforward.
+        Currently we only support infinity-steps-ahead prediction,
+        but running 1-step-ahead prediction manually is straightforward.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
 
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
 
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        if steps_ahead is None:
-            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        yhat = self._basis_function_n_step_prediction(
-            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-        )
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        if steps_ahead is None:
+            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        yhat = self._basis_function_n_step_prediction(
+            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+        )
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        lagged_data = self.build_matrix(X, y)
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            X_base = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                # predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+        else:
+            X_base, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                # predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+
+        yhat = self.base_estimator.predict(X_base)
+        # yhat = np.concatenate([y[: self.max_lag].flatten(), yhat])  # delete this one
+        return yhat.reshape(-1, 1)
 
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            X_base = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                # predefined_regressors=self.pivv[: len(self.final_model)],
+    def _nar_step_ahead(self, y, steps_ahead):
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
             )
-        else:
-            X_base, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                # predefined_regressors=self.pivv[: len(self.final_model)],
-            )
+
+        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+        yhat = np.zeros(len(y) + steps_ahead, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        i = self.max_lag
 
-        yhat = self.base_estimator.predict(X_base)
-        # yhat = np.concatenate([y[: self.max_lag].flatten(), yhat])  # delete this one
-        return yhat.reshape(-1, 1)
-
-    def _nar_step_ahead(self, y, steps_ahead):
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
+        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+        if len(steps) > 1:
+            for step in steps[:-1]:
+                yhat[i : i + steps_ahead] = self._model_prediction(
+                    X=None, y_initial=y[step:i], forecast_horizon=steps_ahead
+                )[-steps_ahead:].ravel()
+                i += steps_ahead
 
-        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-        yhat = np.zeros(len(y) + steps_ahead, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-
-        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-        if len(steps) > 1:
-            for step in steps[:-1]:
-                yhat[i : i + steps_ahead] = self._model_prediction(
-                    X=None, y_initial=y[step:i], forecast_horizon=steps_ahead
-                )[-steps_ahead:].ravel()
-                i += steps_ahead
-
-            steps_ahead = np.sum(np.isnan(yhat))
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=None, y_initial=y[steps[-1] : i]
-            )[-steps_ahead:].ravel()
-        else:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=None, y_initial=y[0:i], forecast_horizon=steps_ahead
-            )[-steps_ahead:].ravel()
-
-        yhat = yhat.ravel()[self.max_lag : :]
-        return yhat.reshape(-1, 1)
-
-    def narmax_n_step_ahead(self, X, y, steps_ahead):
-        """n_steps ahead prediction method for NARMAX model"""
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-        X = X.reshape(-1, self.n_inputs)
-        yhat = np.zeros(X.shape[0], dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-        if len(steps) > 1:
-            for step in steps[:-1]:
-                yhat[i : i + steps_ahead] = self._model_prediction(
-                    X=X[step : i + steps_ahead],
-                    y_initial=y[step:i],
-                )[-steps_ahead:].ravel()
-                i += steps_ahead
+            steps_ahead = np.sum(np.isnan(yhat))
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=None, y_initial=y[steps[-1] : i]
+            )[-steps_ahead:].ravel()
+        else:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=None, y_initial=y[0:i], forecast_horizon=steps_ahead
+            )[-steps_ahead:].ravel()
+
+        yhat = yhat.ravel()[self.max_lag : :]
+        return yhat.reshape(-1, 1)
+
+    def narmax_n_step_ahead(self, X, y, steps_ahead):
+        """n_steps ahead prediction method for NARMAX model"""
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+        X = X.reshape(-1, self.n_inputs)
+        yhat = np.zeros(X.shape[0], dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        i = self.max_lag
+        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+        if len(steps) > 1:
+            for step in steps[:-1]:
+                yhat[i : i + steps_ahead] = self._model_prediction(
+                    X=X[step : i + steps_ahead],
+                    y_initial=y[step:i],
+                )[-steps_ahead:].ravel()
+                i += steps_ahead
+
+            steps_ahead = np.sum(np.isnan(yhat))
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[steps[-1] : i + steps_ahead],
+                y_initial=y[steps[-1] : i],
+            )[-steps_ahead:].ravel()
+        else:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[0 : i + steps_ahead],
+                y_initial=y[0:i],
+            )[-steps_ahead:].ravel()
 
-            steps_ahead = np.sum(np.isnan(yhat))
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[steps[-1] : i + steps_ahead],
-                y_initial=y[steps[-1] : i],
-            )[-steps_ahead:].ravel()
-        else:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[0 : i + steps_ahead],
-                y_initial=y[0:i],
-            )[-steps_ahead:].ravel()
-
-        yhat = yhat.ravel()[self.max_lag : :]
-        return yhat.reshape(-1, 1)
+        yhat = yhat.ravel()[self.max_lag : :]
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial condition values of the model
+            used to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in the model simulation.
 
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        yhat = np.zeros(X.shape[0], dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-        X = X.reshape(-1, self.n_inputs)
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X[k : i + steps_ahead], y[k : i + steps_ahead]
-            )[-steps_ahead:].ravel()
-
-            i += steps_ahead
-
-        yhat = yhat.ravel()
-        return yhat.reshape(-1, 1)
-        """
-        if self.model_type == "NARMAX":
-            return self.narmax_n_step_ahead(X, y, steps_ahead)
-
-        if self.model_type == "NAR":
-            return self._nar_step_ahead(y, steps_ahead)
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
-
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        if self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-
-        raise Exception(
-            "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-        )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        y_output = np.zeros(forecast_horizon, dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
-
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, forecast_horizon):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            raw_regressor[:final] = y_output[k:i]
-            for j in range(self.n_inputs):
-                init += self.max_lag
-                final += self.max_lag
-                raw_regressor[init:final] = X[k:i, j]
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        if self.model_type == "NARMAX":
+            return self.narmax_n_step_ahead(X, y, steps_ahead)
+
+        if self.model_type == "NAR":
+            return self._nar_step_ahead(y, steps_ahead)
+
+        raise ValueError(
+            f"model_type must be NARMAX or NAR. Got {self.model_type}"
+        )
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinity-steps-ahead (free-run) simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Initial condition values of the output
+            used to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
+
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        y_output = np.zeros(forecast_horizon, dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, forecast_horizon):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            raw_regressor[:final] = y_output[k:i]
+            for j in range(self.n_inputs):
+                init += self.max_lag
+                final += self.max_lag
+                raw_regressor[init:final] = X[k:i, j]
+
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            y_output[i] = self.base_estimator.predict(regressor_value.reshape(1, -1))
+        return y_output[self.max_lag : :].reshape(-1, 1)
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = np.zeros(X.shape[0], dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+        X = X.reshape(-1, self.n_inputs)
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, X.shape[0]):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            raw_regressor[:final] = y_output[k:i]
+            for j in range(self.n_inputs):
+                init += self.max_lag
+                final += self.max_lag
+                raw_regressor[init:final] = X[k:i, j]
+
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            y_output[i] = self.base_estimator.predict(regressor_value.reshape(1, -1))
+        return y_output[self.max_lag : :].reshape(-1, 1)
 
-            y_output[i] = self.base_estimator.predict(regressor_value.reshape(1, -1))
-        return y_output[self.max_lag : :].reshape(-1, 1)
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = np.zeros(X.shape[0], dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
-        X = X.reshape(-1, self.n_inputs)
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, X.shape[0]):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            raw_regressor[:final] = y_output[k:i]
-            for j in range(self.n_inputs):
-                init += self.max_lag
-                final += self.max_lag
-                raw_regressor[init:final] = X[k:i, j]
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
+
+        analyzed_elements_number = self.max_lag + 1
+
+        for i in range(0, forecast_horizon - self.max_lag):
+            lagged_data = self.build_matrix(
+                X[i : i + analyzed_elements_number],
+                yhat[i : i + analyzed_elements_number].reshape(-1, 1),
+            )
+            X_tmp, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                # predefined_regressors=self.pivv[: len(self.final_model)],
+            )
 
-            y_output[i] = self.base_estimator.predict(regressor_value.reshape(1, -1))
-        return y_output[self.max_lag : :].reshape(-1, 1)
+            prediction = self.base_estimator.predict(X_tmp)
+            yhat[i + self.max_lag] = prediction[0]
 
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
+        return yhat[self.max_lag :].reshape(-1, 1)
+
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial condition values of the model
+            used to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
 
-        analyzed_elements_number = self.max_lag + 1
-
-        for i in range(0, forecast_horizon - self.max_lag):
-            if self.model_type == "NARMAX":
-                lagged_data = self.build_input_output_matrix(
-                    X[i : i + analyzed_elements_number],
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1),
-                )
-            elif self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1)
-                )
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(
-                    X[i : i + analyzed_elements_number]
-                )
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            X_tmp, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                # predefined_regressors=self.pivv[: len(self.final_model)],
-            )
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+
+        i = self.max_lag
+
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
 
-            a = self.base_estimator.predict(X_tmp)
-            yhat[i + self.max_lag] = a[0]
-
-        return yhat[self.max_lag :].reshape(-1, 1)
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
+            if self.model_type == "NARMAX":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X[k : i + steps_ahead],
+                    y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-steps_ahead:].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-steps_ahead:].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            i += steps_ahead
 
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        return yhat[self.max_lag : :].reshape(-1, 1)
+
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+
+        i = self.max_lag
 
-        i = self.max_lag
-
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead],
-                    y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-steps_ahead:].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-steps_ahead:].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
-
-        return yhat[self.max_lag : :].reshape(-1, 1)
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-
-        i = self.max_lag
-
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead],
-                    y[k : i + steps_ahead],
-                    forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
-
-        yhat = yhat.ravel()
-        return yhat[self.max_lag : :].reshape(-1, 1)
-

fit(*, X=None, y=None)

Train a NARX Neural Network model.

This is a training pipeline that allows friendly usage. All the lagged features are built using the SysIdentPy classes, and the model is fitted using the fit method of the scikit-learn base estimator.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarrays of floats | The input data to be used in the training process. | None |
| y | ndarrays of floats | The output data to be used in the training process. | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| base_estimator | sklearn estimator | The fitted model. |
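The steps above (build lagged features, then fit the base estimator) can be sketched in plain NumPy. The helper function, toy data, and coefficients below are hypothetical illustrations, with ordinary least squares standing in for the scikit-learn base estimator:

```python
import numpy as np

def build_lagged_matrix(x, y, lag):
    """Build regressor rows [y(k-1)..y(k-lag), x(k-1)..x(k-lag)] for k >= lag."""
    rows = []
    for k in range(lag, len(y)):
        rows.append(np.concatenate([y[k - lag:k][::-1], x[k - lag:k][::-1]]))
    return np.array(rows)

# Toy first-order process: y(k) = 0.5*y(k-1) + 0.8*x(k-1) + small noise
rng = np.random.default_rng(42)
x = rng.normal(size=300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = 0.5 * y[k - 1] + 0.8 * x[k - 1] + 0.01 * rng.normal()

lag = 1
reg_matrix = build_lagged_matrix(x, y, lag)  # the lagged features fit() builds
target = y[lag:]                             # mirrors y = y[self.max_lag:].ravel()
# least squares standing in for base_estimator.fit(reg_matrix, y)
theta, *_ = np.linalg.lstsq(reg_matrix, target, rcond=None)
# theta approximately recovers [0.5, 0.8]
```

Any scikit-learn regressor exposing fit/predict could replace the least-squares step; that interchangeability is what the pipeline relies on.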

Source code in sysidentpy\general_estimators\narx.py
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
+
+            if self.model_type == "NARMAX":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X[k : i + steps_ahead],
+                    y[k : i + steps_ahead],
+                    forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            i += steps_ahead
+
+        yhat = yhat.ravel()
+        return yhat[self.max_lag : :].reshape(-1, 1)
+

fit(*, X=None, y=None)

Train a NARX Neural Network model.

This is a training pipeline that allows friendly usage. All the lagged features are built using the SysIdentPy classes, and the model is fitted using the fit method of the scikit-learn base estimator.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarrays of floats | The input data to be used in the training process. | None |
| y | ndarrays of floats | The output data to be used in the training process. | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| base_estimator | sklearn estimator | The fitted model. |

Source code in sysidentpy\general_estimators\narx.py
@@ -1319,73 +1227,67 @@
def fit(self, *, X=None, y=None):
-    """Train a NARX Neural Network model.
-
-    This is an training pipeline that allows a friendly usage
-    by the user. All the lagged features are built using the
-    SysIdentPy classes and we use the fit method of the base
-    estimator of the sklearn to fit the model.
-
-    Parameters
-    ----------
-    X : ndarrays of floats
-        The input data to be used in the training process.
-    y : ndarrays of floats
-        The output data to be used in the training process.
-
-    Returns
-    -------
-    base_estimator : sklearn estimator
-        The model fitted.
-
-    """
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    if self.model_type == "NAR":
-        lagged_data = self.build_output_matrix(y)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NFIR":
-        lagged_data = self.build_input_matrix(X)
-        self.max_lag = self._get_max_lag()
-    elif self.model_type == "NARMAX":
-        check_X_y(X, y)
-        self.max_lag = self._get_max_lag()
-        lagged_data = self.build_input_output_matrix(X, y)
-    else:
-        raise ValueError(
-            "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-        )
-
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        reg_matrix = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
def fit(self, *, X=None, y=None):
+    """Train a NARX Neural Network model.
+
+    This is an training pipeline that allows a friendly usage
+    by the user. All the lagged features are built using the
+    SysIdentPy classes and we use the fit method of the base
+    estimator of the sklearn to fit the model.
+
+    Parameters
+    ----------
+    X : ndarrays of floats
+        The input data to be used in the training process.
+    y : ndarrays of floats
+        The output data to be used in the training process.
+
+    Returns
+    -------
+    base_estimator : sklearn estimator
+        The model fitted.
+
+    """
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    self.max_lag = self._get_max_lag()
+    lagged_data = self.build_matrix(X, y)
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        reg_matrix = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+    else:
+        reg_matrix, self.ensemble = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+
+    if X is not None:
+        self.n_inputs = _num_features(X)
     else:
-        reg_matrix, self.ensemble = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-
-    if X is not None:
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    self.regressor_code = self.regressor_space(self.n_inputs)
-    self.final_model = self.regressor_code
-    y = y[self.max_lag :].ravel()
-
-    self.base_estimator.fit(reg_matrix, y, **self.fit_params)
-    return self
-

narmax_n_step_ahead(X, y, steps_ahead)

n-steps-ahead prediction method for NARMAX models

Source code in sysidentpy\general_estimators\narx.py
+        self.n_inputs = 1  # just to create the regressor space base
+
+    self.regressor_code = self.regressor_space(self.n_inputs)
+    self.final_model = self.regressor_code
+    y = y[self.max_lag :].ravel()
+
+    self.base_estimator.fit(reg_matrix, y, **self.fit_params)
+    return self
+

narmax_n_step_ahead(X, y, steps_ahead)

n-steps-ahead prediction method for NARMAX models
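The idea behind narmax_n_step_ahead can be sketched in plain NumPy: simulate the model freely for steps_ahead samples, then re-anchor on measured outputs before the next chunk. The first-order model and function below are hypothetical stand-ins, not SysIdentPy's API:

```python
import numpy as np

def n_step_ahead(y_measured, a, steps_ahead, max_lag=1):
    """Predict y(k) = a*y(k-1), re-anchoring on measured data every steps_ahead samples."""
    n = len(y_measured)
    yhat = np.full(n, np.nan)
    yhat[:max_lag] = y_measured[:max_lag]   # initial conditions come from data
    i = max_lag
    while i < n:
        horizon = min(steps_ahead, n - i)   # predict the remaining values at the end
        y_prev = y_measured[i - 1]          # reset to the last measured output
        for j in range(horizon):
            y_prev = a * y_prev             # free-run within the chunk
            yhat[i + j] = y_prev
        i += horizon
    return yhat

y = 0.9 ** np.arange(10)  # measurements of y(k) = 0.9*y(k-1), y(0) = 1
pred = n_step_ahead(y, a=0.9, steps_ahead=3)
# for this exact model the chunked prediction reproduces y
```

Because the prediction is re-anchored on data every few samples, n-steps-ahead errors stay bounded even when a long free run would drift.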

Source code in sysidentpy\general_estimators\narx.py
@@ -1406,51 +1308,50 @@
def narmax_n_step_ahead(self, X, y, steps_ahead):
-    """n_steps ahead prediction method for NARMAX model"""
-    if len(y) < self.max_lag:
-        raise Exception("Insufficient initial conditions elements!")
-
-    to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-    X = X.reshape(-1, self.n_inputs)
-    yhat = np.zeros(X.shape[0], dtype=float)
-    yhat.fill(np.nan)
-    yhat[: self.max_lag] = y[: self.max_lag, 0]
-    i = self.max_lag
-    steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-    if len(steps) > 1:
-        for step in steps[:-1]:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[step : i + steps_ahead],
-                y_initial=y[step:i],
-            )[-steps_ahead:].ravel()
-            i += steps_ahead
def narmax_n_step_ahead(self, X, y, steps_ahead):
+    """n-steps-ahead prediction method for NARMAX models."""
+    if len(y) < self.max_lag:
+        raise ValueError(
+            "Insufficient initial condition elements! Expected at least"
+            f" {self.max_lag} elements."
+        )
+
+    to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+    X = X.reshape(-1, self.n_inputs)
+    yhat = np.zeros(X.shape[0], dtype=float)
+    yhat.fill(np.nan)
+    yhat[: self.max_lag] = y[: self.max_lag, 0]
+    i = self.max_lag
+    steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+    if len(steps) > 1:
+        for step in steps[:-1]:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[step : i + steps_ahead],
+                y_initial=y[step:i],
+            )[-steps_ahead:].ravel()
+            i += steps_ahead
+
+        steps_ahead = np.sum(np.isnan(yhat))
+        yhat[i : i + steps_ahead] = self._model_prediction(
+            X=X[steps[-1] : i + steps_ahead],
+            y_initial=y[steps[-1] : i],
+        )[-steps_ahead:].ravel()
+    else:
+        yhat[i : i + steps_ahead] = self._model_prediction(
+            X=X[0 : i + steps_ahead],
+            y_initial=y[0:i],
+        )[-steps_ahead:].ravel()
 
-        steps_ahead = np.sum(np.isnan(yhat))
-        yhat[i : i + steps_ahead] = self._model_prediction(
-            X=X[steps[-1] : i + steps_ahead],
-            y_initial=y[steps[-1] : i],
-        )[-steps_ahead:].ravel()
-    else:
-        yhat[i : i + steps_ahead] = self._model_prediction(
-            X=X[0 : i + steps_ahead],
-            y_initial=y[0:i],
-        )[-steps_ahead:].ravel()
-
-    yhat = yhat.ravel()[self.max_lag : :]
-    return yhat.reshape(-1, 1)
-

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input and initial values.

The predict function is designed for friendly usage: given a trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Currently, only infinity-steps-ahead prediction is supported, but running 1-step-ahead prediction manually is straightforward.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray of floats | The input data to be used in the prediction process. | None |
| y | ndarray of floats | The output data to be used in the prediction process. | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| yhat | ndarray of floats | The predicted values of the model. |

Source code in sysidentpy\general_estimators\narx.py
+    yhat = yhat.ravel()[self.max_lag : :]
+    return yhat.reshape(-1, 1)
+

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input and initial values.

The predict function is designed for friendly usage: given a trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Currently, only infinity-steps-ahead prediction is supported, but running 1-step-ahead prediction manually is straightforward.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray of floats | The input data to be used in the prediction process. | None |
| y | ndarray of floats | The output data to be used in the prediction process. | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| yhat | ndarray of floats | The predicted values of the model. |
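The infinity-steps-ahead (free-run) mode feeds every prediction back as a regressor for the next step, using only max_lag measured initial conditions. A minimal sketch with a hypothetical first-order model (the coefficients and function name are illustrative, not the library's API):

```python
import numpy as np

def free_run(x, y_initial, max_lag=1):
    """Free-run simulation of y(k) = 0.4*y(k-1) + 0.6*x(k-1):
    only the first max_lag outputs come from data; the rest are fed back."""
    yhat = np.full(len(x), np.nan)
    yhat[:max_lag] = y_initial[:max_lag]
    for k in range(max_lag, len(x)):
        yhat[k] = 0.4 * yhat[k - 1] + 0.6 * x[k - 1]  # predicted output fed back
    return yhat[max_lag:].reshape(-1, 1)  # drop initial conditions, as predict() does

x = np.ones(6)
yhat = free_run(x, y_initial=np.array([0.0]))
# yhat converges toward the step-response steady state 0.6 / (1 - 0.4) = 1.0
```

Because every output is recursively fed back, modeling errors can accumulate over the horizon; that is exactly why free-run simulation is the most demanding validation mode.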

Source code in sysidentpy\general_estimators\narx.py
@@ -1498,67 +1399,60 @@
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted given an input and initial values.
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted given an input and initial values.
+
+    The predict function allows a friendly usage by the user.
+    Given a trained model, predict values given
+    a new set of data.
+
+    This method accept y values mainly for prediction n-steps ahead
+    (to be implemented in the future).
 
-    The predict function allows a friendly usage by the user.
-    Given a trained model, predict values given
-    a new set of data.
-
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future).
-
-    Currently we only support infinity-steps-ahead prediction,
-    but run 1-step-ahead prediction manually is straightforward.
+    Currently we only support infinity-steps-ahead prediction,
+    but run 1-step-ahead prediction manually is straightforward.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
 
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
 
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    if steps_ahead is None:
-        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-    if steps_ahead == 1:
-        yhat = self._one_step_ahead_prediction(X, y)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    yhat = self._basis_function_n_step_prediction(
-        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-    )
-    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-    return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    if steps_ahead is None:
+        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+    if steps_ahead == 1:
+        yhat = self._one_step_ahead_prediction(X, y)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    yhat = self._basis_function_n_step_prediction(
+        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+    )
+    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+    return yhat
 
\ No newline at end of file
diff --git a/docs/code/metamss/index.html b/docs/code/metamss/index.html
index 9ccf9f36..fa0ea86e 100644
--- a/docs/code/metamss/index.html
+++ b/docs/code/metamss/index.html
@@ -51,7 +51,8 @@
 0        x1(k-2)     0.9000       0.0
 1         y(k-1)     0.1999       0.0
 2  x1(k-1)y(k-1)     0.1000       0.0
-

References

  • Manuscript: Meta-Model Structure Selection: Building Polynomial NARX Model for Regression and Classification https://arxiv.org/pdf/2109.09917.pdf
  • Manuscript (Portuguese): Identificação de Sistemas Não Lineares Utilizando o Algoritmo Híbrido e Binário de Otimização por Enxame de Partículas e Busca Gravitacional DOI: 10.17648/sbai-2019-111317
  • Master thesis: Meta model structure selection: an algorithm for building polynomial NARX models for regression and classification
Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
@@ -733,285 +734,274 @@
@deprecated(
-    version="v0.3.0",
-    future_version="v0.4.0",
-    message=(
-        "Passing a string to define the estimator will rise an error in v0.4.0."
-        " \n You'll have to use MetaMSS(estimator=LeastSquares()) instead. \n The"
-        " only change is that you'll have to define the estimator first instead"
-        " of passing a string like 'least_squares'. \n This change will make"
-        " easier to implement new estimators and it'll improve code"
-        " readability."
-    ),
-)
-class MetaMSS(SimulateNARMAX, BPSOGSA):
-    r"""Meta-Model Structure Selection: Building Polynomial NARMAX model
-
-    This class uses the MetaMSS ([1]_, [2]_, [3]_) algorithm to build NARMAX models.
-    The NARMAX model is described as:
-
-    $$
-        y_k= F^\ell[y_{k-1}, \dotsc, y_{k-n_y},x_{k-d}, x_{k-d-1}, \dotsc, x_{k-d-n_x},
-        e_{k-1}, \dotsc, e_{k-n_e}] + e_k
-    $$
-
-    where $n_y\in \mathbb{N}^*$, $n_x \in \mathbb{N}$, $n_e \in \mathbb{N}$,
-    are the maximum lags for the system output and input respectively;
-    $x_k \in \mathbb{R}^{n_x}$ is the system input and $y_k \in \mathbb{R}^{n_y}$
-    is the system output at discrete time $k \in \mathbb{N}^n$;
-    $e_k \in \mathbb{R}^{n_e}$ stands for uncertainties and possible noise
-    at discrete time $k$. In this case, $\mathcal{F}^\ell$ is some nonlinear function
-    of the input and output regressors with nonlinearity degree $\ell \in \mathbb{N}$
-    and $d$ is a time delay typically set to $d=1$.
-
-    Parameters
-    ----------
-    ylag : int, default=2
-        The maximum lag of the output.
-    xlag : int, default=2
-        The maximum lag of the input.
-    loss_func : str, default="metamss_loss"
-        The loss function to be minimized.
-    estimator : str, default="least_squares"
-        The parameter estimation method.
-    estimate_parameter : bool, default=True
-        Whether to estimate the model parameters.
-    extended_least_squares : bool, default=False
-        Whether to use extended least squares method
-        for parameter estimation.
-        Note that we define a specific set of noise regressors.
-    lam : float, default=0.98
-        Forgetting factor of the Recursive Least Squares method.
-    delta : float, default=0.01
-        Normalization factor of the P matrix.
-    offset_covariance : float, default=0.2
-        The offset covariance factor of the affine least mean squares
-        filter.
-    mu : float, default=0.01
-        The convergence coefficient (learning rate) of the filter.
-    eps : float
-        Normalization factor of the normalized filters.
-    gama : float, default=0.2
-        The leakage factor of the Leaky LMS method.
-    weight : float, default=0.02
-        Weight factor to control the proportions of the error norms
-        and offers an extra degree of freedom within the adaptation
-        of the LMS mixed norm method.
-    maxiter : int, default=30
-        The maximum number of iterations.
-    alpha : int, default=23
-        The descending coefficient of the gravitational constant.
-    g_zero : int, default=100
-        The initial value of the gravitational constant.
-    k_agents_percent: int, default=2
-        Percent of agents applying force to the others in the last iteration.
-    norm : int, default=-2
-        The information criteria method to be used.
-    power : int, default=2
-        The number of the model terms to be selected.
-        Note that n_terms overwrite the information criteria
-        values.
-    n_agents : int, default=10
-        The number of agents to search the optimal solution.
-    p_zeros : float, default=0.5
-        The probability of getting ones in the construction of the population.
-    p_zeros : float, default=0.5
-        The probability of getting zeros in the construction of the population.
-
-    Examples
-    --------
-    >>> import numpy as np
-    >>> import matplotlib.pyplot as plt
-    >>> from sysidentpy.model_structure_selection import MetaMSS
-    >>> from sysidentpy.metrics import root_relative_squared_error
-    >>> from sysidentpy.basis_function._basis_function import Polynomial
-    >>> from sysidentpy.utils.display_results import results
-    >>> from sysidentpy.utils.generate_data import get_siso_data
-    >>> x_train, x_valid, y_train, y_valid = get_siso_data(n=400,
-    ...                                                    colored_noise=False,
-    ...                                                    sigma=0.001,
-    ...                                                    train_percentage=80)
-    >>> basis_function = Polynomial(degree=2)
-    >>> model = MetaMSS(
-    ...     basis_function=basis_function,
-    ...     norm=-2,
-    ...     xlag=7,
-    ...     ylag=7,
-    ...     estimator="least_squares",
-    ...     k_agents_percent=2,
-    ...     estimate_parameter=True,
-    ...     maxiter=30,
-    ...     n_agents=10,
-    ...     p_value=0.05,
-    ...     loss_func='metamss_loss'
-    ... )
-    >>> model.fit(x_train, y_train, x_valid, y_valid)
-    >>> yhat = model.predict(x_valid, y_valid)
-    >>> rrse = root_relative_squared_error(y_valid, yhat)
-    >>> print(rrse)
-    0.001993603325328823
-    >>> r = pd.DataFrame(
-    ...     results(
-    ...         model.final_model, model.theta, model.err,
-    ...         model.n_terms, err_precision=8, dtype='sci'
-    ...         ),
-    ...     columns=['Regressors', 'Parameters', 'ERR'])
-    >>> print(r)
-        Regressors Parameters         ERR
-    0        x1(k-2)     0.9000       0.0
-    1         y(k-1)     0.1999       0.0
-    2  x1(k-1)y(k-1)     0.1000       0.0
-
-    References
-    ----------
-    - Manuscript: Meta-Model Structure Selection: Building Polynomial NARX Model
-       for Regression and Classification
-       https://arxiv.org/pdf/2109.09917.pdf
-    - Manuscript (Portuguese): Identificação de Sistemas Não Lineares
-       Utilizando o Algoritmo Híbrido e Binário de Otimização por
-       Enxame de Partículas e Busca Gravitacional
-       DOI: 10.17648/sbai-2019-111317
-    - Master thesis: Meta model structure selection: an algorithm for
-       building polynomial NARX models for regression and classification
-
-    """
-
-    def __init__(
-        self,
-        *,
-        maxiter: int = 30,
-        alpha: int = 23,
-        g_zero: int = 100,
-        k_agents_percent: int = 2,
-        norm: Union[int, float] = -2,
-        power: int = 2,
-        n_agents: int = 10,
-        p_zeros: float = 0.5,
-        p_ones: float = 0.5,
-        p_value: float = 0.05,
-        xlag: Union[int, list] = 1,
-        ylag: Union[int, list] = 1,
-        elag: Union[int, list] = 1,
-        estimator: str = "least_squares",
-        extended_least_squares: bool = False,
-        lam: float = 0.98,
-        delta: float = 0.01,
-        offset_covariance: float = 0.2,
-        mu: float = 0.01,
-        eps: np.float64 = np.finfo(np.float64).eps,
-        gama: float = 0.2,
-        weight: float = 0.02,
-        estimate_parameter: bool = True,
-        loss_func: str = "metamss_loss",
-        model_type: str = "NARMAX",
-        basis_function: Polynomial = Polynomial(),
-        steps_ahead: Union[int, None] = None,
-        random_state: Union[int, None] = None,
-    ):
-        super().__init__(
-            estimator=estimator,
-            extended_least_squares=extended_least_squares,
-            lam=lam,
-            delta=delta,
-            offset_covariance=offset_covariance,
-            mu=mu,
-            eps=eps,
-            gama=gama,
-            weight=weight,
-            estimate_parameter=estimate_parameter,
-            model_type=model_type,
-            basis_function=basis_function,
-        )
-
-        BPSOGSA.__init__(
-            self,
-            n_agents=n_agents,
-            maxiter=maxiter,
-            g_zero=g_zero,
-            alpha=alpha,
-            k_agents_percent=k_agents_percent,
-            norm=norm,
-            power=power,
-            p_zeros=p_zeros,
-            p_ones=p_ones,
-        )
-
-        self.xlag = xlag
-        self.ylag = ylag
-        self.elag = elag
-        self.non_degree = basis_function.degree
-        self.p_value = p_value
-        self.estimator = estimator
-        self.estimate_parameter = estimate_parameter
-        self.loss_func = loss_func
-        self.steps_ahead = steps_ahead
-        self.random_state = random_state
-        self.n_inputs = None
-        self.regressor_code = None
-        self.best_model_history = None
-        self.tested_models = None
-        self.final_model = None
-        self._validate_metamss_params()
-
-    def _validate_metamss_params(self):
-        if isinstance(self.ylag, int) and self.ylag < 1:
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-        if isinstance(self.xlag, int) and self.xlag < 1:
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.xlag, (int, list)):
-            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
-
-        if not isinstance(self.ylag, (int, list)):
-            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
-
-    def fit(self, *, X=None, y=None, X_test=None, y_test=None):
-        """Fit the polynomial NARMAX model.
-
-        Parameters
-        ----------
-        X_train : ndarray of floats
-            The input data to be used in the training process.
-        y_train : ndarray of floats
-            The output data to be used in the training process.
-        X_test : ndarray of floats
-            The input data to be used in the prediction process.
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-
-        Returns
-        -------
-        self : returns an instance of self.
-
-        """
-        if self.basis_function.__class__.__name__ != "Polynomial":
-            raise NotImplementedError(
-                "Currently MetaMSS only supports polynomial models."
-            )
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        if X is not None:
-            check_X_y(X, y)
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
-
-        #  self.n_inputs = _num_features(X_train)
@deprecated(
+    version="v0.3.0",
+    future_version="v0.4.0",
+    message=(
+        "Passing a string to define the estimator will raise an error in v0.4.0."
+        " \n You'll have to use MetaMSS(estimator=LeastSquares()) instead. \n The"
+        " only change is that you'll have to define the estimator first instead"
+        " of passing a string like 'least_squares'. \n This change will make it"
+        " easier to implement new estimators and it'll improve code"
+        " readability."
+    ),
+)
+class MetaMSS(SimulateNARMAX, BPSOGSA):
+    r"""Meta-Model Structure Selection: Building Polynomial NARMAX model
+
+    This class uses the MetaMSS ([1]_, [2]_, [3]_) algorithm to build NARMAX models.
+    The NARMAX model is described as:
+
+    $$
+        y_k= F^\ell[y_{k-1}, \dotsc, y_{k-n_y},x_{k-d}, x_{k-d-1}, \dotsc, x_{k-d-n_x},
+        e_{k-1}, \dotsc, e_{k-n_e}] + e_k
+    $$
+
+    where $n_y\in \mathbb{N}^*$, $n_x \in \mathbb{N}$, $n_e \in \mathbb{N}$,
+    are the maximum lags for the system output, input and noise respectively;
+    $x_k \in \mathbb{R}^{n_x}$ is the system input and $y_k \in \mathbb{R}^{n_y}$
+    is the system output at discrete time $k \in \mathbb{N}^n$;
+    $e_k \in \mathbb{R}^{n_e}$ stands for uncertainties and possible noise
+    at discrete time $k$. In this case, $\mathcal{F}^\ell$ is some nonlinear function
+    of the input and output regressors with nonlinearity degree $\ell \in \mathbb{N}$
+    and $d$ is a time delay typically set to $d=1$.
+
+    Parameters
+    ----------
+    ylag : int, default=2
+        The maximum lag of the output.
+    xlag : int, default=2
+        The maximum lag of the input.
+    loss_func : str, default="metamss_loss"
+        The loss function to be minimized.
+    estimator : str, default="least_squares"
+        The parameter estimation method.
+    estimate_parameter : bool, default=True
+        Whether to estimate the model parameters.
+    extended_least_squares : bool, default=False
+        Whether to use extended least squares method
+        for parameter estimation.
+        Note that we define a specific set of noise regressors.
+    lam : float, default=0.98
+        Forgetting factor of the Recursive Least Squares method.
+    delta : float, default=0.01
+        Normalization factor of the P matrix.
+    offset_covariance : float, default=0.2
+        The offset covariance factor of the affine least mean squares
+        filter.
+    mu : float, default=0.01
+        The convergence coefficient (learning rate) of the filter.
+    eps : float
+        Normalization factor of the normalized filters.
+    gama : float, default=0.2
+        The leakage factor of the Leaky LMS method.
+    weight : float, default=0.02
+        Weight factor to control the proportions of the error norms
+        and offers an extra degree of freedom within the adaptation
+        of the LMS mixed norm method.
+    maxiter : int, default=30
+        The maximum number of iterations.
+    alpha : int, default=23
+        The descending coefficient of the gravitational constant.
+    g_zero : int, default=100
+        The initial value of the gravitational constant.
+    k_agents_percent: int, default=2
+        Percent of agents applying force to the others in the last iteration.
+    norm : int, default=-2
+        The information criteria method to be used.
+    power : int, default=2
+        The number of the model terms to be selected.
+        Note that n_terms overwrites the information criteria
+        values.
+    n_agents : int, default=10
+        The number of agents to search the optimal solution.
+    p_ones : float, default=0.5
+        The probability of getting ones in the construction of the population.
+    p_zeros : float, default=0.5
+        The probability of getting zeros in the construction of the population.
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> import matplotlib.pyplot as plt
+    >>> from sysidentpy.model_structure_selection import MetaMSS
+    >>> from sysidentpy.metrics import root_relative_squared_error
+    >>> from sysidentpy.basis_function._basis_function import Polynomial
+    >>> from sysidentpy.utils.display_results import results
+    >>> from sysidentpy.utils.generate_data import get_siso_data
+    >>> x_train, x_valid, y_train, y_valid = get_siso_data(n=400,
+    ...                                                    colored_noise=False,
+    ...                                                    sigma=0.001,
+    ...                                                    train_percentage=80)
+    >>> basis_function = Polynomial(degree=2)
+    >>> model = MetaMSS(
+    ...     basis_function=basis_function,
+    ...     norm=-2,
+    ...     xlag=7,
+    ...     ylag=7,
+    ...     estimator="least_squares",
+    ...     k_agents_percent=2,
+    ...     estimate_parameter=True,
+    ...     maxiter=30,
+    ...     n_agents=10,
+    ...     p_value=0.05,
+    ...     loss_func='metamss_loss'
+    ... )
+    >>> model.fit(x_train, y_train, x_valid, y_valid)
+    >>> yhat = model.predict(x_valid, y_valid)
+    >>> rrse = root_relative_squared_error(y_valid, yhat)
+    >>> print(rrse)
+    0.001993603325328823
+    >>> r = pd.DataFrame(
+    ...     results(
+    ...         model.final_model, model.theta, model.err,
+    ...         model.n_terms, err_precision=8, dtype='sci'
+    ...         ),
+    ...     columns=['Regressors', 'Parameters', 'ERR'])
+    >>> print(r)
+        Regressors Parameters         ERR
+    0        x1(k-2)     0.9000       0.0
+    1         y(k-1)     0.1999       0.0
+    2  x1(k-1)y(k-1)     0.1000       0.0
+
+    References
+    ----------
+    - Manuscript: Meta-Model Structure Selection: Building Polynomial NARX Model
+       for Regression and Classification
+       https://arxiv.org/pdf/2109.09917.pdf
+    - Manuscript (Portuguese): Identificação de Sistemas Não Lineares
+       Utilizando o Algoritmo Híbrido e Binário de Otimização por
+       Enxame de Partículas e Busca Gravitacional
+       DOI: 10.17648/sbai-2019-111317
+    - Master thesis: Meta model structure selection: an algorithm for
+       building polynomial NARX models for regression and classification
+
+    """
+
+    def __init__(
+        self,
+        *,
+        maxiter: int = 30,
+        alpha: int = 23,
+        g_zero: int = 100,
+        k_agents_percent: int = 2,
+        norm: Union[int, float] = -2,
+        power: int = 2,
+        n_agents: int = 10,
+        p_zeros: float = 0.5,
+        p_ones: float = 0.5,
+        p_value: float = 0.05,
+        xlag: Union[int, list] = 1,
+        ylag: Union[int, list] = 1,
+        elag: Union[int, list] = 1,
+        estimator: str = "least_squares",
+        extended_least_squares: bool = False,
+        lam: float = 0.98,
+        delta: float = 0.01,
+        offset_covariance: float = 0.2,
+        mu: float = 0.01,
+        eps: np.float64 = np.finfo(np.float64).eps,
+        gama: float = 0.2,
+        weight: float = 0.02,
+        estimate_parameter: bool = True,
+        loss_func: str = "metamss_loss",
+        model_type: str = "NARMAX",
+        basis_function: Polynomial = Polynomial(),
+        steps_ahead: Union[int, None] = None,
+        random_state: Union[int, None] = None,
+    ):
+        super().__init__(
+            estimator=estimator,
+            extended_least_squares=extended_least_squares,
+            lam=lam,
+            delta=delta,
+            offset_covariance=offset_covariance,
+            mu=mu,
+            eps=eps,
+            gama=gama,
+            weight=weight,
+            estimate_parameter=estimate_parameter,
+            model_type=model_type,
+            basis_function=basis_function,
+        )
+        BPSOGSA.__init__(
+            self,
+            n_agents=n_agents,
+            maxiter=maxiter,
+            g_zero=g_zero,
+            alpha=alpha,
+            k_agents_percent=k_agents_percent,
+            norm=norm,
+            power=power,
+            p_zeros=p_zeros,
+            p_ones=p_ones,
+        )
+
+        self.xlag = xlag
+        self.ylag = ylag
+        self.elag = elag
+        self.non_degree = basis_function.degree
+        self.p_value = p_value
+        self.estimator = estimator
+        self.estimate_parameter = estimate_parameter
+        self.loss_func = loss_func
+        self.steps_ahead = steps_ahead
+        self.random_state = random_state
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.n_inputs = None
+        self.regressor_code = None
+        self.best_model_history = None
+        self.tested_models = None
+        self.final_model = None
+        self._validate_metamss_params()
+
+    def _validate_metamss_params(self):
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
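The validation above applies one rule per lag: an `int` must be positive, a `list` is accepted as-is, and anything else is rejected. A standalone sketch of that rule (the helper name is hypothetical):

```python
def validate_lag(lag, name):
    """Raise if lag is neither a positive int nor a list of lags."""
    if isinstance(lag, int):
        if lag < 1:
            raise ValueError(f"{name} must be integer and > zero. Got {lag}")
    elif not isinstance(lag, list):
        raise ValueError(f"{name} must be integer and > zero. Got {lag}")

validate_lag(7, "xlag")       # ok: positive int
validate_lag([1, 3], "ylag")  # ok: explicit list of lags
try:
    validate_lag(0, "xlag")
except ValueError as e:
    print(e)  # xlag must be integer and > zero. Got 0
```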
+
+    def fit(self, *, X=None, y=None, X_test=None, y_test=None):
+        """Fit the polynomial NARMAX model.
+
+        Parameters
+        ----------
+        X_train : ndarray of floats
+            The input data to be used in the training process.
+        y_train : ndarray of floats
+            The output data to be used in the training process.
+        X_test : ndarray of floats
+            The input data to be used in the prediction process.
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+
+        Returns
+        -------
+        self : returns an instance of self.
+
+        """
+        if self.basis_function.__class__.__name__ != "Polynomial":
+            raise NotImplementedError(
+                "Currently MetaMSS only supports polynomial models."
+            )
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        if X is not None:
+            check_X_y(X, y)
+            self.n_inputs = _num_features(X)
+        else:
+            self.n_inputs = 1  # just to create the regressor space base
+
+        #  self.n_inputs = _num_features(X_train)
+        self.max_lag = self._get_max_lag()
         self.regressor_code = self.regressor_space(self.n_inputs)
         self.dimension = self.regressor_code.shape[0]
         velocity = np.zeros([self.dimension, self.n_agents])
@@ -1056,391 +1046,391 @@
             model_code=self.final_model,
             steps_ahead=self.steps_ahead,
         )
-        return self
-
-    def evaluate_objective_function(self, X_train, y_train, X_test, y_test, population):
-        """Fit the polynomial NARMAX model.
-
-        Parameters
-        ----------
-        X_train : ndarray of floats
-            The input data to be used in the training process.
-        y_train : ndarray of floats
-            The output data to be used in the training process.
-        X_test : ndarray of floats
-            The input data to be used in the prediction process.
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-        population : ndarray of zeros and ones
-            The initial population of agents.
-
-        Returns
-        -------
-        fitness_value : ndarray
-            The fitness value of each agent.
-        """
-        fitness = []
-        for agent in population.T:
-            if np.all(agent == 0):
-                fitness.append(30)  # penalty for cases where there is no terms
-                continue
-
-            m = self.regressor_code[agent == 1].copy()
-            yhat = self.simulate(
-                X_train=X_train,
-                y_train=y_train,
-                X_test=X_test,
-                y_test=y_test,
-                model_code=m,
-                steps_ahead=self.steps_ahead,
-            )
-
-            residues = y_test - yhat
-
-            if self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(y_train)
-                self.max_lag = self._get_max_lag()
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(X_train)
-                self.max_lag = self._get_max_lag()
-            elif self.model_type == "NARMAX":
-                self.max_lag = self._get_max_lag()
-                lagged_data = self.build_input_output_matrix(X_train, y_train)
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
+        self.max_lag = self._get_max_lag()
+        return self
+
+    def evaluate_objective_function(self, X_train, y_train, X_test, y_test, population):
+        """Fit the polynomial NARMAX model.
+
+        Parameters
+        ----------
+        X_train : ndarray of floats
+            The input data to be used in the training process.
+        y_train : ndarray of floats
+            The output data to be used in the training process.
+        X_test : ndarray of floats
+            The input data to be used in the prediction process.
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+        population : ndarray of zeros and ones
+            The initial population of agents.
+
+        Returns
+        -------
+        fitness_value : ndarray
+            The fitness value of each agent.
+        """
+        fitness = []
+        for agent in population.T:
+            if np.all(agent == 0):
+                fitness.append(30)  # penalty for cases where there is no terms
+                continue
+
+            m = self.regressor_code[agent == 1].copy()
+            yhat = self.simulate(
+                X_train=X_train,
+                y_train=y_train,
+                X_test=X_test,
+                y_test=y_test,
+                model_code=m,
+                steps_ahead=self.steps_ahead,
+            )
+
+            residues = y_test - yhat
+            self.max_lag = self._get_max_lag()
+            lagged_data = self.build_matrix(X_train, y_train)
+
+            psi = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=self.pivv
+            )
+
+            pos_insignificant_terms, _, _ = self.perform_t_test(
+                psi, self.theta, residues
+            )
+
+            pos_aux = np.where(agent == 1)[0]
+            pos_aux = pos_aux[pos_insignificant_terms]
+            agent[pos_aux] = 0
 
-            psi = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=self.pivv
-            )
-
-            pos_insignificant_terms, _, _ = self.perform_t_test(
-                psi, self.theta, residues
-            )
-
-            pos_aux = np.where(agent == 1)[0]
-            pos_aux = pos_aux[pos_insignificant_terms]
-            agent[pos_aux] = 0
-
-            m = self.regressor_code[agent == 1].copy()
-
-            if np.all(agent == 0):
-                fitness.append(1000)  # just a big number as penalty
-                continue
-
-            yhat = self.simulate(
-                X_train=X_train,
-                y_train=y_train,
-                X_test=X_test,
-                y_test=y_test,
-                model_code=m,
-                steps_ahead=self.steps_ahead,
-            )
-
-            self.final_model = m.copy()
-            self.tested_models.append(m)
-            if len(self.theta) == 0:
-                print(m)
-            d = getattr(self, self.loss_func)(y_test, yhat, len(self.theta))
-            fitness.append(d)
-
-        return fitness
-
-    def perform_t_test(
-        self, psi: np.ndarray, theta: np.ndarray, residues: np.ndarray
-    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-        """
-        Perform the t-test given the p-value defined by the user
-
-        Arguments:
-        ----------
-            psi : array
-                the data matrix of regressors
-            theta : array
-                the parameters estimated via least squares algorithm
-            residues : array
-                the identification residues of the solution
-            p_value_confidence : double
-                parameter selected by the user to perform the statistical t-test
-
-        Returns:
-        --------
-            pos_insignificant_terms : array
-                these regressors in the actual candidate solution are removed
-                from the population since they are insignificant
-            t_test : array
-                the values of the p_value of each regressor of the model
-
-        """
-        sum_of_squared_residues = np.sum(residues**2)
-        variance_of_residues = (sum_of_squared_residues) / (
-            len(residues) - psi.shape[1]
-        )
-        if np.isnan(variance_of_residues):
-            variance_of_residues = 4.3645e05
-
-        skk = np.linalg.pinv(psi.T.dot(psi))
-        skk_diag = np.diag(skk)
-        var_e = variance_of_residues * skk_diag
-        se_theta = np.sqrt(var_e)
-        se_theta = se_theta.reshape(-1, 1)
-        t_test = theta / se_theta
-        degree_of_freedom = psi.shape[0] - psi.shape[1]
-
-        tail2P = 2 * t.cdf(-np.abs(t_test), degree_of_freedom)
-
-        pos_insignificant_terms = np.where(tail2P > self.p_value)[0]
-        pos_insignificant_terms = pos_insignificant_terms.reshape(-1, 1).T
-        if pos_insignificant_terms.shape == 0:
-            return np.array([]), t_test, tail2P
-
-        # t_test and tail2P will be returned in future updates
-        return pos_insignificant_terms, t_test, tail2P
+            m = self.regressor_code[agent == 1].copy()
+
+            if np.all(agent == 0):
+                fitness.append(1000)  # just a big number as penalty
+                continue
+
+            yhat = self.simulate(
+                X_train=X_train,
+                y_train=y_train,
+                X_test=X_test,
+                y_test=y_test,
+                model_code=m,
+                steps_ahead=self.steps_ahead,
+            )
+
+            self.final_model = m.copy()
+            self.tested_models.append(m)
+            if len(self.theta) == 0:
+                print(m)
+            d = getattr(self, self.loss_func)(y_test, yhat, len(self.theta))
+            fitness.append(d)
+
+        return fitness
+
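The loop above encodes each candidate model as a binary agent over the regressor codebook. A minimal standalone sketch of that masking and penalty logic, with an illustrative codebook and a `toy_fitness` stand-in for the real `simulate` + loss pipeline:

```python
import numpy as np

# Toy codebook: each row encodes one candidate regressor term
# (stand-in for self.regressor_code; values are illustrative).
regressor_code = np.array([[1001], [2001], [1002], [2002]])

def toy_fitness(model_terms: np.ndarray) -> float:
    # Hypothetical stand-in for simulate() + loss: fewer terms, lower loss.
    return float(len(model_terms))

def evaluate_population(population: np.ndarray) -> list:
    """population: (n_terms, n_agents) matrix of zeros and ones."""
    fitness = []
    for agent in population.T:          # one column per agent
        if np.all(agent == 0):
            fitness.append(30)          # penalty when the agent selects no terms
            continue
        m = regressor_code[agent == 1]  # rows picked by the binary mask
        fitness.append(toy_fitness(m))
    return fitness

population = np.array([[1, 0], [0, 0], [1, 0], [0, 0]])
print(evaluate_population(population))  # → [2.0, 30]
```

In the real method the fitness comes from simulating each candidate model and applying the loss function selected via `self.loss_func`.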
+    def perform_t_test(
+        self, psi: np.ndarray, theta: np.ndarray, residues: np.ndarray
+    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+        """
+        Perform the t-test given the p-value defined by the user
+
+        Arguments:
+        ----------
+            psi : array
+                the data matrix of regressors
+            theta : array
+                the parameters estimated via least squares algorithm
+            residues : array
+                the identification residues of the solution
+            p_value_confidence : double
+                parameter selected by the user to perform the statistical t-test
+
+        Returns:
+        --------
+            pos_insignificant_terms : array
+                these regressors in the actual candidate solution are removed
+                from the population since they are insignificant
+            t_test : array
+                the values of the p_value of each regressor of the model
+
+        """
+        sum_of_squared_residues = np.sum(residues**2)
+        variance_of_residues = (sum_of_squared_residues) / (
+            len(residues) - psi.shape[1]
+        )
+        if np.isnan(variance_of_residues):
+            variance_of_residues = 4.3645e05  # arbitrary large fallback when the variance is undefined
+
+        skk = np.linalg.pinv(psi.T.dot(psi))
+        skk_diag = np.diag(skk)
+        var_e = variance_of_residues * skk_diag
+        se_theta = np.sqrt(var_e)
+        se_theta = se_theta.reshape(-1, 1)
+        t_test = theta / se_theta
+        degree_of_freedom = psi.shape[0] - psi.shape[1]
+
+        tail2P = 2 * t.cdf(-np.abs(t_test), degree_of_freedom)
+
+        pos_insignificant_terms = np.where(tail2P > self.p_value)[0]
+        pos_insignificant_terms = pos_insignificant_terms.reshape(-1, 1).T
+        if pos_insignificant_terms.size == 0:
+            return np.array([]), t_test, tail2P
+
+        return pos_insignificant_terms, t_test, tail2P
+
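A self-contained sketch of this significance test applied to an ordinary least squares fit (numpy/scipy only; the data and the `p_value` threshold are illustrative):

```python
import numpy as np
from scipy.stats import t

def t_test_insignificant(psi, theta, residues, p_value=0.05):
    """Return indices of regressors whose coefficients fail the t-test."""
    ssr = np.sum(residues ** 2)
    var_res = ssr / (len(residues) - psi.shape[1])        # residual variance
    se_theta = np.sqrt(var_res * np.diag(np.linalg.pinv(psi.T @ psi)))
    t_stat = theta.ravel() / se_theta                     # t-statistic per term
    dof = psi.shape[0] - psi.shape[1]
    two_tail_p = 2 * t.cdf(-np.abs(t_stat), dof)          # two-sided p-values
    return np.where(two_tail_p > p_value)[0]

rng = np.random.default_rng(0)
psi = np.column_stack([rng.normal(size=200), rng.normal(size=200)])
theta_true = np.array([[2.0], [0.0]])                     # second term is irrelevant
y = psi @ theta_true + 0.1 * rng.normal(size=(200, 1))
theta_hat = np.linalg.pinv(psi) @ y
residues = y - psi @ theta_hat
print(t_test_insignificant(psi, theta_hat, residues))
```

The strongly significant first term (true coefficient 2.0) is never flagged; the irrelevant second term usually is, which is exactly how the agent pruning above drops insignificant regressors.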
+    def aic(self, y_test, yhat, n_theta):
+        """Calculate the Akaike Information Criterion
+
+        Parameters
+        ----------
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+        yhat : ndarray of floats
+            The n-steps-ahead predicted values of the model.
+        n_theta : ndarray of floats
+            The number of model parameters.
 
-    def aic(self, y_test, yhat, n_theta):
-        """Calculate the Akaike Information Criterion
-
-        Parameters
-        ----------
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-        yhat : ndarray of floats
-            The n-steps-ahead predicted values of the model.
-        n_theta : ndarray of floats
-            The number of model parameters.
-
-        Returns
-        -------
-        aic : float
-            The Akaike Information Criterion
-
-        """
-        mse = mean_squared_error(y_test, yhat)
-        n = y_test.shape[0]
-        return n * np.log(mse) + 2 * n_theta
+        Returns
+        -------
+        aic : float
+            The Akaike Information Criterion
+
+        """
+        mse = mean_squared_error(y_test, yhat)
+        n = y_test.shape[0]
+        return n * np.log(mse) + 2 * n_theta
+
+    def bic(self, y_test, yhat, n_theta):
+        """Calculate the Bayesian Information Criterion
+
+        Parameters
+        ----------
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+        yhat : ndarray of floats
+            The n-steps-ahead predicted values of the model.
+        n_theta : ndarray of floats
+            The number of model parameters.
 
-    def bic(self, y_test, yhat, n_theta):
-        """Calculate the Bayesian Information Criterion
-
-        Parameters
-        ----------
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-        yhat : ndarray of floats
-            The n-steps-ahead predicted values of the model.
-        n_theta : ndarray of floats
-            The number of model parameters.
-
-        Returns
-        -------
-        bic : float
-            The Bayesian Information Criterion
-
-        """
-        mse = mean_squared_error(y_test, yhat)
-        n = y_test.shape[0]
-        return n * np.log(mse) + n_theta + np.log(n)
+        Returns
+        -------
+        bic : float
+            The Bayesian Information Criterion
+
+        """
+        mse = mean_squared_error(y_test, yhat)
+        n = y_test.shape[0]
+        return n * np.log(mse) + n_theta * np.log(n)
+
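As a standalone illustration of the two criteria (MSE computed inline with numpy rather than via sklearn's `mean_squared_error`; the `n_theta * log(n)` term is the textbook Schwarz form of BIC):

```python
import numpy as np

def aic(y, yhat, n_theta):
    # Akaike: n * ln(MSE) + 2k
    n = y.shape[0]
    mse = np.mean((y - yhat) ** 2)
    return n * np.log(mse) + 2 * n_theta

def bic(y, yhat, n_theta):
    # Schwarz/Bayesian: n * ln(MSE) + k * ln(n)
    n = y.shape[0]
    mse = np.mean((y - yhat) ** 2)
    return n * np.log(mse) + n_theta * np.log(n)

y = np.array([1.0, 2.0, 3.0, 4.0])
yhat = np.array([1.1, 1.9, 3.2, 3.8])
print(aic(y, yhat, 2))  # → about -10.7555
print(bic(y, yhat, 2))  # → about -11.9829
```

Both criteria reward a lower mean squared error while penalizing model size; BIC penalizes extra parameters more heavily as the sample grows.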
+    def metamss_loss(self, y_test, yhat, n_terms):
+        """Calculate the MetaMSS loss function
+
+        Parameters
+        ----------
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+        yhat : ndarray of floats
+            The n-steps-ahead predicted values of the model.
+        n_terms : ndarray of floats
+            The number of model parameters.
 
-    def metamss_loss(self, y_test, yhat, n_terms):
-        """Calculate the MetaMSS loss function
-
-        Parameters
-        ----------
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-        yhat : ndarray of floats
-            The n-steps-ahead predicted values of the model.
-        n_terms : ndarray of floats
-            The number of model parameters.
+        Returns
+        -------
+        metamss_loss : float
+            The MetaMSS loss function
+
+        """
+        penalty_count = np.arange(0, self.dimension)
+        penalty_distribution = (np.log(n_terms + 1) ** (-1)) / self.dimension
+        penalty = self.sigmoid_linear_unit_derivative(
+            penalty_count, self.dimension / 2, penalty_distribution
+        )
 
-        Returns
-        -------
-        metamss_loss : float
-            The MetaMSS loss function
-
-        """
-        penalty_count = np.arange(0, self.dimension)
-        penalty_distribution = (np.log(n_terms + 1) ** (-1)) / self.dimension
-        penalty = self.sigmoid_linear_unit_derivative(
-            penalty_count, self.dimension / 2, penalty_distribution
-        )
-
-        penalty = penalty - np.min(penalty)
-        rmse = root_relative_squared_error(y_test, yhat)
-        fitness = rmse * penalty[n_terms]
-        if np.isnan(fitness):
-            fitness = 30
-
-        return fitness
-
-    def sigmoid_linear_unit_derivative(self, x, c, a):
-        """Calculate the derivative of the Sigmoid Linear Unit function.
+        penalty = penalty - np.min(penalty)
+        rmse = root_relative_squared_error(y_test, yhat)
+        fitness = rmse * penalty[n_terms]
+        if np.isnan(fitness):
+            fitness = 30
+
+        return fitness
+
+    def sigmoid_linear_unit_derivative(self, x, c, a):
+        """Calculate the derivative of the Sigmoid Linear Unit function.
+
+        The derivative of the Sigmoid Linear Unit (dSiLU) function can be
+        viewed as an overshooting version of the sigmoid function.
+
+        Parameters
+        ----------
+        x : ndarray
+            The range of the regressors space.
+        a : float
+            The rate of change.
+        c : int
+            Corresponds to the x value where y = 0.5.
 
-        The derivative of Sigmoid Linear Unit (dSiLU) function can be
-        viewed as a overshooting version of the sigmoid function.
-
-        Parameters
-        ----------
-        x : ndarray
-            The range of the regressors space.
-        a : float
-            The rate of change.
-        c : int
-            Corresponds to the x value where y = 0.5.
+        Returns
+        -------
+        penalty : ndarray of floats
+            The values of the penalty function
+
+        """
+        return (
+            1
+            / (1 + np.exp(-a * (x - c)))
+            * (1 + (a * (x - c)) * (1 - 1 / (1 + np.exp(-a * (x - c)))))
+        )
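A numpy sketch of the penalty curve that `metamss_loss` builds from this derivative (`dimension` and `n_terms` are hypothetical values standing in for the instance attributes):

```python
import numpy as np

def dsilu(x, c, a):
    # Derivative of x * sigmoid(x), shifted by c and scaled by rate a.
    s = 1.0 / (1.0 + np.exp(-a * (x - c)))
    return s * (1.0 + a * (x - c) * (1.0 - s))

dimension = 20                       # hypothetical number of candidate terms
n_terms = 5                          # terms in the current candidate model
count = np.arange(dimension)
rate = (np.log(n_terms + 1) ** -1) / dimension
penalty = dsilu(count, dimension / 2, rate)
penalty -= penalty.min()             # shift so the smallest penalty is zero
print(penalty[n_terms])              # penalty applied to a 5-term model
```

The shifted curve is then multiplied by the prediction error, so larger candidate models pay a larger multiplicative penalty.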
 
-        Returns
-        -------
-        penalty : ndarray of floats
-            The values of the penalty function
-
-        """
-        return (
-            1
-            / (1 + np.exp(-a * (x - c)))
-            * (1 + (a * (x - c)) * (1 - 1 / (1 + np.exp(-a * (x - c)))))
-        )
-
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted values given an input.
-
-        The predict function allows a friendly usage by the user.
-        Given a previously trained model, predict values given
-        a new set of data.
-
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future)
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted values given an input.
+
+        Given a previously trained model, the predict method returns the
+        predicted values for a new set of data.
+
+        This method accepts y values mainly for n-steps-ahead prediction
+        (to be fully implemented in a future release).
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int (default = None)
+            The user can use free run simulation, one-step ahead prediction
+            and n-step ahead prediction.
+        forecast_horizon : int, default=None
+            The number of predictions over time.
 
-        Parameters
-        ----------
-        X_test : ndarray of floats
-            The input data to be used in the prediction process.
-        y_test : ndarray of floats
-            The output data to be used in the prediction process.
-        steps_ahead : int (default = None)
-            The user can use free run simulation, one-step ahead prediction
-            and n-step ahead prediction.
-        forecast_horizon : int, default=None
-            The number of predictions over the time.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
-
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-                return yhat
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        raise NotImplementedError(
+            "MetaMSS doesn't support basis functions other than polynomial yet.",
+        )
+
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
 
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        raise NotImplementedError(
-            "MetaMSS doesn't support basis functions other than polynomial yet.",
-        )
-
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        yhat = super()._one_step_ahead_prediction(X, y)
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        yhat = super()._one_step_ahead_prediction(X, y)
-        return yhat.reshape(-1, 1)
-
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
+        return yhat
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinity steps-ahead simulation of a model.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
-        return yhat
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
-
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        if self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Number of initial conditions values of output
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
+
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
+        return y_output
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = super()._nfir_predict(X, y_initial)
+        return y_output
 
-        raise Exception(
-            "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-        )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
-        return y_output
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = super()._nfir_predict(X, y_initial)
-        return y_output
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        """not implemented"""
+        raise NotImplementedError(
+            "You can only use Polynomial Basis Function in MetaMSS for now."
+        )
+
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """not implemented"""
+        raise NotImplementedError(
+            "You can only use Polynomial Basis Function in MetaMSS for now."
+        )
 
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
         """not implemented"""
         raise NotImplementedError(
             "You can only use Polynomial Basis Function in MetaMSS for now."
         )
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """not implemented"""
-        raise NotImplementedError(
-            "You can only use Polynomial Basis Function in MetaMSS for now."
-        )
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        """not implemented"""
-        raise NotImplementedError(
-            "You can only use Polynomial Basis Function in MetaMSS for now."
-        )
-

aic(y_test, yhat, n_theta)

Calculate the Akaike Information Criterion

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| y_test | ndarray of floats | The output data (initial conditions) to be used in the prediction process. | required |
| yhat | ndarray of floats | The n-steps-ahead predicted values of the model. | required |
| n_theta | ndarray of floats | The number of model parameters. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| aic | float | The Akaike Information Criterion |

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py

def aic(self, y_test, yhat, n_theta):
+    """Calculate the Akaike Information Criterion
+
+    Parameters
+    ----------
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+    yhat : ndarray of floats
+        The n-steps-ahead predicted values of the model.
+    n_theta : ndarray of floats
+        The number of model parameters.
+
+    Returns
+    -------
+    aic : float
+        The Akaike Information Criterion
+
+    """
+    mse = mean_squared_error(y_test, yhat)
+    n = y_test.shape[0]
+    return n * np.log(mse) + 2 * n_theta
+

bic(y_test, yhat, n_theta)

Calculate the Bayesian Information Criterion

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| y_test | ndarray of floats | The output data (initial conditions) to be used in the prediction process. | required |
| yhat | ndarray of floats | The n-steps-ahead predicted values of the model. | required |
| n_theta | ndarray of floats | The number of model parameters. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| bic | float | The Bayesian Information Criterion |

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py


def bic(self, y_test, yhat, n_theta):
+    """Calculate the Bayesian Information Criterion
+
+    Parameters
+    ----------
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+    yhat : ndarray of floats
+        The n-steps-ahead predicted values of the model.
+    n_theta : ndarray of floats
+        The number of model parameters.
+
+    Returns
+    -------
+    bic : float
+        The Bayesian Information Criterion
+
+    """
+    mse = mean_squared_error(y_test, yhat)
+    n = y_test.shape[0]
+    return n * np.log(mse) + n_theta * np.log(n)
+

evaluate_objective_function(X_train, y_train, X_test, y_test, population)

Fit the polynomial NARMAX model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X_train | ndarray of floats | The input data to be used in the training process. | required |
| y_train | ndarray of floats | The output data to be used in the training process. | required |
| X_test | ndarray of floats | The input data to be used in the prediction process. | required |
| y_test | ndarray of floats | The output data (initial conditions) to be used in the prediction process. | required |
| population | ndarray of zeros and ones | The initial population of agents. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| fitness_value | ndarray | The fitness value of each agent. |

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
def evaluate_objective_function(self, X_train, y_train, X_test, y_test, population):
-    """Fit the polynomial NARMAX model.
-
-    Parameters
-    ----------
-    X_train : ndarray of floats
-        The input data to be used in the training process.
-    y_train : ndarray of floats
-        The output data to be used in the training process.
-    X_test : ndarray of floats
-        The input data to be used in the prediction process.
-    y_test : ndarray of floats
-        The output data (initial conditions) to be used in the prediction process.
-    population : ndarray of zeros and ones
-        The initial population of agents.
-
-    Returns
-    -------
-    fitness_value : ndarray
-        The fitness value of each agent.
-    """
-    fitness = []
-    for agent in population.T:
-        if np.all(agent == 0):
-            fitness.append(30)  # penalty for cases where there is no terms
-            continue
-
-        m = self.regressor_code[agent == 1].copy()
-        yhat = self.simulate(
-            X_train=X_train,
-            y_train=y_train,
-            X_test=X_test,
-            y_test=y_test,
-            model_code=m,
-            steps_ahead=self.steps_ahead,
-        )
-
-        residues = y_test - yhat
-
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y_train)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X_train)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NARMAX":
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X_train, y_train)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                " NFIR."
-            )
def evaluate_objective_function(self, X_train, y_train, X_test, y_test, population):
+    """Fit the polynomial NARMAX model.
+
+    Parameters
+    ----------
+    X_train : ndarray of floats
+        The input data to be used in the training process.
+    y_train : ndarray of floats
+        The output data to be used in the training process.
+    X_test : ndarray of floats
+        The input data to be used in the prediction process.
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+    population : ndarray of zeros and ones
+        The initial population of agents.
+
+    Returns
+    -------
+    fitness_value : ndarray
+        The fitness value of each agent.
+    """
+    fitness = []
+    for agent in population.T:
+        if np.all(agent == 0):
+            fitness.append(30)  # penalty for agents that select no terms
+            continue
+
+        m = self.regressor_code[agent == 1].copy()
+        yhat = self.simulate(
+            X_train=X_train,
+            y_train=y_train,
+            X_test=X_test,
+            y_test=y_test,
+            model_code=m,
+            steps_ahead=self.steps_ahead,
+        )
+
+        residues = y_test - yhat
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X_train, y_train)
+
+        psi = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=self.pivv
+        )
+
+        pos_insignificant_terms, _, _ = self.perform_t_test(
+            psi, self.theta, residues
+        )
+
+        pos_aux = np.where(agent == 1)[0]
+        pos_aux = pos_aux[pos_insignificant_terms]
+        agent[pos_aux] = 0
 
-        psi = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=self.pivv
-        )
-
-        pos_insignificant_terms, _, _ = self.perform_t_test(
-            psi, self.theta, residues
-        )
-
-        pos_aux = np.where(agent == 1)[0]
-        pos_aux = pos_aux[pos_insignificant_terms]
-        agent[pos_aux] = 0
-
-        m = self.regressor_code[agent == 1].copy()
-
-        if np.all(agent == 0):
-            fitness.append(1000)  # just a big number as penalty
-            continue
-
-        yhat = self.simulate(
-            X_train=X_train,
-            y_train=y_train,
-            X_test=X_test,
-            y_test=y_test,
-            model_code=m,
-            steps_ahead=self.steps_ahead,
-        )
-
-        self.final_model = m.copy()
-        self.tested_models.append(m)
-        if len(self.theta) == 0:
-            print(m)
-        d = getattr(self, self.loss_func)(y_test, yhat, len(self.theta))
-        fitness.append(d)
-
-    return fitness
-

fit(*, X=None, y=None, X_test=None, y_test=None)

Fit the polynomial NARMAX model.

Parameters:

    X_train : ndarray of floats (required)
        The input data to be used in the training process.
    y_train : ndarray of floats (required)
        The output data to be used in the training process.
    X_test : ndarray of floats (default: None)
        The input data to be used in the prediction process.
    y_test : ndarray of floats (default: None)
        The output data (initial conditions) to be used in the prediction process.

Returns:

    self : returns an instance of self.
Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
+        m = self.regressor_code[agent == 1].copy()
+
+        if np.all(agent == 0):
+            fitness.append(1000)  # just a big number as penalty
+            continue
+
+        yhat = self.simulate(
+            X_train=X_train,
+            y_train=y_train,
+            X_test=X_test,
+            y_test=y_test,
+            model_code=m,
+            steps_ahead=self.steps_ahead,
+        )
+
+        self.final_model = m.copy()
+        self.tested_models.append(m)
+        if len(self.theta) == 0:
+            print(m)
+        d = getattr(self, self.loss_func)(y_test, yhat, len(self.theta))
+        fitness.append(d)
+
+    return fitness
+
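The agent encoding used by `evaluate_objective_function` can be illustrated with a minimal, self-contained sketch (the candidate codes, `toy_fitness`, and its cost values are illustrative, not the library's): each agent is a binary mask over the candidate regressor codes, and an agent that selects no terms receives a fixed penalty instead of a simulated loss.

```python
import numpy as np

regressor_code = np.array([1001, 1002, 2001, 2002])  # toy candidate terms

def toy_fitness(agent, cost_per_term=0.5, empty_penalty=30):
    """Mock objective: penalize empty agents, else charge per selected term."""
    if np.all(agent == 0):
        return empty_penalty  # mirrors the fixed penalty for empty models
    selected = regressor_code[agent == 1]  # mask picks the candidate terms
    return cost_per_term * len(selected)

population = np.array([[0, 1], [0, 1], [0, 0], [0, 1]])  # agents as columns
fitness = [toy_fitness(agent) for agent in population.T]
```

In the real method the per-agent value comes from simulating the candidate model and applying the configured loss function rather than a per-term cost.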

@@ -1776,39 +1729,41 @@
def fit(self, *, X=None, y=None, X_test=None, y_test=None):
-    """Fit the polynomial NARMAX model.
-
-    Parameters
-    ----------
-    X_train : ndarray of floats
-        The input data to be used in the training process.
-    y_train : ndarray of floats
-        The output data to be used in the training process.
-    X_test : ndarray of floats
-        The input data to be used in the prediction process.
-    y_test : ndarray of floats
-        The output data (initial conditions) to be used in the prediction process.
-
-    Returns
-    -------
-    self : returns an instance of self.
-
-    """
-    if self.basis_function.__class__.__name__ != "Polynomial":
-        raise NotImplementedError(
-            "Currently MetaMSS only supports polynomial models."
-        )
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    if X is not None:
-        check_X_y(X, y)
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    #  self.n_inputs = _num_features(X_train)
def fit(self, *, X=None, y=None, X_test=None, y_test=None):
+    """Fit the polynomial NARMAX model.
+
+    Parameters
+    ----------
+    X_train : ndarray of floats
+        The input data to be used in the training process.
+    y_train : ndarray of floats
+        The output data to be used in the training process.
+    X_test : ndarray of floats
+        The input data to be used in the prediction process.
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+
+    Returns
+    -------
+    self : returns an instance of self.
+
+    """
+    if self.basis_function.__class__.__name__ != "Polynomial":
+        raise NotImplementedError(
+            "Currently MetaMSS only supports polynomial models."
+        )
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    if X is not None:
+        check_X_y(X, y)
+        self.n_inputs = _num_features(X)
+    else:
+        self.n_inputs = 1  # just to create the regressor space base
+
+    #  self.n_inputs = _num_features(X_train)
+    self.max_lag = self._get_max_lag()
     self.regressor_code = self.regressor_space(self.n_inputs)
     self.dimension = self.regressor_code.shape[0]
     velocity = np.zeros([self.dimension, self.n_agents])
@@ -1853,8 +1808,21 @@
         model_code=self.final_model,
         steps_ahead=self.steps_ahead,
     )
-    return self
-

metamss_loss(y_test, yhat, n_terms)

Calculate the MetaMSS loss function

Parameters:

    y_test : ndarray of floats (required)
        The output data (initial conditions) to be used in the prediction process.
    yhat : ndarray of floats (required)
        The n-steps-ahead predicted values of the model.
    n_terms : ndarray of floats (required)
        The number of model parameters.

Returns:

    metamss_loss : float
        The MetaMSS loss function.

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
+    self.max_lag = self._get_max_lag()
+    return self
+

@@ -1872,49 +1840,37 @@
def metamss_loss(self, y_test, yhat, n_terms):
-    """Calculate the MetaMSS loss function
-
-    Parameters
-    ----------
-    y_test : ndarray of floats
-        The output data (initial conditions) to be used in the prediction process.
-    yhat : ndarray of floats
-        The n-steps-ahead predicted values of the model.
-    n_terms : ndarray of floats
-        The number of model parameters.
def metamss_loss(self, y_test, yhat, n_terms):
+    """Calculate the MetaMSS loss function
+
+    Parameters
+    ----------
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+    yhat : ndarray of floats
+        The n-steps-ahead predicted values of the model.
+    n_terms : ndarray of floats
+        The number of model parameters.
+
+    Returns
+    -------
+    metamss_loss : float
+        The MetaMSS loss function
+
+    """
+    penalty_count = np.arange(0, self.dimension)
+    penalty_distribution = (np.log(n_terms + 1) ** (-1)) / self.dimension
+    penalty = self.sigmoid_linear_unit_derivative(
+        penalty_count, self.dimension / 2, penalty_distribution
+    )
 
-    Returns
-    -------
-    metamss_loss : float
-        The MetaMSS loss function
-
-    """
-    penalty_count = np.arange(0, self.dimension)
-    penalty_distribution = (np.log(n_terms + 1) ** (-1)) / self.dimension
-    penalty = self.sigmoid_linear_unit_derivative(
-        penalty_count, self.dimension / 2, penalty_distribution
-    )
-
-    penalty = penalty - np.min(penalty)
-    rmse = root_relative_squared_error(y_test, yhat)
-    fitness = rmse * penalty[n_terms]
-    if np.isnan(fitness):
-        fitness = 30
-
-    return fitness
+    penalty = penalty - np.min(penalty)
+    rmse = root_relative_squared_error(y_test, yhat)
+    fitness = rmse * penalty[n_terms]
+    if np.isnan(fitness):
+        fitness = 30
+
+    return fitness
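The penalty construction in `metamss_loss` can be sketched in isolation (a hedged sketch, not the library API; `dimension`, `n_terms`, and `rmse` are assumed values here): a dSiLU-derivative curve over the regressor count is shifted so its minimum is zero, and the prediction error is scaled by the penalty at the candidate model's size.

```python
import numpy as np

def dsilu_derivative(x, c, a):
    """Derivative of the SiLU-style penalty curve, as in the method above."""
    s = 1 / (1 + np.exp(-a * (x - c)))
    return s * (1 + a * (x - c) * (1 - s))

dimension = 10  # assumed number of candidate regressors
n_terms = 3     # assumed size of the candidate model
penalty_count = np.arange(0, dimension)
penalty_distribution = (np.log(n_terms + 1) ** -1) / dimension
penalty = dsilu_derivative(penalty_count, dimension / 2, penalty_distribution)
penalty = penalty - np.min(penalty)  # anchor the smallest penalty at zero

rmse = 0.2                         # assumed relative error of the candidate
fitness = rmse * penalty[n_terms]  # error weighted by the size penalty
```

The effect is that two candidates with similar errors are ranked by parsimony: the one using fewer terms lands on a smaller penalty value.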
 

perform_t_test(psi, theta, residues)

Perform the t-test given the p-value defined by the user

Arguments:
psi : array
     the data matrix of regressors
 theta : array
@@ -1928,7 +1884,19 @@
     from the population since they are insignificant
 t_test : array
     the values of the p_value of each regressor of the model
-
Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
@@ -1965,69 +1933,69 @@
def perform_t_test(
-    self, psi: np.ndarray, theta: np.ndarray, residues: np.ndarray
-) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-    """
-    Perform the t-test given the p-value defined by the user
-
-    Arguments:
-    ----------
-        psi : array
-            the data matrix of regressors
-        theta : array
-            the parameters estimated via least squares algorithm
-        residues : array
-            the identification residues of the solution
-        p_value_confidence : double
-            parameter selected by the user to perform the statistical t-test
-
-    Returns:
-    --------
-        pos_insignificant_terms : array
-            these regressors in the actual candidate solution are removed
-            from the population since they are insignificant
-        t_test : array
-            the values of the p_value of each regressor of the model
-
-    """
-    sum_of_squared_residues = np.sum(residues**2)
-    variance_of_residues = (sum_of_squared_residues) / (
-        len(residues) - psi.shape[1]
-    )
-    if np.isnan(variance_of_residues):
-        variance_of_residues = 4.3645e05
-
-    skk = np.linalg.pinv(psi.T.dot(psi))
-    skk_diag = np.diag(skk)
-    var_e = variance_of_residues * skk_diag
-    se_theta = np.sqrt(var_e)
-    se_theta = se_theta.reshape(-1, 1)
-    t_test = theta / se_theta
-    degree_of_freedom = psi.shape[0] - psi.shape[1]
-
-    tail2P = 2 * t.cdf(-np.abs(t_test), degree_of_freedom)
-
-    pos_insignificant_terms = np.where(tail2P > self.p_value)[0]
-    pos_insignificant_terms = pos_insignificant_terms.reshape(-1, 1).T
-    if pos_insignificant_terms.shape == 0:
-        return np.array([]), t_test, tail2P
-
-    # t_test and tail2P will be returned in future updates
-    return pos_insignificant_terms, t_test, tail2P
-

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict method provides a friendly interface: given a previously trained model, it predicts the output for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Parameters:

    X_test : ndarray of floats (required)
        The input data to be used in the prediction process.
    y_test : ndarray of floats (required)
        The output data to be used in the prediction process.
    steps_ahead : int (default: None)
        The user can use free run simulation, one-step ahead prediction
        and n-step ahead prediction.
    forecast_horizon : int (default: None)
        The number of predictions over the time.

Returns:

    yhat : ndarray of floats
        The predicted values of the model.

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
def perform_t_test(
+    self, psi: np.ndarray, theta: np.ndarray, residues: np.ndarray
+) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """
+    Perform the t-test given the p-value defined by the user
+
+    Arguments:
+    ----------
+        psi : array
+            the data matrix of regressors
+        theta : array
+            the parameters estimated via least squares algorithm
+        residues : array
+            the identification residues of the solution
+        p_value_confidence : double
+            parameter selected by the user to perform the statistical t-test
+
+    Returns:
+    --------
+        pos_insignificant_terms : array
+            these regressors in the actual candidate solution are removed
+            from the population since they are insignificant
+        t_test : array
+            the values of the p_value of each regressor of the model
+
+    """
+    sum_of_squared_residues = np.sum(residues**2)
+    variance_of_residues = (sum_of_squared_residues) / (
+        len(residues) - psi.shape[1]
+    )
+    if np.isnan(variance_of_residues):
+        variance_of_residues = 4.3645e05
+
+    skk = np.linalg.pinv(psi.T.dot(psi))
+    skk_diag = np.diag(skk)
+    var_e = variance_of_residues * skk_diag
+    se_theta = np.sqrt(var_e)
+    se_theta = se_theta.reshape(-1, 1)
+    t_test = theta / se_theta
+    degree_of_freedom = psi.shape[0] - psi.shape[1]
+
+    tail2P = 2 * t.cdf(-np.abs(t_test), degree_of_freedom)
+
+    pos_insignificant_terms = np.where(tail2P > self.p_value)[0]
+    pos_insignificant_terms = pos_insignificant_terms.reshape(-1, 1).T
+    if pos_insignificant_terms.size == 0:
+        return np.array([]), t_test, tail2P
+
+    # t_test and tail2P will be returned in future updates
+    return pos_insignificant_terms, t_test, tail2P
+
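The t-test logic above can be reproduced on synthetic data as a hedged, standalone sketch: the second regressor is pure noise, so its coefficient will typically test as insignificant, while the first clearly passes. Two assumptions are made for self-containment: the data is synthetic, and `two_tailed_p` replaces scipy's `t.cdf` with a normal approximation, which is reasonable at ~198 degrees of freedom.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200
psi = np.column_stack([rng.normal(size=n), rng.normal(size=n)])
y = 2.0 * psi[:, [0]] + 0.1 * rng.normal(size=(n, 1))  # only term 0 matters

theta, *_ = np.linalg.lstsq(psi, y, rcond=None)  # least-squares estimates
residues = y - psi @ theta

# standard errors of theta, as in perform_t_test
variance = np.sum(residues**2) / (len(residues) - psi.shape[1])
se_theta = np.sqrt(variance * np.diag(np.linalg.pinv(psi.T @ psi))).reshape(-1, 1)
t_stat = theta / se_theta

def two_tailed_p(t_value):
    # normal approximation of 2 * t.cdf(-|t|, dof), an assumption of this sketch
    return 2 * 0.5 * (1 + math.erf(-abs(t_value) / math.sqrt(2)))

p_values = np.array([two_tailed_p(v) for v in t_stat.ravel()])
significant = p_values < 0.05  # term 0 should clearly pass this threshold
```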

@@ -2060,65 +2028,65 @@
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted values given an input.
-
-    The predict function allows a friendly usage by the user.
-    Given a previously trained model, predict values given
-    a new set of data.
-
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future)
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input.
+
+    The predict method provides a friendly interface: given a
+    previously trained model, it predicts the output for a new
+    set of data.
+
+    This method accepts y values mainly for n-steps-ahead prediction
+    (to be implemented in the future).
+
+    Parameters
+    ----------
+    X_test : ndarray of floats
+        The input data to be used in the prediction process.
+    y_test : ndarray of floats
+        The output data to be used in the prediction process.
+    steps_ahead : int (default = None)
+        The user can use free run simulation, one-step ahead prediction
+        and n-step ahead prediction.
+    forecast_horizon : int, default=None
+        The number of predictions over the time.
 
-    Parameters
-    ----------
-    X_test : ndarray of floats
-        The input data to be used in the prediction process.
-    y_test : ndarray of floats
-        The output data to be used in the prediction process.
-    steps_ahead : int (default = None)
-        The user can use free run simulation, one-step ahead prediction
-        and n-step ahead prediction.
-    forecast_horizon : int, default=None
-        The number of predictions over the time.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
-
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
-        return yhat
-
-    raise NotImplementedError(
-        "MetaMSS doesn't support basis functions other than polynomial yet.",
-    )
-

sigmoid_linear_unit_derivative(x, c, a)

Calculate the derivative of the Sigmoid Linear Unit function.

The derivative of the Sigmoid Linear Unit (dSiLU) function can be viewed as an overshooting version of the sigmoid function.

Parameters:

    x : ndarray (required)
        The range of the regressors space.
    a : float (required)
        The rate of change.
    c : int (required)
        Corresponds to the x value where y = 0.5.

Returns:

    penalty : ndarray of floats
        The values of the penalty function.

Source code in sysidentpy\model_structure_selection\meta_model_structure_selection.py
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    raise NotImplementedError(
+        "MetaMSS doesn't support basis functions other than polynomial yet.",
+    )
+
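The output convention in every branch of `predict` follows the same pattern and is worth making explicit with a minimal sketch (the `raw_prediction` values here are assumed placeholders, not real model output): the first `max_lag` samples of `y` are the initial conditions and are re-attached in front of the predictions, so the free-run `yhat` keeps the same length as `y`.

```python
import numpy as np

max_lag = 2
y = np.array([[0.1], [0.2], [0.3], [0.4], [0.5]])
raw_prediction = np.array([[0.29], [0.41], [0.52]])  # assumed model output

# re-attach the initial conditions, as predict does for every branch
yhat = np.concatenate([y[:max_lag], raw_prediction], axis=0)
```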

@@ -2131,42 +2099,30 @@
def sigmoid_linear_unit_derivative(self, x, c, a):
-    """Calculate the derivative of the Sigmoid Linear Unit function.
def sigmoid_linear_unit_derivative(self, x, c, a):
+    """Calculate the derivative of the Sigmoid Linear Unit function.
+
+    The derivative of the Sigmoid Linear Unit (dSiLU) function can be
+    viewed as an overshooting version of the sigmoid function.
+
+    Parameters
+    ----------
+    x : ndarray
+        The range of the regressors space.
+    a : float
+        The rate of change.
+    c : int
+        Corresponds to the x value where y = 0.5.
 
-    The derivative of Sigmoid Linear Unit (dSiLU) function can be
-    viewed as a overshooting version of the sigmoid function.
-
-    Parameters
-    ----------
-    x : ndarray
-        The range of the regressors space.
-    a : float
-        The rate of change.
-    c : int
-        Corresponds to the x value where y = 0.5.
-
-    Returns
-    -------
-    penalty : ndarray of floats
-        The values of the penalty function
-
-    """
-    return (
-        1
-        / (1 + np.exp(-a * (x - c)))
-        * (1 + (a * (x - c)) * (1 - 1 / (1 + np.exp(-a * (x - c)))))
-    )
+    Returns
+    -------
+    penalty : ndarray of floats
+        The values of the penalty function
+
+    """
+    return (
+        1
+        / (1 + np.exp(-a * (x - c)))
+        * (1 + (a * (x - c)) * (1 - 1 / (1 + np.exp(-a * (x - c)))))
+    )
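The closed form returned above equals d/dz [z * sigmoid(z)] evaluated at z = a * (x - c). A small numerical sanity check (a sketch, not library code) compares it against a central finite difference of SiLU in terms of z:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dsilu(z):
    # closed form used above, written in terms of z = a * (x - c)
    return sigmoid(z) * (1 + z * (1 - sigmoid(z)))

z = np.linspace(-6.0, 6.0, 25)
h = 1e-6
# central finite difference of SiLU(z) = z * sigmoid(z)
numeric = ((z + h) * sigmoid(z + h) - (z - h) * sigmoid(z - h)) / (2 * h)
max_err = float(np.max(np.abs(dsilu(z) - numeric)))
```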
 
\ No newline at end of file
diff --git a/docs/code/narmax-base/index.html b/docs/code/narmax-base/index.html
index 751526d1..7ae50e16 100644
--- a/docs/code/narmax-base/index.html
+++ b/docs/code/narmax-base/index.html
@@ -10,18 +10,7 @@
 body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);}
 body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);}
 body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}
-

Documentation for narmax-base

Base classes for NARMAX estimator.

BaseMSS

Bases: RegressorDictionary

Base class for Model Structure Selection

Source code in sysidentpy\narmax_base.py
@@ -385,422 +374,377 @@
class BaseMSS(RegressorDictionary, metaclass=ABCMeta):
-    """Base class for Model Structure Selection"""
-
-    @abstractmethod
-    def __init__(self):
-        super().__init__(self)
-        self.max_lag = None
-        self.n_inputs = None
-        self.theta = None
-        self.final_model = None
-        self.pivv = None
-
-    @abstractmethod
-    def fit(self, *, X, y):
-        """abstract method"""
-
-    @abstractmethod
-    def predict(
-        self,
-        *,
-        X: Union[np.ndarray, None] = None,
-        y: Union[np.ndarray, None] = None,
-        steps_ahead: Union[int, None] = None,
-        forecast_horizon: Union[int, None] = None,
-    ) -> np.ndarray:
-        """abstract methods"""
class BaseMSS(RegressorDictionary, metaclass=ABCMeta):
+    """Base class for Model Structure Selection"""
+
+    @abstractmethod
+    def __init__(self):
+        super().__init__(self)
+        self.max_lag = None
+        self.n_inputs = None
+        self.theta = None
+        self.final_model = None
+        self.pivv = None
+
+    @abstractmethod
+    def fit(self, *, X, y):
+        """abstract method"""
 
-    def _code2exponents(self, *, code):
-        """
-        Convert regressor code to exponents array.
-
-        Parameters
-        ----------
-        code : 1D-array of int
-            Codification of one regressor.
-        """
-        regressors = np.array(list(set(code)))
-        regressors_count = Counter(code)
-
-        if np.all(regressors == 0):
-            return np.zeros(self.max_lag * (1 + self.n_inputs))
+    @abstractmethod
+    def predict(
+        self,
+        *,
+        X: Union[np.ndarray, None] = None,
+        y: Union[np.ndarray, None] = None,
+        steps_ahead: Union[int, None] = None,
+        forecast_horizon: Union[int, None] = None,
+    ) -> np.ndarray:
+        """abstract methods"""
+
+    def _code2exponents(self, *, code):
+        """
+        Convert regressor code to exponents array.
 
-        exponents = np.array([], dtype=float)
-        elements = np.round(np.divide(regressors, 1000), 0)[(regressors > 0)].astype(
-            int
-        )
-
-        for j in range(1, self.n_inputs + 2):
-            base_exponents = np.zeros(self.max_lag, dtype=float)
-            if j in elements:
-                for i in range(1, self.max_lag + 1):
-                    regressor_code = int(j * 1000 + i)
-                    base_exponents[-i] = regressors_count[regressor_code]
-                exponents = np.append(exponents, base_exponents)
-
-            else:
-                exponents = np.append(exponents, base_exponents)
+        Parameters
+        ----------
+        code : 1D-array of int
+            Codification of one regressor.
+        """
+        regressors = np.array(list(set(code)))
+        regressors_count = Counter(code)
+
+        if np.all(regressors == 0):
+            return np.zeros(self.max_lag * (1 + self.n_inputs))
+
+        exponents = np.array([], dtype=float)
+        elements = np.round(np.divide(regressors, 1000), 0)[(regressors > 0)].astype(
+            int
+        )
 
-        return exponents
-
-    def _one_step_ahead_prediction(self, X_base: np.ndarray) -> np.ndarray:
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        for j in range(1, self.n_inputs + 2):
+            base_exponents = np.zeros(self.max_lag, dtype=float)
+            if j in elements:
+                for i in range(1, self.max_lag + 1):
+                    regressor_code = int(j * 1000 + i)
+                    base_exponents[-i] = regressors_count[regressor_code]
+                exponents = np.append(exponents, base_exponents)
+
+            else:
+                exponents = np.append(exponents, base_exponents)
+
+        return exponents
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        yhat = np.dot(X_base, self.theta.flatten())
-        return yhat.reshape(-1, 1)
-
-    @abstractmethod
-    def _model_prediction(
-        self,
-        X,
-        y_initial,
-        forecast_horizon=None,
-    ):
-        """model prediction wrapper"""
-
-    def _narmax_predict(
-        self,
-        X,
-        y_initial,
-        forecast_horizon,
-    ):
-        """narmax_predict method"""
-        y_output = np.zeros(
-            forecast_horizon, dtype=float
-        )  # np.zeros(X.shape[0], dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
-
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, forecast_horizon):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            raw_regressor[:final] = y_output[k:i]
-            for j in range(self.n_inputs):
-                init += self.max_lag
-                final += self.max_lag
-                raw_regressor[init:final] = X[k:i, j]
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
-
-            y_output[i] = np.dot(regressor_value, self.theta.flatten())
-        return y_output[self.max_lag : :].reshape(-1, 1)
-
-    @abstractmethod
-    def _nfir_predict(self, X, y_initial):
-        """nfir predict method"""
-        y_output = np.zeros(X.shape[0], dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
-        X = X.reshape(-1, self.n_inputs)
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, X.shape[0]):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            raw_regressor[:final] = y_output[k:i]
-            for j in range(self.n_inputs):
-                init += self.max_lag
-                final += self.max_lag
-                raw_regressor[init:final] = X[k:i, j]
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
-
-            y_output[i] = np.dot(regressor_value, self.theta.flatten())
-        return y_output[self.max_lag : :].reshape(-1, 1)
-
-    def _nar_step_ahead(self, y, steps_ahead):
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
+    def _one_step_ahead_prediction(self, X_base: np.ndarray) -> np.ndarray:
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        yhat = np.dot(X_base, self.theta.flatten())
+        return yhat.reshape(-1, 1)
+
+    @abstractmethod
+    def _model_prediction(
+        self,
+        X,
+        y_initial,
+        forecast_horizon=None,
+    ):
+        """model prediction wrapper"""
+
+    def _narmax_predict(
+        self,
+        X,
+        y_initial,
+        forecast_horizon,
+    ):
+        """narmax_predict method"""
+        y_output = np.zeros(
+            forecast_horizon, dtype=float
+        )  # np.zeros(X.shape[0], dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, forecast_horizon):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            raw_regressor[:final] = y_output[k:i]
+            for j in range(self.n_inputs):
+                init += self.max_lag
+                final += self.max_lag
+                raw_regressor[init:final] = X[k:i, j]
+
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            y_output[i] = np.dot(regressor_value, self.theta.flatten())
+        return y_output[self.max_lag : :].reshape(-1, 1)
+
+    @abstractmethod
+    def _nfir_predict(self, X, y_initial):
+        """nfir predict method"""
+        y_output = np.zeros(X.shape[0], dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+        X = X.reshape(-1, self.n_inputs)
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, X.shape[0]):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            raw_regressor[:final] = y_output[k:i]
+            for j in range(self.n_inputs):
+                init += self.max_lag
+                final += self.max_lag
+                raw_regressor[init:final] = X[k:i, j]
 
-        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-        yhat = np.zeros(len(y) + steps_ahead, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-
-        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-        if len(steps) > 1:
-            for step in steps[:-1]:
-                yhat[i : i + steps_ahead] = self._model_prediction(
-                    X=None, y_initial=y[step:i], forecast_horizon=steps_ahead
-                )[-steps_ahead:].ravel()
-                i += steps_ahead
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            y_output[i] = np.dot(regressor_value, self.theta.flatten())
+        return y_output[self.max_lag : :].reshape(-1, 1)
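Both loops above rely on `_code2exponents` to turn encoded regressors into exponent vectors. The helper below is a simplified, hypothetical stand-in for that conversion (assuming one input and max_lag = 2, so the raw-regressor layout is [y(k-1), y(k-2), x(k-1), x(k-2)]), shown only to make the layout concrete.

```python
import numpy as np

# Simplified stand-in: in SysIdentPy's encoding, e.g. 1001 denotes y(k-1)
# and 2001 denotes x1(k-1); 0 is the constant term. The exponent vector
# counts how many times each raw regressor appears in a term.
def code2exponents(code, max_lag=2):
    exponents = np.zeros(2 * max_lag, dtype=int)
    for c in code:
        if c == 0:                            # constant contributes nothing
            continue
        kind, lag = divmod(int(c), 1000)      # 1 -> y, 2 -> x
        exponents[(kind - 1) * max_lag + (lag - 1)] += 1
    return exponents

term = code2exponents([1001, 2001])  # the term y(k-1) * x(k-1)
```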
+
+    def _nar_step_ahead(self, y, steps_ahead):
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
 
-            steps_ahead = np.sum(np.isnan(yhat))
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=None, y_initial=y[steps[-1] : i]
-            )[-steps_ahead:].ravel()
-        else:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=None, y_initial=y[0:i], forecast_horizon=steps_ahead
-            )[-steps_ahead:].ravel()
-
-        yhat = yhat.ravel()[self.max_lag : :]
-        return yhat.reshape(-1, 1)
-
-    def narmax_n_step_ahead(self, X, y, steps_ahead):
-        """n_steps ahead prediction method for NARMAX model"""
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-        X = X.reshape(-1, self.n_inputs)
-        yhat = np.zeros(X.shape[0], dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-        if len(steps) > 1:
-            for step in steps[:-1]:
-                yhat[i : i + steps_ahead] = self._model_prediction(
-                    X=X[step : i + steps_ahead],
-                    y_initial=y[step:i],
-                )[-steps_ahead:].ravel()
-                i += steps_ahead
-
-            steps_ahead = np.sum(np.isnan(yhat))
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[steps[-1] : i + steps_ahead],
-                y_initial=y[steps[-1] : i],
-            )[-steps_ahead:].ravel()
-        else:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[0 : i + steps_ahead],
-                y_initial=y[0:i],
-            )[-steps_ahead:].ravel()
-
-        yhat = yhat.ravel()[self.max_lag : :]
-        return yhat.reshape(-1, 1)
-
-    @abstractmethod
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
+        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+        yhat = np.zeros(len(y) + steps_ahead, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        i = self.max_lag
+
+        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+        if len(steps) > 1:
+            for step in steps[:-1]:
+                yhat[i : i + steps_ahead] = self._model_prediction(
+                    X=None, y_initial=y[step:i], forecast_horizon=steps_ahead
+                )[-steps_ahead:].ravel()
+                i += steps_ahead
+
+            steps_ahead = np.sum(np.isnan(yhat))
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=None, y_initial=y[steps[-1] : i]
+            )[-steps_ahead:].ravel()
+        else:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=None, y_initial=y[0:i], forecast_horizon=steps_ahead
+            )[-steps_ahead:].ravel()
+
+        yhat = yhat.ravel()[self.max_lag : :]
+        return yhat.reshape(-1, 1)
+
+    def narmax_n_step_ahead(self, X, y, steps_ahead):
+        """n_steps ahead prediction method for NARMAX model"""
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+        X = X.reshape(-1, self.n_inputs)
+        yhat = np.zeros(X.shape[0], dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        i = self.max_lag
+        steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+        if len(steps) > 1:
+            for step in steps[:-1]:
+                yhat[i : i + steps_ahead] = self._model_prediction(
+                    X=X[step : i + steps_ahead],
+                    y_initial=y[step:i],
+                )[-steps_ahead:].ravel()
+                i += steps_ahead
+
+            steps_ahead = np.sum(np.isnan(yhat))
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[steps[-1] : i + steps_ahead],
+                y_initial=y[steps[-1] : i],
+            )[-steps_ahead:].ravel()
+        else:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[0 : i + steps_ahead],
+                y_initial=y[0:i],
+            )[-steps_ahead:].ravel()
+
+        yhat = yhat.ravel()[self.max_lag : :]
+        return yhat.reshape(-1, 1)
 
-
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
+    @abstractmethod
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
 
-        yhat = np.zeros(X.shape[0], dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-        X = X.reshape(-1, self.n_inputs)
-        while i < len(y):
-            k = int(i - self.max_lag)
-            # if i + steps_ahead > len(y):
-            #     steps_ahead = len(y) - i  # predicts the remaining values
-
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[k : i + steps_ahead], y_initial=y[k : i + steps_ahead]
-            )[-steps_ahead:].ravel()
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """

+        if self.model_type == "NARMAX":
+            return self.narmax_n_step_ahead(X, y, steps_ahead)
 
-            i += steps_ahead
-
-        yhat = yhat.ravel()[self.max_lag : :]
-        """
-        if self.model_type == "NARMAX":
-            return self.narmax_n_step_ahead(X, y, steps_ahead)
-
-        if self.model_type == "NAR":
-            return self._nar_step_ahead(y, steps_ahead)
+        if self.model_type == "NAR":
+            return self._nar_step_ahead(y, steps_ahead)
+
+    @abstractmethod
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        """basis function prediction"""
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
 
-    @abstractmethod
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        """basis function prediction"""
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
-
-        # Discard unnecessary initial values
-        # yhat[0:self.max_lag] = y_initial[0:self.max_lag]
-        analyzed_elements_number = self.max_lag + 1
-
-        for i in range(0, forecast_horizon - self.max_lag):
-            if self.model_type == "NARMAX":
-                lagged_data = self.build_input_output_matrix(
-                    X[i : i + analyzed_elements_number],
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1),
-                )
-            elif self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1)
-                )
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(
-                    X[i : i + analyzed_elements_number]
-                )
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            X_tmp, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-
-            a = X_tmp @ self.theta
-            yhat[i + self.max_lag] = a[:, 0]
-
-        return yhat[self.max_lag :].reshape(-1, 1)
-
-    @abstractmethod
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """basis function n step ahead"""
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        # Discard unnecessary initial values
+        # yhat[0:self.max_lag] = y_initial[0:self.max_lag]
+        analyzed_elements_number = self.max_lag + 1
+
+        for i in range(0, forecast_horizon - self.max_lag):
+            if self.model_type == "NARMAX":
+                lagged_data = self.build_input_output_matrix(
+                    X[i : i + analyzed_elements_number],
+                    yhat[i : i + analyzed_elements_number].reshape(-1, 1),
+                )
+            elif self.model_type == "NAR":
+                lagged_data = self.build_output_matrix(
+                    None, yhat[i : i + analyzed_elements_number].reshape(-1, 1)
+                )
+            elif self.model_type == "NFIR":
+                lagged_data = self.build_input_matrix(
+                    X[i : i + analyzed_elements_number], None
+                )
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            X_tmp, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+
+            a = X_tmp @ self.theta
+            yhat[i + self.max_lag] = a[:, 0]
+
+        return yhat[self.max_lag :].reshape(-1, 1)
+
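Each pass of the loop above reduces to three steps: build one lagged window, expand it with the basis function, and take the inner product with theta. A sketch with a hypothetical degree-2 polynomial basis and made-up coefficients:

```python
import numpy as np

def poly2_transform(row):
    # hypothetical degree-2 polynomial basis: all monomials of [r1, r2]
    r1, r2 = row
    return np.array([1.0, r1, r2, r1 * r1, r1 * r2, r2 * r2])

theta = np.array([0.1, 0.5, 0.3, 0.0, 0.2, 0.0])  # made-up coefficients
lagged_row = np.array([0.4, -0.2])                # e.g. [y(k-1), x(k-1)]
yhat_k = poly2_transform(lagged_row) @ theta      # one predicted sample
```

In the method itself the lagged window is produced by `build_input_output_matrix`, `build_output_matrix`, or `build_input_matrix` depending on `model_type`, and the basis expansion by `self.basis_function.transform`.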
+    @abstractmethod
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """basis function n step ahead"""
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+
+        # Discard unnecessary initial values
+        i = self.max_lag
+
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
 
-        # Discard unnecessary initial values
-        i = self.max_lag
-
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead],
-                    y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-steps_ahead:].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-steps_ahead:].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
+            if self.model_type == "NARMAX":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X[k : i + steps_ahead],
+                    y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-steps_ahead:].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-steps_ahead:].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            i += steps_ahead
+
+        return yhat[self.max_lag :].reshape(-1, 1)
+
+    @abstractmethod
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        """basis n steps horizon"""
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
 
-        return yhat[self.max_lag :].reshape(-1, 1)
-
-    @abstractmethod
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        """basis n steps horizon"""
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-
-        # Discard unnecessary initial values
-        i = self.max_lag
-
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead], y[k : i + steps_ahead], forecast_horizon
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
-
-        yhat = yhat.ravel()
-        return yhat[self.max_lag :].reshape(-1, 1)
-

fit(*, X, y) abstractmethod

abstract method

Source code in sysidentpy\narmax_base.py
@abstractmethod
-def fit(self, *, X, y):
-    """abstract method"""
-

narmax_n_step_ahead(X, y, steps_ahead)

n_steps ahead prediction method for NARMAX model

Source code in sysidentpy\narmax_base.py
+        # Discard unnecessary initial values
+        i = self.max_lag
+
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
+
+            if self.model_type == "NARMAX":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X[k : i + steps_ahead], y[k : i + steps_ahead], forecast_horizon
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            i += steps_ahead
+
+        yhat = yhat.ravel()
+        return yhat[self.max_lag :].reshape(-1, 1)
+

fit(*, X, y) abstractmethod

abstract method

Source code in sysidentpy\narmax_base.py
@abstractmethod
+def fit(self, *, X, y):
+    """abstract method"""
+

narmax_n_step_ahead(X, y, steps_ahead)

n_steps ahead prediction method for NARMAX model

Source code in sysidentpy\narmax_base.py
@@ -818,58 +762,78 @@
def narmax_n_step_ahead(self, X, y, steps_ahead):
-    """n_steps ahead prediction method for NARMAX model"""
-    if len(y) < self.max_lag:
-        raise Exception("Insufficient initial conditions elements!")
-
-    to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
-    X = X.reshape(-1, self.n_inputs)
-    yhat = np.zeros(X.shape[0], dtype=float)
-    yhat.fill(np.nan)
-    yhat[: self.max_lag] = y[: self.max_lag, 0]
-    i = self.max_lag
-    steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
-    if len(steps) > 1:
-        for step in steps[:-1]:
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X=X[step : i + steps_ahead],
-                y_initial=y[step:i],
-            )[-steps_ahead:].ravel()
-            i += steps_ahead
-
-        steps_ahead = np.sum(np.isnan(yhat))
-        yhat[i : i + steps_ahead] = self._model_prediction(
-            X=X[steps[-1] : i + steps_ahead],
-            y_initial=y[steps[-1] : i],
-        )[-steps_ahead:].ravel()
-    else:
-        yhat[i : i + steps_ahead] = self._model_prediction(
-            X=X[0 : i + steps_ahead],
-            y_initial=y[0:i],
-        )[-steps_ahead:].ravel()
-
-    yhat = yhat.ravel()[self.max_lag : :]
-    return yhat.reshape(-1, 1)
-

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None) abstractmethod

abstract methods

Source code in sysidentpy\narmax_base.py
@abstractmethod
-def predict(
-    self,
-    *,
-    X: Union[np.ndarray, None] = None,
-    y: Union[np.ndarray, None] = None,
-    steps_ahead: Union[int, None] = None,
-    forecast_horizon: Union[int, None] = None,
-) -> np.ndarray:
-    """abstract methods"""
def narmax_n_step_ahead(self, X, y, steps_ahead):
+    """n_steps ahead prediction method for NARMAX model"""
+    if len(y) < self.max_lag:
+        raise ValueError(
+            "Insufficient initial condition elements! Expected at least"
+            f" {self.max_lag} elements."
+        )
+
+    to_remove = int(np.ceil((len(y) - self.max_lag) / steps_ahead))
+    X = X.reshape(-1, self.n_inputs)
+    yhat = np.zeros(X.shape[0], dtype=float)
+    yhat.fill(np.nan)
+    yhat[: self.max_lag] = y[: self.max_lag, 0]
+    i = self.max_lag
+    steps = [step for step in range(0, to_remove * steps_ahead, steps_ahead)]
+    if len(steps) > 1:
+        for step in steps[:-1]:
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X=X[step : i + steps_ahead],
+                y_initial=y[step:i],
+            )[-steps_ahead:].ravel()
+            i += steps_ahead
+
+        steps_ahead = np.sum(np.isnan(yhat))
+        yhat[i : i + steps_ahead] = self._model_prediction(
+            X=X[steps[-1] : i + steps_ahead],
+            y_initial=y[steps[-1] : i],
+        )[-steps_ahead:].ravel()
+    else:
+        yhat[i : i + steps_ahead] = self._model_prediction(
+            X=X[0 : i + steps_ahead],
+            y_initial=y[0:i],
+        )[-steps_ahead:].ravel()
+
+    yhat = yhat.ravel()[self.max_lag : :]
+    return yhat.reshape(-1, 1)
+

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None) abstractmethod

abstract methods

Source code in sysidentpy\narmax_base.py
@abstractmethod
+def predict(
+    self,
+    *,
+    X: Union[np.ndarray, None] = None,
+    y: Union[np.ndarray, None] = None,
+    steps_ahead: Union[int, None] = None,
+    forecast_horizon: Union[int, None] = None,
+) -> np.ndarray:
+    """abstract methods"""
 

InformationMatrix

Class for methods regarding preprocessing of columns

Source code in sysidentpy\narmax_base.py
@@ -1123,7 +1087,9 @@
-272
class InformationMatrix:
class InformationMatrix:
     """Class for methods regarding preprocessing of columns"""
 
     def __init__(
@@ -1292,7 +1258,7 @@
         lagged_data = np.concatenate([y_lagged, x_lagged], axis=1)
         return lagged_data
 
-    def build_output_matrix(self, y: np.ndarray) -> np.ndarray:
+    def build_output_matrix(self, *args: np.ndarray) -> np.ndarray:
         """Build the information matrix of output values.
 
         Each column of the information matrix represents a candidate
@@ -1314,71 +1280,72 @@
         # related to its respective lags. With this approach we can create
         # the information matrix by using all possible combination of
         # the columns as a product in the iterations
-        self.ylag = self._process_ylag()
-        y_lagged = self._create_lagged_y(y)
-        constant = np.ones([y_lagged.shape[0], 1])
-        data = np.concatenate([constant, y_lagged], axis=1)
-        return data
-
-    def build_input_matrix(self, X: np.ndarray) -> np.ndarray:
-        """Build the information matrix of input values.
-
-        Each columns of the information matrix represents a candidate
-        regressor. The set of candidate regressors are based on xlag,
-        ylag, and degree entered by the user.
-
-        Parameters
-        ----------
-        X : array-like
-            Input data used on training phase.
-
-        Returns
-        -------
-        lagged_data = ndarray of floats
-            The lagged matrix built in respect with each lag and column.
-
-        """
-        # Generate a lagged data which each column is a input or output
-        # related to its respective lags. With this approach we can create
-        # the information matrix by using all possible combination of
-        # the columns as a product in the iterations
-
-        n_inputs, self.xlag = self._process_xlag(X)
-        x_lagged = self._create_lagged_X(X, n_inputs)
-        constant = np.ones([x_lagged.shape[0], 1])
-        data = np.concatenate([constant, x_lagged], axis=1)
-        return data
-
-    def build_input_output_matrix(self, X: np.ndarray, y: np.ndarray) -> np.ndarray:
-        """Build the information matrix.
-
-        Each columns of the information matrix represents a candidate
-        regressor. The set of candidate regressors are based on xlag,
-        ylag, and degree entered by the user.
-
-        Parameters
-        ----------
-        y : array-like
-            Target data used on training phase.
-        X : array-like
-            Input data used on training phase.
-
-        Returns
-        -------
-        lagged_data = ndarray of floats
-            The lagged matrix built in respect with each lag and column.
-
-        """
-        # Generate a lagged data which each column is a input or output
-        # related to its respective lags. With this approach we can create
-        # the information matrix by using all possible combination of
-        # the columns as a product in the iterations
-        lagged_data = self.initial_lagged_matrix(X, y)
-        constant = np.ones([lagged_data.shape[0], 1])
-        data = np.concatenate([constant, lagged_data], axis=1)
-        return data
-

build_input_matrix(X)

Build the information matrix of input values.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

Name Type Description Default
X array-like

Input data used on training phase.

required

Returns:

Type Description
lagged_data

The lagged matrix built with respect to each lag and column.

Source code in sysidentpy\narmax_base.py
+        y = args[1]  # args[0] is X=None in NAR scenario
+        self.ylag = self._process_ylag()
+        y_lagged = self._create_lagged_y(y)
+        constant = np.ones([y_lagged.shape[0], 1])
+        data = np.concatenate([constant, y_lagged], axis=1)
+        return data
+
+    def build_input_matrix(self, *args: np.ndarray) -> np.ndarray:
+        """Build the information matrix of input values.
+
+        Each column of the information matrix represents a candidate
+        regressor. The set of candidate regressors is based on the xlag,
+        ylag, and degree entered by the user.
+
+        Parameters
+        ----------
+        X : array-like
+            Input data used on training phase.
+
+        Returns
+        -------
+        lagged_data : ndarray of floats
+            The lagged matrix built with respect to each lag and column.
+
+        """
+        # Generate a lagged data which each column is a input or output
+        # related to its respective lags. With this approach we can create
+        # the information matrix by using all possible combination of
+        # the columns as a product in the iterations
+
+        X = args[0]  # args[1] is y=None in NFIR scenario
+        n_inputs, self.xlag = self._process_xlag(X)
+        x_lagged = self._create_lagged_X(X, n_inputs)
+        constant = np.ones([x_lagged.shape[0], 1])
+        data = np.concatenate([constant, x_lagged], axis=1)
+        return data
+
+    def build_input_output_matrix(self, X: np.ndarray, y: np.ndarray) -> np.ndarray:
+        """Build the information matrix.
+
+        Each column of the information matrix represents a candidate
+        regressor. The set of candidate regressors is based on the xlag,
+        ylag, and degree entered by the user.
+
+        Parameters
+        ----------
+        y : array-like
+            Target data used on training phase.
+        X : array-like
+            Input data used on training phase.
+
+        Returns
+        -------
+        lagged_data : ndarray of floats
+            The lagged matrix built with respect to each lag and column.
+
+        """
+        # Generate a lagged data which each column is a input or output
+        # related to its respective lags. With this approach we can create
+        # the information matrix by using all possible combination of
+        # the columns as a product in the iterations
+        lagged_data = self.initial_lagged_matrix(X, y)
+        constant = np.ones([lagged_data.shape[0], 1])
+        data = np.concatenate([constant, lagged_data], axis=1)
+        return data
+
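The switch to a uniform positional signature above is what lets a single dispatch table call any builder with both `(X, y)` arguments, passing `None` for the one a given model type ignores. The stand-in builders below are deliberately minimal illustrations, not the library's implementations:

```python
import numpy as np

def build_output_matrix(X, y):   # NAR: X is ignored (passed as None)
    return np.column_stack([np.ones(len(y) - 1), y[:-1]])

def build_input_matrix(X, y):    # NFIR: y is ignored (passed as None)
    return np.column_stack([np.ones(len(X) - 1), X[:-1]])

builders = {"NAR": build_output_matrix, "NFIR": build_input_matrix}

y = np.array([1.0, 2.0, 3.0])
lagged = builders["NAR"](None, y)  # every builder takes the same (X, y) pair
```

This is the same readability win the changelog describes for `build_matrix` in `BaseMSS`: the per-model if/elif/else chains collapse into one lookup.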

build_input_matrix(*args)

Build the information matrix of input values.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

Name Type Description Default
X array-like

Input data used on training phase.

required

Returns:

Type Description
lagged_data

The lagged matrix built with respect to each lag and column.

Source code in sysidentpy\narmax_base.py
@@ -1404,37 +1371,38 @@
def build_input_matrix(self, X: np.ndarray) -> np.ndarray:
-    """Build the information matrix of input values.
-
-    Each columns of the information matrix represents a candidate
-    regressor. The set of candidate regressors are based on xlag,
-    ylag, and degree entered by the user.
-
-    Parameters
-    ----------
-    X : array-like
-        Input data used on training phase.
-
-    Returns
-    -------
-    lagged_data = ndarray of floats
-        The lagged matrix built in respect with each lag and column.
-
-    """
-    # Generate a lagged data which each column is a input or output
-    # related to its respective lags. With this approach we can create
-    # the information matrix by using all possible combination of
-    # the columns as a product in the iterations
-
-    n_inputs, self.xlag = self._process_xlag(X)
-    x_lagged = self._create_lagged_X(X, n_inputs)
-    constant = np.ones([x_lagged.shape[0], 1])
-    data = np.concatenate([constant, x_lagged], axis=1)
-    return data
-

build_input_output_matrix(X, y)

Build the information matrix.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

Name Type Description Default
y array-like

Target data used on training phase.

required
X array-like

Input data used on training phase.

required

Returns:

Type Description
lagged_data

The lagged matrix built with respect to each lag and column.

Source code in sysidentpy\narmax_base.py
def build_input_matrix(self, *args: np.ndarray) -> np.ndarray:
+    """Build the information matrix of input values.
+
+    Each column of the information matrix represents a candidate
+    regressor. The set of candidate regressors is based on the xlag,
+    ylag, and degree entered by the user.
+
+    Parameters
+    ----------
+    X : array-like
+        Input data used in the training phase.
+
+    Returns
+    -------
+    lagged_data : ndarray of floats
+        The lagged matrix built with respect to each lag and column.
+
+    """
+    # Generate lagged data in which each column is an input or output
+    # related to its respective lags. With this approach we can create
+    # the information matrix by using all possible combinations of
+    # the columns as products in the iterations
+
+    X = args[0]  # args[1] is y=None in NFIR scenario
+    n_inputs, self.xlag = self._process_xlag(X)
+    x_lagged = self._create_lagged_X(X, n_inputs)
+    constant = np.ones([x_lagged.shape[0], 1])
+    data = np.concatenate([constant, x_lagged], axis=1)
+    return data
+

build_input_output_matrix(X, y)

Build the information matrix.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

- y (array-like, required): Target data used in the training phase.
- X (array-like, required): Input data used in the training phase.

Returns:

- lagged_data (ndarray of floats): The lagged matrix built with respect to each lag and column.

Source code in sysidentpy\narmax_base.py
def build_input_output_matrix(self, X: np.ndarray, y: np.ndarray) -> np.ndarray:
-    """Build the information matrix.
-
-    Each columns of the information matrix represents a candidate
-    regressor. The set of candidate regressors are based on xlag,
-    ylag, and degree entered by the user.
-
-    Parameters
-    ----------
-    y : array-like
-        Target data used on training phase.
-    X : array-like
-        Input data used on training phase.
-
-    Returns
-    -------
-    lagged_data = ndarray of floats
-        The lagged matrix built in respect with each lag and column.
-
-    """
-    # Generate a lagged data which each column is a input or output
-    # related to its respective lags. With this approach we can create
-    # the information matrix by using all possible combination of
-    # the columns as a product in the iterations
-    lagged_data = self.initial_lagged_matrix(X, y)
-    constant = np.ones([lagged_data.shape[0], 1])
-    data = np.concatenate([constant, lagged_data], axis=1)
-    return data
-

build_output_matrix(y)

Build the information matrix of output values.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

- y (array-like, required): Target data used in the training phase.

Returns:

- lagged_data (ndarray of floats): The lagged matrix built with respect to each lag and column.

Source code in sysidentpy\narmax_base.py
def build_input_output_matrix(self, X: np.ndarray, y: np.ndarray) -> np.ndarray:
+    """Build the information matrix.
+
+    Each column of the information matrix represents a candidate
+    regressor. The set of candidate regressors is based on the xlag,
+    ylag, and degree entered by the user.
+
+    Parameters
+    ----------
+    y : array-like
+        Target data used in the training phase.
+    X : array-like
+        Input data used in the training phase.
+
+    Returns
+    -------
+    lagged_data : ndarray of floats
+        The lagged matrix built with respect to each lag and column.
+
+    """
+    # Generate lagged data in which each column is an input or output
+    # related to its respective lags. With this approach we can create
+    # the information matrix by using all possible combinations of
+    # the columns as products in the iterations
+    lagged_data = self.initial_lagged_matrix(X, y)
+    constant = np.ones([lagged_data.shape[0], 1])
+    data = np.concatenate([constant, lagged_data], axis=1)
+    return data
+

build_output_matrix(*args)

Build the information matrix of output values.

Each column of the information matrix represents a candidate regressor. The set of candidate regressors is based on the xlag, ylag, and degree entered by the user.

Parameters:

- y (array-like, required): Target data used in the training phase.

Returns:

- lagged_data (ndarray of floats): The lagged matrix built with respect to each lag and column.
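For intuition, a minimal sketch (assuming ylag=2 and zero-padding at the start of the record, mirroring the lagged-column construction described above, but not the library's own `_create_lagged_y`) of the matrix shape this method returns:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
n = len(y)

# Columns: constant, y(k-1), y(k-2); entries before the lag is available are zero.
y1 = np.vstack([np.zeros((1, 1)), y[:-1]])
y2 = np.vstack([np.zeros((2, 1)), y[:-2]])
data = np.hstack([np.ones((n, 1)), y1, y2])
```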

Source code in sysidentpy\narmax_base.py
-def build_output_matrix(self, y: np.ndarray) -> np.ndarray:
+def build_output_matrix(self, *args: np.ndarray) -> np.ndarray:
     """Build the information matrix of output values.
 
     Each columns of the information matrix represents a candidate
@@ -1535,11 +1506,12 @@
     # related to its respective lags. With this approach we can create
     # the information matrix by using all possible combination of
     # the columns as a product in the iterations
-    self.ylag = self._process_ylag()
-    y_lagged = self._create_lagged_y(y)
-    constant = np.ones([y_lagged.shape[0], 1])
-    data = np.concatenate([constant, y_lagged], axis=1)
-    return data
+    y = args[1]  # args[0] is X=None in NAR scenario
+    self.ylag = self._process_ylag()
+    y_lagged = self._create_lagged_y(y)
+    constant = np.ones([y_lagged.shape[0], 1])
+    data = np.concatenate([constant, y_lagged], axis=1)
+    return data
 

initial_lagged_matrix(X, y)

Build a lagged matrix concerning each lag for each column.

Parameters:

- y (array-like, required): Target data used in the training phase.
- X (array-like, required): Input data used in the training phase.

Returns:

- lagged_data (ndarray of floats): The lagged matrix built with respect to each lag and column.

Examples:

Let X and y be the input and output values of shape Nx1. If the chosen lags are 2 for both input and output, the initial lagged matrix will be formed by Y[k-1], Y[k-2], X[k-1], and X[k-2].
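The example above can be sketched directly with NumPy (a standalone illustration under the zero-padding assumption; `shift` is a hypothetical helper, not the library's `_shift_column`):

```python
import numpy as np

def shift(v, lag):
    # Move v down by `lag` samples, padding the start with zeros.
    out = np.zeros_like(v)
    out[lag:] = v[: len(v) - lag]
    return out

X = np.array([10.0, 20.0, 30.0, 40.0])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Columns: y(k-1), y(k-2), x(k-1), x(k-2)
lagged = np.column_stack([shift(y, 1), shift(y, 2), shift(X, 1), shift(X, 2)])
```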

Source code in sysidentpy\narmax_base.py
     aux = col_to_shift[0 : n_samples - lag].reshape(-1, 1)
     tmp_column[lag:, 0] = aux[:, 0]
     return tmp_column
-

Orthogonalization

Householder reflection and transformation.

Source code in sysidentpy\narmax_base.py

class Orthogonalization:
-    """Householder reflection and transformation."""
-
-    def house(self, x):
-        """Perform a Householder reflection of vector.
-
-        Parameters
-        ----------
-        x : array-like of shape = number_of_training_samples
-            The respective column of the matrix of regressors in each
-            iteration of ERR function.
-
-        Returns
-        -------
-        v : array-like of shape = number_of_training_samples
-            The reflection of the array x.
-
-        References
-        ----------
-        - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
-            Orthogonal least squares methods and their application to non-linear
-            system identification.
-
-        """
-        u = np.linalg.norm(x, 2)
-        if u != 0:
-            aux_b = x[0] + np.sign(x[0]) * u
-            x = x[1:] / aux_b
-            x = np.concatenate((np.array([1]), x))
-        return x
-
-    def rowhouse(self, RA, v):
-        """Perform a row Householder transformation.
-
-        Parameters
-        ----------
-        RA : array-like of shape = number_of_training_samples
-            The respective column of the matrix of regressors in each
-            iteration of ERR function.
-        v : array-like of shape = number_of_training_samples
-            The reflected vector obtained by using the householder reflection.
-
-        Returns
-        -------
-        B : array-like of shape = number_of_training_samples
-
-        References
-        ----------
-        - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
-            Orthogonal least squares methods and their application to
-            non-linear system identification. International Journal of
-            control, 50(5), 1873-1896.
-
-        """
-        b = -2 / np.dot(v.T, v)
-        w = b * np.dot(RA.T, v)
-        w = w.reshape(1, -1)
-        v = v.reshape(-1, 1)
-        RA = RA + v * w
-        B = RA
-        return B
-

house(x)

Perform a Householder reflection of vector.

Parameters:

- x (array-like of shape number_of_training_samples, required): The respective column of the matrix of regressors in each iteration of the ERR function.

Returns:

- v (array-like of shape number_of_training_samples): The reflection of the array x.

References

- Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989). Orthogonal least squares methods and their application to non-linear system identification.
Source code in sysidentpy\narmax_base.py
class Orthogonalization:
+    """Householder reflection and transformation."""
+
+    def house(self, x):
+        """Perform a Householder reflection of vector.
+
+        Parameters
+        ----------
+        x : array-like of shape = number_of_training_samples
+            The respective column of the matrix of regressors in each
+            iteration of ERR function.
+
+        Returns
+        -------
+        v : array-like of shape = number_of_training_samples
+            The reflection of the array x.
+
+        References
+        ----------
+        - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
+            Orthogonal least squares methods and their application to non-linear
+            system identification.
+
+        """
+        u = np.linalg.norm(x, 2)
+        if u != 0:
+            aux_b = x[0] + np.sign(x[0]) * u
+            x = x[1:] / aux_b
+            x = np.concatenate((np.array([1]), x))
+        return x
+
+    def rowhouse(self, RA, v):
+        """Perform a row Householder transformation.
+
+        Parameters
+        ----------
+        RA : array-like of shape = number_of_training_samples
+            The respective column of the matrix of regressors in each
+            iteration of ERR function.
+        v : array-like of shape = number_of_training_samples
+            The reflected vector obtained by using the Householder reflection.
+
+        Returns
+        -------
+        B : array-like of shape = number_of_training_samples
+
+        References
+        ----------
+        - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
+            Orthogonal least squares methods and their application to
+            non-linear system identification. International Journal of
+            control, 50(5), 1873-1896.
+
+        """
+        b = -2 / np.dot(v.T, v)
+        w = b * np.dot(RA.T, v)
+        w = w.reshape(1, -1)
+        v = v.reshape(-1, 1)
+        RA = RA + v * w
+        B = RA
+        return B
+
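To see the reflection in action, here is a quick standalone check using the same formulas as `house` and `rowhouse` above (restated so the snippet runs on its own): reflecting a column zeroes everything below its first entry while preserving its norm.

```python
import numpy as np

def house(x):
    # Householder reflection vector (same formula as the method above)
    u = np.linalg.norm(x, 2)
    if u != 0:
        aux_b = x[0] + np.sign(x[0]) * u
        x = np.concatenate((np.array([1.0]), x[1:] / aux_b))
    return x

def rowhouse(RA, v):
    # Apply the transformation: RA + v * w with w = -2 v^T RA / (v^T v)
    b = -2 / np.dot(v.T, v)
    w = (b * np.dot(RA.T, v)).reshape(1, -1)
    return RA + v.reshape(-1, 1) * w

x = np.array([3.0, 4.0])
v = house(x.copy())
B = rowhouse(x.reshape(-1, 1), v)
# B[0, 0] has magnitude ||x|| = 5 and B[1, 0] is (numerically) zero
```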

house(x)

Perform a Householder reflection of vector.

Parameters:

- x (array-like of shape number_of_training_samples, required): The respective column of the matrix of regressors in each iteration of the ERR function.

Returns:

- v (array-like of shape number_of_training_samples): The reflection of the array x.

References

- Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989). Orthogonal least squares methods and their application to non-linear system identification.
Source code in sysidentpy\narmax_base.py
def house(self, x):
-    """Perform a Householder reflection of vector.
-
-    Parameters
-    ----------
-    x : array-like of shape = number_of_training_samples
-        The respective column of the matrix of regressors in each
-        iteration of ERR function.
-
-    Returns
-    -------
-    v : array-like of shape = number_of_training_samples
-        The reflection of the array x.
-
-    References
-    ----------
-    - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
-        Orthogonal least squares methods and their application to non-linear
-        system identification.
-
-    """
-    u = np.linalg.norm(x, 2)
-    if u != 0:
-        aux_b = x[0] + np.sign(x[0]) * u
-        x = x[1:] / aux_b
-        x = np.concatenate((np.array([1]), x))
-    return x
-

rowhouse(RA, v)

Perform a row Householder transformation.

Parameters:

- RA (array-like of shape number_of_training_samples, required): The respective column of the matrix of regressors in each iteration of the ERR function.
- v (array-like of shape number_of_training_samples, required): The reflected vector obtained by using the Householder reflection.

Returns:

- B (array-like of shape number_of_training_samples): The transformed matrix.

References

- Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989). Orthogonal least squares methods and their application to non-linear system identification. International Journal of Control, 50(5), 1873-1896.
Source code in sysidentpy\narmax_base.py
def house(self, x):
+    """Perform a Householder reflection of vector.
+
+    Parameters
+    ----------
+    x : array-like of shape = number_of_training_samples
+        The respective column of the matrix of regressors in each
+        iteration of ERR function.
+
+    Returns
+    -------
+    v : array-like of shape = number_of_training_samples
+        The reflection of the array x.
+
+    References
+    ----------
+    - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
+        Orthogonal least squares methods and their application to non-linear
+        system identification.
+
+    """
+    u = np.linalg.norm(x, 2)
+    if u != 0:
+        aux_b = x[0] + np.sign(x[0]) * u
+        x = x[1:] / aux_b
+        x = np.concatenate((np.array([1]), x))
+    return x
+

rowhouse(RA, v)

Perform a row Householder transformation.

Parameters:

- RA (array-like of shape number_of_training_samples, required): The respective column of the matrix of regressors in each iteration of the ERR function.
- v (array-like of shape number_of_training_samples, required): The reflected vector obtained by using the Householder reflection.

Returns:

- B (array-like of shape number_of_training_samples): The transformed matrix.

References

- Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989). Orthogonal least squares methods and their application to non-linear system identification. International Journal of Control, 50(5), 1873-1896.
Source code in sysidentpy\narmax_base.py
def rowhouse(self, RA, v):
-    """Perform a row Householder transformation.
-
-    Parameters
-    ----------
-    RA : array-like of shape = number_of_training_samples
-        The respective column of the matrix of regressors in each
-        iteration of ERR function.
-    v : array-like of shape = number_of_training_samples
-        The reflected vector obtained by using the householder reflection.
-
-    Returns
-    -------
-    B : array-like of shape = number_of_training_samples
-
-    References
-    ----------
-    - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
-        Orthogonal least squares methods and their application to
-        non-linear system identification. International Journal of
-        control, 50(5), 1873-1896.
-
-    """
-    b = -2 / np.dot(v.T, v)
-    w = b * np.dot(RA.T, v)
-    w = w.reshape(1, -1)
-    v = v.reshape(-1, 1)
-    RA = RA + v * w
-    B = RA
-    return B
-

RegressorDictionary

Bases: InformationMatrix

Base class for Model Structure Selection
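The class encodes regressors as integers (per `create_narmax_code` below: 100n → y(k-n), 200n → u(k-n), and 300n, 400n, ... for additional inputs). A tiny decoder sketch (a hypothetical helper, not part of the library) makes the scheme concrete:

```python
def decode_regressor(code: int) -> str:
    # 0 encodes the constant term; 1xxx encodes y(k-lag);
    # 2xxx, 3xxx, ... encode the first, second, ... input at the given lag.
    if code == 0:
        return "1"
    var, lag = divmod(code, 1000)
    name = "y" if var == 1 else f"x{var - 1}"
    return f"{name}(k-{lag})"

print(decode_regressor(1001))  # y(k-1)
print(decode_regressor(2003))  # x1(k-3)
print(decode_regressor(3002))  # x2(k-2)
```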

Source code in sysidentpy\narmax_base.py
def rowhouse(self, RA, v):
+    """Perform a row Householder transformation.
+
+    Parameters
+    ----------
+    RA : array-like of shape = number_of_training_samples
+        The respective column of the matrix of regressors in each
+        iteration of ERR function.
+    v : array-like of shape = number_of_training_samples
+        The reflected vector obtained by using the Householder reflection.
+
+    Returns
+    -------
+    B : array-like of shape = number_of_training_samples
+
+    References
+    ----------
+    - Manuscript: Chen, S., Billings, S. A., & Luo, W. (1989).
+        Orthogonal least squares methods and their application to
+        non-linear system identification. International Journal of
+        control, 50(5), 1873-1896.
+
+    """
+    b = -2 / np.dot(v.T, v)
+    w = b * np.dot(RA.T, v)
+    w = w.reshape(1, -1)
+    v = v.reshape(-1, 1)
+    RA = RA + v * w
+    B = RA
+    return B
+

RegressorDictionary

Bases: InformationMatrix

Base class for Model Structure Selection

Source code in sysidentpy\narmax_base.py
class RegressorDictionary(InformationMatrix):
-    """Base class for Model Structure Selection"""
-
-    def __init__(
-        self,
-        xlag: Union[List[Any], Any] = 1,
-        ylag: Union[List[Any], Any] = 1,
-        basis_function: Union[Polynomial, Fourier] = Polynomial(),
-        model_type: str = "NARMAX",
-    ):
-        super().__init__(xlag, ylag)
-        self.basis_function = basis_function
-        self.model_type = model_type
-
-    def create_narmax_code(self, n_inputs: int) -> Tuple[np.ndarray, np.ndarray]:
-        """Create the code representation of the regressors.
-
-        This function generates a codification from all possibles
-        regressors given the maximum lag of the input and output.
-        This is used to write the final terms of the model in a
-        readable form. [1001] -> y(k-1).
-        This code format was based on a dissertation from UFMG. See
-        reference below.
-
-        Parameters
-        ----------
-        n_inputs : int
-            Number of input variables.
-
-        Returns
-        -------
-        max_lag : int
-            This value can be used by another functions.
-        regressor_code : ndarray of int
-            Matrix codification of all possible regressors.
-
-        Examples
-        --------
-        The codification is defined as:
-
-        >>> 100n = y(k-n)
-        >>> 200n = u(k-n)
-        >>> [100n 100n] = y(k-n)y(k-n)
-        >>> [200n 200n] = u(k-n)u(k-n)
-
-        References
-        ----------
-        - Master Thesis: Barbosa, Alípio Monteiro.
-            Técnicas de otimização bi-objetivo para a determinação
-            da estrutura de modelos NARX (2010).
-
-        """
-        if self.basis_function.degree < 1:
-            raise ValueError(
-                f"degree must be integer and > zero. Got {self.basis_function.degree}"
-            )
-
-        if np.min(np.minimum(self.ylag, 1)) < 1:
-            raise ValueError(
-                f"ylag must be integer or list and > zero. Got {self.ylag}"
-            )
-
-        if np.min(np.min(list(chain.from_iterable([[self.xlag]])))) < 1:
-            raise ValueError(
-                f"xlag must be integer or list and > zero. Got {self.xlag}"
-            )
-
-        y_vec = self.get_y_lag_list()
class RegressorDictionary(InformationMatrix):
+    """Base class for Model Structure Selection"""
+
+    def __init__(
+        self,
+        xlag: Union[List[Any], Any] = 1,
+        ylag: Union[List[Any], Any] = 1,
+        basis_function: Union[Polynomial, Fourier] = Polynomial(),
+        model_type: str = "NARMAX",
+    ):
+        super().__init__(xlag, ylag)
+        self.basis_function = basis_function
+        self.model_type = model_type
+
+    def create_narmax_code(self, n_inputs: int) -> Tuple[np.ndarray, np.ndarray]:
+        """Create the code representation of the regressors.
+
+        This function generates a codification of all possible
+        regressors given the maximum lag of the input and output.
+        This is used to write the final terms of the model in a
+        readable form. [1001] -> y(k-1).
+        This code format was based on a dissertation from UFMG. See
+        reference below.
+
+        Parameters
+        ----------
+        n_inputs : int
+            Number of input variables.
+
+        Returns
+        -------
+        max_lag : int
+            This value can be used by other functions.
+        regressor_code : ndarray of int
+            Matrix codification of all possible regressors.
+
+        Examples
+        --------
+        The codification is defined as:
+
+        >>> 100n = y(k-n)
+        >>> 200n = u(k-n)
+        >>> [100n 100n] = y(k-n)y(k-n)
+        >>> [200n 200n] = u(k-n)u(k-n)
+
+        References
+        ----------
+        - Master Thesis: Barbosa, Alípio Monteiro.
+            Técnicas de otimização bi-objetivo para a determinação
+            da estrutura de modelos NARX (2010).
+
+        """
+        if self.basis_function.degree < 1:
+            raise ValueError(
+                f"degree must be integer and > zero. Got {self.basis_function.degree}"
+            )
+
+        if np.min(np.minimum(self.ylag, 1)) < 1:
+            raise ValueError(
+                f"ylag must be integer or list and > zero. Got {self.ylag}"
+            )
+
+        if np.min(np.min(list(chain.from_iterable([[self.xlag]])))) < 1:
+            raise ValueError(
+                f"xlag must be integer or list and > zero. Got {self.xlag}"
+            )
 
-        if n_inputs == 1:
-            x_vec = self.get_siso_x_lag_list()
-        else:
-            x_vec = self.get_miso_x_lag_list(n_inputs)
-
-        return x_vec, y_vec
+        y_vec = self.get_y_lag_list()
+
+        if n_inputs == 1:
+            x_vec = self.get_siso_x_lag_list()
+        else:
+            x_vec = self.get_miso_x_lag_list(n_inputs)
 
-    def get_y_lag_list(self) -> np.ndarray:
-        """Return y regressor code list.
-
-        Returns
-        -------
-        y_vec = ndarray of ints
-            The y regressor code list given the ylag.
-
-        """
-        if isinstance(self.ylag, list):
-            # create only the lags passed from list
-            y_vec = []
-            y_vec.extend([lag + 1000 for lag in self.ylag])
-            return np.array(y_vec)
-
-        # create a range of lags if passed a int value
-        return np.arange(1001, 1001 + self.ylag)
-
-    def get_siso_x_lag_list(self) -> np.ndarray:
-        """Return x regressor code list for SISO models.
-
-        Returns
-        -------
-        x_vec_tmp = ndarray of ints
-            The x regressor code list given the xlag for a SISO model.
-
-        """
-        if isinstance(self.xlag, list):
-            # create only the lags passed from list
-            x_vec_tmp = []
-            x_vec_tmp.extend([lag + 2000 for lag in self.xlag])
-            return np.array(x_vec_tmp)
-
-        # create a range of lags if passed a int value
-        return np.arange(2001, 2001 + self.xlag)
-
-    def get_miso_x_lag_list(self, n_inputs: int) -> np.ndarray:
-        """Return x regressor code list for MISO models.
-
-        Returns
-        -------
-        x_vec = ndarray of ints
-            The x regressor code list given the xlag for a MISO model.
-
-        """
-        # only list are allowed if n_inputs > 1
-        # the user must entered list of the desired lags explicitly
-        x_vec_tmp = []
-        for i in range(n_inputs):
-            if isinstance(self.xlag[i], list):
-                # create 200n, 300n,..., 400n to describe each input
-                x_vec_tmp.extend([lag + 2000 + i * 1000 for lag in self.xlag[i]])
-            elif isinstance(self.xlag[i], int) and n_inputs > 1:
-                x_vec_tmp.extend(
-                    [np.arange(2001 + i * 1000, 2001 + i * 1000 + self.xlag[i])]
-                )
-
-        # if x_vec is a nested list, ensure all elements are arrays
-        all_arrays = [np.array([i]) if isinstance(i, int) else i for i in x_vec_tmp]
-        return np.concatenate([i for i in all_arrays])
-
-    def regressor_space(self, n_inputs):
-        """Create regressor code based on model type.
-
-        Parameters
-        ----------
-        n_inputs : int
-            Number of input variables.
-
-        Returns
-        -------
-        regressor_code = ndarray of ints
-            The regressor code list given the xlag and ylag for a MISO model.
-
-        """
-        x_vec, y_vec = self.create_narmax_code(n_inputs)
-        reg_aux = np.array([0])
-        if self.model_type == "NARMAX":
-            reg_aux = np.concatenate([reg_aux, y_vec, x_vec])
-        elif self.model_type == "NAR":
-            reg_aux = np.concatenate([reg_aux, y_vec])
-        elif self.model_type == "NFIR":
-            reg_aux = np.concatenate([reg_aux, x_vec])
-        else:
-            raise ValueError(
-                "Unrecognized model type. Model type should be NARMAX, NAR or NFIR"
-            )
-
-        regressor_code = list(
-            combinations_with_replacement(reg_aux, self.basis_function.degree)
-        )
-
-        regressor_code = np.array(regressor_code)
-        regressor_code = regressor_code[:, regressor_code.shape[1] :: -1]
-        return regressor_code
-
-    def _get_index_from_regressor_code(self, regressor_code, model_code):
-        """Get the index of user regressor in regressor space.
-
-        Took from: https://stackoverflow.com/questions/38674027/find-the-row-indexes-of-several-values-in-a-numpy-array/38674038#38674038
+        return x_vec, y_vec
+
+    def get_y_lag_list(self) -> np.ndarray:
+        """Return y regressor code list.
+
+        Returns
+        -------
+        y_vec : ndarray of ints
+            The y regressor code list given the ylag.
+
+        """
+        if isinstance(self.ylag, list):
+            # create only the lags passed from list
+            y_vec = []
+            y_vec.extend([lag + 1000 for lag in self.ylag])
+            return np.array(y_vec)
+
+        # create a range of lags if passed an int value
+        return np.arange(1001, 1001 + self.ylag)
+
+    def get_siso_x_lag_list(self) -> np.ndarray:
+        """Return x regressor code list for SISO models.
+
+        Returns
+        -------
+        x_vec_tmp : ndarray of ints
+            The x regressor code list given the xlag for a SISO model.
+
+        """
+        if isinstance(self.xlag, list):
+            # create only the lags passed from list
+            x_vec_tmp = []
+            x_vec_tmp.extend([lag + 2000 for lag in self.xlag])
+            return np.array(x_vec_tmp)
+
+        # create a range of lags if passed an int value
+        return np.arange(2001, 2001 + self.xlag)
+
+    def get_miso_x_lag_list(self, n_inputs: int) -> np.ndarray:
+        """Return x regressor code list for MISO models.
+
+        Returns
+        -------
+        x_vec : ndarray of ints
+            The x regressor code list given the xlag for a MISO model.
+
+        """
+        # only lists are allowed if n_inputs > 1
+        # the user must enter the desired lags explicitly as lists
+        x_vec_tmp = []
+        for i in range(n_inputs):
+            if isinstance(self.xlag[i], list):
+                # create 200n, 300n,..., 400n to describe each input
+                x_vec_tmp.extend([lag + 2000 + i * 1000 for lag in self.xlag[i]])
+            elif isinstance(self.xlag[i], int) and n_inputs > 1:
+                x_vec_tmp.extend(
+                    [np.arange(2001 + i * 1000, 2001 + i * 1000 + self.xlag[i])]
+                )
+
+        # if x_vec is a nested list, ensure all elements are arrays
+        all_arrays = [np.array([i]) if isinstance(i, int) else i for i in x_vec_tmp]
+        return np.concatenate([i for i in all_arrays])
+
+    def regressor_space(self, n_inputs):
+        """Create regressor code based on model type.
+
+        Parameters
+        ----------
+        n_inputs : int
+            Number of input variables.
+
+        Returns
+        -------
+        regressor_code : ndarray of ints
+            The regressor code list given the xlag and ylag for a MISO model.
+
+        """
+        x_vec, y_vec = self.create_narmax_code(n_inputs)
+        reg_aux = np.array([0])
+        if self.model_type == "NARMAX":
+            reg_aux = np.concatenate([reg_aux, y_vec, x_vec])
+        elif self.model_type == "NAR":
+            reg_aux = np.concatenate([reg_aux, y_vec])
+        elif self.model_type == "NFIR":
+            reg_aux = np.concatenate([reg_aux, x_vec])
+        else:
+            raise ValueError(
+                "Unrecognized model type. Model type should be NARMAX, NAR or NFIR"
+            )
+
+        regressor_code = list(
+            combinations_with_replacement(reg_aux, self.basis_function.degree)
+        )
+
+        regressor_code = np.array(regressor_code)
+        regressor_code = regressor_code[:, regressor_code.shape[1] :: -1]
+        return regressor_code
+
+    def _get_index_from_regressor_code(self, regressor_code, model_code):
+        """Get the index of user regressor in regressor space.
 
-        Parameters
-        ----------
-        regressor_code : ndarray of int
-            Matrix codification of all possible regressors.
-        model_code : ndarray of int
-            Model defined by the user to simulate.
-
-        Returns
-        -------
-        model_index : ndarray of int
-            Index of model code in the regressor space.
-
-        """
-        dims = regressor_code.max(0) + 1
-        model_index = np.where(
-            np.in1d(
-                np.ravel_multi_index(regressor_code.T, dims),
-                np.ravel_multi_index(model_code.T, dims),
-            )
-        )[0]
-        return model_index
-
-    def _list_output_regressor_code(self, model_code):
-        """Create a flattened array of output regressors.
-
-        Parameters
-        ----------
-        model_code : ndarray of int
-            Model defined by the user to simulate.
-
-        Returns
-        -------
-        model_code : ndarray of int
-            Flattened list of output regressors.
-
-        """
-        regressor_code = [
-            code for code in model_code.ravel() if (code != 0) and (str(code)[0] == "1")
-        ]
-
-        return np.asarray(regressor_code)
+        Taken from: https://stackoverflow.com/questions/38674027/find-the-row-indexes-of-several-values-in-a-numpy-array/38674038#38674038
+
+        Parameters
+        ----------
+        regressor_code : ndarray of int
+            Matrix codification of all possible regressors.
+        model_code : ndarray of int
+            Model defined by the user to simulate.
+
+        Returns
+        -------
+        model_index : ndarray of int
+            Index of model code in the regressor space.
+
+        """
+        dims = regressor_code.max(0) + 1
+        model_index = np.where(
+            np.in1d(
+                np.ravel_multi_index(regressor_code.T, dims),
+                np.ravel_multi_index(model_code.T, dims),
+            )
+        )[0]
+        return model_index
+
+    def _list_output_regressor_code(self, model_code):
+        """Create a flattened array of output regressors.
+
+        Parameters
+        ----------
+        model_code : ndarray of int
+            Model defined by the user to simulate.
+
+        Returns
+        -------
+        model_code : ndarray of int
+            Flattened list of output regressors.
+
+        """
+        regressor_code = [
+            code for code in model_code.ravel() if (code != 0) and (str(code)[0] == "1")
+        ]
 
-    def _list_input_regressor_code(self, model_code):
-        """Create a flattened array of input regressors.
-
-        Parameters
-        ----------
-        model_code : ndarray of int
-            Model defined by the user to simulate.
-
-        Returns
-        -------
-        model_code : ndarray of int
-            Flattened list of output regressors.
-
-        """
-        regressor_code = [
-            code for code in model_code.ravel() if (code != 0) and (str(code)[0] != "1")
-        ]
-        return np.asarray(regressor_code)
-
-    def _get_lag_from_regressor_code(self, regressors):
-        """Get the maximum lag from array of regressors.
-
-        Parameters
-        ----------
-        regressors : ndarray of int
-            Flattened list of input or output regressors.
-
-        Returns
-        -------
-        max_lag : int
-            Maximum lag of list of regressors.
-
-        """
-        lag_list = [
-            int(i) for i in regressors.astype("str") for i in [np.sum(int(i[2:]))]
-        ]
-        if len(lag_list) != 0:
-            return max(lag_list)
-
-        return 1
+        return np.asarray(regressor_code)
+
+    def _list_input_regressor_code(self, model_code):
+        """Create a flattened array of input regressors.
+
+        Parameters
+        ----------
+        model_code : ndarray of int
+            Model defined by the user to simulate.
+
+        Returns
+        -------
+        model_code : ndarray of int
+            Flattened list of output regressors.
+
+        """
+        regressor_code = [
+            code for code in model_code.ravel() if (code != 0) and (str(code)[0] != "1")
+        ]
+        return np.asarray(regressor_code)
+
+    def _get_lag_from_regressor_code(self, regressors):
+        """Get the maximum lag from array of regressors.
+
+        Parameters
+        ----------
+        regressors : ndarray of int
+            Flattened list of input or output regressors.
+
+        Returns
+        -------
+        max_lag : int
+            Maximum lag of list of regressors.
+
+        """
+        lag_list = [
+            int(i) for i in regressors.astype("str") for i in [np.sum(int(i[2:]))]
+        ]
+        if len(lag_list) != 0:
+            return max(lag_list)
 
-    def _get_max_lag_from_model_code(self, model_code):
-        """Create a flattened array of input regressors.
-
-        Parameters
-        ----------
-        model_code : ndarray of int
-            Model defined by the user to simulate.
-
-        Returns
-        -------
-        max_lag : int
-            Maximum lag of list of regressors.
-
-        """
-        xlag_code = self._list_input_regressor_code(model_code)
-        ylag_code = self._list_output_regressor_code(model_code)
-        xlag = self._get_lag_from_regressor_code(xlag_code)
-        ylag = self._get_lag_from_regressor_code(ylag_code)
-        return max(xlag, ylag)
-
-    def _get_max_lag(self):
-        """Get the max lag defined by the user.
-
-        Parameters
-        ----------
-        ylag : int
-            The maximum lag of output regressors.
-        xlag : int
-            The maximum lag of input regressors.
-
-        Returns
-        -------
-        max_lag = int
-            The max lag value defined by the user.
-        """
-        ny = np.max(list(chain.from_iterable([[self.ylag]])))
-        nx = np.max(list(chain.from_iterable([[np.array(self.xlag, dtype=object)]])))
-        return np.max([ny, np.max(nx)])
+        return 1
+
+    def _get_max_lag_from_model_code(self, model_code):
+        """Create a flattened array of input regressors.
+
+        Parameters
+        ----------
+        model_code : ndarray of int
+            Model defined by the user to simulate.
+
+        Returns
+        -------
+        max_lag : int
+            Maximum lag of list of regressors.
+
+        """
+        xlag_code = self._list_input_regressor_code(model_code)
+        ylag_code = self._list_output_regressor_code(model_code)
+        xlag = self._get_lag_from_regressor_code(xlag_code)
+        ylag = self._get_lag_from_regressor_code(ylag_code)
+        return max(xlag, ylag)
+
+    def _get_max_lag(self):
+        """Get the max lag defined by the user.
+
+        Parameters
+        ----------
+        ylag : int
+            The maximum lag of output regressors.
+        xlag : int
+            The maximum lag of input regressors.
+
+        Returns
+        -------
+        max_lag = int
+            The max lag value defined by the user.
+        """
+        ny = np.max(list(chain.from_iterable([[self.ylag]])))
+        nx = np.max(list(chain.from_iterable([[np.array(self.xlag, dtype=object)]])))
+        return np.max([ny, np.max(nx)])
+
+    def get_build_io_method(self, model_type):
+        """Get the build matrix method based on the model type."""
+        build_matrix_options = {
+            "NARMAX": self.build_input_output_matrix,
+            "NFIR": self.build_input_matrix,
+            "NAR": self.build_output_matrix,
+        }
+        return build_matrix_options.get(model_type, None)
 

create_narmax_code(n_inputs)

Create the code representation of the regressors.

This function generates a codification of all possible regressors given the maximum lags of the input and output. It is used to write the final terms of the model in a readable form, e.g. [1001] -> y(k-1). This code format is based on a dissertation from UFMG; see the reference below.

Parameters:

Name Type Description Default
n_inputs int

Number of input variables.

required

Returns:

Name Type Description
max_lag int

This value can be used by other functions.

regressor_code ndarray of int

Matrix codification of all possible regressors.

Examples:

The codification is defined as:

>>> 100n = y(k-n)
 >>> 200n = u(k-n)
 >>> [100n 100n] = y(k-n)y(k-n)
 >>> [200n 200n] = u(k-n)u(k-n)
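The codification above can be decoded back into readable terms. A minimal sketch of that decoding (the `decode_regressor` helper below is hypothetical, not part of SysIdentPy's API): the leading digit distinguishes the output (1) from the inputs (2, 3, ...), and the remaining digits hold the lag.

```python
# Hypothetical helper illustrating the regressor codification:
# 1001 -> y(k-1), 2003 -> u1(k-3), 3002 -> u2(k-2).
def decode_regressor(code: int) -> str:
    digits = str(code)
    lag = int(digits[1:])  # trailing digits encode the lag
    if digits[0] == "1":
        return f"y(k-{lag})"
    input_index = int(digits[0]) - 1  # 2xxx -> u1, 3xxx -> u2, ...
    return f"u{input_index}(k-{lag})"

print(decode_regressor(1001))  # y(k-1)
print(decode_regressor(3002))  # u2(k-2)
```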
-
References
  • Master Thesis: Barbosa, Alípio Monteiro. Técnicas de otimização bi-objetivo para a determinação da estrutura de modelos NARX (2010).
Source code in sysidentpy\narmax_base.py
+
References
  • Master Thesis: Barbosa, Alípio Monteiro. Técnicas de otimização bi-objetivo para a determinação da estrutura de modelos NARX (2010).
Source code in sysidentpy\narmax_base.py
@@ -2545,70 +2533,85 @@
def create_narmax_code(self, n_inputs: int) -> Tuple[np.ndarray, np.ndarray]:
-    """Create the code representation of the regressors.
-
-    This function generates a codification from all possibles
-    regressors given the maximum lag of the input and output.
-    This is used to write the final terms of the model in a
-    readable form. [1001] -> y(k-1).
-    This code format was based on a dissertation from UFMG. See
-    reference below.
-
-    Parameters
-    ----------
-    n_inputs : int
-        Number of input variables.
-
-    Returns
-    -------
-    max_lag : int
-        This value can be used by another functions.
-    regressor_code : ndarray of int
-        Matrix codification of all possible regressors.
-
-    Examples
-    --------
-    The codification is defined as:
-
-    >>> 100n = y(k-n)
-    >>> 200n = u(k-n)
-    >>> [100n 100n] = y(k-n)y(k-n)
-    >>> [200n 200n] = u(k-n)u(k-n)
-
-    References
-    ----------
-    - Master Thesis: Barbosa, Alípio Monteiro.
-        Técnicas de otimização bi-objetivo para a determinação
-        da estrutura de modelos NARX (2010).
-
-    """
-    if self.basis_function.degree < 1:
-        raise ValueError(
-            f"degree must be integer and > zero. Got {self.basis_function.degree}"
-        )
-
-    if np.min(np.minimum(self.ylag, 1)) < 1:
-        raise ValueError(
-            f"ylag must be integer or list and > zero. Got {self.ylag}"
-        )
-
-    if np.min(np.min(list(chain.from_iterable([[self.xlag]])))) < 1:
-        raise ValueError(
-            f"xlag must be integer or list and > zero. Got {self.xlag}"
-        )
-
-    y_vec = self.get_y_lag_list()
def create_narmax_code(self, n_inputs: int) -> Tuple[np.ndarray, np.ndarray]:
+    """Create the code representation of the regressors.
+
+    This function generates a codification of all possible
+    regressors given the maximum lag of the input and output.
+    This is used to write the final terms of the model in a
+    readable form. [1001] -> y(k-1).
+    This code format was based on a dissertation from UFMG. See
+    reference below.
+
+    Parameters
+    ----------
+    n_inputs : int
+        Number of input variables.
+
+    Returns
+    -------
+    max_lag : int
+        This value can be used by other functions.
+    regressor_code : ndarray of int
+        Matrix codification of all possible regressors.
+
+    Examples
+    --------
+    The codification is defined as:
+
+    >>> 100n = y(k-n)
+    >>> 200n = u(k-n)
+    >>> [100n 100n] = y(k-n)y(k-n)
+    >>> [200n 200n] = u(k-n)u(k-n)
+
+    References
+    ----------
+    - Master Thesis: Barbosa, Alípio Monteiro.
+        Técnicas de otimização bi-objetivo para a determinação
+        da estrutura de modelos NARX (2010).
+
+    """
+    if self.basis_function.degree < 1:
+        raise ValueError(
+            f"degree must be integer and > zero. Got {self.basis_function.degree}"
+        )
+
+    if np.min(np.minimum(self.ylag, 1)) < 1:
+        raise ValueError(
+            f"ylag must be integer or list and > zero. Got {self.ylag}"
+        )
+
+    if np.min(np.min(list(chain.from_iterable([[self.xlag]])))) < 1:
+        raise ValueError(
+            f"xlag must be integer or list and > zero. Got {self.xlag}"
+        )
 
-    if n_inputs == 1:
-        x_vec = self.get_siso_x_lag_list()
-    else:
-        x_vec = self.get_miso_x_lag_list(n_inputs)
-
-    return x_vec, y_vec
-

get_miso_x_lag_list(n_inputs)

Return x regressor code list for MISO models.

Returns:

Type Description
x_vec

The x regressor code list given the xlag for a MISO model.

Source code in sysidentpy\narmax_base.py
+    y_vec = self.get_y_lag_list()
+
+    if n_inputs == 1:
+        x_vec = self.get_siso_x_lag_list()
+    else:
+        x_vec = self.get_miso_x_lag_list(n_inputs)
+
+    return x_vec, y_vec
+

get_build_io_method(model_type)

Get the build matrix method based on the model type.

Source code in sysidentpy\narmax_base.py
def get_build_io_method(self, model_type):
+    """Get the build matrix method based on the model type."""
+    build_matrix_options = {
+        "NARMAX": self.build_input_output_matrix,
+        "NFIR": self.build_input_matrix,
+        "NAR": self.build_output_matrix,
+    }
+    return build_matrix_options.get(model_type, None)
+
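The dictionary dispatch shown above replaces if/elif/else chains: the model type string selects the matrix-building routine. A standalone illustration (the builder functions below are stand-ins, not the real SysIdentPy methods):

```python
# Stand-in builders illustrating the dictionary-dispatch pattern of
# get_build_io_method; the real methods build lagged information matrices.
def build_input_output_matrix():
    return "lagged X and y columns"

def build_input_matrix():
    return "lagged X columns only"

def build_output_matrix():
    return "lagged y columns only"

build_matrix_options = {
    "NARMAX": build_input_output_matrix,
    "NFIR": build_input_matrix,
    "NAR": build_output_matrix,
}

# Unknown model types fall through to None, mirroring .get(model_type, None)
builder = build_matrix_options.get("NFIR", None)
print(builder())  # lagged X columns only
```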

get_miso_x_lag_list(n_inputs)

Return x regressor code list for MISO models.

Returns:

Type Description
x_vec

The x regressor code list given the xlag for a MISO model.

Source code in sysidentpy\narmax_base.py
@@ -2629,33 +2632,33 @@
def get_miso_x_lag_list(self, n_inputs: int) -> np.ndarray:
-    """Return x regressor code list for MISO models.
-
-    Returns
-    -------
-    x_vec = ndarray of ints
-        The x regressor code list given the xlag for a MISO model.
-
-    """
-    # only list are allowed if n_inputs > 1
-    # the user must entered list of the desired lags explicitly
-    x_vec_tmp = []
-    for i in range(n_inputs):
-        if isinstance(self.xlag[i], list):
-            # create 200n, 300n,..., 400n to describe each input
-            x_vec_tmp.extend([lag + 2000 + i * 1000 for lag in self.xlag[i]])
-        elif isinstance(self.xlag[i], int) and n_inputs > 1:
-            x_vec_tmp.extend(
-                [np.arange(2001 + i * 1000, 2001 + i * 1000 + self.xlag[i])]
-            )
-
-    # if x_vec is a nested list, ensure all elements are arrays
-    all_arrays = [np.array([i]) if isinstance(i, int) else i for i in x_vec_tmp]
-    return np.concatenate([i for i in all_arrays])
-

get_siso_x_lag_list()

Return x regressor code list for SISO models.

Returns:

Type Description
x_vec_tmp

The x regressor code list given the xlag for a SISO model.

Source code in sysidentpy\narmax_base.py
def get_miso_x_lag_list(self, n_inputs: int) -> np.ndarray:
+    """Return x regressor code list for MISO models.
+
+    Returns
+    -------
+    x_vec = ndarray of ints
+        The x regressor code list given the xlag for a MISO model.
+
+    """
+    # only lists are allowed if n_inputs > 1
+    # the user must enter a list of the desired lags explicitly
+    x_vec_tmp = []
+    for i in range(n_inputs):
+        if isinstance(self.xlag[i], list):
+            # create 200n, 300n,..., 400n to describe each input
+            x_vec_tmp.extend([lag + 2000 + i * 1000 for lag in self.xlag[i]])
+        elif isinstance(self.xlag[i], int) and n_inputs > 1:
+            x_vec_tmp.extend(
+                [np.arange(2001 + i * 1000, 2001 + i * 1000 + self.xlag[i])]
+            )
+
+    # if x_vec is a nested list, ensure all elements are arrays
+    all_arrays = [np.array([i]) if isinstance(i, int) else i for i in x_vec_tmp]
+    return np.concatenate(all_arrays)
+
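The MISO codification can be reproduced in a standalone sketch: input `i` gets its codes in the `(i + 2) * 1000` block, so `u1` lags are 2001..., `u2` lags are 3001..., and so on. The `miso_x_lag_list` function below is a hypothetical simplification of the method above:

```python
import numpy as np

# Hypothetical simplification of get_miso_x_lag_list: an int lag expands to a
# contiguous range of codes, a list keeps only the lags given.
def miso_x_lag_list(xlag, n_inputs):
    x_vec_tmp = []
    for i in range(n_inputs):
        if isinstance(xlag[i], list):
            # create 200n, 300n, ... codes for each listed lag
            x_vec_tmp.extend([lag + 2000 + i * 1000 for lag in xlag[i]])
        elif isinstance(xlag[i], int):
            x_vec_tmp.extend(np.arange(2001 + i * 1000, 2001 + i * 1000 + xlag[i]))
    return np.array(x_vec_tmp)

print(miso_x_lag_list([2, [1, 3]], 2))  # [2001 2002 3001 3003]
```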

get_siso_x_lag_list()

Return x regressor code list for SISO models.

Returns:

Type Description
x_vec_tmp

The x regressor code list given the xlag for a SISO model.

Source code in sysidentpy\narmax_base.py
@@ -2669,26 +2672,26 @@
def get_siso_x_lag_list(self) -> np.ndarray:
-    """Return x regressor code list for SISO models.
-
-    Returns
-    -------
-    x_vec_tmp = ndarray of ints
-        The x regressor code list given the xlag for a SISO model.
-
-    """
-    if isinstance(self.xlag, list):
-        # create only the lags passed from list
-        x_vec_tmp = []
-        x_vec_tmp.extend([lag + 2000 for lag in self.xlag])
-        return np.array(x_vec_tmp)
-
-    # create a range of lags if passed a int value
-    return np.arange(2001, 2001 + self.xlag)
-

get_y_lag_list()

Return y regressor code list.

Returns:

Type Description
y_vec

The y regressor code list given the ylag.

Source code in sysidentpy\narmax_base.py
def get_siso_x_lag_list(self) -> np.ndarray:
+    """Return x regressor code list for SISO models.
+
+    Returns
+    -------
+    x_vec_tmp = ndarray of ints
+        The x regressor code list given the xlag for a SISO model.
+
+    """
+    if isinstance(self.xlag, list):
+        # create only the lags passed from list
+        x_vec_tmp = []
+        x_vec_tmp.extend([lag + 2000 for lag in self.xlag])
+        return np.array(x_vec_tmp)
+
+    # create a range of lags if passed an int value
+    return np.arange(2001, 2001 + self.xlag)
+

get_y_lag_list()

Return y regressor code list.

Returns:

Type Description
y_vec

The y regressor code list given the ylag.

Source code in sysidentpy\narmax_base.py
@@ -2702,26 +2705,26 @@
def get_y_lag_list(self) -> np.ndarray:
-    """Return y regressor code list.
-
-    Returns
-    -------
-    y_vec = ndarray of ints
-        The y regressor code list given the ylag.
-
-    """
-    if isinstance(self.ylag, list):
-        # create only the lags passed from list
-        y_vec = []
-        y_vec.extend([lag + 1000 for lag in self.ylag])
-        return np.array(y_vec)
-
-    # create a range of lags if passed a int value
-    return np.arange(1001, 1001 + self.ylag)
-

regressor_space(n_inputs)

Create regressor code based on model type.

Parameters:

Name Type Description Default
n_inputs int

Number of input variables.

required

Returns:

Type Description
regressor_code

The regressor code list given the xlag and ylag for a MISO model.

Source code in sysidentpy\narmax_base.py
def get_y_lag_list(self) -> np.ndarray:
+    """Return y regressor code list.
+
+    Returns
+    -------
+    y_vec = ndarray of ints
+        The y regressor code list given the ylag.
+
+    """
+    if isinstance(self.ylag, list):
+        # create only the lags passed from list
+        y_vec = []
+        y_vec.extend([lag + 1000 for lag in self.ylag])
+        return np.array(y_vec)
+
+    # create a range of lags if passed an int value
+    return np.arange(1001, 1001 + self.ylag)
+
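Both lag-list helpers follow the same pattern: an int lag expands to a contiguous range of codes, while a list keeps only the lags given. A minimal sketch (`lag_codes` is a hypothetical helper generalizing `get_y_lag_list` and `get_siso_x_lag_list`, with base 1000 for y and 2000 for u):

```python
import numpy as np

# Hypothetical helper: ylag/xlag codification with a configurable base
# (1000 for output regressors, 2000 for SISO input regressors).
def lag_codes(lag, base):
    if isinstance(lag, list):
        # create only the lags passed in the list
        return np.array([base + l for l in lag])
    # create a range of lags if passed an int value
    return np.arange(base + 1, base + 1 + lag)

print(lag_codes(3, 1000))       # [1001 1002 1003]
print(lag_codes([1, 4], 2000))  # [2001 2004]
```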

regressor_space(n_inputs)

Create regressor code based on model type.

Parameters:

Name Type Description Default
n_inputs int

Number of input variables.

required

Returns:

Type Description
regressor_code

The regressor code list given the xlag and ylag for a MISO model.

Source code in sysidentpy\narmax_base.py
@@ -2752,38 +2755,40 @@
def regressor_space(self, n_inputs):
-    """Create regressor code based on model type.
-
-    Parameters
-    ----------
-    n_inputs : int
-        Number of input variables.
-
-    Returns
-    -------
-    regressor_code = ndarray of ints
-        The regressor code list given the xlag and ylag for a MISO model.
-
-    """
-    x_vec, y_vec = self.create_narmax_code(n_inputs)
-    reg_aux = np.array([0])
-    if self.model_type == "NARMAX":
-        reg_aux = np.concatenate([reg_aux, y_vec, x_vec])
-    elif self.model_type == "NAR":
-        reg_aux = np.concatenate([reg_aux, y_vec])
-    elif self.model_type == "NFIR":
-        reg_aux = np.concatenate([reg_aux, x_vec])
-    else:
-        raise ValueError(
-            "Unrecognized model type. Model type should be NARMAX, NAR or NFIR"
-        )
-
-    regressor_code = list(
-        combinations_with_replacement(reg_aux, self.basis_function.degree)
-    )
-
-    regressor_code = np.array(regressor_code)
-    regressor_code = regressor_code[:, regressor_code.shape[1] :: -1]
-    return regressor_code
def regressor_space(self, n_inputs):
+    """Create regressor code based on model type.
+
+    Parameters
+    ----------
+    n_inputs : int
+        Number of input variables.
+
+    Returns
+    -------
+    regressor_code = ndarray of ints
+        The regressor code list given the xlag and ylag for a MISO model.
+
+    """
+    x_vec, y_vec = self.create_narmax_code(n_inputs)
+    reg_aux = np.array([0])
+    if self.model_type == "NARMAX":
+        reg_aux = np.concatenate([reg_aux, y_vec, x_vec])
+    elif self.model_type == "NAR":
+        reg_aux = np.concatenate([reg_aux, y_vec])
+    elif self.model_type == "NFIR":
+        reg_aux = np.concatenate([reg_aux, x_vec])
+    else:
+        raise ValueError(
+            "Unrecognized model type. Model type should be NARMAX, NAR or NFIR"
+        )
+
+    regressor_code = list(
+        combinations_with_replacement(reg_aux, self.basis_function.degree)
+    )
+
+    regressor_code = np.array(regressor_code)
+    regressor_code = regressor_code[:, regressor_code.shape[1] :: -1]
+    return regressor_code
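The combination step can be seen on a tiny illustrative case (values below assume ylag=xlag=1, one input, and degree=2, so the base codes are the constant 0, y code 1001, and u code 2001):

```python
from itertools import combinations_with_replacement

import numpy as np

# Base codes for a degree-2 polynomial with ylag=xlag=1: constant, y(k-1), u(k-1)
reg_aux = np.array([0, 1001, 2001])
degree = 2

# Take all degree-sized multisets of base codes, then reverse the columns,
# mirroring the slicing done in regressor_space
regressor_code = np.array(list(combinations_with_replacement(reg_aux, degree)))
regressor_code = regressor_code[:, ::-1]
print(regressor_code)
```

Each row is one candidate term: `[0 0]` is the constant, `[1001 0]` is y(k-1), `[2001 1001]` is u(k-1)y(k-1), and so on.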
 
\ No newline at end of file diff --git a/docs/code/neural-narx/index.html b/docs/code/neural-narx/index.html index 90ac93c4..c29df9c7 100644 --- a/docs/code/neural-narx/index.html +++ b/docs/code/neural-narx/index.html @@ -10,7 +10,7 @@ body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);} body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);} body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);} -

Documentation for Neural NARX

Build Polynomial NARMAX Models

NARXNN

Bases: BaseMSS

NARX Neural Network model built on top of PyTorch

Currently we support a Series-Parallel (open-loop) Feedforward Network training process, which makes training easier, and we convert the NARX network from Series-Parallel to the Parallel (closed-loop) configuration for prediction.

Parameters:

Name Type Description Default
ylag int, default

The maximum lag of the output.

1
xlag int, default

The maximum lag of the input.

1
basis_function

Defines which basis function will be used in the model.

None
model_type

The user can choose "NARMAX", "NAR" and "NFIR" models

'NARMAX'
batch_size int, default

Size of mini-batches of data for stochastic optimizers

100
learning_rate float, default

Learning rate schedule for weight updates

0.01
epochs int, default

Number of training epochs

200
loss_func str, default

Select the loss function available in torch.nn.functional

'mse_loss'
optimizer str, default

The solver for weight optimization

'Adam'
optim_params dict, default

Optional parameters for the optimizer

None
net default

The defined network using nn.Module

None
verbose bool, default

Show the training and validation loss at each iteration

False

Examples:

>>> from torch import nn
+                        

Documentation for Neural NARX

Build Polynomial NARMAX Models

NARXNN

Bases: BaseMSS

NARX Neural Network model built on top of PyTorch

Currently we support a Series-Parallel (open-loop) Feedforward Network training process, which makes training easier, and we convert the NARX network from Series-Parallel to the Parallel (closed-loop) configuration for prediction.

Parameters:

Name Type Description Default
ylag int, default

The maximum lag of the output.

1
xlag int, default

The maximum lag of the input.

1
basis_function

Defines which basis function will be used in the model.

Polynomial()
model_type

The user can choose "NARMAX", "NAR" and "NFIR" models

'NARMAX'
batch_size int, default

Size of mini-batches of data for stochastic optimizers

100
learning_rate float, default

Learning rate schedule for weight updates

0.01
epochs int, default

Number of training epochs

200
loss_func str, default

Select the loss function available in torch.nn.functional

'mse_loss'
optimizer str, default

The solver for weight optimization

'Adam'
optim_params dict, default

Optional parameters for the optimizer

None
net default

The defined network using nn.Module

None
verbose bool, default

Show the training and validation loss at each iteration

False

Examples:

>>> from torch import nn
 >>> import numpy as np
 >>> import pandas as pd
 >>> import matplotlib.pyplot as plt
@@ -56,8 +56,7 @@
 >>> yhat = neural_narx.predict(X=x_valid, y=y_valid)
 >>> print(mean_squared_error(y_valid, yhat))
 0.000131
-

References

Source code in sysidentpy\neural_network\narx_nn.py
+

References

Source code in sysidentpy\neural_network\narx_nn.py
@@ -847,498 +846,505 @@
class NARXNN(BaseMSS):
-    """NARX Neural Network model build on top of Pytorch
-
-    Currently we support a Series-Parallel (open-loop) Feedforward Network training
-    process, which make the training process easier, and we convert the
-    NARX network from Series-Parallel to the Parallel (closed-loop) configuration for
-    prediction.
-
-    Parameters
-    ----------
-    ylag : int, default=2
-        The maximum lag of the output.
-    xlag : int, default=2
-        The maximum lag of the input.
-    basis_function: Polynomial or Fourier basis functions
-        Defines which basis function will be used in the model.
-    model_type: str, default="NARMAX"
-        The user can choose "NARMAX", "NAR" and "NFIR" models
-    batch_size : int, default=100
-        Size of mini-batches of data for stochastic optimizers
-    learning_rate : float, default=0.01
-        Learning rate schedule for weight updates
-    epochs : int, default=100
-        Number of training epochs
-    loss_func : str, default='mse_loss'
-        Select the loss function available in torch.nn.functional
-    optimizer : str, default='SGD'
-        The solver for weight optimization
-    optim_params : dict, default=None
-        Optional parameters for the optimizer
-    net : default=None
-        The defined network using nn.Module
-    verbose : bool, default=False
-        Show the training and validation loss at each iteration
-
-    Examples
-    --------
-    >>> from torch import nn
-    >>> import numpy as np
-    >>> import pandas as pd
-    >>> import matplotlib.pyplot as plt
-    >>> from sysidentpy.metrics import mean_squared_error
-    >>> from sysidentpy.utils.generate_data import get_siso_data
-    >>> from sysidentpy.neural_network import NARXNN
-    >>> from sysidentpy.utils.generate_data import get_siso_data
-    >>> x_train, x_valid, y_train, y_valid = get_siso_data(
-    ...     n=1000,
-    ...     colored_noise=False,
-    ...     sigma=0.01,
-    ...     train_percentage=80
-    ... )
-    >>> narx_nn = NARXNN(
-    ...     ylag=2,
-    ...     xlag=2,
-    ...     basis_function=basis_function,
-    ...     model_type="NARMAX",
-    ...     loss_func='mse_loss',
-    ...     optimizer='Adam',
-    ...     epochs=200,
-    ...     verbose=False,
-    ...     optim_params={'betas': (0.9, 0.999), 'eps': 1e-05} # for the optimizer
-    ... )
-    >>> class Net(nn.Module):
-    ...     def __init__(self):
-    ...         super().__init__()
-    ...         self.lin = nn.Linear(4, 10)
-    ...         self.lin2 = nn.Linear(10, 10)
-    ...         self.lin3 = nn.Linear(10, 1)
-    ...         self.tanh = nn.Tanh()
-    >>>
-    ...     def forward(self, xb):
-    ...         z = self.lin(xb)
-    ...         z = self.tanh(z)
-    ...         z = self.lin2(z)
-    ...         z = self.tanh(z)
-    ...         z = self.lin3(z)
-    ...         return z
-    >>>
-    >>> narx_nn.net = Net()
-    >>> neural_narx.fit(X=x_train, y=y_train)
-    >>> yhat = neural_narx.predict(X=x_valid, y=y_valid)
-    >>> print(mean_squared_error(y_valid, yhat))
-    0.000131
-
-    References
-    ----------
-    - Manuscript: Orthogonal least squares methods and their application
-       to non-linear system identification
-       <https://eprints.soton.ac.uk/251147/1/778742007_content.pdf>`_
-
-    """
-
-    def __init__(
-        self,
-        *,
-        ylag=1,
-        xlag=1,
-        model_type="NARMAX",
-        basis_function=None,
-        batch_size=100,  # batch size
-        learning_rate=0.01,  # learning rate
-        epochs=200,  # how many epochs to train for
-        loss_func="mse_loss",
-        optimizer="Adam",
-        net=None,
-        train_percentage=80,
-        verbose=False,
-        optim_params=None,
-        device="cpu",
-    ):
-        self.ylag = ylag
-        self.xlag = xlag
-        self.basis_function = basis_function
-        self.model_type = model_type
-        self.non_degree = basis_function.degree
-        self.max_lag = self._get_max_lag()
-        self.batch_size = batch_size
-        self.learning_rate = learning_rate
-        self.epochs = epochs
-        self.loss_func = getattr(F, loss_func)
-        self.optimizer = optimizer
-        self.net = net
-        self.train_percentage = train_percentage
-        self.verbose = verbose
-        self.optim_params = optim_params
-        self.device = self._check_cuda(device)
-        self.regressor_code = None
-        self.train_loss = None
-        self.val_loss = None
-        self.ensemble = None
-        self.n_inputs = None
-        self.final_model = None
-        self._validate_params()
-
-    def _validate_params(self):
-        """Validate input params."""
-
-        if not isinstance(self.batch_size, int) or self.batch_size < 1:
-            raise ValueError(
-                f"bacth_size must be integer and > zero. Got {self.batch_size}"
-            )
-
-        if not isinstance(self.epochs, int) or self.epochs < 1:
-            raise ValueError(f"epochs must be integer and > zero. Got {self.epochs}")
-
-        if not isinstance(self.train_percentage, int) or self.train_percentage < 0:
-            raise ValueError(
-                f"bacth_size must be integer and > zero. Got {self.train_percentage}"
-            )
-
-        if not isinstance(self.verbose, bool):
-            raise TypeError(f"verbose must be False or True. Got {self.verbose}")
-
-    def _check_cuda(self, device):
-        if device not in ["cpu", "cuda"]:
-            raise ValueError(f"device must be 'cpu' or 'cuda'. Got {device}")
-
-        if device == "cpu":
-            return torch.device("cpu")
-
-        if device == "cuda":
-            if torch.cuda.is_available():
-                return torch.device("cuda")
class NARXNN(BaseMSS):
+    """NARX Neural Network model built on top of PyTorch
+
+    Currently we support a Series-Parallel (open-loop) Feedforward Network training
+    process, which makes the training process easier, and we convert the
+    NARX network from Series-Parallel to the Parallel (closed-loop) configuration for
+    prediction.
+
+    Parameters
+    ----------
+    ylag : int, default=2
+        The maximum lag of the output.
+    xlag : int, default=2
+        The maximum lag of the input.
+    basis_function: Polynomial or Fourier basis functions
+        Defines which basis function will be used in the model.
+    model_type: str, default="NARMAX"
+        The user can choose "NARMAX", "NAR" and "NFIR" models
+    batch_size : int, default=100
+        Size of mini-batches of data for stochastic optimizers
+    learning_rate : float, default=0.01
+        Learning rate schedule for weight updates
+    epochs : int, default=100
+        Number of training epochs
+    loss_func : str, default='mse_loss'
+        Select the loss function available in torch.nn.functional
+    optimizer : str, default='SGD'
+        The solver for weight optimization
+    optim_params : dict, default=None
+        Optional parameters for the optimizer
+    net : default=None
+        The defined network using nn.Module
+    verbose : bool, default=False
+        Show the training and validation loss at each iteration
+
+    Examples
+    --------
+    >>> from torch import nn
+    >>> import numpy as np
+    >>> import pandas as pd
+    >>> import matplotlib.pyplot as plt
+    >>> from sysidentpy.metrics import mean_squared_error
+    >>> from sysidentpy.utils.generate_data import get_siso_data
+    >>> from sysidentpy.neural_network import NARXNN
+    >>> x_train, x_valid, y_train, y_valid = get_siso_data(
+    ...     n=1000,
+    ...     colored_noise=False,
+    ...     sigma=0.01,
+    ...     train_percentage=80
+    ... )
+    >>> narx_nn = NARXNN(
+    ...     ylag=2,
+    ...     xlag=2,
+    ...     model_type="NARMAX",
+    ...     loss_func='mse_loss',
+    ...     optimizer='Adam',
+    ...     epochs=200,
+    ...     verbose=False,
+    ...     optim_params={'betas': (0.9, 0.999), 'eps': 1e-05} # for the optimizer
+    ... )
+    >>> class Net(nn.Module):
+    ...     def __init__(self):
+    ...         super().__init__()
+    ...         self.lin = nn.Linear(4, 10)
+    ...         self.lin2 = nn.Linear(10, 10)
+    ...         self.lin3 = nn.Linear(10, 1)
+    ...         self.tanh = nn.Tanh()
+    >>>
+    ...     def forward(self, xb):
+    ...         z = self.lin(xb)
+    ...         z = self.tanh(z)
+    ...         z = self.lin2(z)
+    ...         z = self.tanh(z)
+    ...         z = self.lin3(z)
+    ...         return z
+    >>>
+    >>> narx_nn.net = Net()
+    >>> narx_nn.fit(X=x_train, y=y_train)
+    >>> yhat = narx_nn.predict(X=x_valid, y=y_valid)
+    >>> print(mean_squared_error(y_valid, yhat))
+    0.000131
+
+    References
+    ----------
+    - Manuscript: Orthogonal least squares methods and their application
+       to non-linear system identification
+       <https://eprints.soton.ac.uk/251147/1/778742007_content.pdf>
+
+    """
+
+    def __init__(
+        self,
+        *,
+        ylag=1,
+        xlag=1,
+        model_type="NARMAX",
+        basis_function=Polynomial(),
+        batch_size=100,
+        learning_rate=0.01,
+        epochs=200,
+        loss_func="mse_loss",
+        optimizer="Adam",
+        net=None,
+        train_percentage=80,
+        verbose=False,
+        optim_params=None,
+        device="cpu",
+    ):
+        self.ylag = ylag
+        self.xlag = xlag
+        self.basis_function = basis_function
+        self.model_type = model_type
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.non_degree = basis_function.degree
+        self.max_lag = self._get_max_lag()
+        self.batch_size = batch_size
+        self.learning_rate = learning_rate
+        self.epochs = epochs
+        self.loss_func = getattr(F, loss_func)
+        self.optimizer = optimizer
+        self.net = net
+        self.train_percentage = train_percentage
+        self.verbose = verbose
+        self.optim_params = optim_params
+        self.device = self._check_cuda(device)
+        self.regressor_code = None
+        self.train_loss = None
+        self.val_loss = None
+        self.ensemble = None
+        self.n_inputs = None
+        self.final_model = None
+        self._validate_params()
+
+    def _validate_params(self):
+        """Validate input params."""
+
+        if not isinstance(self.batch_size, int) or self.batch_size < 1:
+            raise ValueError(
+                f"bacth_size must be integer and > zero. Got {self.batch_size}"
+            )
+
+        if not isinstance(self.epochs, int) or self.epochs < 1:
+            raise ValueError(f"epochs must be integer and > zero. Got {self.epochs}")
+
+        if not isinstance(self.train_percentage, int) or self.train_percentage < 0:
+            raise ValueError(
+                f"bacth_size must be integer and > zero. Got {self.train_percentage}"
+            )
+
+        if not isinstance(self.verbose, bool):
+            raise TypeError(f"verbose must be False or True. Got {self.verbose}")
+
+        if isinstance(self.ylag, int) and self.ylag < 1:
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if isinstance(self.xlag, int) and self.xlag < 1:
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
+
+        if not isinstance(self.xlag, (int, list)):
+            raise ValueError(f"xlag must be integer and > zero. Got {self.xlag}")
 
-            warnings.warn(
-                "No CUDA available. We set the device as CPU",
-                stacklevel=2,
-            )
-
-        return torch.device("cpu")
-
-    def define_opt(self):
-        """Defines the optimizer using the user parameters."""
-        opt = getattr(optim, self.optimizer)
-        return opt(self.net.parameters(), lr=self.learning_rate, **self.optim_params)
+        if not isinstance(self.ylag, (int, list)):
+            raise ValueError(f"ylag must be integer and > zero. Got {self.ylag}")
+
+        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+            )
+
+    def _check_cuda(self, device):
+        if device not in ["cpu", "cuda"]:
+            raise ValueError(f"device must be 'cpu' or 'cuda'. Got {device}")
 
-    def loss_batch(self, X, y, opt=None):
-        """Compute the loss for one batch.
+        if device == "cpu":
+            return torch.device("cpu")
 
-        Parameters
-        ----------
-        X : ndarray of floats
-            The regressor matrix.
-        y : ndarray of floats
-            The output data.
-
-        Returns
-        -------
-        loss : float
-            The loss of one batch.
-
-        """
-        loss = self.loss_func(self.net(X), y)
-
-        if opt is not None:
-            opt.zero_grad()
-            loss.backward()
-            opt.step()
-
-        return loss.item(), len(X)
-
-    def split_data(self, X, y):
-        """Return the lagged matrix and the y values given the maximum lags.
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data.
-        y : ndarray of floats
-            The output data.
-
-        Returns
-        -------
-        y : ndarray of floats
-            The y values considering the lags.
-        reg_matrix : ndarray of floats
-            The information matrix of the model.
+        if device == "cuda":
+            if torch.cuda.is_available():
+                return torch.device("cuda")
+
+            warnings.warn(
+                "No CUDA available. We set the device as CPU",
+                stacklevel=2,
+            )
+
+        return torch.device("cpu")
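The device fallback above can be sketched without torch; `resolve_device` and the `cuda_available` flag are hypothetical stand-ins for `_check_cuda` and `torch.cuda.is_available()`:

```python
import warnings

def resolve_device(device, cuda_available):
    """Mimic the _check_cuda fallback: validate the name, then fall back to CPU."""
    if device not in ("cpu", "cuda"):
        raise ValueError(f"device must be 'cpu' or 'cuda'. Got {device}")
    if device == "cuda" and cuda_available:
        return "cuda"
    if device == "cuda":
        # Requested CUDA but none is present: warn and degrade gracefully.
        warnings.warn("No CUDA available. We set the device as CPU", stacklevel=2)
    return "cpu"
```

Validating first and only then falling back means a typo like `"gpu"` fails loudly instead of silently training on CPU.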
+
+    def define_opt(self):
+        """Defines the optimizer using the user parameters."""
+        opt = getattr(optim, self.optimizer)
+        return opt(self.net.parameters(), lr=self.learning_rate, **self.optim_params)
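`define_opt` resolves the optimizer by string name via `getattr` on `torch.optim`. The same dispatch pattern can be shown with a dummy namespace; `fake_optim` and `build_optimizer` are illustrative stand-ins, not sysidentpy's API:

```python
import types

# Hypothetical stand-in for torch.optim: a module-like namespace of factories.
fake_optim = types.SimpleNamespace(
    Adam=lambda params, lr, **kw: ("Adam", lr, kw),
    SGD=lambda params, lr, **kw: ("SGD", lr, kw),
)

def build_optimizer(optimizer_name, params, lr, optim_params):
    # getattr turns the string name into the optimizer factory, as define_opt does.
    opt_cls = getattr(fake_optim, optimizer_name)
    # Extra keyword arguments (e.g. betas, eps) are forwarded untouched.
    return opt_cls(params, lr=lr, **optim_params)
```

This is why `optimizer='Adam'` plus `optim_params={'betas': (0.9, 0.999), 'eps': 1e-05}` in the docstring example works: the name is looked up, and the dict is splatted into the constructor.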
+
+    def loss_batch(self, X, y, opt=None):
+        """Compute the loss for one batch.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The regressor matrix.
+        y : ndarray of floats
+            The output data.
+
+        Returns
+        -------
+        loss : float
+            The loss of one batch.
+        n : int
+            The number of samples in the batch.
+
+        """
+        loss = self.loss_func(self.net(X), y)
+
+        if opt is not None:
+            opt.zero_grad()
+            loss.backward()
+            opt.step()
 
-        """
+        return loss.item(), len(X)
 
-        if y is None:
-            raise ValueError("y cannot be None")
+    def split_data(self, X, y):
+        """Return the lagged matrix and the y values given the maximum lags.
 
-        self.max_lag = self._get_max_lag()
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            check_X_y(X, y)
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        basis_name = self.basis_function.__class__.__name__
-        if basis_name == "Polynomial":
-            reg_matrix = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-            reg_matrix = reg_matrix[:, 1:]
-        else:
-            reg_matrix, self.ensemble = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=None
-            )
-
-        if X is not None:
-            self.n_inputs = _num_features(X)
-        else:
-            self.n_inputs = 1  # only used to create the regressor space base
-
-        self.regressor_code = self.regressor_space(self.n_inputs)
-        if basis_name != "Polynomial" and self.basis_function.ensemble:
-            basis_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-        elif basis_name != "Polynomial" and self.basis_function.ensemble is False:
-            self.regressor_code = np.sort(
-                np.tile(
-                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-                ),
-                axis=0,
-            )
-
-        if basis_name == "Polynomial":
-            self.regressor_code = self.regressor_code[
-                1:
-            ]  # removes the column of the constant
-
-        self.final_model = self.regressor_code.copy()
-        reg_matrix = np.atleast_1d(reg_matrix).astype(np.float32)
-
-        y = np.atleast_1d(y[self.max_lag :]).astype(np.float32)
-        return reg_matrix, y
-
-    def convert_to_tensor(self, reg_matrix, y):
-        """Return the lagged matrix and the y values given the maximum lags.
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data.
+        y : ndarray of floats
+            The output data.
+
+        Returns
+        -------
+        reg_matrix : ndarray of floats
+            The information matrix of the model.
+        y : ndarray of floats
+            The y values considering the lags.
+
+        """
+
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X, y)
+
+        basis_name = self.basis_function.__class__.__name__
+        if basis_name == "Polynomial":
+            reg_matrix = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+            reg_matrix = reg_matrix[:, 1:]
+        else:
+            reg_matrix, self.ensemble = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=None
+            )
+
+        if X is not None:
+            self.n_inputs = _num_features(X)
+        else:
+            self.n_inputs = 1  # only used to create the regressor space base
+
+        self.regressor_code = self.regressor_space(self.n_inputs)
+        if basis_name != "Polynomial" and self.basis_function.ensemble:
+            basis_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+            self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+        elif basis_name != "Polynomial" and self.basis_function.ensemble is False:
+            self.regressor_code = np.sort(
+                np.tile(
+                    self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+                ),
+                axis=0,
+            )
+
+        if basis_name == "Polynomial":
+            self.regressor_code = self.regressor_code[
+                1:
+            ]  # removes the column of the constant
 
-        Based on Pytorch official docs:
-        https://pytorch.org/tutorials/beginner/nn_tutorial.html
+        self.final_model = self.regressor_code.copy()
+        reg_matrix = np.atleast_1d(reg_matrix).astype(np.float32)
 
-        Parameters
-        ----------
-        reg_matrix : ndarray of floats
-            The information matrix of the model.
-        y : ndarray of floats
-            The output data
-
-        Returns
-        -------
-        Tensor: tensor
-            tensors that have the same size of the first dimension.
-
-        """
-        reg_matrix, y = map(torch.tensor, (reg_matrix, y))
-        return TensorDataset(reg_matrix, y)
+        y = np.atleast_1d(y[self.max_lag :]).astype(np.float32)
+        return reg_matrix, y
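The lag-shifting that `split_data` delegates to the `build_matrix`/basis-function machinery can be illustrated with a toy numpy version; `lagged_matrix` is a simplified single-input stand-in, not sysidentpy's implementation:

```python
import numpy as np

def lagged_matrix(x, y, xlag, ylag):
    """Toy information-matrix build: columns are
    [y(k-1)..y(k-ylag), x(k-1)..x(k-xlag)], with rows starting at max_lag."""
    max_lag = max(xlag, ylag)
    n = len(y)
    cols = []
    for lag in range(1, ylag + 1):
        # Shift the output series back by `lag` samples.
        cols.append(y[max_lag - lag : n - lag])
    for lag in range(1, xlag + 1):
        # Same shift for the input series.
        cols.append(x[max_lag - lag : n - lag])
    return np.column_stack(cols)
```

The first `max_lag` samples are consumed as initial conditions, which is why `split_data` also trims `y` with `y[self.max_lag:]` before returning.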
+
+    def convert_to_tensor(self, reg_matrix, y):
+        """Return the lagged matrix and the y values given the maximum lags.
+
+        Based on Pytorch official docs:
+        https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+        Parameters
+        ----------
+        reg_matrix : ndarray of floats
+            The information matrix of the model.
+        y : ndarray of floats
+            The output data
 
-    def get_data(self, train_ds):
-        """Return the lagged matrix and the y values given the maximum lags.
-
-        Based on Pytorch official docs:
-        https://pytorch.org/tutorials/beginner/nn_tutorial.html
-
-        Parameters
-        ----------
-        train_ds: tensor
-            Tensors that have the same size of the first dimension.
-
-        Returns
-        -------
-        Dataloader: dataloader
-            tensors that have the same size of the first dimension.
-
-        """
-        pin_memory = False if self.device.type == "cpu" else True
-        return DataLoader(
-            train_ds, batch_size=self.batch_size, pin_memory=pin_memory, shuffle=False
-        )
-
-    def data_transform(self, X, y):
-        """Return the data transformed in tensors using Dataloader.
+        Returns
+        -------
+        dataset : TensorDataset
+            Dataset wrapping tensors that share the same first dimension.
+
+        """
+        reg_matrix, y = map(torch.tensor, (reg_matrix, y))
+        return TensorDataset(reg_matrix, y)
+
+    def get_data(self, train_ds):
+        """Return the lagged matrix and the y values given the maximum lags.
+
+        Based on Pytorch official docs:
+        https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+        Parameters
+        ----------
+        train_ds : TensorDataset
+            Dataset with tensors that share the same first dimension.
+
+        Returns
+        -------
+        train_dl : DataLoader
+            DataLoader that iterates over the dataset in batches.
 
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data.
-        y : ndarray of floats
-            The output data.
-
-        Returns
-        -------
-        Tensors : Dataloader
-
-        """
-        if y is None:
-            raise ValueError("y cannot be None")
-
-        x_train, y_train = self.split_data(X, y)
-        train_ds = self.convert_to_tensor(x_train, y_train)
-        train_dl = self.get_data(train_ds)
-        return train_dl
+        """
+        pin_memory = self.device.type != "cpu"
+        return DataLoader(
+            train_ds, batch_size=self.batch_size, pin_memory=pin_memory, shuffle=False
+        )
+
+    def data_transform(self, X, y):
+        """Return the data transformed in tensors using Dataloader.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data.
+        y : ndarray of floats
+            The output data.
+
+        Returns
+        -------
+        train_dl : DataLoader
+            The data transformed into batched tensors.
 
-    def fit(self, *, X=None, y=None, X_test=None, y_test=None):
-        """Train a NARX Neural Network model.
-
-        This is an training pipeline that allows a friendly usage
-        by the user. The training pipeline was based on
-        https://pytorch.org/tutorials/beginner/nn_tutorial.html
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the training process.
-        y : ndarray of floats
-            The output data to be used in the training process.
-        X_test : ndarray of floats
-            The input data to be used in the prediction process.
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-
-        Returns
-        -------
-        net : nn.Module
-            The model fitted.
-        train_loss: ndarrays of floats
-            The training loss of each batch
-        val_loss: ndarrays of floats
-            The validation loss of each batch
+        """
+        if y is None:
+            raise ValueError("y cannot be None")
+
+        x_train, y_train = self.split_data(X, y)
+        train_ds = self.convert_to_tensor(x_train, y_train)
+        train_dl = self.get_data(train_ds)
+        return train_dl
+
+    def fit(self, *, X=None, y=None, X_test=None, y_test=None):
+        """Train a NARX Neural Network model.
+
+        This training pipeline is designed for user-friendly usage.
+        It is based on
+        https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the training process.
+        y : ndarray of floats
+            The output data to be used in the training process.
+        X_test : ndarray of floats
+            The input data to be used in the prediction process.
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
 
-        """
-        train_dl = self.data_transform(X, y)
-        if self.verbose:
-            if X_test is None or y_test is None:
-                raise ValueError(
-                    "X_test and y_test cannot be None if you set verbose=True"
-                )
-            valid_dl = self.data_transform(X_test, y_test)
+        Returns
+        -------
+        self : object
+            The fitted model. The trained network is stored in the net
+            attribute; train_loss and val_loss hold the per-epoch losses
+            when verbose is True.
 
-        opt = self.define_opt()
-        self.val_loss = []
-        self.train_loss = []
-        for epoch in range(self.epochs):
-            self.net.train()
-            for X, y in train_dl:
-                X, y = X.to(self.device), y.to(self.device)
-                self.loss_batch(X, y, opt=opt)
+        """
+        train_dl = self.data_transform(X, y)
+        if self.verbose:
+            if X_test is None or y_test is None:
+                raise ValueError(
+                    "X_test and y_test cannot be None if you set verbose=True"
+                )
+            valid_dl = self.data_transform(X_test, y_test)
 
-            if self.verbose:
-                train_losses, train_nums = zip(
-                    *[
-                        self.loss_batch(X.to(self.device), y.to(self.device))
-                        for X, y in train_dl
-                    ]
-                )
-                self.train_loss.append(
-                    np.sum(np.multiply(train_losses, train_nums)) / np.sum(train_nums)
-                )
-
-                self.net.eval()
-                with torch.no_grad():
-                    losses, nums = zip(
-                        *[
-                            self.loss_batch(X.to(self.device), y.to(self.device))
-                            for X, y in valid_dl
-                        ]
-                    )
-                self.val_loss.append(np.sum(np.multiply(losses, nums)) / np.sum(nums))
-
-                logging.info(
-                    "Train metrics: "
-                    + str(self.train_loss[epoch])
-                    + " | Validation metrics: "
-                    + str(self.val_loss[epoch])
-                )
-        return self
-
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted given an input and initial values.
-
-        The predict function allows a friendly usage by the user.
-        Given a trained model, predict values given
-        a new set of data.
-
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future).
-
-        Currently we only support infinity-steps-ahead prediction,
-        but run 1-step-ahead prediction manually is straightforward.
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
-
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                return self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            if steps_ahead == 1:
-                return self._one_step_ahead_prediction(X, y)
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            return self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-
-        if steps_ahead is None:
-            return self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-        if steps_ahead == 1:
-            return self._one_step_ahead_prediction(X, y)
-
-        return self._basis_function_n_step_prediction(
-            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-        )
+        opt = self.define_opt()
+        self.val_loss = []
+        self.train_loss = []
+        for epoch in range(self.epochs):
+            self.net.train()
+            for X, y in train_dl:
+                X, y = X.to(self.device), y.to(self.device)
+                self.loss_batch(X, y, opt=opt)
+
+            if self.verbose:
+                train_losses, train_nums = zip(
+                    *[
+                        self.loss_batch(X.to(self.device), y.to(self.device))
+                        for X, y in train_dl
+                    ]
+                )
+                self.train_loss.append(
+                    np.sum(np.multiply(train_losses, train_nums)) / np.sum(train_nums)
+                )
+
+                self.net.eval()
+                with torch.no_grad():
+                    losses, nums = zip(
+                        *[
+                            self.loss_batch(X.to(self.device), y.to(self.device))
+                            for X, y in valid_dl
+                        ]
+                    )
+                self.val_loss.append(np.sum(np.multiply(losses, nums)) / np.sum(nums))
+
+                logging.info(
+                    "Train metrics: "
+                    + str(self.train_loss[epoch])
+                    + " | Validation metrics: "
+                    + str(self.val_loss[epoch])
+                )
+        return self
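The per-epoch metrics logged in `fit` are size-weighted means over batches, since the final batch may be smaller than `batch_size`. A minimal numpy sketch of that aggregation (`epoch_loss` is a hypothetical helper name):

```python
import numpy as np

def epoch_loss(batch_losses, batch_sizes):
    """Size-weighted mean of per-batch losses, as computed inside fit."""
    losses = np.asarray(batch_losses, dtype=float)
    nums = np.asarray(batch_sizes, dtype=float)
    # Weight each batch loss by its sample count, then normalize.
    return np.sum(losses * nums) / np.sum(nums)
```

A plain mean of batch losses would over-weight a short trailing batch; the weighting makes the epoch metric equal to the mean loss over samples.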
+
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted given an input and initial values.
+
+        The predict function allows a friendly usage by the user.
+        Given a trained model, predict values given
+        a new set of data.
+
+        This method accepts y values mainly to set the initial conditions
+        for the recursive prediction process.
+
+        Free run simulation, 1-step-ahead and n-steps-ahead predictions
+        are supported through the steps_ahead parameter.
+
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int, optional
+            The number of steps ahead for each prediction block. If None,
+            a free run simulation is performed.
+        forecast_horizon : int, optional
+            The number of predictions over the time (used when X is None).
+
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                return self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            if steps_ahead == 1:
+                return self._one_step_ahead_prediction(X, y)
+
+            _check_positive_int(steps_ahead, "steps_ahead")
+            return self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
 
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
-
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
+        if steps_ahead is None:
+            return self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+        if steps_ahead == 1:
+            return self._one_step_ahead_prediction(X, y)
+
+        return self._basis_function_n_step_prediction(
+            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+        )
+
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial condition values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        lagged_data = self.build_matrix(X, y)
 
         basis_name = self.basis_function.__class__.__name__
         if basis_name == "Polynomial":
@@ -1379,276 +1385,275 @@
 
         """
         if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        yhat = np.zeros(X.shape[0], dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-        i = self.max_lag
-        X = X.reshape(-1, self.n_inputs)
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            yhat[i : i + steps_ahead] = self._model_prediction(
-                X[k : i + steps_ahead], y[k : i + steps_ahead]
-            )[-steps_ahead:].ravel()
-
-            i += steps_ahead
-
-        yhat = yhat.ravel()
-        return yhat.reshape(-1, 1)
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        yhat = np.zeros(X.shape[0], dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+        i = self.max_lag
+        X = X.reshape(-1, self.n_inputs)
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
+
+            yhat[i : i + steps_ahead] = self._model_prediction(
+                X[k : i + steps_ahead], y[k : i + steps_ahead]
+            )[-steps_ahead:].ravel()
+
+            i += steps_ahead
 
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
+        yhat = yhat.ravel()
+        return yhat.reshape(-1, 1)
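The while-loop above walks the series in `steps_ahead`-sized blocks, clamping the final block at the end of `y`. The index bookkeeping can be sketched in isolation; `chunk_bounds` is a hypothetical helper, not part of the library:

```python
def chunk_bounds(n, max_lag, steps_ahead):
    """Reproduce the n-step-ahead loop's chunking: return (start, stop)
    index pairs for each predicted block, clamping the last block at n."""
    i = max_lag
    bounds = []
    while i < n:
        # The last chunk may be shorter than steps_ahead.
        step = min(steps_ahead, n - i)
        bounds.append((i, i + step))
        i += step
    return bounds
```

Each block is predicted from measured data up to its start, so errors do not accumulate past `steps_ahead` samples.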
 
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
-
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        elif self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-        else:
-            raise Exception(
-                "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-            )
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinity steps-ahead simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Initial condition values of the output
+            to start the recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
+
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
 
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
 
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        y_output = np.zeros(forecast_horizon, dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
 
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, forecast_horizon):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            raw_regressor[:final] = y_output[k:i]
-            for j in range(self.n_inputs):
-                init += self.max_lag
-                final += self.max_lag
-                raw_regressor[init:final] = X[k:i, j]
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
-
-            regressor_value = np.atleast_1d(regressor_value).astype(np.float32)
-            y_output = y_output.astype(np.float32)
-            x_valid, _ = map(torch.tensor, (regressor_value, y_output))
-            y_output[i] = self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
-        return y_output.reshape(-1, 1)
-
-    def _nfir_predict(self, X, y_initial):
-        y_output = np.zeros(X.shape[0], dtype=float)
-        y_output.fill(np.nan)
-        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
-        X = X.reshape(-1, self.n_inputs)
-        model_exponents = [
-            self._code2exponents(code=model) for model in self.final_model
-        ]
-        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
-        for i in range(self.max_lag, X.shape[0]):
-            init = 0
-            final = self.max_lag
-            k = int(i - self.max_lag)
-            for j in range(self.n_inputs):
-                raw_regressor[init:final] = X[k:i, j]
-                init += self.max_lag
-                final += self.max_lag
-
-            regressor_value = np.zeros(len(model_exponents))
-            for j, model_exponent in enumerate(model_exponents):
-                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
-
-            regressor_value = np.atleast_1d(regressor_value).astype(np.float32)
-            y_output = y_output.astype(np.float32)
-            x_valid, _ = map(torch.tensor, (regressor_value, y_output))
-            y_output[i] = self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
-        return y_output.reshape(-1, 1)
-
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        y_output = np.zeros(forecast_horizon, dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, forecast_horizon):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            raw_regressor[:final] = y_output[k:i]
+            for j in range(self.n_inputs):
+                init += self.max_lag
+                final += self.max_lag
+                raw_regressor[init:final] = X[k:i, j]
+
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            regressor_value = np.atleast_1d(regressor_value).astype(np.float32)
+            y_output = y_output.astype(np.float32)
+            x_valid, _ = map(torch.tensor, (regressor_value, y_output))
+            y_output[i] = self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
+        return y_output.reshape(-1, 1)
+
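The prediction loop above packs the lagged signals into a single `raw_regressor` vector (the last `max_lag` outputs followed by the last `max_lag` samples of each input) and evaluates every model term as a product of powers via `np.prod(np.power(...))`. A minimal numpy sketch of that encoding; the lag values and exponent codes below are made up for illustration:

```python
import numpy as np

max_lag = 2
# y(k-2), y(k-1) followed by x(k-2), x(k-1) for a single input
y_hist = np.array([1.0, 2.0])  # hypothetical past outputs
x_hist = np.array([3.0, 4.0])  # hypothetical past inputs

# mirrors raw_regressor[:max_lag] = y_output[k:i]; raw_regressor[init:final] = X[k:i, j]
raw_regressor = np.concatenate([y_hist, x_hist])  # [y(k-2), y(k-1), x(k-2), x(k-1)]

# one exponent vector per model term, e.g. y(k-1)*x(k-1) -> [0, 1, 0, 1]
model_exponents = [np.array([0, 1, 0, 1]), np.array([2, 0, 0, 0])]

regressor_value = np.array(
    [np.prod(np.power(raw_regressor, e)) for e in model_exponents]
)
print(regressor_value)  # [8. 1.]  -> y(k-1)*x(k-1) = 8, y(k-2)**2 = 1
```

The resulting `regressor_value` vector is what gets fed to the network at each time step.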
+    def _nfir_predict(self, X, y_initial):
+        y_output = np.zeros(X.shape[0], dtype=float)
+        y_output.fill(np.nan)
+        y_output[: self.max_lag] = y_initial[: self.max_lag, 0]
+        X = X.reshape(-1, self.n_inputs)
+        model_exponents = [
+            self._code2exponents(code=model) for model in self.final_model
+        ]
+        raw_regressor = np.zeros(len(model_exponents[0]), dtype=float)
+        for i in range(self.max_lag, X.shape[0]):
+            init = 0
+            final = self.max_lag
+            k = int(i - self.max_lag)
+            for j in range(self.n_inputs):
+                raw_regressor[init:final] = X[k:i, j]
+                init += self.max_lag
+                final += self.max_lag
+
+            regressor_value = np.zeros(len(model_exponents))
+            for j, model_exponent in enumerate(model_exponents):
+                regressor_value[j] = np.prod(np.power(raw_regressor, model_exponent))
+
+            regressor_value = np.atleast_1d(regressor_value).astype(np.float32)
+            y_output = y_output.astype(np.float32)
+            x_valid, _ = map(torch.tensor, (regressor_value, y_output))
+            y_output[i] = self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
+        return y_output.reshape(-1, 1)
+
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
 
-        analyzed_elements_number = self.max_lag + 1
-
-        for i in range(0, forecast_horizon - self.max_lag):
-            if self.model_type == "NARMAX":
-                lagged_data = self.build_input_output_matrix(
-                    X[i : i + analyzed_elements_number],
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1),
-                )
-            elif self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(
-                    yhat[i : i + analyzed_elements_number].reshape(-1, 1)
-                )
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(
-                    X[i : i + analyzed_elements_number]
-                )
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            X_tmp, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-            )
-            X_tmp = np.atleast_1d(X_tmp).astype(np.float32)
-            yhat = yhat.astype(np.float32)
-            x_valid, _ = map(torch.tensor, (X_tmp, yhat))
-            yhat[i + self.max_lag] = (
-                self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
-            )
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        if len(y) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
-
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
-
-        i = self.max_lag
-
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead], y[k : i + steps_ahead]
-                )[-steps_ahead:].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
+        if self.model_type == "NAR":
+            self.n_inputs = 0
+
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y_initial[: self.max_lag, 0]
+
+        analyzed_elements_number = self.max_lag + 1
+
+        for i in range(0, forecast_horizon - self.max_lag):
+            if self.model_type == "NARMAX":
+                lagged_data = self.build_input_output_matrix(
+                    X[i : i + analyzed_elements_number],
+                    yhat[i : i + analyzed_elements_number].reshape(-1, 1),
+                )
+            elif self.model_type == "NAR":
+                lagged_data = self.build_output_matrix(
+                    yhat[i : i + analyzed_elements_number].reshape(-1, 1)
+                )
+            elif self.model_type == "NFIR":
+                lagged_data = self.build_input_matrix(
+                    X[i : i + analyzed_elements_number]
+                )
+            else:
+                raise ValueError(
+                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
+                    " NFIR."
+                )
+
+            X_tmp, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+            )
+            X_tmp = np.atleast_1d(X_tmp).astype(np.float32)
+            yhat = yhat.astype(np.float32)
+            x_valid, _ = map(torch.tensor, (X_tmp, yhat))
+            yhat[i + self.max_lag] = (
+                self.net(x_valid.to(self.device))[0].detach().cpu().numpy()
+            )
+        return yhat.reshape(-1, 1)
+
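The basis-function path above slides a window of `max_lag + 1` samples over the series, rebuilding the lagged data at every step and writing the network output into position `i + max_lag`. The windowing can be sketched with numpy; the toy series and the stand-in update rule below are illustrative, not the real network:

```python
import numpy as np

max_lag = 2
# first max_lag entries are initial conditions, the rest are to be predicted
yhat = np.array([1.0, 2.0, np.nan, np.nan])
analyzed_elements_number = max_lag + 1

windows = []
for i in range(len(yhat) - max_lag):
    # window holds yhat[i], ..., yhat[i + max_lag]; the last slot is the one being predicted
    windows.append(yhat[i : i + analyzed_elements_number].copy())
    yhat[i + max_lag] = yhat[i + max_lag - 1] + 1.0  # stand-in for the network output

print(yhat)  # [1. 2. 3. 4.]
```

Each prediction becomes part of the next window, which is what makes the recursion free-run (infinity-steps-ahead).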
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        if len(y) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+
+        i = self.max_lag
+
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
+
+            if self.model_type == "NARMAX":
                 yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                )[-steps_ahead:].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
-
-        return yhat.reshape(-1, 1)
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        yhat = np.zeros(forecast_horizon, dtype=float)
-        yhat.fill(np.nan)
-        yhat[: self.max_lag] = y[: self.max_lag, 0]
+                    X[k : i + steps_ahead], y[k : i + steps_ahead]
+                )[-steps_ahead:].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                )[-steps_ahead:].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
 
-        i = self.max_lag
+            i += steps_ahead
 
-        while i < len(y):
-            k = int(i - self.max_lag)
-            if i + steps_ahead > len(y):
-                steps_ahead = len(y) - i  # predicts the remaining values
-
-            if self.model_type == "NARMAX":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X[k : i + steps_ahead], y[k : i + steps_ahead]
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NAR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=None,
-                    y_initial=y[k : i + steps_ahead],
-                    forecast_horizon=forecast_horizon,
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            elif self.model_type == "NFIR":
-                yhat[i : i + steps_ahead] = self._basis_function_predict(
-                    X=X[k : i + steps_ahead],
-                    y_initial=y[k : i + steps_ahead],
-                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
-            else:
-                raise ValueError(
-                    "Unrecognized model type. The model_type should be NARMAX, NAR or"
-                    " NFIR."
-                )
-
-            i += steps_ahead
-
-        yhat = yhat.ravel()
-        return yhat.reshape(-1, 1)
-

convert_to_tensor(reg_matrix, y)

Return the lagged matrix and the y values given the maximum lags.

Based on PyTorch official docs: https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
reg_matrix ndarray of floats

The information matrix of the model.

required
y ndarray of floats

The output data

required

Returns:

Name Type Description
Tensor tensor

Tensors that have the same size in the first dimension.

Source code in sysidentpy\neural_network\narx_nn.py
309
-310
-311
-312
-313
-314
-315
-316
-317
-318
+        return yhat.reshape(-1, 1)
+
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        yhat = np.zeros(forecast_horizon, dtype=float)
+        yhat.fill(np.nan)
+        yhat[: self.max_lag] = y[: self.max_lag, 0]
+
+        i = self.max_lag
+
+        while i < len(y):
+            k = int(i - self.max_lag)
+            if i + steps_ahead > len(y):
+                steps_ahead = len(y) - i  # predicts the remaining values
+
+            if self.model_type == "NARMAX":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X[k : i + steps_ahead], y[k : i + steps_ahead]
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NAR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=None,
+                    y_initial=y[k : i + steps_ahead],
+                    forecast_horizon=forecast_horizon,
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            elif self.model_type == "NFIR":
+                yhat[i : i + steps_ahead] = self._basis_function_predict(
+                    X=X[k : i + steps_ahead],
+                    y_initial=y[k : i + steps_ahead],
+                )[-forecast_horizon : -forecast_horizon + steps_ahead].ravel()
+            else:
+                raise ValueError(
+                    f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+                )
+
+            i += steps_ahead
+
+        yhat = yhat.ravel()
+        return yhat.reshape(-1, 1)
+
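Each branch above extracts the `steps_ahead` freshly predicted values with a negative slice over the recursive prediction. A small numpy illustration of that indexing:

```python
import numpy as np

forecast_horizon = 5
steps_ahead = 2
pred = np.arange(10.0)  # stand-in for the output of _basis_function_predict

# take steps_ahead values starting forecast_horizon samples from the end
chunk = pred[-forecast_horizon : -forecast_horizon + steps_ahead]
print(chunk)  # [5. 6.]
```

Note that when `steps_ahead` equals `forecast_horizon` the stop index becomes 0 and the slice is empty, a corner case worth keeping in mind when reading these branches.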

convert_to_tensor(reg_matrix, y)

Return the lagged matrix and the y values given the maximum lags.

Based on PyTorch official docs: https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
reg_matrix ndarray of floats

The information matrix of the model.

required
y ndarray of floats

The output data

required

Returns:

Name Type Description
Tensor tensor

Tensors that have the same size in the first dimension.

Source code in sysidentpy\neural_network\narx_nn.py
318
 319
 320
 321
@@ -1659,37 +1664,37 @@
 326
 327
 328
-329
def convert_to_tensor(self, reg_matrix, y):
-    """Return the lagged matrix and the y values given the maximum lags.
-
-    Based on Pytorch official docs:
-    https://pytorch.org/tutorials/beginner/nn_tutorial.html
-
-    Parameters
-    ----------
-    reg_matrix : ndarray of floats
-        The information matrix of the model.
-    y : ndarray of floats
-        The output data
-
-    Returns
-    -------
-    Tensor: tensor
-        tensors that have the same size of the first dimension.
-
-    """
-    reg_matrix, y = map(torch.tensor, (reg_matrix, y))
-    return TensorDataset(reg_matrix, y)
-

data_transform(X, y)

Return the data transformed into tensors using a DataLoader.

Parameters:

Name Type Description Default
X ndarray of floats

The input data.

required
y ndarray of floats

The output data.

required

Returns:

Name Type Description
Tensors Dataloader
Source code in sysidentpy\neural_network\narx_nn.py
def convert_to_tensor(self, reg_matrix, y):
+    """Return the lagged matrix and the y values given the maximum lags.
+
+    Based on PyTorch official docs:
+    https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+    Parameters
+    ----------
+    reg_matrix : ndarray of floats
+        The information matrix of the model.
+    y : ndarray of floats
+        The output data.
+
+    Returns
+    -------
+    Tensor : tensor
+        Tensors that have the same size in the first dimension.
+
+    """
+    reg_matrix, y = map(torch.tensor, (reg_matrix, y))
+    return TensorDataset(reg_matrix, y)
+
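`convert_to_tensor` tensorizes both arrays and pairs them in a `TensorDataset`, so indexing the dataset yields matching (regressor, target) rows. A torch-free sketch of that pairing; `PairedDataset` is a hypothetical stand-in for `torch.utils.data.TensorDataset`:

```python
import numpy as np

class PairedDataset:
    """Minimal stand-in for torch.utils.data.TensorDataset."""

    def __init__(self, *arrays):
        # TensorDataset requires the same size in the first dimension
        assert all(len(a) == len(arrays[0]) for a in arrays), "size mismatch"
        self.arrays = arrays

    def __len__(self):
        return len(self.arrays[0])

    def __getitem__(self, idx):
        return tuple(a[idx] for a in self.arrays)

reg_matrix = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[10.0], [20.0]])
ds = PairedDataset(reg_matrix, y)
row, target = ds[1]
print(row.tolist(), target.tolist())  # [3.0, 4.0] [20.0]
```

Indexing in lockstep is what lets the DataLoader batch regressors and targets together later in the pipeline.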

data_transform(X, y)

Return the data transformed into tensors using a DataLoader.

Parameters:

Name Type Description Default
X ndarray of floats

The input data.

required
y ndarray of floats

The output data.

required

Returns:

Name Type Description
Tensors Dataloader
Source code in sysidentpy\neural_network\narx_nn.py
362
 363
 364
 365
@@ -1701,45 +1706,45 @@
 371
 372
 373
-374
def data_transform(self, X, y):
-    """Return the data transformed in tensors using Dataloader.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data.
-    y : ndarray of floats
-        The output data.
-
-    Returns
-    -------
-    Tensors : Dataloader
-
-    """
-    if y is None:
-        raise ValueError("y cannot be None")
-
-    x_train, y_train = self.split_data(X, y)
-    train_ds = self.convert_to_tensor(x_train, y_train)
-    train_dl = self.get_data(train_ds)
-    return train_dl
-

define_opt()

Defines the optimizer using the user-supplied parameters.

Source code in sysidentpy\neural_network\narx_nn.py
def define_opt(self):
-    """Defines the optimizer using the user parameters."""
-    opt = getattr(optim, self.optimizer)
-    return opt(self.net.parameters(), lr=self.learning_rate, **self.optim_params)
-

fit(*, X=None, y=None, X_test=None, y_test=None)

Train a NARX Neural Network model.

This is a training pipeline that allows friendly usage by the user. The training pipeline is based on https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
X ndarray of floats

The input data to be used in the training process.

None
y ndarray of floats

The output data to be used in the training process.

None
X_test ndarray of floats

The input data to be used in the prediction process.

None
y_test ndarray of floats

The output data (initial conditions) to be used in the prediction process.

None

Returns:

Name Type Description
net nn.Module

The model fitted.

train_loss ndarrays of floats

The training loss of each batch

val_loss ndarrays of floats

The validation loss of each batch

Source code in sysidentpy\neural_network\narx_nn.py
def data_transform(self, X, y):
+    """Return the data transformed into tensors using a DataLoader.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data.
+    y : ndarray of floats
+        The output data.
+
+    Returns
+    -------
+    Tensors : Dataloader
+
+    """
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    x_train, y_train = self.split_data(X, y)
+    train_ds = self.convert_to_tensor(x_train, y_train)
+    train_dl = self.get_data(train_ds)
+    return train_dl
+

define_opt()

Defines the optimizer using the user-supplied parameters.

Source code in sysidentpy\neural_network\narx_nn.py
def define_opt(self):
+    """Define the optimizer using the user-supplied parameters."""
+    opt = getattr(optim, self.optimizer)
+    return opt(self.net.parameters(), lr=self.learning_rate, **self.optim_params)
+
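`define_opt` resolves the optimizer class by name with `getattr(optim, self.optimizer)`, so any attribute of `torch.optim` can be selected through a string. The lookup pattern itself can be shown without torch; the `SGD`/`Adam` classes and the `optim` namespace below are illustrative stand-ins:

```python
from types import SimpleNamespace

class SGD:
    def __init__(self, params, lr=0.01, **kwargs):
        self.params, self.lr, self.extra = params, lr, kwargs

class Adam:
    def __init__(self, params, lr=0.001, **kwargs):
        self.params, self.lr, self.extra = params, lr, kwargs

# stand-in for the torch.optim module
optim = SimpleNamespace(SGD=SGD, Adam=Adam)

optimizer_name = "Adam"                  # mirrors self.optimizer
optim_params = {"betas": (0.9, 0.999)}   # mirrors self.optim_params
opt_cls = getattr(optim, optimizer_name)
opt = opt_cls(params=[], lr=0.002, **optim_params)
print(type(opt).__name__, opt.lr)  # Adam 0.002
```

The string-based lookup avoids a hand-written if/elif chain over optimizer names, at the cost of raising `AttributeError` for names that do not exist in the namespace.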

fit(*, X=None, y=None, X_test=None, y_test=None)

Train a NARX Neural Network model.

This is a training pipeline that allows friendly usage by the user. The training pipeline is based on https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
X ndarray of floats

The input data to be used in the training process.

None
y ndarray of floats

The output data to be used in the training process.

None
X_test ndarray of floats

The input data to be used in the prediction process.

None
y_test ndarray of floats

The output data (initial conditions) to be used in the prediction process.

None

Returns:

Name Type Description
net nn.Module

The model fitted.

train_loss ndarrays of floats

The training loss of each batch

val_loss ndarrays of floats

The validation loss of each batch

Source code in sysidentpy\neural_network\narx_nn.py
385
 386
 387
 388
@@ -1802,89 +1807,89 @@
 445
 446
 447
-448
def fit(self, *, X=None, y=None, X_test=None, y_test=None):
-    """Train a NARX Neural Network model.
-
-    This is an training pipeline that allows a friendly usage
-    by the user. The training pipeline was based on
-    https://pytorch.org/tutorials/beginner/nn_tutorial.html
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the training process.
-    y : ndarray of floats
-        The output data to be used in the training process.
-    X_test : ndarray of floats
-        The input data to be used in the prediction process.
-    y_test : ndarray of floats
-        The output data (initial conditions) to be used in the prediction process.
-
-    Returns
-    -------
-    net : nn.Module
-        The model fitted.
-    train_loss: ndarrays of floats
-        The training loss of each batch
-    val_loss: ndarrays of floats
-        The validation loss of each batch
+448
+449
+450
+451
+452
+453
+454
+455
+456
+457
def fit(self, *, X=None, y=None, X_test=None, y_test=None):
+    """Train a NARX Neural Network model.
+
+    This is a training pipeline that allows friendly usage
+    by the user. The training pipeline is based on
+    https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the training process.
+    y : ndarray of floats
+        The output data to be used in the training process.
+    X_test : ndarray of floats
+        The input data to be used in the prediction process.
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
 
-    """
-    train_dl = self.data_transform(X, y)
-    if self.verbose:
-        if X_test is None or y_test is None:
-            raise ValueError(
-                "X_test and y_test cannot be None if you set verbose=True"
-            )
-        valid_dl = self.data_transform(X_test, y_test)
+    Returns
+    -------
+    net : nn.Module
+        The model fitted.
+    train_loss: ndarrays of floats
+        The training loss of each batch
+    val_loss: ndarrays of floats
+        The validation loss of each batch
 
-    opt = self.define_opt()
-    self.val_loss = []
-    self.train_loss = []
-    for epoch in range(self.epochs):
-        self.net.train()
-        for X, y in train_dl:
-            X, y = X.to(self.device), y.to(self.device)
-            self.loss_batch(X, y, opt=opt)
+    """
+    train_dl = self.data_transform(X, y)
+    if self.verbose:
+        if X_test is None or y_test is None:
+            raise ValueError(
+                "X_test and y_test cannot be None if you set verbose=True"
+            )
+        valid_dl = self.data_transform(X_test, y_test)
 
-        if self.verbose:
-            train_losses, train_nums = zip(
-                *[
-                    self.loss_batch(X.to(self.device), y.to(self.device))
-                    for X, y in train_dl
-                ]
-            )
-            self.train_loss.append(
-                np.sum(np.multiply(train_losses, train_nums)) / np.sum(train_nums)
-            )
-
-            self.net.eval()
-            with torch.no_grad():
-                losses, nums = zip(
-                    *[
-                        self.loss_batch(X.to(self.device), y.to(self.device))
-                        for X, y in valid_dl
-                    ]
-                )
-            self.val_loss.append(np.sum(np.multiply(losses, nums)) / np.sum(nums))
-
-            logging.info(
-                "Train metrics: "
-                + str(self.train_loss[epoch])
-                + " | Validation metrics: "
-                + str(self.val_loss[epoch])
-            )
-    return self
-

get_data(train_ds)

Return a DataLoader that batches the given dataset.

Based on PyTorch official docs: https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
train_ds

Tensors that have the same size of the first dimension.

required

Returns:

Name Type Description
Dataloader dataloader

Batches of tensors that have the same size in the first dimension.

Source code in sysidentpy\neural_network\narx_nn.py
331
-332
-333
-334
-335
-336
-337
-338
-339
-340
+    opt = self.define_opt()
+    self.val_loss = []
+    self.train_loss = []
+    for epoch in range(self.epochs):
+        self.net.train()
+        for X, y in train_dl:
+            X, y = X.to(self.device), y.to(self.device)
+            self.loss_batch(X, y, opt=opt)
+
+        if self.verbose:
+            train_losses, train_nums = zip(
+                *[
+                    self.loss_batch(X.to(self.device), y.to(self.device))
+                    for X, y in train_dl
+                ]
+            )
+            self.train_loss.append(
+                np.sum(np.multiply(train_losses, train_nums)) / np.sum(train_nums)
+            )
+
+            self.net.eval()
+            with torch.no_grad():
+                losses, nums = zip(
+                    *[
+                        self.loss_batch(X.to(self.device), y.to(self.device))
+                        for X, y in valid_dl
+                    ]
+                )
+            self.val_loss.append(np.sum(np.multiply(losses, nums)) / np.sum(nums))
+
+            logging.info(
+                "Train metrics: "
+                + str(self.train_loss[epoch])
+                + " | Validation metrics: "
+                + str(self.val_loss[epoch])
+            )
+    return self
+
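In verbose mode, `fit` reduces the per-batch losses to one epoch metric weighted by batch size, via `np.sum(np.multiply(losses, nums)) / np.sum(nums)`. With made-up batch losses:

```python
import numpy as np

losses = np.array([0.5, 0.2])  # per-batch mean losses (illustrative)
nums = np.array([100, 50])     # samples per batch

# weighted average: larger batches contribute proportionally more
epoch_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch_loss)  # (0.5*100 + 0.2*50) / 150 = 0.4
```

Weighting by batch size matters when the last batch is smaller than the rest; a plain mean of the batch losses would over-weight it.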

get_data(train_ds)

Return a DataLoader that batches the given dataset.

Based on PyTorch official docs: https://pytorch.org/tutorials/beginner/nn_tutorial.html

Parameters:

Name Type Description Default
train_ds

Tensors that have the same size of the first dimension.

required

Returns:

Name Type Description
Dataloader dataloader

Batches of tensors that have the same size in the first dimension.

Source code in sysidentpy\neural_network\narx_nn.py
340
 341
 342
 343
@@ -1895,84 +1900,84 @@
 348
 349
 350
-351
def get_data(self, train_ds):
-    """Return the lagged matrix and the y values given the maximum lags.
-
-    Based on Pytorch official docs:
-    https://pytorch.org/tutorials/beginner/nn_tutorial.html
-
-    Parameters
-    ----------
-    train_ds: tensor
-        Tensors that have the same size of the first dimension.
-
-    Returns
-    -------
-    Dataloader: dataloader
-        tensors that have the same size of the first dimension.
-
-    """
-    pin_memory = False if self.device.type == "cpu" else True
-    return DataLoader(
-        train_ds, batch_size=self.batch_size, pin_memory=pin_memory, shuffle=False
-    )
-

loss_batch(X, y, opt=None)

Compute the loss for one batch.

Parameters:

Name Type Description Default
X ndarray of floats

The regressor matrix.

required
y ndarray of floats

The output data.

required

Returns:

Name Type Description
loss float

The loss of one batch.

Source code in sysidentpy\neural_network\narx_nn.py
def get_data(self, train_ds):
+    """Return a DataLoader that batches the given dataset.
+
+    Based on PyTorch official docs:
+    https://pytorch.org/tutorials/beginner/nn_tutorial.html
+
+    Parameters
+    ----------
+    train_ds : tensor
+        Tensors that have the same size in the first dimension.
+
+    Returns
+    -------
+    Dataloader : dataloader
+        Batches of tensors that have the same size in the first dimension.
+
+    """
+    pin_memory = False if self.device.type == "cpu" else True
+    return DataLoader(
+        train_ds, batch_size=self.batch_size, pin_memory=pin_memory, shuffle=False
+    )
+

loss_batch(X, y, opt=None)

Compute the loss for one batch.

Parameters:

Name Type Description Default
X ndarray of floats

The regressor matrix.

required
y ndarray of floats

The output data.

required

Returns:

Name Type Description
loss float

The loss of one batch.

Source code in sysidentpy\neural_network\narx_nn.py
def loss_batch(self, X, y, opt=None):
-    """Compute the loss for one batch.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The regressor matrix.
-    y : ndarray of floats
-        The output data.
-
-    Returns
-    -------
-    loss : float
-        The loss of one batch.
-
-    """
-    loss = self.loss_func(self.net(X), y)
-
-    if opt is not None:
-        opt.zero_grad()
-        loss.backward()
-        opt.step()
-
-    return loss.item(), len(X)
-

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input and initial values.

The predict function allows friendly usage by the user. Given a trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Currently we only support infinity-steps-ahead prediction, but running 1-step-ahead prediction manually is straightforward.

Parameters:

Name Type Description Default
X ndarray of floats

The input data to be used in the prediction process.

None
y ndarray of floats

The output data to be used in the prediction process.

None

Returns:

Name Type Description
yhat ndarray of floats

The predicted values of the model.

Source code in sysidentpy\neural_network\narx_nn.py
def loss_batch(self, X, y, opt=None):
+    """Compute the loss for one batch.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The regressor matrix.
+    y : ndarray of floats
+        The output data.
+
+    Returns
+    -------
+    loss : float
+        The loss of one batch.
+
+    """
+    loss = self.loss_func(self.net(X), y)
+
+    if opt is not None:
+        opt.zero_grad()
+        loss.backward()
+        opt.step()
+
+    return loss.item(), len(X)
+
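`loss_batch` returns the scalar loss together with the batch length so that `fit` can weight it later, and steps the optimizer only when one is passed (i.e., only during training). A torch-free sketch of that contract; the toy model, squared-error loss, and `opt.step` call are illustrative stand-ins for the real network, `loss_func`, and optimizer:

```python
import numpy as np

def loss_batch(model, loss_func, X, y, opt=None):
    # compute the loss; update parameters only when an optimizer is given
    loss = loss_func(model(X), y)
    if opt is not None:
        opt.step(loss)  # stand-in for zero_grad()/backward()/step()
    return float(loss), len(X)

model = lambda X: X * 2.0                          # toy "network"
mse = lambda pred, y: float(np.mean((pred - y) ** 2))

X = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])
print(loss_batch(model, mse, X, y))  # (0.0, 2)
```

Calling it without `opt` gives a pure evaluation pass, which is how the validation loop reuses the same function under `torch.no_grad()`.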

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input and initial values.

The predict function allows friendly usage by the user. Given a trained model, it predicts values for a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Currently we only support infinity-steps-ahead prediction, but running 1-step-ahead prediction manually is straightforward.

Parameters:

Name Type Description Default
X ndarray of floats

The input data to be used in the prediction process.

None
y ndarray of floats

The output data to be used in the prediction process.

None

Returns:

Name Type Description
yhat ndarray of floats

The predicted values of the model.

Source code in sysidentpy\neural_network\narx_nn.py
459
 460
 461
 462
@@ -2005,69 +2010,59 @@
 489
 490
 491
-492
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted given an input and initial values.
-
-    The predict function allows a friendly usage by the user.
-    Given a trained model, predict values given
-    a new set of data.
-
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future).
-
-    Currently we only support infinity-steps-ahead prediction,
-    but run 1-step-ahead prediction manually is straightforward.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
-
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            return self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-        if steps_ahead == 1:
-            return self._one_step_ahead_prediction(X, y)
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        return self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-
-    if steps_ahead is None:
-        return self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-    if steps_ahead == 1:
-        return self._one_step_ahead_prediction(X, y)
-
-    return self._basis_function_n_step_prediction(
-        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-    )
-

Source code in sysidentpy\neural_network\narx_nn.py
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input and initial values.
+
+    The predict function allows a friendly usage by the user.
+    Given a trained model, predict values given
+    a new set of data.
+
+    This method accepts y values mainly for n-steps-ahead prediction
+    (to be implemented in the future).
+
+    Currently we only support infinity-steps-ahead prediction,
+    but running 1-step-ahead prediction manually is straightforward.
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            return self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+        if steps_ahead == 1:
+            return self._one_step_ahead_prediction(X, y)
+
+        _check_positive_int(steps_ahead, "steps_ahead")
+        return self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+
+    if steps_ahead is None:
+        return self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+    if steps_ahead == 1:
+        return self._one_step_ahead_prediction(X, y)
+
+    return self._basis_function_n_step_prediction(
+        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+    )
+

split_data(X, y)

Return the lagged matrix and the y values given the maximum lags.

Parameters:

- X (ndarray of floats, required): The input data.
- y (ndarray of floats, required): The output data.

Returns:

- y (ndarray of floats): The y values considering the lags.
- reg_matrix (ndarray of floats): The information matrix of the model.

Source code in sysidentpy\neural_network\narx_nn.py
@@ -2125,82 +2120,81 @@
def split_data(self, X, y):
-    """Return the lagged matrix and the y values given the maximum lags.
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data.
-    y : ndarray of floats
-        The output data.
-
-    Returns
-    -------
-    y : ndarray of floats
-        The y values considering the lags.
-    reg_matrix : ndarray of floats
-        The information matrix of the model.
-
-    """
-
-    if y is None:
-        raise ValueError("y cannot be None")
def split_data(self, X, y):
+    """Return the lagged matrix and the y values given the maximum lags.
 
-    self.max_lag = self._get_max_lag()
-    if self.model_type == "NAR":
-        lagged_data = self.build_output_matrix(y)
-    elif self.model_type == "NFIR":
-        lagged_data = self.build_input_matrix(X)
-    elif self.model_type == "NARMAX":
-        check_X_y(X, y)
-        lagged_data = self.build_input_output_matrix(X, y)
-    else:
-        raise ValueError(
-            "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-        )
-
-    basis_name = self.basis_function.__class__.__name__
-    if basis_name == "Polynomial":
-        reg_matrix = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-        reg_matrix = reg_matrix[:, 1:]
-    else:
-        reg_matrix, self.ensemble = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=None
-        )
-
-    if X is not None:
-        self.n_inputs = _num_features(X)
-    else:
-        self.n_inputs = 1  # only used to create the regressor space base
-
-    self.regressor_code = self.regressor_space(self.n_inputs)
-    if basis_name != "Polynomial" and self.basis_function.ensemble:
-        basis_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
-    elif basis_name != "Polynomial" and self.basis_function.ensemble is False:
-        self.regressor_code = np.sort(
-            np.tile(
-                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
-            ),
-            axis=0,
-        )
-
-    if basis_name == "Polynomial":
-        self.regressor_code = self.regressor_code[
-            1:
-        ]  # removes the column of the constant
-
-    self.final_model = self.regressor_code.copy()
-    reg_matrix = np.atleast_1d(reg_matrix).astype(np.float32)
-
-    y = np.atleast_1d(y[self.max_lag :]).astype(np.float32)
-    return reg_matrix, y
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data.
+    y : ndarray of floats
+        The output data.
+
+    Returns
+    -------
+    y : ndarray of floats
+        The y values considering the lags.
+    reg_matrix : ndarray of floats
+        The information matrix of the model.
+
+    """
+
+    if y is None:
+        raise ValueError("y cannot be None")
+
+    self.max_lag = self._get_max_lag()
+    lagged_data = self.build_matrix(X, y)
+
+    basis_name = self.basis_function.__class__.__name__
+    if basis_name == "Polynomial":
+        reg_matrix = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+        reg_matrix = reg_matrix[:, 1:]
+    else:
+        reg_matrix, self.ensemble = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=None
+        )
+
+    if X is not None:
+        self.n_inputs = _num_features(X)
+    else:
+        self.n_inputs = 1  # only used to create the regressor space base
+
+    self.regressor_code = self.regressor_space(self.n_inputs)
+    if basis_name != "Polynomial" and self.basis_function.ensemble:
+        basis_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+        self.regressor_code = np.concatenate([self.regressor_code[1:], basis_code])
+    elif basis_name != "Polynomial" and self.basis_function.ensemble is False:
+        self.regressor_code = np.sort(
+            np.tile(
+                self.regressor_code[1:, :], (self.basis_function.repetition, 1)
+            ),
+            axis=0,
+        )
+
+    if basis_name == "Polynomial":
+        self.regressor_code = self.regressor_code[
+            1:
+        ]  # removes the column of the constant
+
+    self.final_model = self.regressor_code.copy()
+    reg_matrix = np.atleast_1d(reg_matrix).astype(np.float32)
+
+    y = np.atleast_1d(y[self.max_lag :]).astype(np.float32)
+    return reg_matrix, y
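The lagged regressor matrix that split_data returns can be illustrated with a minimal NumPy sketch. `lagged_matrix` below is a hypothetical helper for the NARMAX case, not the library's build_matrix:

```python
import numpy as np

def lagged_matrix(x, y, xlag, ylag):
    """Stack [y(k-1)..y(k-ylag), x(k-1)..x(k-xlag)] as one row per sample k."""
    max_lag = max(xlag, ylag)
    n = len(y)
    cols = [y[max_lag - j : n - j] for j in range(1, ylag + 1)]
    cols += [x[max_lag - j : n - j] for j in range(1, xlag + 1)]
    return np.column_stack(cols)

x = np.arange(1.0, 7.0)    # x = [1, 2, 3, 4, 5, 6]
y = np.arange(10.0, 16.0)  # y = [10, 11, 12, 13, 14, 15]
reg = lagged_matrix(x, y, xlag=1, ylag=2)
# The first max_lag samples are consumed as initial conditions, so the
# first row corresponds to k = 2 and holds [y(1), y(0), x(1)]
```

This also shows why the returned y is truncated to `y[self.max_lag:]`: the first max_lag samples have no complete set of lagged regressors.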
 
\ No newline at end of file
diff --git a/docs/code/parameter-estimation/index.html b/docs/code/parameter-estimation/index.html
index 465e67cd..98c0d673 100644
--- a/docs/code/parameter-estimation/index.html
+++ b/docs/code/parameter-estimation/index.html
@@ -1032,7 +1032,7 @@
 for _ in range(30):
     e = np.concatenate([np.zeros([max_lag, 1]), e], axis=0)
-    lagged_data = im.build_output_matrix(e)
+    lagged_data = im.build_output_matrix(None, e)
     e_regressors = self.basis_function.fit(
         lagged_data, max_lag, predefined_regressors=None
diff --git a/docs/code/simulation/index.html b/docs/code/simulation/index.html
index 1f46cff0..9fccf06b 100644
--- a/docs/code/simulation/index.html
+++ b/docs/code/simulation/index.html
@@ -600,29 +600,7 @@
class SimulateNARMAX(Estimators, BaseMSS):
     r"""Simulation of Polynomial NARMAX model
 
     The NARMAX model is described as:
@@ -745,458 +723,451 @@
         )
         self.elag = elag
         self.model_type = model_type
-        self.basis_function = basis_function
-        self.estimator = estimator
-        self.extended_least_squares = extended_least_squares
-        self.estimate_parameter = estimate_parameter
-        self.calculate_err = calculate_err
-        self.n_inputs = None
-        self.xlag = None
-        self.ylag = None
-        self.n_terms = None
-        self.err = None
-        self.final_model = None
-        self.theta = None
-        self.pivv = None
-        self.non_degree = None
-        self._validate_simulate_params()
-
-    def _validate_simulate_params(self):
-        if not isinstance(self.estimate_parameter, bool):
-            raise TypeError(
-                "estimate_parameter must be False or True. Got"
-                f" {self.estimate_parameter}"
-            )
-
-        if not isinstance(self.calculate_err, bool):
-            raise TypeError(
-                f"calculate_err must be False or True. Got {self.calculate_err}"
-            )
-
-        if self.basis_function is None:
-            raise TypeError(f"basis_function can't be. Got {self.basis_function}")
-
-        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
-            raise ValueError(
-                f"model_type must be NARMAX, NAR, or NFIR. Got {self.model_type}"
-            )
-
-    def _check_simulate_params(self, y_train, y_test, model_code, steps_ahead, theta):
-        if self.basis_function.__class__.__name__ != "Polynomial":
-            raise NotImplementedError(
-                "Currently, SimulateNARMAX only works for polynomial models."
-            )
-
-        if y_test is None:
-            raise ValueError("y_test cannot be None")
-
-        if not isinstance(model_code, np.ndarray):
-            raise TypeError(f"model_code must be an np.np.ndarray. Got {model_code}")
-
-        if not isinstance(steps_ahead, (int, type(None))):
-            raise ValueError(
-                f"steps_ahead must be None or integer > zero. Got {steps_ahead}"
-            )
-
-        if not isinstance(theta, np.ndarray) and not self.estimate_parameter:
-            raise TypeError(
-                "If estimate_parameter is False, theta must be an np.ndarray. Got"
-                f" {theta}"
-            )
-
-        if self.estimate_parameter:
-            if not all(isinstance(i, np.ndarray) for i in [y_train]):
-                raise TypeError(
-                    "If estimate_parameter is True, X_train and y_train must be an"
-                    f" np.ndarray. Got {type(y_train)}"
-                )
-
-    def simulate(
-        self,
-        *,
-        X_train=None,
-        y_train=None,
-        X_test=None,
-        y_test=None,
-        model_code=None,
-        steps_ahead=None,
-        theta=None,
-        forecast_horizon=None,
-    ):
-        """Simulate a model defined by the user.
-
-        Parameters
-        ----------
-        X_train : ndarray of floats
-            The input data to be used in the training process.
-        y_train : ndarray of floats
-            The output data to be used in the training process.
-        X_test : ndarray of floats
-            The input data to be used in the prediction process.
-        y_test : ndarray of floats
-            The output data (initial conditions) to be used in the prediction process.
-        model_code : ndarray of int
-            Flattened list of input or output regressors.
-        steps_ahead = int, default = None
-            The forecast horizon.
-        theta : array-like of shape = number_of_model_elements
-            The parameters of the model.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
-        results : string
-            Where:
-                First column represents each regressor element;
-                Second column represents associated parameter;
-                Third column represents the error reduction ratio associated
-                to each regressor.
-
-        """
-        self._check_simulate_params(y_train, y_test, model_code, steps_ahead, theta)
-
-        if X_test is not None:
-            self.n_inputs = _num_features(X_test)
-        else:
-            self.n_inputs = 1  # just to create the regressor space base
-
-        xlag_code = self._list_input_regressor_code(model_code)
-        ylag_code = self._list_output_regressor_code(model_code)
-        self.xlag = self._get_lag_from_regressor_code(xlag_code)
-        self.ylag = self._get_lag_from_regressor_code(ylag_code)
-        self.max_lag = max(self.xlag, self.ylag)
-        if self.n_inputs != 1:
-            self.xlag = self.n_inputs * [list(range(1, self.max_lag + 1))]
-
-        # for MetaMSS NAR modelling
-        if self.model_type == "NAR" and forecast_horizon is None:
-            forecast_horizon = y_test.shape[0] - self.max_lag
-
-        self.non_degree = model_code.shape[1]
-        regressor_code = self.regressor_space(self.n_inputs)
-
-        self.pivv = self._get_index_from_regressor_code(regressor_code, model_code)
-        self.final_model = regressor_code[self.pivv]
-        # to use in the predict function
-        self.n_terms = self.final_model.shape[0]
-        if self.estimate_parameter and not self.calculate_err:
-            if self.model_type == "NARMAX":
-                self.max_lag = self._get_max_lag()
-                lagged_data = self.build_input_output_matrix(X_train, y_train)
-            elif self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(y_train)
-                self.max_lag = self._get_max_lag()
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(X_train)
-                self.max_lag = self._get_max_lag()
-
-            psi = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=self.pivv
-            )
-
-            self.theta = getattr(self, self.estimator)(psi, y_train)
-            if self.extended_least_squares is True:
-                self.theta = self._unbiased_estimator(
-                    psi, y_train, self.theta, self.elag, self.max_lag, self.estimator
-                )
-
-            self.err = self.n_terms * [0]
-        elif not self.estimate_parameter:
-            self.theta = theta
-            self.err = self.n_terms * [0]
-        else:
-            if self.model_type == "NARMAX":
-                self.max_lag = self._get_max_lag()
-                lagged_data = self.build_input_output_matrix(X_train, y_train)
-            elif self.model_type == "NAR":
-                lagged_data = self.build_output_matrix(y_train)
-                self.max_lag = self._get_max_lag()
-            elif self.model_type == "NFIR":
-                lagged_data = self.build_input_matrix(X_train)
-                self.max_lag = self._get_max_lag()
-
-            psi = self.basis_function.fit(
-                lagged_data, self.max_lag, predefined_regressors=self.pivv
-            )
-
-            _, self.err, _, _ = self.error_reduction_ratio(
-                psi, y_train, self.n_terms, self.final_model
-            )
-            self.theta = getattr(self, self.estimator)(psi, y_train)
-            if self.extended_least_squares is True:
-                self.theta = self._unbiased_estimator(
-                    psi, y_train, self.theta, self.non_degree, self.elag, self.max_lag
-                )
-
-        return self.predict(
-            X=X_test,
-            y=y_test,
-            steps_ahead=steps_ahead,
-            forecast_horizon=forecast_horizon,
-        )
-
-    def error_reduction_ratio(self, psi, y, process_term_number, regressor_code):
-        """Perform the Error Reduction Ration algorithm.
-
-        Parameters
-        ----------
-        y : array-like of shape = n_samples
-            The target data used in the identification process.
-        psi : ndarray of floats
-            The information matrix of the model.
-        process_term_number : int
-            Number of Process Terms defined by the user.
-
-        Returns
-        -------
-        err : array-like of shape = number_of_model_elements
-            The respective ERR calculated for each regressor.
-        piv : array-like of shape = number_of_model_elements
-            Contains the index to put the regressors in the correct order
-            based on err values.
-        psi_orthogonal : ndarray of floats
-            The updated and orthogonal information matrix.
-
-        References
-        ----------
-        - Manuscript: Orthogonal least squares methods and their application
-           to non-linear system identification
-           https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
-        - Manuscript (portuguese): Identificação de Sistemas não Lineares
-           Utilizando Modelos NARMAX Polinomiais – Uma Revisão
-           e Novos Resultados
-
-        """
-        squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
-        tmp_psi = psi.copy()
-        y = y[self.max_lag :, 0].reshape(-1, 1)
-        tmp_y = y.copy()
-        dimension = tmp_psi.shape[1]
-        piv = np.arange(dimension)
-        tmp_err = np.zeros(dimension)
-        err = np.zeros(dimension)
-
-        for i in np.arange(0, dimension):
-            for j in np.arange(i, dimension):
-                # Add `eps` in the denominator to omit division by zero if
-                # denominator is zero
-                tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
-                    np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
-                )
-
-            if i == process_term_number:
-                break
-
-            piv_index = np.argmax(tmp_err[i:]) + i
-            err[i] = tmp_err[piv_index]
-            tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
-            piv[[piv_index, i]] = piv[[i, piv_index]]
-
-            v = Orthogonalization().house(tmp_psi[i:, i])
-
-            row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
-
-            tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
-
-            tmp_psi[i:, i:] = np.copy(row_result)
+        self.build_matrix = self.get_build_io_method(model_type)
+        self.basis_function = basis_function
+        self.estimator = estimator
+        self.extended_least_squares = extended_least_squares
+        self.estimate_parameter = estimate_parameter
+        self.calculate_err = calculate_err
+        self.n_inputs = None
+        self.xlag = None
+        self.ylag = None
+        self.n_terms = None
+        self.err = None
+        self.final_model = None
+        self.theta = None
+        self.pivv = None
+        self.non_degree = None
+        self._validate_simulate_params()
+
+    def _validate_simulate_params(self):
+        if not isinstance(self.estimate_parameter, bool):
+            raise TypeError(
+                "estimate_parameter must be False or True. Got"
+                f" {self.estimate_parameter}"
+            )
+
+        if not isinstance(self.calculate_err, bool):
+            raise TypeError(
+                f"calculate_err must be False or True. Got {self.calculate_err}"
+            )
+
+        if self.basis_function is None:
+            raise TypeError(f"basis_function cannot be None. Got {self.basis_function}")
+
+        if self.model_type not in ["NARMAX", "NAR", "NFIR"]:
+            raise ValueError(
+                f"model_type must be NARMAX, NAR, or NFIR. Got {self.model_type}"
+            )
+
+    def _check_simulate_params(self, y_train, y_test, model_code, steps_ahead, theta):
+        if self.basis_function.__class__.__name__ != "Polynomial":
+            raise NotImplementedError(
+                "Currently, SimulateNARMAX only works for polynomial models."
+            )
+
+        if y_test is None:
+            raise ValueError("y_test cannot be None")
+
+        if not isinstance(model_code, np.ndarray):
+            raise TypeError(f"model_code must be an np.ndarray. Got {model_code}")
+
+        if not isinstance(steps_ahead, (int, type(None))):
+            raise ValueError(
+                f"steps_ahead must be None or integer > zero. Got {steps_ahead}"
+            )
+
+        if not isinstance(theta, np.ndarray) and not self.estimate_parameter:
+            raise TypeError(
+                "If estimate_parameter is False, theta must be an np.ndarray. Got"
+                f" {theta}"
+            )
+
+        if self.estimate_parameter:
+            if not all(isinstance(i, np.ndarray) for i in [y_train]):
+                raise TypeError(
+                    "If estimate_parameter is True, X_train and y_train must be an"
+                    f" np.ndarray. Got {type(y_train)}"
+                )
+
+    def simulate(
+        self,
+        *,
+        X_train=None,
+        y_train=None,
+        X_test=None,
+        y_test=None,
+        model_code=None,
+        steps_ahead=None,
+        theta=None,
+        forecast_horizon=None,
+    ):
+        """Simulate a model defined by the user.
+
+        Parameters
+        ----------
+        X_train : ndarray of floats
+            The input data to be used in the training process.
+        y_train : ndarray of floats
+            The output data to be used in the training process.
+        X_test : ndarray of floats
+            The input data to be used in the prediction process.
+        y_test : ndarray of floats
+            The output data (initial conditions) to be used in the prediction process.
+        model_code : ndarray of int
+            Flattened list of input or output regressors.
+        steps_ahead : int, default=None
+            The forecast horizon.
+        theta : array-like of shape = number_of_model_elements
+            The parameters of the model.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+        results : string
+            Where:
+                First column represents each regressor element;
+                Second column represents associated parameter;
+                Third column represents the error reduction ratio associated
+                to each regressor.
+
+        """
+        self._check_simulate_params(y_train, y_test, model_code, steps_ahead, theta)
+
+        if X_test is not None:
+            self.n_inputs = _num_features(X_test)
+        else:
+            self.n_inputs = 1  # just to create the regressor space base
+
+        xlag_code = self._list_input_regressor_code(model_code)
+        ylag_code = self._list_output_regressor_code(model_code)
+        self.xlag = self._get_lag_from_regressor_code(xlag_code)
+        self.ylag = self._get_lag_from_regressor_code(ylag_code)
+        self.max_lag = max(self.xlag, self.ylag)
+        if self.n_inputs != 1:
+            self.xlag = self.n_inputs * [list(range(1, self.max_lag + 1))]
+
+        # for MetaMSS NAR modelling
+        if self.model_type == "NAR" and forecast_horizon is None:
+            forecast_horizon = y_test.shape[0] - self.max_lag
+
+        self.non_degree = model_code.shape[1]
+        regressor_code = self.regressor_space(self.n_inputs)
+
+        self.pivv = self._get_index_from_regressor_code(regressor_code, model_code)
+        self.final_model = regressor_code[self.pivv]
+        # to use in the predict function
+        self.n_terms = self.final_model.shape[0]
+        if self.estimate_parameter and not self.calculate_err:
+            self.max_lag = self._get_max_lag()
+            lagged_data = self.build_matrix(X_train, y_train)
+            psi = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=self.pivv
+            )
+
+            self.theta = getattr(self, self.estimator)(psi, y_train)
+            if self.extended_least_squares is True:
+                self.theta = self._unbiased_estimator(
+                    psi, y_train, self.theta, self.elag, self.max_lag, self.estimator
+                )
+
+            self.err = self.n_terms * [0]
+        elif not self.estimate_parameter:
+            self.theta = theta
+            self.err = self.n_terms * [0]
+        else:
+            self.max_lag = self._get_max_lag()
+            lagged_data = self.build_matrix(X_train, y_train)
+            psi = self.basis_function.fit(
+                lagged_data, self.max_lag, predefined_regressors=self.pivv
+            )
+
+            _, self.err, _, _ = self.error_reduction_ratio(
+                psi, y_train, self.n_terms, self.final_model
+            )
+            self.theta = getattr(self, self.estimator)(psi, y_train)
+            if self.extended_least_squares is True:
+                self.theta = self._unbiased_estimator(
+                    psi, y_train, self.theta, self.non_degree, self.elag, self.max_lag
+                )
+
+        return self.predict(
+            X=X_test,
+            y=y_test,
+            steps_ahead=steps_ahead,
+            forecast_horizon=forecast_horizon,
+        )
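To make the simulate inputs concrete, the sketch below writes down a model_code/theta pair and runs the encoded model by hand with NumPy. SysIdentPy encodes regressors as integers (0 for the constant term, 1001 for y(k-1), 1002 for y(k-2), 2001 for x1(k-1), and so on); the parameter values here are illustrative, not from a fitted model:

```python
import numpy as np

# model_code/theta below describe y[k] = 0.2*y[k-1] + 0.9*x[k-1]
# (1001 -> y(k-1), 2001 -> x1(k-1); 0 pads the unused second factor)
model_code = np.array([[1001, 0], [2001, 0]])
theta = np.array([[0.2], [0.9]])

# Manual free-run of that model, i.e. what simulate() would reproduce for it
x = np.sin(np.arange(50.0))
y = np.zeros(50)
for k in range(1, 50):
    y[k] = theta[0, 0] * y[k - 1] + theta[1, 0] * x[k - 1]
```

With estimate_parameter=False, simulate uses exactly such a user-supplied theta; with estimate_parameter=True, theta is instead estimated from X_train/y_train.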
+
+    def error_reduction_ratio(self, psi, y, process_term_number, regressor_code):
+        """Perform the Error Reduction Ratio algorithm.
+
+        Parameters
+        ----------
+        y : array-like of shape = n_samples
+            The target data used in the identification process.
+        psi : ndarray of floats
+            The information matrix of the model.
+        process_term_number : int
+            Number of Process Terms defined by the user.
+
+        Returns
+        -------
+        err : array-like of shape = number_of_model_elements
+            The respective ERR calculated for each regressor.
+        piv : array-like of shape = number_of_model_elements
+            Contains the index to put the regressors in the correct order
+            based on err values.
+        psi_orthogonal : ndarray of floats
+            The updated and orthogonal information matrix.
+
+        References
+        ----------
+        - Manuscript: Orthogonal least squares methods and their application
+           to non-linear system identification
+           https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
+        - Manuscript (portuguese): Identificação de Sistemas não Lineares
+           Utilizando Modelos NARMAX Polinomiais – Uma Revisão
+           e Novos Resultados
+
+        """
+        squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
+        tmp_psi = psi.copy()
+        y = y[self.max_lag :, 0].reshape(-1, 1)
+        tmp_y = y.copy()
+        dimension = tmp_psi.shape[1]
+        piv = np.arange(dimension)
+        tmp_err = np.zeros(dimension)
+        err = np.zeros(dimension)
+
+        for i in np.arange(0, dimension):
+            for j in np.arange(i, dimension):
+                # Add `eps` to the denominator to avoid division by zero
+                # when the denominator would otherwise vanish
+                tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
+                    np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
+                )
+
+            if i == process_term_number:
+                break
+
+            piv_index = np.argmax(tmp_err[i:]) + i
+            err[i] = tmp_err[piv_index]
+            tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
+            piv[[piv_index, i]] = piv[[i, piv_index]]
+
+            v = Orthogonalization().house(tmp_psi[i:, i])
+
+            row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
+
+            tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
+
+            tmp_psi[i:, i:] = np.copy(row_result)
+
+        tmp_piv = piv[0:process_term_number]
+        psi_orthogonal = psi[:, tmp_piv]
+        model_code = regressor_code[tmp_piv, :].copy()
+        return model_code, err, piv, psi_orthogonal
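The quantity computed inside the double loop above is, for each candidate regressor psi_j, err_j = (psi_j^T y)^2 / ((psi_j^T psi_j)(y^T y)). A minimal NumPy sketch of a single selection step, without the Householder orthogonalization used by the full algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 0.8 * x + 0.1 * rng.normal(size=n)  # output mostly explained by x

# Two candidate regressors: a pure-noise column and the informative column x
psi = np.column_stack([rng.normal(size=n), x])

# err_j = (psi_j . y)^2 / ((psi_j . psi_j) * (y . y)) for each candidate j
err = np.array([
    (psi[:, j] @ y) ** 2 / ((psi[:, j] @ psi[:, j]) * (y @ y))
    for j in range(psi.shape[1])
])
best = int(np.argmax(err))  # the informative regressor should score highest
```

Each err_j lies in [0, 1] (by the Cauchy-Schwarz inequality) and measures the fraction of the output energy explained by that regressor; the full ERR algorithm repeats this selection after orthogonalizing the remaining candidates against the chosen one.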
+
+    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+        """Return the predicted values given an input.
+
+        The predict function allows a friendly usage by the user.
+        Given a previously trained model, predict values given
+        a new set of data.
+
+        This method accepts y values mainly for n-steps-ahead prediction
+        (to be implemented in the future).
 
-        tmp_piv = piv[0:process_term_number]
-        psi_orthogonal = psi[:, tmp_piv]
-        model_code = regressor_code[tmp_piv, :].copy()
-        return model_code, err, piv, psi_orthogonal
-
-    def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-        """Return the predicted values given an input.
-
-        The predict function allows a friendly usage by the user.
-        Given a previously trained model, predict values given
-        a new set of data.
+        Parameters
+        ----------
+        X : ndarray of floats
+            The input data to be used in the prediction process.
+        y : ndarray of floats
+            The output data to be used in the prediction process.
+        steps_ahead : int (default = None)
+            The user can use free run simulation, one-step ahead prediction
+            and n-step ahead prediction.
+        forecast_horizon : int, default=None
+            The number of predictions over the time.
 
-        This method accept y values mainly for prediction n-steps ahead
-        (to be implemented in the future)
-
-        Parameters
-        ----------
-        X : ndarray of floats
-            The input data to be used in the prediction process.
-        y : ndarray of floats
-            The output data to be used in the prediction process.
-        steps_ahead : int (default = None)
-            The user can use free run simulation, one-step ahead prediction
-            and n-step ahead prediction.
-        forecast_horizon : int, default=None
-            The number of predictions over the time.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-            The predicted values of the model.
-
-        """
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            if steps_ahead is None:
-                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-                yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-                return yhat
-            if steps_ahead == 1:
-                yhat = self._one_step_ahead_prediction(X, y)
-                yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-                return yhat
-
-            _check_positive_int(steps_ahead, "steps_ahead")
-            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-            yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-            return yhat
+        Returns
+        -------
+        yhat : ndarray of floats
+            The predicted values of the model.
+
+        """
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            if steps_ahead is None:
+                yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+            if steps_ahead == 1:
+                yhat = self._one_step_ahead_prediction(X, y)
+                yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+                return yhat
+
+            _check_positive_int(steps_ahead, "steps_ahead")
+            yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        if steps_ahead is None:
+            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        yhat = self._basis_function_n_step_prediction(
+            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+        )
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
 
-        if steps_ahead is None:
-            yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-            return yhat
-
-        yhat = self._basis_function_n_step_prediction(
-            X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-        )
-        yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-        return yhat
-
-    def _one_step_ahead_prediction(self, X, y):
-        """Perform the 1-step-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The 1-step-ahead predicted values of the model.
+    def _one_step_ahead_prediction(self, X, y):
+        """Perform the 1-step-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The 1-step-ahead predicted values of the model.
+
+        """
+        lagged_data = self.build_matrix(X, y)
+        if self.basis_function.__class__.__name__ == "Polynomial":
+            X_base = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
+        else:
+            X_base, _ = self.basis_function.transform(
+                lagged_data,
+                self.max_lag,
+                predefined_regressors=self.pivv[: len(self.final_model)],
+            )
 
-        """
-        if self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y)
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X)
-        elif self.model_type == "NARMAX":
-            lagged_data = self.build_input_output_matrix(X, y)
-        else:
-            raise ValueError(
-                "Unrecognized model type. The model_type should be NARMAX, NAR or NFIR."
-            )
-
-        if self.basis_function.__class__.__name__ == "Polynomial":
-            X_base = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-        else:
-            X_base, _ = self.basis_function.transform(
-                lagged_data,
-                self.max_lag,
-                predefined_regressors=self.pivv[: len(self.final_model)],
-            )
-
-        yhat = super()._one_step_ahead_prediction(X_base)
-        return yhat.reshape(-1, 1)
-
-    def _n_step_ahead_prediction(self, X, y, steps_ahead):
-        """Perform the n-steps-ahead prediction of a model.
-
-        Parameters
-        ----------
-        y : array-like of shape = max_lag
-            Initial conditions values of the model
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
+        yhat = super()._one_step_ahead_prediction(X_base)
+        return yhat.reshape(-1, 1)
+
+    def _n_step_ahead_prediction(self, X, y, steps_ahead):
+        """Perform the n-steps-ahead prediction of a model.
+
+        Parameters
+        ----------
+        y : array-like of shape = max_lag
+            Initial conditions values of the model
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The n-steps-ahead predicted values of the model.
+
+        """
+        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
+        return yhat
+
+    def _model_prediction(self, X, y_initial, forecast_horizon=None):
+        """Perform the infinity steps-ahead simulation of a model.
+
+        Parameters
+        ----------
+        y_initial : array-like of shape = max_lag
+            Number of initial conditions values of output
+            to start recursive process.
+        X : ndarray of floats of shape = n_samples
+            Vector with input values to be used in model simulation.
+
+        Returns
+        -------
+        yhat : ndarray of floats
+               The predicted values of the model.
 
-        Returns
-        -------
-        yhat : ndarray of floats
-               The n-steps-ahead predicted values of the model.
-
-        """
-        yhat = super()._n_step_ahead_prediction(X, y, steps_ahead)
-        return yhat
-
-    def _model_prediction(self, X, y_initial, forecast_horizon=None):
-        """Perform the infinity steps-ahead simulation of a model.
-
-        Parameters
-        ----------
-        y_initial : array-like of shape = max_lag
-            Number of initial conditions values of output
-            to start recursive process.
-        X : ndarray of floats of shape = n_samples
-            Vector with input values to be used in model simulation.
-
-        Returns
-        -------
-        yhat : ndarray of floats
-               The predicted values of the model.
+        """
+        if self.model_type in ["NARMAX", "NAR"]:
+            return self._narmax_predict(X, y_initial, forecast_horizon)
+        if self.model_type == "NFIR":
+            return self._nfir_predict(X, y_initial)
+
+        raise ValueError(
+            f"model_type must be NARMAX, NAR or NFIR. Got {self.model_type}"
+        )
+
+    def _narmax_predict(self, X, y_initial, forecast_horizon):
+        if len(y_initial) < self.max_lag:
+            raise ValueError(
+                "Insufficient initial condition elements! Expected at least"
+                f" {self.max_lag} elements."
+            )
+
+        if X is not None:
+            forecast_horizon = X.shape[0]
+        else:
+            forecast_horizon = forecast_horizon + self.max_lag
+
+        if self.model_type == "NAR":
+            self.n_inputs = 0
 
-        """
-        if self.model_type in ["NARMAX", "NAR"]:
-            return self._narmax_predict(X, y_initial, forecast_horizon)
-        if self.model_type == "NFIR":
-            return self._nfir_predict(X, y_initial)
-
-        raise Exception(
-            "model_type do not exist! Model type must be NARMAX, NAR or NFIR"
-        )
-
-    def _narmax_predict(self, X, y_initial, forecast_horizon):
-        if len(y_initial) < self.max_lag:
-            raise Exception("Insufficient initial conditions elements!")
-
-        if X is not None:
-            forecast_horizon = X.shape[0]
-        else:
-            forecast_horizon = forecast_horizon + self.max_lag
+        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
+        return y_output
+
+    def _nfir_predict(self, X, y_initial):
+        y_output = super()._nfir_predict(X, y_initial)
+        return y_output
+
+    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
+        """not implemented"""
+        raise NotImplementedError(
+            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
+        )
+
+    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
+        """not implemented"""
+        raise NotImplementedError(
+            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
+        )
 
-        if self.model_type == "NAR":
-            self.n_inputs = 0
-
-        y_output = super()._narmax_predict(X, y_initial, forecast_horizon)
-        return y_output
+    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
+        """not implemented"""
+        raise NotImplementedError(
+            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
+        )
 
-    def _nfir_predict(self, X, y_initial):
-        y_output = super()._nfir_predict(X, y_initial)
-        return y_output
-
-    def _basis_function_predict(self, X, y_initial, forecast_horizon=None):
-        """not implemented"""
-        raise NotImplementedError(
-            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
-        )
-
-    def _basis_function_n_step_prediction(self, X, y, steps_ahead, forecast_horizon):
-        """not implemented"""
-        raise NotImplementedError(
-            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
-        )
-
-    def _basis_function_n_steps_horizon(self, X, y, steps_ahead, forecast_horizon):
-        """not implemented"""
-        raise NotImplementedError(
-            "You can only use Polynomial Basis Function in SimulateNARMAX for now."
-        )
-
-    def fit(self, *, X=None, y=None):
-        """not implemented"""
-        raise NotImplementedError(
-            "There is no fit method in Simulate because the model is predefined."
-        )
-

error_reduction_ratio(psi, y, process_term_number, regressor_code)

Perform the Error Reduction Ratio algorithm.

Parameters:

- y (array-like of shape = n_samples): The target data used in the identification process. Required.
- psi (ndarray of floats): The information matrix of the model. Required.
- process_term_number (int): Number of process terms defined by the user. Required.

Returns:

- err (array-like of shape = number_of_model_elements): The respective ERR calculated for each regressor.
- piv (array-like of shape = number_of_model_elements): Contains the index to put the regressors in the correct order based on err values.
- psi_orthogonal (ndarray of floats): The updated and orthogonal information matrix.
References
  • Manuscript: Orthogonal least squares methods and their application to non-linear system identification https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
  • Manuscript (portuguese): Identificação de Sistemas não Lineares Utilizando Modelos NARMAX Polinomiais – Uma Revisão e Novos Resultados
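The selection criterion can be illustrated with a minimal NumPy sketch (the toy data and the simplified, non-orthogonalized first-step formula are assumptions for illustration, not the library's full implementation): each candidate regressor is scored by the fraction of the output energy it explains, and candidates are ranked by that score.

```python
import numpy as np

# Toy setup (hypothetical): y depends strongly on the first candidate
# regressor and not at all on the second.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
psi = np.column_stack([x, rng.normal(size=200)])  # candidate regressors
y = 2.0 * x + 0.1 * rng.normal(size=200)

# First-step ERR of each candidate: squared correlation with the output,
# normalized by the output energy (eps guards against a zero denominator).
eps = np.finfo(np.float64).eps
err = (psi.T @ y) ** 2 / ((psi**2).sum(axis=0) * (y @ y) + eps)
ranking = np.argsort(err)[::-1]  # the informative regressor ranks first
```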
Source code in sysidentpy\simulation\_simulation.py
+    def fit(self, *, X=None, y=None):
+        """not implemented"""
+        raise NotImplementedError(
+            "There is no fit method in Simulate because the model is predefined."
+        )
+

@@ -1249,9 +1220,85 @@
def error_reduction_ratio(self, psi, y, process_term_number, regressor_code):
+    """Perform the Error Reduction Ratio algorithm.
+
+    Parameters
+    ----------
+    y : array-like of shape = n_samples
+        The target data used in the identification process.
+    psi : ndarray of floats
+        The information matrix of the model.
+    process_term_number : int
+        Number of Process Terms defined by the user.
+
+    Returns
+    -------
+    err : array-like of shape = number_of_model_elements
+        The respective ERR calculated for each regressor.
+    piv : array-like of shape = number_of_model_elements
+        Contains the index to put the regressors in the correct order
+        based on err values.
+    psi_orthogonal : ndarray of floats
+        The updated and orthogonal information matrix.
+
+    References
+    ----------
+    - Manuscript: Orthogonal least squares methods and their application
+       to non-linear system identification
+       https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
+    - Manuscript (portuguese): Identificação de Sistemas não Lineares
+       Utilizando Modelos NARMAX Polinomiais – Uma Revisão
+       e Novos Resultados
+
+    """
+    squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
+    tmp_psi = psi.copy()
+    y = y[self.max_lag :, 0].reshape(-1, 1)
+    tmp_y = y.copy()
+    dimension = tmp_psi.shape[1]
+    piv = np.arange(dimension)
+    tmp_err = np.zeros(dimension)
+    err = np.zeros(dimension)
+
+    for i in np.arange(0, dimension):
+        for j in np.arange(i, dimension):
+            # Add `eps` to the denominator to avoid division by zero when
+            # the denominator would otherwise be zero
+            tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
+                np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
+            )
+
+        if i == process_term_number:
+            break
+
+        piv_index = np.argmax(tmp_err[i:]) + i
+        err[i] = tmp_err[piv_index]
+        tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
+        piv[[piv_index, i]] = piv[[i, piv_index]]
+
+        v = Orthogonalization().house(tmp_psi[i:, i])
+
+        row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
+
+        tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
+
+        tmp_psi[i:, i:] = np.copy(row_result)
+
+    tmp_piv = piv[0:process_term_number]
+    psi_orthogonal = psi[:, tmp_piv]
+    model_code = regressor_code[tmp_piv, :].copy()
+    return model_code, err, piv, psi_orthogonal
+
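The pivoting loop above zeroes out the contribution of each selected regressor via Householder reflections (`house`/`rowhouse`). A minimal sketch of that primitive, written under the usual textbook definition (the actual `Orthogonalization` class may differ in details):

```python
import numpy as np

def house(x):
    """Householder vector v such that the reflection zeroes x below x[0]."""
    v = x.astype(float).copy()
    sign = 1.0 if x[0] >= 0 else -1.0
    v[0] += sign * np.linalg.norm(x)
    return v

def rowhouse(A, v):
    """Apply the reflection (I - 2 v v^T / v^T v) to the rows of A."""
    beta = 2.0 / (v @ v)
    return A - np.outer(v, beta * (v @ A))

a = np.array([3.0, 4.0])
reflected = rowhouse(a.reshape(-1, 1), house(a))
# The norm is preserved and every entry below the first becomes zero.
```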

fit(*, X=None, y=None)

not implemented

Source code in sysidentpy\simulation\_simulation.py
def fit(self, *, X=None, y=None):
+    """not implemented"""
+    raise NotImplementedError(
+        "There is no fit method in Simulate because the model is predefined."
+    )
+

predict(*, X=None, y=None, steps_ahead=None, forecast_horizon=None)

Return the predicted values given an input.

The predict function allows a friendly usage by the user. Given a previously trained model, predict values given a new set of data.

This method accepts y values mainly for n-steps-ahead prediction (to be implemented in the future).

Parameters:

- X (ndarray of floats, default=None): The input data to be used in the prediction process.
- y (ndarray of floats, default=None): The output data to be used in the prediction process.
- steps_ahead (int, default=None): The user can choose free-run simulation, one-step-ahead prediction, or n-steps-ahead prediction.
- forecast_horizon (int, default=None): The number of predictions over time.

Returns:

- yhat (ndarray of floats): The predicted values of the model.
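In every branch, `predict` returns the initial conditions followed by the predictions, via `np.concatenate([y[:self.max_lag], yhat], axis=0)`, so the returned series stays aligned sample-by-sample with the measured output. A small sketch of that convention (the numbers are made up for illustration):

```python
import numpy as np

max_lag = 2
# Measured output; the first max_lag samples seed the recursion.
y = np.array([[0.5], [0.7], [0.9], [1.1]])
# Stand-in for the output of a free-run prediction (_model_prediction).
yhat_free_run = np.array([[0.88], [1.12]])

# The returned series: initial conditions, then predictions.
yhat = np.concatenate([y[:max_lag], yhat_free_run], axis=0)
```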

Source code in sysidentpy\simulation\_simulation.py
@@ -1264,85 +1311,9 @@
def error_reduction_ratio(self, psi, y, process_term_number, regressor_code):
-    """Perform the Error Reduction Ration algorithm.
-
-    Parameters
-    ----------
-    y : array-like of shape = n_samples
-        The target data used in the identification process.
-    psi : ndarray of floats
-        The information matrix of the model.
-    process_term_number : int
-        Number of Process Terms defined by the user.
-
-    Returns
-    -------
-    err : array-like of shape = number_of_model_elements
-        The respective ERR calculated for each regressor.
-    piv : array-like of shape = number_of_model_elements
-        Contains the index to put the regressors in the correct order
-        based on err values.
-    psi_orthogonal : ndarray of floats
-        The updated and orthogonal information matrix.
-
-    References
-    ----------
-    - Manuscript: Orthogonal least squares methods and their application
-       to non-linear system identification
-       https://eprints.soton.ac.uk/251147/1/778742007_content.pdf
-    - Manuscript (portuguese): Identificação de Sistemas não Lineares
-       Utilizando Modelos NARMAX Polinomiais – Uma Revisão
-       e Novos Resultados
-
-    """
-    squared_y = np.dot(y[self.max_lag :].T, y[self.max_lag :])
-    tmp_psi = psi.copy()
-    y = y[self.max_lag :, 0].reshape(-1, 1)
-    tmp_y = y.copy()
-    dimension = tmp_psi.shape[1]
-    piv = np.arange(dimension)
-    tmp_err = np.zeros(dimension)
-    err = np.zeros(dimension)
-
-    for i in np.arange(0, dimension):
-        for j in np.arange(i, dimension):
-            # Add `eps` in the denominator to omit division by zero if
-            # denominator is zero
-            tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
-                np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) * squared_y + self.eps
-            )
-
-        if i == process_term_number:
-            break
-
-        piv_index = np.argmax(tmp_err[i:]) + i
-        err[i] = tmp_err[piv_index]
-        tmp_psi[:, [piv_index, i]] = tmp_psi[:, [i, piv_index]]
-        piv[[piv_index, i]] = piv[[i, piv_index]]
-
-        v = Orthogonalization().house(tmp_psi[i:, i])
-
-        row_result = Orthogonalization().rowhouse(tmp_psi[i:, i:], v)
-
-        tmp_y[i:] = Orthogonalization().rowhouse(tmp_y[i:], v)
-
-        tmp_psi[i:, i:] = np.copy(row_result)
-
-    tmp_piv = piv[0:process_term_number]
-    psi_orthogonal = psi[:, tmp_piv]
-    model_code = regressor_code[tmp_piv, :].copy()
-    return model_code, err, piv, psi_orthogonal
-

fit(*, X=None, y=None)

not implemented

Source code in sysidentpy\simulation\_simulation.py
def fit(self, *, X=None, y=None):
-    """not implemented"""
-    raise NotImplementedError(
-        "There is no fit method in Simulate because the model is predefined."
-    )
-

Source code in sysidentpy\simulation\_simulation.py
@@ -1383,80 +1354,64 @@
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
-    """Return the predicted values given an input.
-
-    The predict function allows a friendly usage by the user.
-    Given a previously trained model, predict values given
-    a new set of data.
def predict(self, *, X=None, y=None, steps_ahead=None, forecast_horizon=None):
+    """Return the predicted values given an input.
+
+    The predict function allows a friendly usage by the user.
+    Given a previously trained model, predict values given
+    a new set of data.
+
+    This method accepts y values mainly for n-steps-ahead prediction
+    (to be implemented in the future).
+
+    Parameters
+    ----------
+    X : ndarray of floats
+        The input data to be used in the prediction process.
+    y : ndarray of floats
+        The output data to be used in the prediction process.
+    steps_ahead : int, default = None
+        The user can choose free-run simulation, one-step-ahead prediction,
+        or n-steps-ahead prediction.
+    forecast_horizon : int, default=None
+        The number of predictions over the time.
 
-    This method accept y values mainly for prediction n-steps ahead
-    (to be implemented in the future)
-
-    Parameters
-    ----------
-    X : ndarray of floats
-        The input data to be used in the prediction process.
-    y : ndarray of floats
-        The output data to be used in the prediction process.
-    steps_ahead : int (default = None)
-        The user can use free run simulation, one-step ahead prediction
-        and n-step ahead prediction.
-    forecast_horizon : int, default=None
-        The number of predictions over the time.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
-
-    """
-    if self.basis_function.__class__.__name__ == "Polynomial":
-        if steps_ahead is None:
-            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
-            yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-            return yhat
-        if steps_ahead == 1:
-            yhat = self._one_step_ahead_prediction(X, y)
-            yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-            return yhat
-
-        _check_positive_int(steps_ahead, "steps_ahead")
-        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
-        yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-        return yhat
-
-    if steps_ahead is None:
-        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
-        yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-        return yhat
-    if steps_ahead == 1:
-        yhat = self._one_step_ahead_prediction(X, y)
-        yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-        return yhat
-
-    yhat = self._basis_function_n_step_prediction(
-        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
-    )
-    yhat = np.concatenate([y[:self.max_lag], yhat], axis=0)
-    return yhat
-

simulate(*, X_train=None, y_train=None, X_test=None, y_test=None, model_code=None, steps_ahead=None, theta=None, forecast_horizon=None)

Simulate a model defined by the user.

Parameters:

- X_train (ndarray of floats, default=None): The input data to be used in the training process.
- y_train (ndarray of floats, default=None): The output data to be used in the training process.
- X_test (ndarray of floats, default=None): The input data to be used in the prediction process.
- y_test (ndarray of floats, default=None): The output data (initial conditions) to be used in the prediction process.
- model_code (ndarray of int, default=None): Flattened list of input or output regressors.
- steps_ahead (int, default=None): The forecast horizon.
- theta (array-like of shape = number_of_model_elements, default=None): The parameters of the model.

Returns:

- yhat (ndarray of floats): The predicted values of the model.
- results (string): First column represents each regressor element; second column represents the associated parameter; third column represents the error reduction ratio associated with each regressor.
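What `simulate` does with `model_code` and `theta` can be sketched by hand. Assuming SysIdentPy's usual regressor coding (0 = constant, 1001 = y(k-1), 2001 = x1(k-1); treat the encoding here as illustrative), a predefined model y(k) = 0.2 y(k-1) + 0.9 x(k-1) is run recursively from the initial conditions:

```python
import numpy as np

# model_code = np.array([[1001, 0], [2001, 0]]) with theta = [0.2, 0.9]
# encodes y(k) = 0.2*y(k-1) + 0.9*x(k-1) (assumed coding, for illustration).
theta = np.array([0.2, 0.9])
x = np.ones(20)   # step input
y = np.zeros(20)  # y[0] is the initial condition

# Free-run simulation: each prediction feeds the next step.
for k in range(1, 20):
    y[k] = theta[0] * y[k - 1] + theta[1] * x[k - 1]

# The response converges to the steady state 0.9 / (1 - 0.2) = 1.125.
```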

Source code in sysidentpy\simulation\_simulation.py
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+
+    """
+    if self.basis_function.__class__.__name__ == "Polynomial":
+        if steps_ahead is None:
+            yhat = self._model_prediction(X, y, forecast_horizon=forecast_horizon)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+        if steps_ahead == 1:
+            yhat = self._one_step_ahead_prediction(X, y)
+            yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+            return yhat
+
+        _check_positive_int(steps_ahead, "steps_ahead")
+        yhat = self._n_step_ahead_prediction(X, y, steps_ahead=steps_ahead)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    if steps_ahead is None:
+        yhat = self._basis_function_predict(X, y, forecast_horizon=forecast_horizon)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+    if steps_ahead == 1:
+        yhat = self._one_step_ahead_prediction(X, y)
+        yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+        return yhat
+
+    yhat = self._basis_function_n_step_prediction(
+        X, y, steps_ahead=steps_ahead, forecast_horizon=forecast_horizon
+    )
+    yhat = np.concatenate([y[: self.max_lag], yhat], axis=0)
+    return yhat
+

@@ -1563,143 +1518,112 @@
def simulate(
-    self,
-    *,
-    X_train=None,
-    y_train=None,
-    X_test=None,
-    y_test=None,
-    model_code=None,
-    steps_ahead=None,
-    theta=None,
-    forecast_horizon=None,
-):
-    """Simulate a model defined by the user.
-
-    Parameters
-    ----------
-    X_train : ndarray of floats
-        The input data to be used in the training process.
-    y_train : ndarray of floats
-        The output data to be used in the training process.
-    X_test : ndarray of floats
-        The input data to be used in the prediction process.
-    y_test : ndarray of floats
-        The output data (initial conditions) to be used in the prediction process.
-    model_code : ndarray of int
-        Flattened list of input or output regressors.
-    steps_ahead = int, default = None
-        The forecast horizon.
-    theta : array-like of shape = number_of_model_elements
-        The parameters of the model.
-
-    Returns
-    -------
-    yhat : ndarray of floats
-        The predicted values of the model.
-    results : string
-        Where:
-            First column represents each regressor element;
-            Second column represents associated parameter;
-            Third column represents the error reduction ratio associated
-            to each regressor.
-
-    """
-    self._check_simulate_params(y_train, y_test, model_code, steps_ahead, theta)
-
-    if X_test is not None:
-        self.n_inputs = _num_features(X_test)
-    else:
-        self.n_inputs = 1  # just to create the regressor space base
-
-    xlag_code = self._list_input_regressor_code(model_code)
-    ylag_code = self._list_output_regressor_code(model_code)
-    self.xlag = self._get_lag_from_regressor_code(xlag_code)
-    self.ylag = self._get_lag_from_regressor_code(ylag_code)
-    self.max_lag = max(self.xlag, self.ylag)
-    if self.n_inputs != 1:
-        self.xlag = self.n_inputs * [list(range(1, self.max_lag + 1))]
-
-    # for MetaMSS NAR modelling
-    if self.model_type == "NAR" and forecast_horizon is None:
-        forecast_horizon = y_test.shape[0] - self.max_lag
-
-    self.non_degree = model_code.shape[1]
-    regressor_code = self.regressor_space(self.n_inputs)
-
-    self.pivv = self._get_index_from_regressor_code(regressor_code, model_code)
-    self.final_model = regressor_code[self.pivv]
-    # to use in the predict function
-    self.n_terms = self.final_model.shape[0]
-    if self.estimate_parameter and not self.calculate_err:
-        if self.model_type == "NARMAX":
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X_train, y_train)
-        elif self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y_train)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X_train)
-            self.max_lag = self._get_max_lag()
-
-        psi = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=self.pivv
-        )
-
-        self.theta = getattr(self, self.estimator)(psi, y_train)
-        if self.extended_least_squares is True:
-            self.theta = self._unbiased_estimator(
-                psi, y_train, self.theta, self.elag, self.max_lag, self.estimator
-            )
-
-        self.err = self.n_terms * [0]
-    elif not self.estimate_parameter:
-        self.theta = theta
-        self.err = self.n_terms * [0]
-    else:
-        if self.model_type == "NARMAX":
-            self.max_lag = self._get_max_lag()
-            lagged_data = self.build_input_output_matrix(X_train, y_train)
-        elif self.model_type == "NAR":
-            lagged_data = self.build_output_matrix(y_train)
-            self.max_lag = self._get_max_lag()
-        elif self.model_type == "NFIR":
-            lagged_data = self.build_input_matrix(X_train)
-            self.max_lag = self._get_max_lag()
-
-        psi = self.basis_function.fit(
-            lagged_data, self.max_lag, predefined_regressors=self.pivv
-        )
-
-        _, self.err, _, _ = self.error_reduction_ratio(
-            psi, y_train, self.n_terms, self.final_model
-        )
-        self.theta = getattr(self, self.estimator)(psi, y_train)
-        if self.extended_least_squares is True:
-            self.theta = self._unbiased_estimator(
-                psi, y_train, self.theta, self.non_degree, self.elag, self.max_lag
-            )
-
-    return self.predict(
-        X=X_test,
-        y=y_test,
-        steps_ahead=steps_ahead,
-        forecast_horizon=forecast_horizon,
-    )
def simulate(
+    self,
+    *,
+    X_train=None,
+    y_train=None,
+    X_test=None,
+    y_test=None,
+    model_code=None,
+    steps_ahead=None,
+    theta=None,
+    forecast_horizon=None,
+):
+    """Simulate a model defined by the user.
+
+    Parameters
+    ----------
+    X_train : ndarray of floats
+        The input data to be used in the training process.
+    y_train : ndarray of floats
+        The output data to be used in the training process.
+    X_test : ndarray of floats
+        The input data to be used in the prediction process.
+    y_test : ndarray of floats
+        The output data (initial conditions) to be used in the prediction process.
+    model_code : ndarray of int
+        Flattened list of input or output regressors.
+    steps_ahead : int, default=None
+        The forecast horizon.
+    theta : array-like of shape = number_of_model_elements
+        The parameters of the model.
+    forecast_horizon : int, default=None
+        The number of predictions to run in free run simulation.
+
+    Returns
+    -------
+    yhat : ndarray of floats
+        The predicted values of the model.
+    results : string
+        A summary in which the first column lists each regressor element,
+        the second column the associated parameter, and the third column
+        the error reduction ratio associated with each regressor.
+
+    """
+    self._check_simulate_params(y_train, y_test, model_code, steps_ahead, theta)
+
+    if X_test is not None:
+        self.n_inputs = _num_features(X_test)
+    else:
+        self.n_inputs = 1  # just to create the regressor space base
+
+    xlag_code = self._list_input_regressor_code(model_code)
+    ylag_code = self._list_output_regressor_code(model_code)
+    self.xlag = self._get_lag_from_regressor_code(xlag_code)
+    self.ylag = self._get_lag_from_regressor_code(ylag_code)
+    self.max_lag = max(self.xlag, self.ylag)
+    if self.n_inputs != 1:
+        self.xlag = self.n_inputs * [list(range(1, self.max_lag + 1))]
+
+    # for MetaMSS NAR modelling
+    if self.model_type == "NAR" and forecast_horizon is None:
+        forecast_horizon = y_test.shape[0] - self.max_lag
+
+    self.non_degree = model_code.shape[1]
+    regressor_code = self.regressor_space(self.n_inputs)
+
+    self.pivv = self._get_index_from_regressor_code(regressor_code, model_code)
+    self.final_model = regressor_code[self.pivv]
+    # to use in the predict function
+    self.n_terms = self.final_model.shape[0]
+    if self.estimate_parameter and not self.calculate_err:
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X_train, y_train)
+        psi = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=self.pivv
+        )
+
+        self.theta = getattr(self, self.estimator)(psi, y_train)
+        if self.extended_least_squares is True:
+            self.theta = self._unbiased_estimator(
+                psi, y_train, self.theta, self.elag, self.max_lag, self.estimator
+            )
+
+        self.err = self.n_terms * [0]
+    elif not self.estimate_parameter:
+        self.theta = theta
+        self.err = self.n_terms * [0]
+    else:
+        self.max_lag = self._get_max_lag()
+        lagged_data = self.build_matrix(X_train, y_train)
+        psi = self.basis_function.fit(
+            lagged_data, self.max_lag, predefined_regressors=self.pivv
+        )
+
+        _, self.err, _, _ = self.error_reduction_ratio(
+            psi, y_train, self.n_terms, self.final_model
+        )
+        self.theta = getattr(self, self.estimator)(psi, y_train)
+        if self.extended_least_squares is True:
+            self.theta = self._unbiased_estimator(
+                psi, y_train, self.theta, self.non_degree, self.elag, self.max_lag
+            )
+
+    return self.predict(
+        X=X_test,
+        y=y_test,
+        steps_ahead=steps_ahead,
+        forecast_horizon=forecast_horizon,
+    )
 
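The new `simulate` body above derives `self.xlag`, `self.ylag`, and `self.max_lag` from the flattened `model_code` before building the regressor space. A minimal, self-contained sketch of that lag extraction, assuming SysIdentPy's usual regressor encoding (e.g. `1001` for `y(k-1)`, `2002` for `x1(k-2)`, `0` for the constant term; the helper names below are illustrative, not the library's API):

```python
def get_lag_from_codes(codes):
    """Return the largest lag among the given regressor codes.

    Assumes the last three digits of a code encode the lag and the
    leading digit encodes the variable (1 = y, >= 2 = an input).
    """
    lags = [c % 1000 for c in codes if c != 0]
    return max(lags) if lags else 1

# Flattened model: y(k-1), x1(k-1) * y(k-1), x1(k-2)
model_code = [[1001, 0], [2001, 1001], [2002, 0]]

# Split output (y) codes from input (x) codes, skipping the constant.
ylag_codes = [c for row in model_code for c in row if 0 < c < 2000]
xlag_codes = [c for row in model_code for c in row if c >= 2000]

ylag = get_lag_from_codes(ylag_codes)
xlag = get_lag_from_codes(xlag_codes)
max_lag = max(xlag, ylag)  # mirrors self.max_lag = max(self.xlag, self.ylag)
print(max_lag)  # -> 2
```

This is why the diff can set `self.max_lag` before calling `_get_max_lag()`: the user-supplied codes alone determine the largest lag needed for the initial conditions.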
\ No newline at end of file
diff --git a/docs/examples/identification_of_an_electromechanical_system/index.html b/docs/examples/identification_of_an_electromechanical_system/index.html
index 0c26423c..aec43f52 100644
--- a/docs/examples/identification_of_an_electromechanical_system/index.html
+++ b/docs/examples/identification_of_an_electromechanical_system/index.html
@@ -1022,7 +1022,7 @@
         }
     }
     init_mathjax();
-
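Both branches of the refactored `simulate` select the parameter estimator with `getattr(self, self.estimator)(psi, y_train)`, i.e. string-to-method dispatch, the same pattern the changelog describes for the new `bic`/`aic`/`fpe`/`lilc` selection in FROLS. A minimal sketch of the pattern (class and method names here are illustrative, not SysIdentPy's actual API):

```python
class Estimators:
    """Toy stand-in for a model class holding several estimator methods."""

    def least_squares(self, psi, y):
        # Placeholder: a real implementation would solve psi @ theta = y.
        return "theta from least_squares"

    def recursive_least_squares(self, psi, y):
        return "theta from recursive_least_squares"

    def estimate(self, estimator_name, psi, y):
        # Look the method up by name, as the diff does with self.estimator.
        return getattr(self, estimator_name)(psi, y)

e = Estimators()
print(e.estimate("least_squares", None, None))  # -> theta from least_squares
```

Replacing if/elif/else chains with this kind of lookup is what lets the diff collapse the three per-`model_type` branches into a single `build_matrix` call.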