
Releases: ecrl/ecnet

alvaDesc bug fix

29 Aug 22:45
09b8eed

Sets alvaDesc "na\r" (not-available) descriptor return values to 0.0.
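
A minimal sketch of the idea behind this fix, not ECNet's actual code: alvaDesc reports missing descriptor values as "na" (arriving with a trailing carriage return, "na\r"), and these are coerced to 0.0 before casting to float.

```python
# Illustrative helper, not ECNet's actual implementation: coerce alvaDesc's
# "na\r" (not-available) descriptor values to 0.0 before casting to float.
def clean_descriptor_values(values):
    return [0.0 if str(v).strip() == 'na' else float(v) for v in values]

print(clean_descriptor_values(['1.23', 'na\r', '4.56']))  # [1.23, 0.0, 4.56]
```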

Update dependencies

25 Jul 15:06
391d93d
  • Updated all package dependencies

4.1.2 - Update build/install method, add GitHub workflows, unittest -> pytest

01 Aug 22:26
ac192b0
  • Build/installation now uses pyproject.toml instead of the deprecated setup.py
  • Added GitHub workflows for PyPI publishing and unit testing
  • Unit tests now use pytest instead of unittest

Dependency update

07 Apr 21:12
59dc3dc

Update to package dependencies, notably PyTorch 1.8.0 -> 2.0.0. ECNet now requires Python 3.11+.

Updates to training runtime, tuning arguments

30 Jun 17:36
85bd818
  • Added an option to shuffle training/validation subsets every epoch (see the sketch after this list)
  • Updated docstrings/documentation
  • Added a "getting started" notebook in the examples directory
  • New argument format for ABC-based hyper-parameter tuning
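
A minimal PyTorch sketch of one interpretation of the per-epoch shuffling option: re-drawing the training/validation split at the start of every epoch. Names and the loop body are illustrative, not ECNet's internals.

```python
# Re-draw the training/validation split every epoch (illustrative only).
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

dataset = TensorDataset(torch.randn(100, 8), torch.randn(100, 1))
for epoch in range(5):
    train_set, valid_set = random_split(dataset, [80, 20])  # fresh split
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    for features, targets in train_loader:
        pass  # forward/backward pass would go here
```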

PyTorch rework, new API, bundled property sets

27 Apr 02:00
381a8a2
  • ECNet now leverages the PyTorch package for ML operations
    • This change presented an opportunity to overhaul ECNet from the ground up and to rethink how users interact with the package; ultimately, we wanted to make interactions easier.
  • Custom data structures didn't belong in an ML toolkit. Instead, we offer PyTorch-based data structures adapted to house chemical data: users can obtain SMILES strings and property values, or an ML-ready structure that can be passed directly to ECNet for training (see the sketch after this list).
  • All of these changes require documentation, so full API documentation is now available. We also provide an example script, and would like to include more examples in the future.
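
An illustrative sketch of a PyTorch-style dataset housing chemical data; the class and attribute names here are assumptions for illustration, not ECNet's actual API.

```python
# Illustrative PyTorch dataset pairing SMILES/property data with an
# ML-ready feature matrix; not ECNet's actual data structure.
import torch
from torch.utils.data import Dataset

class SmilesPropertyDataset(Dataset):
    def __init__(self, smiles, targets, descriptors):
        self.smiles = smiles                               # raw SMILES strings
        self.targets = torch.tensor(targets).unsqueeze(-1) # property values
        self.descriptors = descriptors                     # feature matrix

    def __len__(self):
        return len(self.smiles)

    def __getitem__(self, idx):
        # ML-ready (features, target) pair, suitable for a training loop
        return self.descriptors[idx], self.targets[idx]

data = SmilesPropertyDataset(['CCO', 'CCC'], [78.4, -42.1], torch.randn(2, 4))
features, target = data[0]
```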

Update to ECabc-based hyper-parameter tuning functions

10 Oct 04:45
fd874fc

ECabc's 3.0.0 update introduced changes to its API; this ECNet update incorporates those changes into all relevant functions.

Better implementation of TensorFlow 2.0

28 Jan 20:02
87e83e8
  • ecnet.models.mlp.MultilayerPerceptron has been reimplemented more sensibly, leading to faster training times
  • Some database cleanup
  • Input QSPR descriptor names are now saved inside the project, so ECNet can use a pre-trained project without the original DataFrame object

ML back-end, workflows, database additions/encoding, and more

06 Jan 22:18
5a989e2
  • Addition of validated, PaDEL/alvaDesc-generated YSI databases

  • Update to repository links, author information

  • ecnet.utils.data_utils now forces UTF-8 encoding for all database creation/saving

  • ML back-end updated to TensorFlow 2.0.0

    • No API changes to ecnet.models.mlp.MultilayerPerceptron
    • Existing .h5 model files will not work with the updated class

    Note: PyTorch was initially considered as an alternative; however, both its performance in our evaluation tests and the viability of installing it on the high-performance machines available to the ECRL were deemed inadequate, so updating to TensorFlow 2.0.0 was the most appropriate course of action.

  • Only the following hyper-parameters are tuned with the built-in functions (see the sketch below):

    • Learning rate of the Adam optimization function
    • Learning rate decay of the Adam optimization function
    • Batch size during training
    • Patience (if validating, the number of epochs to wait for a better validation loss before terminating training)
    • Size of each hidden layer

    Note: given the relatively small number of samples our models are trained with, it does not make sense to adjust hyper-parameters such as beta_1, beta_2, and epsilon. The hyper-parameters listed above are theorized to play a much more important role in how the models train/perform.
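
For illustration, the tuned space can be expressed as bounds per hyper-parameter; the names and ranges below are placeholders, not the values used by ECNet's built-in tuning functions.

```python
# Illustrative search-space bounds for the hyper-parameters listed above;
# names and ranges are placeholders, not ECNet's actual tuning defaults.
SEARCH_SPACE = {
    'learning_rate': (1e-5, 1e-1),  # Adam learning rate
    'lr_decay': (0.0, 1e-3),        # Adam learning rate decay
    'batch_size': (8, 128),         # training batch size
    'patience': (4, 64),            # epochs to wait for better valid. loss
    'hidden_dim': (4, 64),          # size of each hidden layer
}

for name, (low, high) in SEARCH_SPACE.items():
    print(f'{name}: [{low}, {high}]')
```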

  • Added the UML ECRL's general publication workflow as ecnet.workflows.ecrl_workflow.create_model

  • If using ecnet.Server and not creating a project, a single model's filename can now be specified as an additional argument (default: model.h5)

  • TensorFlow's verbose argument is now propagated from ecnet.Server.train to the model during training; added as an additional argument

  • ecnet.models.mlp.MultilayerPerceptron.fit now returns a tuple, (learn losses, validation losses); both are lists containing the loss value (mean squared error) at every epoch. If training a single model using ecnet.Server.train, this tuple is returned; if validation is not performed, the validation losses list is populated with None elements, equal in length to the learn losses list (see the usage sketch after this list)

  • If installing with setup.py, installing TensorFlow is optional; to skip installing the pre-compiled PyPI distribution of TensorFlow, run python setup.py --omit_tf install

    Note: other methods of installing TensorFlow offer clear benefits (GPU support, different CPU instruction sets, etc.), therefore we want to provide an option for the user to use an existing installation of TensorFlow instead of forcing the PyPI-sourced version.
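
A hedged usage sketch of the new fit/train return value; Server construction details and argument names are assumptions for illustration only.

```python
# Illustrative use of the (learn losses, validation losses) return value;
# Server setup here is an assumption, not a verified ECNet signature.
from ecnet import Server

sv = Server()
learn_losses, valid_losses = sv.train()  # single model, no project
# Both lists hold per-epoch MSE values; without validation, valid_losses
# is a list of None elements equal in length to learn_losses.
print(f'Trained for {len(learn_losses)} epochs')
```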

Bug fixes

16 Jul 01:02
ca4d76d
  • If the validation/test sets are empty, input parameter limiting processes will still run
  • Server.limit_inputs now correctly returns input parameter names and importances (see the sketch below)
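
A hedged sketch of the corrected return value; Server construction and the limit_num argument name are assumptions for illustration.

```python
# Illustrative use of the corrected Server.limit_inputs return value;
# the argument name is an assumption, not a verified signature.
from ecnet import Server

sv = Server()
param_names, importances = sv.limit_inputs(limit_num=15)
for name, importance in zip(param_names, importances):
    print(f'{name}: {importance:.4f}')
```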