
Releases: FluxML/MLJFlux.jl

v0.6.0

29 Sep 22:33
de2b3c6

MLJFlux v0.6.0

Diff since v0.5.1

All models, except ImageClassifier, now support categorical features (presented as table columns of CategoricalVector type). Rather than being one-hot encoded, categorical features are embedded into continuous spaces by a learned embedding layer, and the dimensions of these spaces can be specified by the user via a new dictionary-valued hyperparameter, embedding_dims. The learned embeddings are exposed by a new implementation of transform, which means they can be used with other models (transfer learning), as described in Cheng Guo and Felix Berkhahn (2016): Entity Embeddings of Categorical Variables.

Also, all continuous input presented to these models is now forced to be Float32, but this is the only breaking change.
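Below is a minimal sketch of the new workflow; the feature names, data, and chosen embedding dimension are hypothetical. Note that continuous features are supplied as Float32, per the change above.

```julia
using MLJ  # also re-exports `categorical` from CategoricalArrays

# Hypothetical table mixing continuous and categorical features:
X = (height = Float32[1.7, 1.6, 1.8, 1.5],
     group  = categorical(["a", "b", "b", "a"]))
y = Float32[70, 60, 80, 55]

NeuralNetworkRegressor = @load NeuralNetworkRegressor pkg=MLJFlux

# Request a 2-dimensional embedding space for the `group` feature:
model = NeuralNetworkRegressor(embedding_dims = Dict(:group => 2))

mach = machine(model, X, y)
fit!(mach, verbosity=0)

# `transform` returns the input with categorical columns replaced by
# their learned continuous embeddings (for use in transfer learning):
Xembedded = transform(mach, X)
```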

Closed issues:

  • deprecated warning in documentation (#236)
  • Some minor doc issues (#258)
  • Fix code snippet in Readme (#266)
  • MultitargetNeuralNetworkRegressor doc example doesn't work as intended (#268)

v0.5.1

12 Jun 06:05
3766e44

MLJFlux v0.5.1

Diff since v0.5.0

v0.5.0

11 Jun 01:17
ec59410

MLJFlux v0.5.0

Diff since v0.4.0

  • (new model) Add NeuralNetworkBinaryClassifier, an optimised form of NeuralNetworkClassifier for the special case of two target classes. Uses Flux.σ instead of softmax for the default finaliser (#248); see the sketch after this list.
  • (internals) Switch from implicit to explicit differentiation (#251)
  • (breaking) Use optimisers from Optimisers.jl instead of Flux.jl (#251). Note that the new optimisers are immutable.
  • (RNG changes) Change the default value of the model field rng from Random.GLOBAL_RNG to Random.default_rng(). Change the seeded RNG, obtained by specifying an integer value for rng, from MersenneTwister to Xoshiro (#251)
  • (RNG changes) Update the Short builder so that the rng argument of build(::Short, rng, ...) is passed on to the Dropout layer, as these layers now support this on a GPU, at least for rng=Random.default_rng() (#251)
  • (weakly breaking) Change the implementation of L1/L2 regularization from explicit loss penalization to weight/sign decay, internally chained with the user-specified optimiser; a schematic appears after this list. The only breakage for users is that the losses reported in the history will no longer be penalized, because the penalty is no longer explicitly computed (#251)
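The following is a minimal sketch combining the changes above; the data and hyperparameter values are hypothetical. Note the immutable optimiser from Optimisers.jl and the integer rng, which now seeds an Xoshiro generator:

```julia
using MLJ
import Optimisers

NeuralNetworkBinaryClassifier = @load NeuralNetworkBinaryClassifier pkg=MLJFlux

# Hypothetical two-class data:
X = (x1 = rand(Float32, 100), x2 = rand(Float32, 100))
y = coerce(rand(["yes", "no"], 100), Multiclass)

model = NeuralNetworkBinaryClassifier(
    optimiser = Optimisers.Adam(0.001),  # immutable, from Optimisers.jl
    rng = 123,                           # integer now seeds an Xoshiro RNG
)

mach = machine(model, X, y)
fit!(mach, verbosity=0)
predict(mach, X)  # probabilistic predictions via the Flux.σ finaliser
```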
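Conceptually, the new regularization corresponds to chaining decay rules in front of the user's optimiser, roughly as sketched below with Optimisers.jl primitives (an illustration of the idea, not MLJFlux's actual internals):

```julia
import Optimisers

# Hypothetical user-specified settings:
opt    = Optimisers.Adam(0.001)
lambda = 0.01   # total regularization strength
alpha  = 0.5    # mix: alpha*L1 + (1 - alpha)*L2

# L2 penalization becomes weight decay and L1 becomes sign decay,
# applied to the gradient before each optimiser update:
chained = Optimisers.OptimiserChain(
    Optimisers.SignDecay(lambda * alpha),          # L1 contribution
    Optimisers.WeightDecay(lambda * (1 - alpha)),  # L2 contribution
    opt,
)
```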

Closed issues:

  • Stop using implicit style differentiating (#221)
  • Update examples/MNIST (#234)
  • MultitargetNeuralNetworkRegressor has surprising target_scitype (#246)
  • Error fit GPU (#247)
  • Metalhead has broken MLJFlux (#249)

v0.4.0

17 Oct 06:46
8eaed13

MLJFlux v0.4.0

Diff since v0.3.1

v0.3.1

11 Sep 23:28
ab630f5

MLJFlux v0.3.1

Diff since v0.3.0

  • Improve the error message for faulty builders (#238)

v0.3.0

25 Aug 01:48
37b5f31

MLJFlux v0.3.0

Diff since v0.2.10

Merged pull requests:

  • Off-by-one error in clean! method (#227) (@MarkArdman)
  • CompatHelper: bump compat for Flux to 0.14, (keep existing compat) (#228) (@github-actions[bot])
  • Actions node 12 => node 16 (#229) (@vnegi10)
  • Bump compat for Metalhead (#232) (@ablaom)
  • For a 0.3 release (#235) (@ablaom)

v0.2.10

24 Apr 06:33
7f9dd57

MLJFlux v0.2.10

Diff since v0.2.9

Closed issues:

  • Builder does not appear to build an architecture compatible with supplied data (#216)
  • Calling predict on matrix throws error (#218)

Merged pull requests:

  • Sorting out issue with calling predict on matrix (#219) (@pat-alt)
  • Dispatch fit! and train! on model for greater extensibility (#222) (@pat-alt)
  • For a 0.2.10 release (#223) (@ablaom)
  • Fix target scitype for multitarget regressor (#224) (@ablaom)

v0.2.9

22 Sep 23:37
0602655

MLJFlux v0.2.9

Diff since v0.2.8

  • (bug fix) Address improper scaling of the L1/L2 loss penalty with the batch size (#213) (@mohamed82008); see the sketch below.
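The principle behind the fix: over one epoch of mini-batches, the full penalty should be applied exactly once, so each batch contributes a share proportional to its size. Schematically (an illustration of the principle, not the package internals):

```julia
# With n observations in total, the per-batch objective scales the
# penalty by the relative batch size; summing over all batches in an
# epoch then applies the full penalty exactly once:
penalized(batch_loss, penalty, batch_size, n) =
    batch_loss + (batch_size / n) * penalty
```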

Closed issues:

  • Penalty (wrongly) not multiplied by the relative batch size (#213)

v0.2.8

22 Aug 20:17
ac253f1

MLJFlux v0.2.8

Diff since v0.2.7

Closed issues:

  • Define a working default Flux chain builder for ImageClassifier (#162)
  • Broken link in MNIST example (#204)
  • MNIST example has wrong axis label: (#210)

v0.2.7

18 Apr 21:30
af48de8

MLJFlux v0.2.7

Diff since v0.2.6

Merged pull requests:

  • CompatHelper: bump compat for Flux to 0.13, (keep existing compat) (#202) (@github-actions[bot])
  • For a 0.2.7 release (#203) (@ablaom)