Releases · deel-ai/deel-lip
1.5.0
New features and improvements
- Two new losses based on the standard Keras cross-entropy losses, with a settable temperature for the softmax (see the sketch after this list):
  - `TauSparseCategoricalCrossentropy`, equivalent to the Keras `SparseCategoricalCrossentropy`
  - `TauBinaryCrossentropy`, equivalent to the Keras `BinaryCrossentropy`
- New module `deel.lip.compute_layer_sv` to compute the largest and lowest singular values of a single layer (`compute_layer_sv()`) or of a whole model (`compute_model_sv()`).
- Power iteration algorithm for convolutions.
- New "Getting Started" tutorial to introduce 1-Lipschitz neural networks.
- Documentation migration from Sphinx to MkDocs.
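
A minimal sketch of the new losses and singular-value helpers in use. The `tau` keyword and the exact return format of `compute_layer_sv()` are assumptions based on the descriptions above, not a verbatim copy of the library's signatures.

```python
import tensorflow as tf
from deel.lip.losses import TauSparseCategoricalCrossentropy, TauBinaryCrossentropy
from deel.lip.layers import SpectralDense
from deel.lip.compute_layer_sv import compute_layer_sv

# Temperature-scaled cross-entropies (the softmax temperature keyword `tau` is assumed).
sparse_cce = TauSparseCategoricalCrossentropy(tau=10.0)
binary_ce = TauBinaryCrossentropy(tau=10.0)

y_true = tf.constant([0, 2, 1])                     # sparse integer labels
y_pred = tf.constant([[2.0, 0.1, -1.0],
                      [0.3, 0.2, 1.5],
                      [-0.5, 0.8, 0.1]])
print(float(sparse_cce(y_true, y_pred)))

# Lowest and largest singular values of a single layer; the return value is
# assumed to be a (lowest, largest) pair.
layer = SpectralDense(16)
_ = layer(tf.random.normal((1, 32)))  # build the layer
sv_min, sv_max = compute_layer_sv(layer)
print(sv_min, sv_max)
```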
API changes
- Activations are now imported via the `deel.lip.layers` submodule, e.g. `deel.lip.layers.GroupSort` instead of `deel.lip.activations.GroupSort`. We adopted the same convention as Keras. The legacy submodule is still available for backward compatibility but will be removed in a future release.
- Unconstrained layers must now be imported via the `deel.lip.layers.unconstrained` submodule, e.g. `deel.lip.layers.unconstrained.PadConv2D` (see the import sketch below).
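
The new import convention in code, using the paths given above:

```python
# 1.5.0 convention: activations live in deel.lip.layers, unconstrained layers
# in deel.lip.layers.unconstrained.
from deel.lip.layers import GroupSort
from deel.lip.layers.unconstrained import PadConv2D

# Legacy import, still available but scheduled for removal:
# from deel.lip.activations import GroupSort
```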
Fixes
- Fix `InvertibleUpSampling.__call__()` returning `None`.
Full Changelog: v1.4.0...v1.5.0
1.4.0
New features and improvements
- Two new layers:
  - `SpectralConv2DTranspose`, a Lipschitz version of the Keras `Conv2DTranspose` layer
  - the activation layer `Householder`, a parametrized generalization of `GroupSort2`
- Two new regularizers to foster orthogonality:
  - `LorthRegularizer` for an orthogonal convolution
  - `OrthDenseRegularizer` for an orthogonal `Dense` kernel matrix
- Two new losses for Lipschitz networks (see the sketch after this list):
  - `TauCategoricalCrossentropy`, a categorical cross-entropy loss with temperature scaling `tau`
  - `CategoricalHinge`, a hinge loss for multi-class problems based on the implementation of the Keras `CategoricalHinge`
- Two new custom callbacks:
  - `LossParamScheduler` to change loss hyper-parameters during training, e.g. `min_margin`, `alpha` and `tau`
  - `LossParamLog` to log the values of loss parameters
- The Björck orthogonalization algorithm was accelerated.
- Normalizers (power iteration and Björck) use `tf.while_loop`; the `swap_memory` argument can be set globally with `set_swap_memory(bool)`. The default value is `True`, which reduces GPU memory usage.
- The new function `set_stop_grad_spectral(bool)` allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default value is `True`; stopping gradient propagation reduces runtime.
- Due to bugs in TensorFlow serialization of custom losses and metrics (versions 2.0 and 2.1), deel-lip now only supports TensorFlow >= 2.2.
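
A short sketch of the two new losses. The keyword names (`tau`, `min_margin`) follow the parameter names mentioned above but should be treated as assumptions rather than the definitive signatures; `LossParamScheduler` could then be used to vary these hyper-parameters during training.

```python
import tensorflow as tf
from deel.lip.losses import TauCategoricalCrossentropy, CategoricalHinge

# Categorical cross-entropy with temperature scaling (keyword `tau` assumed).
tau_cce = TauCategoricalCrossentropy(tau=10.0)

# Multi-class hinge loss; a margin keyword such as `min_margin` is assumed.
hinge = CategoricalHinge(min_margin=1.0)

y_true = tf.one_hot([0, 2, 1], depth=3)
y_pred = tf.constant([[2.0, 0.1, -1.0],
                      [0.3, 0.2, 1.5],
                      [-0.5, 0.8, 0.1]])
print(float(tau_cce(y_true, y_pred)), float(hinge(y_true, y_pred)))
```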
Fixes
- `SpectralInitializer` no longer reuses the same base initializer across multiple instances.
Full Changelog: v1.3.0...v1.4.0
1.3.0
New features and improvements
- New layer `PadConv2D` to handle, in particular, circular padding in convolutional layers (see the sketch after this list).
- Losses handle multi-label classification.
- Losses are now element-wise: the `reduction` parameter of custom losses can be set to `None`.
- New metrics are introduced: `ProvableAvgRobustness` and `ProvableRobustAccuracy`.
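
A minimal sketch of `PadConv2D` with circular padding. The `padding="circular"` value follows the description above; the remaining constructor arguments are assumed to mirror the Keras `Conv2D` signature, and the import path shown is the pre-1.5.0 one.

```python
import tensorflow as tf
from deel.lip.layers import PadConv2D  # moved to deel.lip.layers.unconstrained in 1.5.0

# Circular padding wraps the tensor around its spatial borders before applying
# the convolution, which is the use case highlighted in the notes above.
conv = PadConv2D(filters=16, kernel_size=3, padding="circular")
x = tf.random.normal((1, 32, 32, 3))
y = conv(x)
print(y.shape)  # expected: (1, 32, 32, 16)
```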
API changes
- `KR` is no longer a function but a class derived from `tf.keras.losses.Loss`. The `negative_KR` function was removed; use the loss `HKR(alpha=0)` instead.
- The stopping criterion of spectral normalization and Björck orthogonalization (iterative methods) is no longer a number of iterations (`niter_spectral` and `niter_bjorck`). The methods now stop based on the difference between two successive iterations: `eps_spectral` and `eps_bjorck` (see the sketch after this list). This API change occurs in:
  - Lipschitz layers, such as `SpectralDense` and `SpectralConv2D`
  - the normalizer `reshaped_kernel_orthogonalization`
  - the constraint `SpectralConstraint`
  - the initializer `SpectralInitializer`
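
A sketch of the new convergence-based API, using the parameter names listed above; the epsilon values shown are placeholders, not the library's defaults.

```python
from deel.lip.layers import SpectralDense
from deel.lip.losses import HKR

# Before 1.3.0 the iterative methods ran a fixed number of steps:
# layer = SpectralDense(64, niter_spectral=3, niter_bjorck=15)

# From 1.3.0 on they stop once two successive iterates differ by less than eps:
layer = SpectralDense(64, eps_spectral=1e-3, eps_bjorck=1e-3)

# negative_KR was removed; the equivalent loss is HKR with alpha=0.
kr_loss = HKR(alpha=0)
```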
Full Changelog: v1.2.0...v1.3.0
1.2.0
This revision contains:
- code refactoring: `wbar` is now stored in a `tf.Variable`
- update of the documentation's notebooks
- update of the callbacks, initializers, constraints...
- update of the losses and of their tests
- improved loss stability for small batches
- added `ScaledGlobalL2NormPooling2D` (see the sketch below)
- new way to export Keras-serializable objects
This ends the support of TensorFlow 2.0; only versions >= 2.1 are supported.
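
A tiny sketch of the new pooling layer; the no-argument constructor and the output shape are assumptions based on the usual Keras global-pooling conventions.

```python
import tensorflow as tf
from deel.lip.layers import ScaledGlobalL2NormPooling2D

# Global pooling that takes a (scaled) L2 norm over the spatial dimensions,
# mapping (batch, H, W, C) to (batch, C).
pool = ScaledGlobalL2NormPooling2D()
x = tf.random.normal((4, 8, 8, 16))
print(pool(x).shape)  # expected: (4, 16)
```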
1.1.1
This revision contains:
- Bugfixes in `losses.py`: fixed a data type problem in `HKR_loss` and a weighting problem in `KR_multiclass_loss`.
- Changed behavior of `FrobeniusDense` in the multi-class setup: using `FrobeniusDense` with 10 output neurons is now equivalent to stacking 10 `FrobeniusDense` layers with 1 output neuron each. The L2 normalization is performed per neuron instead of on the full weight matrix (see the sketch below).
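
An illustrative sketch of the behavior described above: with several output neurons, each column of the `FrobeniusDense` kernel is now L2-normalized independently. The Keras `Dense`-style constructor call is an assumption.

```python
import tensorflow as tf
from deel.lip.layers import FrobeniusDense

# A 10-class head: as of 1.1.1 each of the 10 kernel columns is normalized to
# unit L2 norm on its own, as if 10 one-output FrobeniusDense layers were
# stacked side by side.
head = FrobeniusDense(10)
x = tf.random.normal((2, 64))
print(head(x).shape)  # (2, 10)
```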
1.1.0
This version adds new features:
- `InvertibleDownSampling` and `InvertibleUpSampling` layers (see the sketch after this list)
- multiclass extension of the HKR loss
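
A minimal sketch of the invertible resampling pair; the `pool_size` keyword and the expected shapes are assumptions based on the layer names, not on documented signatures.

```python
import tensorflow as tf
from deel.lip.layers import InvertibleDownSampling, InvertibleUpSampling

# InvertibleDownSampling rearranges spatial blocks into channels, so no
# information is lost; InvertibleUpSampling performs the inverse operation.
down = InvertibleDownSampling(pool_size=2)   # `pool_size` keyword assumed
up = InvertibleUpSampling(pool_size=2)

x = tf.random.normal((1, 8, 8, 4))
y = down(x)        # expected shape: (1, 4, 4, 16)
x_back = up(y)     # expected shape: (1, 8, 8, 4)
print(y.shape, x_back.shape)
```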
It also contains multiple fixes for:
- a bug with `L2NormPooling`
- a bug with `vanilla_export`
- a bug with the `tf.function` annotation causing an incorrect Lipschitz constant in `Sequential` (for constants other than 1)
Breaking changes:
- The `true_values` parameter has been removed from the binary HKR loss, as both (1, -1) and (1, 0) label conventions are handled automatically.
1.0.2
1.0.1
Initial release - v1.0.0
Controlling the Lipschitz constant of a layer or a whole neural network has many applications ranging from adversarial robustness to Wasserstein distance estimation.
This library provides an implementation of k-Lipschitz layers for Keras.
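
A minimal sketch of how a 1-Lipschitz model might be assembled with the library. Layer and module names follow the release notes above (activations lived in `deel.lip.activations` until 1.5.0); the `k_coef_lip` keyword is an assumption.

```python
import tensorflow as tf
from deel.lip.layers import SpectralDense
from deel.lip.activations import GroupSort  # moved to deel.lip.layers in 1.5.0
from deel.lip.model import Sequential

# A 1-Lipschitz MLP: spectrally normalized dense layers with a GroupSort
# activation, wrapped in the library's Sequential model so that the global
# Lipschitz constant of the network is controlled.
model = Sequential(
    [
        tf.keras.layers.Input(shape=(784,)),
        SpectralDense(128),
        GroupSort(2),
        SpectralDense(10, activation=None),
    ],
    k_coef_lip=1.0,  # target Lipschitz constant (keyword assumed)
)
model.summary()
```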