Releases: SeldonIO/alibi
v0.6.4 (2022-01-28)
This is a patch release to correct a regression in `AnchorImage` introduced in v0.6.3.
Fixed
- Fix a bug introduced in v0.6.3 where `AnchorImage` would ignore user `segmentation_kwargs` (#581).
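The restored behaviour can be sketched as follows: user-supplied `segmentation_kwargs` are forwarded to the chosen segmentation function. The `AnchorImage` call in the comment is illustrative (made-up image shape and kwargs), and the stand-in predictor just returns fixed two-class scores:

```python
import numpy as np

# Stand-in classifier: returns fixed two-class scores for each image.
def predictor(images: np.ndarray) -> np.ndarray:
    return np.tile([1.0, 0.0], (images.shape[0], 1))

# Illustrative sketch: the kwargs below are passed through to the
# 'slic' superpixel segmentation function.
# explainer = AnchorImage(predictor, image_shape=(224, 224, 3),
#                         segmentation_fn='slic',
#                         segmentation_kwargs={'n_segments': 15, 'compactness': 20})
batch = np.zeros((4, 224, 224, 3))
print(predictor(batch).shape)  # (4, 2)
```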
Development
- The maximum versions of `Pillow` and `scikit-image` have been bumped to 9.x and 0.19.x respectively.
v0.6.3 (2022-01-18)
Added
- New feature A callback can now be passed to `IntegratedGradients` via the `target_fn` argument, in order to calculate the scalar target dimension from the model output. This bypasses the need to pass `target` directly to `explain` when the target of interest may depend on the prediction output. See the example in the docs (#523).
- A new comprehensive Introduction to explainability has been added to the documentation (#510).
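A minimal sketch of the new callback, assuming a model whose output is a batch of class probabilities; the `IntegratedGradients` usage is shown in comments and the callback itself is plain `numpy`:

```python
import numpy as np

# target_fn maps the raw model output to the scalar target dimension;
# taking the predicted class via argmax is a common choice.
def target_fn(predictions: np.ndarray) -> np.ndarray:
    return np.argmax(predictions, axis=1)

# Illustrative usage (assumes a trained Keras `model` and input batch `X`):
# ig = IntegratedGradients(model, target_fn=target_fn)
# explanation = ig.explain(X)  # no `target` needed at explain time
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
print(target_fn(probs))  # [1 0]
```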
Changed
- Python 3.6 support has been dropped as it has reached end-of-life.
Fixed
- Fix a bug with passing background images to `AnchorImage` leading to an error (#542).
- Fix a bug with rounding errors being introduced in `CounterfactualRLTabular` (#550).
Development
- Docstrings have been updated and consolidated (#548). For developers, docstring conventions have been documented in CONTRIBUTING.md.
- `numpy` typing has been updated to be compatible with `numpy` 1.22 (#543). This is a prerequisite for upgrading to `tensorflow` 2.7.
- To further improve reliability, strict `Optional` type-checking with `mypy` has been reinstated (#541).
- The Alibi CI tests now include Windows and MacOS platforms (#575).
- The maximum `tensorflow` version has been bumped from 2.6 to 2.7 (#377).
v0.6.2 (2021-11-18)
Added
- Documentation on using black-box and white-box models in the context of alibi, see here.
- `AnchorTabular`, `AnchorImage` and `AnchorText` now expose an additional `dtype` keyword argument with a default value of `np.float32`. This ensures that whenever a user `predictor` is called internally with dummy data, the correct data type is used (#506).
- Custom exceptions. A new public module `alibi.exceptions` defines the `alibi` exception hierarchy. This introduces two exceptions, `AlibiPredictorCallException` and `AlibiPredictorReturnTypeError`. See #520 for more details.
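A sketch of why the `dtype` argument matters, using a stand-in predictor that, like many TensorFlow models, rejects `float64` input; the `AnchorTabular` call in the comment is illustrative and `feature_names` is hypothetical:

```python
import numpy as np

# Stand-in predictor that is strict about input dtype:
def predictor(x: np.ndarray) -> np.ndarray:
    if x.dtype != np.float32:
        raise TypeError('model expects float32 inputs')
    return np.zeros((x.shape[0], 2), dtype=np.float32)

# Illustrative sketch: `dtype` controls the dtype of the dummy data
# alibi generates for internal predictor calls.
# explainer = AnchorTabular(predictor, feature_names, dtype=np.float32)
dummy = np.zeros((1, 4), dtype=np.float32)
print(predictor(dummy).shape)  # (1, 2)
```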
Changed
- For `AnchorImage`, the `image_shape` argument is coerced into a tuple to implicitly allow passing a list, which eases the use of configuration files. In the future the typing will be improved to be more explicit about allowed types, with runtime type checking.
- Updated the minimum `shap` version to the latest `0.40.0`, as this fixes an installation issue when `alibi` and `shap` are installed with the same command.
Fixed
- Fix a bug with version saving being overwritten on subsequent saves (#481).
- Fix a bug in the Integrated Gradients notebook with transformer models due to a regression in the upstream `transformers` library (#528).
- Fix a bug in `IntegratedGradients` with `forward_kwargs` not always being correctly passed (#525).
- Fix a bug when resetting the `TreeShap` predictor (#534).
Development
- Now using
readthedocs
Docker image in our CI to replicate the doc building environment exactly. Also enabledreadthedocs
build on PR feature which allows browsing the built docs on every PR. - New notebook execution testing framework via Github Actions. There are two new GA workflows, test_all_notebooks which is run once a week and can be triggered manually, and test_changed_notebooks which detects if any notebooks have been modified in a PR and executes only those. Not all notebooks are amenable to be tested automatically due to long running times or complex software/hardware dependencies. We maintain a list of notebooks to be excluded in the testing script under testing/test_notebooks.py.
- Now using
myst
(a markdown superset) for more flexible documentation (#482). - Added a CITATION.cff file.
v0.6.1 (2021-09-02)
Added
- New feature An implementation of Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning is now available via the `alibi.explainers.CounterfactualRL` and `alibi.explainers.CounterfactualRLTabular` classes. The method is model-agnostic and the implementation is written in both PyTorch and TensorFlow. See the docs for more information.
Changed
- Future breaking change The `CounterFactual` and `CounterFactualProto` classes have been renamed to `Counterfactual` and `CounterfactualProto` respectively, for consistency and correctness. The old class names continue to work for now but emit a deprecation warning and will be removed in an upcoming version.
- `dill` behaviour was changed to not extend the `pickle` protocol, so that standard usage of `pickle` in a session with `alibi` does not change expected `pickle` behaviour. See discussion.
- `AnchorImage` internals were refactored to avoid persistent state between `explain` calls.
Development
- A PR checklist is available under CONTRIBUTING.md. In the future many of these items may be turned into automated checks.
- The `pandoc` version for docs building has been updated to `1.19.2`, which is what is used on `readthedocs`.
- Citation updated to the JMLR paper.
v0.6.0 (2021-07-08)
Added
- New feature `AnchorText` now supports sampling according to masked language models via the `transformers` library. See the docs and the example for using the new functionality.
- Breaking change Due to the new masked language model sampling for `AnchorText`, the public API for the constructor has changed. See the docs for a full description of the new API.
- `AnchorTabular` now supports one-hot encoded categorical variables in addition to the default ordinal/label encoded representation of categorical variables.
- `IntegratedGradients` was changed to allow explaining a wider variety of models. In particular, a new `forward_kwargs` argument to `explain` allows passing additional arguments to the model, and an `attribute_to_layer_inputs` flag allows calculating attributions with respect to layer inputs instead of outputs when set to `True`. The API and capabilities now track more closely the captum.ai `PyTorch` implementation.
- Example of using `IntegratedGradients` to explain `transformer` models.
- Python 3.9 support.
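A sketch of the idea behind `forward_kwargs`, using a toy model whose call signature takes an extra keyword argument (as transformer models do); the `IntegratedGradients` usage is shown in comments:

```python
import numpy as np

# Toy "model" whose call signature takes an extra keyword argument,
# as e.g. transformer models do (attention masks, training flags, ...).
def model(x: np.ndarray, attention_mask: np.ndarray = None) -> np.ndarray:
    mask = np.ones_like(x) if attention_mask is None else attention_mask
    return (x * mask).sum(axis=1, keepdims=True)

# Illustrative sketch: `forward_kwargs` is forwarded to the model on
# every internal call made while computing attributions.
# ig = IntegratedGradients(model)
# explanation = ig.explain(X, target=0,
#                          forward_kwargs={'attention_mask': mask})
x = np.array([[1.0, 2.0, 3.0]])
print(model(x, attention_mask=np.array([[1.0, 0.0, 1.0]])))  # [[4.]]
```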
Fixed
IntegratedGradients
- fix the path definition for attributions calculated with respect to an internal layer. Previously the paths were defined in terms of the inputs and baselines, now they are correctly defined in terms of the corresponding layer input/output.
v0.5.8 (2021-04-29)
Added
- Experimental explainer serialization support using `dill`. See the docs for more details.
Fixed
- Handle layers which are not part of `model.layers` for `IntegratedGradients`.
Development
- Update type hints to be compatible with `numpy` 1.20.
- Separate licence build step in CI; only check licences against the latest Python version.
v0.5.7 (2021-03-31)
Changed
- Support for `KernelShap` and `TreeShap` now requires installing the `shap` dependency explicitly after installing `alibi`. This can be achieved by running `pip install alibi && pip install alibi[shap]`. The reason for this is that the build process of the upstream `shap` package is not well configured, resulting in broken installations as detailed in #376 and shap/shap#1802. We expect this to be a temporary change until changes are made upstream.
Added
- A `reset_predictor` method for black-box explainers. The intended use case is deploying an already configured explainer to work with a remote predictor endpoint instead of the local predictor used during development.
- An `alibi.datasets.load_cats` function which loads a small sample of cat images shipped with the library, to be used in examples.
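A minimal sketch of the pattern (not alibi's implementation): the explainer holds a reference to its predictor, and `reset_predictor` swaps it for, e.g., a function that calls a remote endpoint:

```python
import numpy as np

# Minimal sketch of the pattern, not alibi's implementation.
class Explainer:
    def __init__(self, predictor):
        self.predictor = predictor

    def reset_predictor(self, predictor):
        self.predictor = predictor

def local_predictor(x):   # used while developing the explainer
    return np.argmax(x, axis=1)

def remote_predictor(x):  # stand-in for a call to a model-server endpoint
    return np.argmin(x, axis=1)

explainer = Explainer(local_predictor)
explainer.reset_predictor(remote_predictor)  # point at the deployed predictor
print(explainer.predictor(np.array([[0.9, 0.1]])))  # [1]
```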
Fixed
- Deprecated the `alibi.datasets.fetch_imagenet` function as the Imagenet API is no longer available.
- `IntegratedGradients` now works with subclassed TensorFlow models.
- Removed support for calculating attributions with respect to multiple layers in `IntegratedGradients`, as this was not working properly and is difficult to do in the general case.
Development
- Fixed an issue with `AnchorTabular` tests not being picked up due to a name change of test data fixtures.
v0.5.6 (2021-02-18)
Added
- Breaking change `IntegratedGradients` now supports models with multiple inputs. For each input of the model, attributions are calculated and returned in a list. The method is also extended to allow calculating attributions for multiple internal layers; if a list of layers is passed, a list of attributions is returned. See #321.
- `ALE` now supports selecting a subset of features to explain. This can be useful to reduce runtime if only some features are of interest, and also indirectly helps with categorical variables by allowing them to be excluded (as `ALE` does not support categorical variables).
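A sketch of the feature-subset idea with a toy regression function; the `ALE` calls in the comments are illustrative and the feature names are hypothetical:

```python
import numpy as np

# Toy regression function over three features; only two of them matter.
def predict(X: np.ndarray) -> np.ndarray:
    return X[:, 0] + 2 * X[:, 2]

# Illustrative sketch: restricting `features` skips the categorical
# column entirely.
# ale = ALE(predict, feature_names=['age', 'colour_code', 'income'])
# exp = ale.explain(X, features=[0, 2])
X = np.array([[1.0, 9.0, 2.0]])
print(predict(X))  # [5.]
```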
Fixed
AnchorTabular
coverage calculation was incorrect which was caused by incorrectly indexing a list, this is now resolved.ALE
was causing an error when a constant feature was present. This is now handled explicitly and the user has control over how to handle these features. See https://docs.seldon.io/projects/alibi/en/latest/api/alibi.explainers.ale.html#alibi.explainers.ale.ALE for more details.- Release of Spacy 3.0 broke the
AnchorText
functionality as the waylexeme_prob
tables are loaded was changed. This is now fixed by explicitly handling the loading depending on thespacy
version. - Fixed documentation to refer to the
Explanation
object instead of the olddict
object. - Added warning boxes to
CounterFactual
,CounterFactualProto
andCEM
docs to explain the necessity of clearing the TensorFlow graph if switching to a new model in the same session.
Development
- Introduced lower and upper bounds for library and development dependencies to limit the potential for breaking functionality upon new releases of dependencies.
- Added dependabot support to automatically monitor new releases of dependencies (both library and development).
- Switched from Travis CI to GitHub Actions as the former limited their free tier.
- Removed unused CI provider configs from the repo to reduce clutter.
- Simplified development dependencies to just two files: `requirements/dev.txt` and `requirements/docs.txt`.
- Split out the docs-building stage as a separate step on CI as it doesn't need to run on every Python version, thus saving time.
- Added `.readthedocs.yml` to control how user-facing docs are built directly from the repo.
- Removed testing-related entries from `setup.py` as the workflow is both unused and outdated.
- Avoid `shap==0.38.1` as a dependency as it assumes `IPython` is installed and breaks the installation.
v0.5.5 (2020-10-20)
Added
- New feature Distributed backend using `ray`. To use, install `ray` by running `pip install alibi[ray]`.
- New feature `KernelShap` distributed version using the new distributed backend.
- For anchor methods, added an explanation field `data['raw']['instances']` which is a batch-wise version of the existing `data['raw']['instance']`. This is in preparation for the eventual batch support for anchor methods.
- Pre-commit hook for `pyupgrade` via `nbqa` for formatting example notebooks using Python 3.6+ syntax.
Fixed
- Fixed a flaky test for distributed anchors (note: this is the old non-batchwise implementation) by dropping the precision threshold.
- Notebook string formatting upgraded to Python 3.6+ f-strings.
Changed
- Breaking change For anchor methods, the returned explanation field `data['raw']['prediction']` is now batch-wise, i.e. for `AnchorTabular` and `AnchorImage` it is a 1-dimensional `numpy` array whilst for `AnchorText` it is a list of strings. This is in preparation for the eventual batch support for anchor methods.
- Removed the dependency on `prettyprinter` and substituted a slightly modified standard-library version of `PrettyPrinter`. This is to prepare for a `conda` release, which requires all dependencies to also be published on `conda`.
v0.5.4 (2020-09-03)
Added
- `update_metadata` method for any `Explainer` object to enable easy book-keeping of algorithm parameters.
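A minimal sketch of the book-keeping pattern (not alibi's implementation; the `params` keyword below is hypothetical): algorithm parameters are recorded in the explainer's `meta` dictionary.

```python
# Minimal sketch of the pattern, not alibi's implementation.
class Explainer:
    def __init__(self):
        self.meta = {'name': 'Explainer', 'params': {}}

    def update_metadata(self, data_dict, params=True):
        # record algorithm parameters under meta['params'], or merge
        # top-level metadata when params=False
        if params:
            self.meta['params'].update(data_dict)
        else:
            self.meta.update(data_dict)

exp = Explainer()
exp.update_metadata({'threshold': 0.95})
print(exp.meta['params'])  # {'threshold': 0.95}
```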
Fixed
- Updated the `KernelShap` wrapper to work with the newest `shap>=0.36` library.
- Fixed some missing metadata parameters in `KernelShap` and `TreeShap`.