(WWW'21) ATON - an Outlier Interpretation / Outlier explanation method
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Experiments to explain entity resolution systems
CAVES-dataset accepted at SIGIR'22
Comprehensible Convolutional Neural Networks via Guided Concept Learning
A year-wise list of papers in the area of Explainable Artificial Intelligence
[TMLR] "Can You Win Everything with Lottery Ticket?" by Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, Zhangyang Wang
We introduce XBrainLab, an open-source, user-friendly software package for accelerated interpretation of neural patterns from EEG data, based on cutting-edge computational approaches.
ML Pipeline. Detailed documentation of the project is in the README. Click on Actions to see the script.
Code for ER-Test, accepted to the Findings of EMNLP 2022
Transform the way you work with boolean logic by composing expressions from discrete propositions. This lets you dynamically generate custom output, such as explanations of the causes behind a result.
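A minimal sketch of the idea behind that description: named propositions that combine with boolean operators and carry enough structure to explain their own result. The class and method names here are illustrative assumptions, not that repository's API.

```python
class Prop:
    """A named boolean proposition that can explain its value."""

    def __init__(self, name, value):
        self.name = name
        self.value = bool(value)

    def __and__(self, other):
        return Prop(f"({self.name} AND {other.name})", self.value and other.value)

    def __or__(self, other):
        return Prop(f"({self.name} OR {other.name})", self.value or other.value)

    def __invert__(self):
        return Prop(f"NOT {self.name}", not self.value)

    def explain(self):
        # Shows which composed expression produced the final result.
        return f"{self.name} -> {self.value}"


is_admin = Prop("is_admin", False)
is_owner = Prop("is_owner", True)
can_edit = is_admin | is_owner
print(can_edit.explain())  # (is_admin OR is_owner) -> True
```

Because each composite proposition keeps the names of its operands, the final object can report not just *that* access was granted but *which* condition granted it.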
Domestic robot example configured for the multi-level explainability framework
The mechanisms behind image classification using a pretrained CNN model in high-dimensional spaces 🏞️
tornado plots for model sensitivity analysis
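As a sketch of the data preparation behind a tornado plot: vary one parameter at a time between its low and high bound, record the swing in the model output relative to the baseline, and sort the bars by swing so the widest sits on top. The `profit` model and parameter ranges below are hypothetical examples, not taken from the repository.

```python
def tornado_data(model, baseline, ranges):
    """For each parameter, evaluate the model at its low and high bound
    (all other parameters at baseline) and return bars sorted by swing."""
    base = model(**baseline)
    bars = []
    for name, (lo, hi) in ranges.items():
        lo_out = model(**{**baseline, name: lo})
        hi_out = model(**{**baseline, name: hi})
        bars.append((name, lo_out - base, hi_out - base))
    # Widest swing first gives the plot its characteristic tornado shape.
    bars.sort(key=lambda b: abs(b[2] - b[1]), reverse=True)
    return bars


def profit(price, volume, cost):  # hypothetical model under analysis
    return (price - cost) * volume


baseline = dict(price=10.0, volume=100.0, cost=6.0)
ranges = {"price": (8.0, 12.0), "volume": (80.0, 120.0), "cost": (5.0, 7.0)}

for name, lo, hi in tornado_data(profit, baseline, ranges):
    print(f"{name:>6}: {lo:+.0f} .. {hi:+.0f}")
```

The sorted `(name, low_delta, high_delta)` tuples can be fed directly into a horizontal bar chart (e.g. `matplotlib`'s `barh`) to render the actual tornado plot.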
A framework for evaluating auto-interp pipelines, i.e., natural language explanations of neurons.
A project in an AI seminar
TS4NLE converts the explanation of an eXplainable AI (XAI) system into natural language utterances comprehensible to humans.