AttnED

AttnED uses an Encoder-Decoder with Attention Mechanism as the prediction model and offers three post-hoc XAI methods to choose from for explaining it - DiCE, LIME, and SHAP.
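As a rough illustration of the kind of predictor involved (not this repository's exact architecture), a minimal encoder-decoder with a Luong-style attention layer can be assembled in tf.keras as below; the layer sizes, input window, and forecast horizon are assumptions chosen for the sketch.

```python
# Illustrative sketch only: a small encoder-decoder with attention for
# sequence prediction. Shapes and layer sizes are assumptions, not the
# repository's actual configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_timesteps, n_features, horizon = 7, 4, 1   # assumed input window and forecast horizon

# Encoder: an LSTM that returns both its full hidden-state sequence and final state.
enc_in = layers.Input(shape=(n_timesteps, n_features))
enc_seq, enc_h, enc_c = layers.LSTM(64, return_sequences=True, return_state=True)(enc_in)

# Decoder: another LSTM initialised with the encoder's final state.
dec_in = layers.RepeatVector(horizon)(enc_h)
dec_seq = layers.LSTM(64, return_sequences=True)(dec_in, initial_state=[enc_h, enc_c])

# Attention: decoder states attend over all encoder states; the resulting
# context vectors are concatenated with the decoder output before the head.
context = layers.Attention()([dec_seq, enc_seq])
merged = layers.Concatenate()([dec_seq, context])
output = layers.TimeDistributed(layers.Dense(1))(merged)

model = Model(enc_in, output)
model.compile(optimizer="adam", loss="mse")
model.summary()
```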

Using AttnED with SHAP was accepted at the AI4H Workshop of the 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) as a first-author paper, titled "Predicting and Explaining Hearing Aid Usage Using Encoder-Decoder with Attention Mechanism and SHAP".
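Because SHAP (like LIME and DiCE) treats the predictor as a black box, it only needs access to a predict function and some background data. The self-contained sketch below shows that wiring with a small scikit-learn regressor standing in for the trained AttnED model; the data and the stand-in model are fabricated for illustration only.

```python
# Illustrative sketch: post-hoc, model-agnostic explanation with SHAP's
# KernelExplainer. A RandomForestRegressor on synthetic data stands in for
# the trained AttnED model; the explainer wiring is the same either way.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = X_train[:, 0] * 2.0 + X_train[:, 1] + rng.normal(scale=0.1, size=200)
X_test = rng.normal(size=(10, 5))

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

def predict_fn(x):
    # KernelExplainer passes a 2-D array and expects a 1-D array of outputs.
    return model.predict(np.asarray(x))

background = shap.sample(X_train, 50)            # background sample for the explainer
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(X_test)      # per-feature attributions

print(np.round(shap_values, 3))
```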

Demo

demo.py shows how Attn_ED is used with the open-source EvoSynth data [Christensen et al. 2019].

Jeppe H. Christensen, Niels Pontoppidan, Rikke Rossing, Marco Anisetti, Doris-Eva Bamiou, George Spanoudakis, Louisa Murdin, Thanos Bibas, Dimitris Kikidiks, Nikos Dimakopoulos, & Apostolos Ecomomou. (2019). Fully synthetic longitudinal real-world data from hearing aid wearers for public health policy modeling (1.0: 08-04-2019: 4p) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.2668210

Citing AttnED

@inproceedings{su2022predicting,
  title={Predicting and Explaining Hearing Aid Usage Using Encoder-Decoder with Attention Mechanism and SHAP},
  author={Su, Qiqi and Iliadou, Eleftheria},
  booktitle={2022 16th International Conference on Signal-Image Technology \& Internet-Based Systems (SITIS)},
  pages={308--315},
  year={2022},
  organization={IEEE}
}
