
XAI_predictivemonitoring_Consistency

This repo contains the experiments executed to demonstrate the applicability of a proposed approach for evaluating global XAI methods. The evaluation approach examines how well explanations agree with ground truth extracted from the underlying data. The approach is described in full in the paper "Why Should I Trust Your Explanation?: An Evaluation Approach of XAI Methods Applied to Predictive Process Monitoring Results". Our experiments and illustrations use Predictive Process Monitoring (PPM) event logs in the context of a PPM pipeline. Figure 1 provides an overall view of the approach workflow.

[Figure 1: Proposed approach]
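As a rough illustration of what "agreement with ground truth" can mean, the sketch below compares a global explanation-derived feature ranking against a data-derived relevance ranking. The choice of mutual information as ground truth, Spearman correlation as the agreement measure, and the model's built-in importances as a stand-in explainer are all illustrative assumptions, not the paper's exact metrics:

```python
# Hypothetical illustration (not the paper's exact metric): compare a global
# XAI feature ranking against a "ground truth" ranking derived from the data.
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# "Ground truth" relevance extracted from the underlying data.
ground_truth_scores = mutual_info_classif(X, y, random_state=0)

# Stand-in for a global XAI method: the model's own feature importances.
model = GradientBoostingClassifier(random_state=0).fit(X, y)
explanation_scores = model.feature_importances_

# Agreement between the two rankings (one possible consistency measure).
rho, p_value = spearmanr(ground_truth_scores, explanation_scores)
print(f"Rank agreement (Spearman rho): {rho:.3f} (p={p_value:.3f})")
```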

To use this repo:

Run "Main.py". This module calls others to train a model for each preprocessed event log, then calls explanation module to explain predictions. It also calls necessary functions to compute the different metrics needed to evaluate and compare different XAI methods.

"experiments.py" contains some experiments executed to validate our ratios computation based on scores rather than the raw number of features at the intersection.

Note that in this repo we retrieve pre-trained models and encoded datasets which were trained and saved as part of the experiments presented in https://github.com/GhadaElkhawaga/PPM_XAI_Comparison. The experiments in this paper are therefore built on top of results from our previous work.
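Reusing those artifacts amounts to loading the serialized model and encoded dataset and predicting again, roughly as sketched below; the file paths, formats, and the "label" column name are assumptions, so check the actual files produced by the earlier repo:

```python
# Hedged sketch of reloading saved artifacts from the previous experiments.
import pickle

import pandas as pd

# Hypothetical paths; the earlier repo defines the real names and formats.
with open("models/model_log_a.pkl", "rb") as f:
    model = pickle.load(f)

encoded = pd.read_csv("datasets/encoded_log_a.csv")
predictions = model.predict(encoded.drop(columns=["label"]))  # assumed target column
```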
