
[Q] Why does COMTE sometimes give a cf_label equal to the predicted class? #48

Closed
JanaSw opened this issue Sep 22, 2023 · 6 comments

Comments

JanaSw commented Sep 22, 2023

Hello,

I am doing multivariate time series classification with 2 features.
I am trying to use the COMTE explain function.

However, I am facing issues where the label returned by the explain function is the same as the predicted one (even though I am passing orig_class=np.argmax(prob_item) and target_class=np.argmin(prob_item)).
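For reference, the call pattern looks roughly like this (a minimal sketch with hypothetical names: comte, item, and prob_item stand for the explainer, the instance, and its probability vector, and the keyword names follow this question rather than the exact installed API):

```python
import numpy as np

def explain_with_least_likely_target(comte, item, prob_item):
    """Ask the COMTE explainer for a counterfactual of the least likely class
    and report whether the returned label actually changed."""
    orig_class = int(np.argmax(prob_item))    # currently predicted class
    target_class = int(np.argmin(prob_item))  # least likely class, used as the desired CF class
    # Keyword names are taken from this question and may differ in the installed version.
    cf_item, cf_label = comte.explain(item, orig_class=orig_class, target_class=target_class)
    if cf_label == orig_class:
        print("counterfactual label equals the original prediction (the issue reported here)")
    return cf_item, cf_label
```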

I tried debugging the explain code, and I found that the deduced "other" instance has a target equal to orig_class and different from the target I passed:

[screenshot of the debugger output omitted]

Is this normal? Is the error on my side, or do you think it might be a bug?
Can you please explain if I misunderstood something?

Thank you

JHoelli (Contributor) commented Sep 22, 2023

Hi @JanaSw,

I just had a look at the code and it seems fine for my test models and data. Did you check the accuracy and precision/recall of your classifier? Sometimes no counterfactual can be found because the classifier lacks the necessary classification capabilities.

If possible, feel free to share a minimal sample so that I can replicate the issue.

JanaSw (Author) commented Sep 22, 2023

Hello,

Yes, my model's accuracy is 87%.

Actually, I missed an important point in my first question :D which is:
I am doing 54-fold leave-one-out cross-validation, and this happens on only 3 folds (the CF label is the same as the item's label).

However, these three folds had an accuracy of 100%. What do you think the problem is?

Also I just want to clarify something please:

  • I have 2 features, but for the majority of the folds (more than 40) I get a counterfactual for only one of the features (for the other 10-15 I get 2 plots). Your paper says that COMTE only plots the "changed features"; can you please elaborate on this?

Thanks for your answers

JHoelli (Contributor) commented Sep 25, 2023

Hi,

Maybe first an explanation of how COMTE works. COMTE takes the provided dataset and your input instance. It generates a counterfactual by replacing one or more of the input instance's feature series with the corresponding series from an instance in the provided dataset. The algorithm approximates a solution that keeps the counterfactual close to the original instance while reaching the desired predicted class.
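A minimal sketch of that core idea (this is not the actual COMTE search, just a brute-force illustration of the channel-swap step; model, x, X_background, and target_class are hypothetical names):

```python
import numpy as np

def swap_channel(x, distractor, channel):
    """Return a copy of x with one feature series replaced by the distractor's series."""
    cf = x.copy()
    cf[channel, :] = distractor[channel, :]
    return cf

def naive_single_swap_counterfactual(model, x, X_background, target_class):
    """Try swapping each single channel with each background instance and keep
    the first swap that the classifier labels as target_class."""
    for distractor in X_background:
        for channel in range(x.shape[0]):
            cf = swap_channel(x, distractor, channel)
            pred = int(np.argmax(model.predict(cf[np.newaxis, ...]), axis=-1)[0])
            if pred == target_class:
                return cf, pred
    return None, None  # no single-channel swap reaches the target class
```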

Now to your question: "I have 2 features, but for the majority of the folds (more than 40) I get a counterfactual for only one of the features (for the other 10-15 I get 2 plots). Your paper says that COMTE only plots the 'changed features'; can you please elaborate on this?"

The plot function only plots the feature rows where a change takes place. All other (not plotted) rows are unchanged and therefore identical to the original values. The idea is to keep the plot from becoming overwhelming if you have a large multivariate time series (e.g., 91 features): plotting all 91 features when only 2 need to be changed to generate a counterfactual would bury the relevant information.
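To illustrate the selection (this is not the library's plotting code, just a sketch of the same idea; x and cf are hypothetical arrays of shape (features, timesteps)):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_changed_rows(x, cf, atol=1e-8):
    """Plot only the feature rows that differ between the original and the counterfactual."""
    changed = np.where(~np.all(np.isclose(x, cf, atol=atol), axis=1))[0]
    fig, axes = plt.subplots(len(changed), 1, sharex=True, squeeze=False)
    for ax, row in zip(axes[:, 0], changed):
        ax.plot(x[row], label="original")
        ax.plot(cf[row], label="counterfactual")
        ax.set_title(f"feature {row}")
        ax.legend()
    plt.show()
```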

Regarding the issue of the original instance and the counterfactual having the same class: if this happens, two possible reasons come to my mind (see the sketch after this list):

  • The ability to generate a counterfactual strongly relies on the background dataset given when the algorithm is initialized. If you provide the original labels there instead of the predicted labels, your background dataset can be biased, e.g. your original label says 1 while your classifier says 0.
  • It is also important that your background dataset contains your desired CF class.
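A sketch of how both points could be sanity-checked (model, X_background, y_background, and target_class are hypothetical names for your classifier, background data, original labels, and desired CF class):

```python
import numpy as np

def check_background(model, X_background, y_background, target_class):
    """Relabel the background set with the classifier and verify the desired CF class is present."""
    pred = np.argmax(model.predict(X_background), axis=-1)
    mismatch = np.mean(pred != np.asarray(y_background))
    print(f"background labels disagreeing with the classifier: {mismatch:.1%}")
    counts = dict(zip(*np.unique(pred, return_counts=True)))
    print(f"predicted class counts in the background set: {counts}")
    if counts.get(target_class, 0) == 0:
        print(f"warning: no background instance is predicted as class {target_class}, "
              "so COMTE has nothing to draw a distractor from")
```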

JanaSw (Author) commented Sep 25, 2023

Hello,

Thanks a lot for the explanation and your answer.

Regarding the last 2 points you mentioned:

  • I am sure that the original labels are the same as the predicted labels.

  • If that is the case, it means that no CF was found. But wouldn't it be better if COMTE returned "No counterfactual was found for this instance" instead of returning an explanation with the same label as the original?

Thank you again for your responsiveness.

JHoelli (Contributor) commented Sep 26, 2023

Hi,

Do you also know whether all labels (especially the desired counterfactual label) are represented in the dataset? Do you get the warning "Due to lack of true postitives for class {c} no kd-tree could be build."?

You could also check whether increasing the parameter number_distractors helps.

Yes, we thought about adding a warning. Returning a string is unfortunately not an option: counterfactuals are often evaluated with respect to validity (whether the CF is an actual CF, i.e. CF_Label != original_Label). If you run the explainer for multiple instances and append the explanations, a string in between is inconvenient from an evaluation perspective.
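For illustration, a sketch of that kind of validity evaluation over a batch of explanations (cf_labels and orig_labels are hypothetical lists of returned and originally predicted labels):

```python
import numpy as np

def validity(cf_labels, orig_labels):
    """Fraction of counterfactuals whose label actually differs from the original prediction."""
    return float(np.mean(np.asarray(cf_labels) != np.asarray(orig_labels)))

# Example: four explanations, one of which failed to flip the predicted class.
print(validity([1, 0, 1, 1], [0, 0, 0, 0]))  # 0.75
```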

JHoelli (Contributor) commented Apr 3, 2024

Closed due to inactivity.

JHoelli closed this as completed on Apr 3, 2024.