From 853bda951742e7ad62f3ce343c39613962d54ce5 Mon Sep 17 00:00:00 2001
From: "Olivier @ CREATIS"
Date: Tue, 29 Oct 2024 11:01:00 +0100
Subject: [PATCH] Update collections/_posts/2024-10-20-tabular-explainability.md

Co-authored-by: Nathan Painchaud <23144457+nathanpainchaud@users.noreply.github.com>
---
 collections/_posts/2024-10-20-tabular-explainability.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/collections/_posts/2024-10-20-tabular-explainability.md b/collections/_posts/2024-10-20-tabular-explainability.md
index 26951878..cec75029 100755
--- a/collections/_posts/2024-10-20-tabular-explainability.md
+++ b/collections/_posts/2024-10-20-tabular-explainability.md
@@ -28,7 +28,7 @@ pdf: "https://arxiv.org/pdf/2302.14278"
 * The field of explainable Artificial Intelligence is named XAI and has received increasing interest over the past decade
 * XAI algorithms for DL can be organized into three major groups: perturbation-based, gradient-based, and, more recently, attention-based
-* Transformers posses a built-in capability to provide explanations for its results via the analysis of attention matrices
+* Transformers possess a built-in capability to provide explanations for their results via the analysis of attention matrices