Natural Language Generation of Explanations of Fuzzy Inference Decisions
Abstract
As artificial intelligence and fuzzy systems move to the center of advanced technologies such as autonomous vehicles and medical decision support systems, the question of human trust in their decisions is becoming pressing. In this article, we tackle the problem of explaining a fuzzy inference system's decision in its entirety: from the design of an algorithm that produces a textual explanation to its evaluation. We define a function that associates, to any activated fuzzy rule, the structure responsible for its activation degree. To assess our method, we defined a protocol for evaluating AI-generated explanations and ran an experiment: explanations obtained from the classification of pasta. Despite some limitations, the results show good transparency of the reasoning, consistency, and good overall effectiveness of the generated explanations.
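The abstract's idea of associating each activated rule with the structure responsible for its activation degree can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it assumes a min t-norm conjunction and triangular membership functions, under which the premise attaining the minimum membership is the one limiting (and thus "responsible for") the rule's activation degree. All names (`triangular`, `explain_rule`, the pasta-themed rule) are hypothetical.

```python
# Sketch, assuming a min t-norm: a rule's activation degree is the minimum
# of its premise membership degrees, so the premise attaining that minimum
# can be reported as the one responsible for the activation degree.

def triangular(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def explain_rule(premises, sample):
    """premises: {variable: membership_fn}; sample: {variable: crisp value}.
    Returns (activation_degree, variable of the limiting premise)."""
    degrees = {var: fn(sample[var]) for var, fn in premises.items()}
    activation = min(degrees.values())            # min t-norm conjunction
    responsible = min(degrees, key=degrees.get)   # premise limiting the rule
    return activation, responsible

# Illustrative rule: "IF width IS narrow AND length IS long THEN spaghetti"
rule = {
    "width":  triangular(0.0, 1.0, 3.0),    # "narrow"
    "length": triangular(10.0, 25.0, 40.0), # "long"
}
degree, cause = explain_rule(rule, {"width": 2.0, "length": 20.0})
print(degree, cause)  # width has the lower membership (0.5), so it is "responsible"
```

A textual explanation could then be built around `cause`, e.g. "this rule fired at degree 0.5, mainly constrained by the width being only partially narrow".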
Domains
Artificial Intelligence [cs.AI]
Origin: Files produced by the author(s)