Towards a holistic approach for AI trustworthiness assessment based upon aids for multi-criteria aggregation

Juliette Mattioli, Henri Sohier, Agnès Delaborde, Gabriel Pedroza, Kahina Amokrane-Ferka, Afef Awadid, Zakaria Chihani, Souhaiel Khalfaoui

Research output: Contribution to journal › Conference article › Peer-reviewed

2 Citations (Scopus)

Abstract

The assessment of the trustworthiness of AI-based systems is a challenging process given the complexity of the subject, which involves both qualitative and quantifiable concepts, a wide heterogeneity and granularity of attributes, and in some cases even the non-commensurability of the latter. Evaluating the trustworthiness of AI-enabled systems is particularly decisive in safety-critical domains, where AIs are expected to operate mostly autonomously. To overcome these issues, the Confiance.ai program [1] proposes an innovative solution based upon multi-criteria decision analysis. The approach encompasses several phases: structuring trustworthiness as a set of well-defined attributes, exploring those attributes to determine related performance metrics (or indicators), selecting assessment methods or control points, and structuring a multi-criteria aggregation method to estimate a global evaluation of trust. The approach is illustrated by applying some performance metrics in a data-driven AI context, whereas the focus on aggregation methods is left as a near-term perspective of the Confiance.ai milestones.
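The aggregation phase mentioned in the abstract could, in its simplest form, be a weighted combination of normalized per-attribute scores. The sketch below is purely illustrative: the attribute names, scores, and weights are assumptions for the example, not outputs or methods of the Confiance.ai program, whose actual aggregation method is left to future milestones.

```python
# Minimal sketch of a multi-criteria aggregation step: a weighted average
# of per-attribute trustworthiness scores in [0, 1]. Attribute names and
# weights are illustrative assumptions, not taken from Confiance.ai.

def aggregate_trust(scores: dict, weights: dict) -> float:
    """Return the weighted average of attribute scores (all in [0, 1])."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same attributes")
    total = sum(weights.values())
    return sum(scores[a] * weights[a] for a in scores) / total

# Example with three hypothetical attributes of unequal importance.
scores = {"robustness": 0.8, "explainability": 0.6, "data_quality": 0.9}
weights = {"robustness": 0.5, "explainability": 0.2, "data_quality": 0.3}
print(round(aggregate_trust(scores, weights), 3))  # → 0.79
```

A simple weighted sum assumes the attributes are commensurable, which, as the abstract notes, does not always hold; richer aggregation operators are precisely what motivates the multi-criteria decision analysis framing.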

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3381
Publication status: Published - 1 Jan. 2023
Externally published: Yes
Event: 2023 Workshop on Artificial Intelligence Safety, SafeAI 2023 - Washington, United States
Duration: 13 Feb. 2023 - 14 Feb. 2023
