TY - JOUR
T1 - Towards a holistic approach for AI trustworthiness assessment based upon aids for multi-criteria aggregation
AU - Mattioli, Juliette
AU - Sohier, Henri
AU - Delaborde, Agnès
AU - Pedroza, Gabriel
AU - Amokrane-Ferka, Kahina
AU - Awadid, Afef
AU - Chihani, Zakaria
AU - Khalfaoui, Souhaiel
N1 - Publisher Copyright:
© 2023 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
PY - 2023/1/1
Y1 - 2023/1/1
N2 - The assessment of the trustworthiness of AI-based systems is a challenging process given the complexity of the subject, which involves qualitative and quantifiable concepts, a wide heterogeneity and granularity of attributes, and in some cases even the non-commensurability of the latter. Evaluating the trustworthiness of AI-enabled systems is particularly decisive in safety-critical domains, where AIs are expected to operate mostly autonomously. To overcome these issues, the Confiance.ai program [1] proposes an innovative solution based upon multi-criteria decision analysis. The approach encompasses several phases: structuring trustworthiness as a set of well-defined attributes, exploring those attributes to determine related performance metrics (or indicators), selecting assessment methods or control points, and structuring a multi-criteria aggregation method to estimate a global evaluation of trust. The approach is illustrated by applying some performance metrics in a data-driven AI context, whereas the focus on aggregation methods is left as a near-term perspective of the Confiance.ai milestones.
AB - The assessment of the trustworthiness of AI-based systems is a challenging process given the complexity of the subject, which involves qualitative and quantifiable concepts, a wide heterogeneity and granularity of attributes, and in some cases even the non-commensurability of the latter. Evaluating the trustworthiness of AI-enabled systems is particularly decisive in safety-critical domains, where AIs are expected to operate mostly autonomously. To overcome these issues, the Confiance.ai program [1] proposes an innovative solution based upon multi-criteria decision analysis. The approach encompasses several phases: structuring trustworthiness as a set of well-defined attributes, exploring those attributes to determine related performance metrics (or indicators), selecting assessment methods or control points, and structuring a multi-criteria aggregation method to estimate a global evaluation of trust. The approach is illustrated by applying some performance metrics in a data-driven AI context, whereas the focus on aggregation methods is left as a near-term perspective of the Confiance.ai milestones.
KW - Data Quality
KW - Explainability
KW - Multi-Criteria Decision Aid
KW - Robustness
KW - Trustworthiness Assessment
KW - Trustworthiness Attributes
KW - Trustworthiness Metrics and Key Performance Indicators (KPIs)
UR - http://www.scopus.com/inward/record.url?scp=85159270448&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85159270448
SN - 1613-0073
VL - 3381
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2023 Workshop on Artificial Intelligence Safety, SafeAI 2023
Y2 - 13 February 2023 through 14 February 2023
ER -