When a CBR in Hand is Better than Twins in the Bush
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0003-3802-4721
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-7305-7169
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-1212-7637
Mälardalen University, School of Innovation, Design and Engineering. ORCID iD: 0000-0003-0730-4405
2022 (English). In: CEUR Workshop Proceedings, vol. 3389 / [ed] Reuss P.; Schonborn J., CEUR-WS, 2022, p. 141-152. Conference paper, published paper (refereed).
Abstract [en]

AI methods referred to as interpretable are often discredited as inaccurate by proponents of a trade-off between interpretability and accuracy. In many problem contexts, however, this trade-off does not hold. This paper discusses a regression problem context, predicting flight take-off delays, in which the most accurate regression model was trained via the XGBoost implementation of gradient-boosted decision trees. By building an XGB-CBR Twin, in which the XGBoost feature importances are converted into global weights of the CBR model, the resultant CBR model alone provides the most accurate local predictions, retains the global importances to provide a global explanation of the model, and offers the most interpretable representation for local explanations. This resultant CBR model becomes a benchmark of accuracy and interpretability for this problem context, and it is therefore used to evaluate the two additive feature attribution methods SHAP and LIME in explaining the XGBoost regression model. The results with respect to local accuracy and feature attribution point to potentially valuable future work. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
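
The record contains no code; the following is a minimal, hypothetical sketch of the workflow the abstract describes, not the authors' implementation. It trains an XGBoost regressor, normalises its feature importances into global weights for a weighted nearest-neighbour CBR retrieval (the "twin"), and then applies SHAP and LIME to the XGBoost model. The synthetic data, the weighted Euclidean similarity, the choice of k = 5, and all names are assumptions.

```python
# Hypothetical sketch of an "XGB-CBR Twin" plus SHAP/LIME explanation step.
# Synthetic data and the weighted-similarity form are assumptions, not the paper's setup.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # stand-in for flight features
y = X @ np.array([3.0, 1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Train the XGBoost regression "twin".
model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

# 2) Convert its feature importances into normalised global weights.
weights = model.feature_importances_
weights = weights / weights.sum()

# 3) CBR retrieval: weighted distance to the case base; the prediction is the
#    mean outcome of the k most similar cases.
def cbr_predict(query, case_X, case_y, w, k=5):
    d = np.sqrt(((case_X - query) ** 2 * w).sum(axis=1))   # weighted Euclidean distance
    nearest = np.argsort(d)[:k]
    return case_y[nearest].mean()

preds = np.array([cbr_predict(q, X_tr, y_tr, weights) for q in X_te])
print("CBR MAE:", np.abs(preds - y_te).mean())
print("XGB MAE:", np.abs(model.predict(X_te) - y_te).mean())

# 4) Additive feature attributions for the XGBoost model, for comparison.
shap_values = shap.TreeExplainer(model).shap_values(X_te[:1])        # local SHAP attribution
lime_exp = LimeTabularExplainer(X_tr, mode="regression").explain_instance(
    X_te[0], model.predict, num_features=5)                          # local LIME attribution
print(shap_values[0])
print(lime_exp.as_list())
```

In the paper's setting, the CBR model's retrieved cases and global weights serve as the accuracy and interpretability benchmark against which the SHAP and LIME attributions are evaluated; that comparison step is not reproduced in this sketch.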

Place, publisher, year, edition, pages
CEUR-WS, 2022. p. 141-152
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords [en]
Accuracy, CBR, Interpretability, LIME, SHAP, XGBoost
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-62705
Scopus ID: 2-s2.0-85159778083
OAI: oai:DiVA.org:mdh-62705
DiVA, id: diva2:1760808
Conference
30th International Conference on Case-Based Reasoning Workshop, ICCBR-WS 2022, Virtual, Online
Available from: 2023-05-31. Created: 2023-05-31. Last updated: 2024-12-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Ahmed, Mobyen Uddin; Barua, Shaibal; Begum, Shahina; Islam, Mir Riyanul
