A Novel Mutual Information based Feature Set for Drivers’ Mental Workload Evaluation Using Machine Learning
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0003-0730-4405
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-7305-7169
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0003-3802-4721
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-1212-7637
2020 (English). In: Brain Sciences, E-ISSN 2076-3425, Vol. 10, no. 8, p. 1-23, article id 551. Article in journal (Refereed). Published
Abstract [en]

Analysis of physiological signals, more specifically electroencephalography, is considered a very promising technique for obtaining objective measures for mental workload evaluation; however, it requires complex apparatus to record and thus has poor usability for monitoring in-vehicle drivers' mental workload. This study proposes a methodology for constructing a novel mutual information-based feature set from the fusion of electroencephalography and vehicular signals acquired through a real driving experiment and deployed in evaluating drivers' mental workload. Mutual information between electroencephalography and vehicular signals was used as the prime factor for the fusion of features. In order to assess the reliability of the developed feature set, mental workload score prediction, classification and event classification tasks were performed using different machine learning models. Moreover, features extracted from electroencephalography were used to compare the performance. In the prediction of mental workload score, expert-defined scores were used as the target values. For classification tasks, true labels were set from contextual information of the experiment. An extensive evaluation of every prediction task was carried out using different validation methods. In predicting mental workload score from the proposed feature set, the lowest mean absolute error was 0.09, and for classifying mental workload the highest accuracy was 94%. According to the outcome of the study, the novel mutual information based features developed through the proposed approach can be employed to classify and monitor in-vehicle drivers' mental workload.
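The abstract describes the MI-based fusion only at a high level. As a rough sketch of the idea, the Python example below uses scikit-learn's mutual_info_regression to score EEG features by their mutual information with vehicular signals before fusing the two sources; the data shapes, the averaging of MI scores and the number of retained features are illustrative assumptions, not the paper's actual procedure.

# Illustrative sketch only, not the paper's exact procedure: one way MI between
# EEG features and vehicular signals could drive feature fusion.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: one row per time window of a driving session.
eeg_features = rng.normal(size=(500, 20))      # e.g. band-power features per channel
vehicle_features = rng.normal(size=(500, 6))   # e.g. speed, steering angle, lane position
workload_score = rng.uniform(0, 1, size=500)   # expert-defined mental workload score

# MI between every EEG feature and every vehicular signal.
mi_matrix = np.array([
    mutual_info_regression(eeg_features, vehicle_features[:, j], random_state=0)
    for j in range(vehicle_features.shape[1])
])  # shape: (n_vehicle_signals, n_eeg_features)

# Keep the EEG features sharing the most information with the vehicle data
# (here the top 10 by average MI) and fuse them with the vehicular signals.
relevance = mi_matrix.mean(axis=0)
fused = np.hstack([eeg_features[:, np.argsort(relevance)[-10:]], vehicle_features])

# Evaluate a regressor on the fused set with mean absolute error, as in the paper.
mae = -cross_val_score(RandomForestRegressor(random_state=0), fused, workload_score,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"Cross-validated MAE on the fused feature set: {mae:.3f}")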

Place, publisher, year, edition, pages
Switzerland: MDPI AG, 2020. Vol. 10, no. 8, p. 1-23, article id 551
Keywords [en]
electroencephalography, feature extraction, machine learning, mental workload, mutual information, vehicular signal
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-49988
DOI: 10.3390/brainsci10080551
ISI: 000564149000001
PubMedID: 32823582
Scopus ID: 2-s2.0-85089564153
OAI: oai:DiVA.org:mdh-49988
DiVA, id: diva2:1466107
Projects
BRAINSAFEDRIVE: A Technology to detect Mental States During Drive for improving the Safety of the road
Available from: 2020-09-10 Created: 2020-09-10 Last updated: 2024-07-04 Bibliographically approved
In thesis
1. Explainable Artificial Intelligence for Enhancing Transparency in Decision Support Systems
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Artificial Intelligence (AI) is recognized as an advanced technology that assists in decision-making processes with high accuracy and precision. However, many AI models are generally appraised as black boxes due to their reliance on complex inference mechanisms. The intricacies of how and why these AI models reach a decision are often not comprehensible to human users, resulting in concerns about the acceptability of their decisions. Previous studies have shown that the lack of an associated explanation in a human-understandable form makes the decisions unacceptable to end-users. Here, the research domain of Explainable AI (XAI) provides a wide range of methods with the common theme of investigating how AI models reach a decision and how to explain it. These explanation methods aim to enhance transparency in Decision Support Systems (DSS), which is particularly crucial in safety-critical domains like Road Safety (RS) and Air Traffic Flow Management (ATFM). Despite ongoing developments, DSSs are still in an evolving phase for safety-critical applications. Improved transparency, facilitated by XAI, emerges as a key enabler for making these systems operationally viable in real-world applications, addressing acceptability and trust issues. Besides, certification authorities are less likely to approve such systems for general use under the current mandate of the Right to Explanation from the European Commission and similar directives from organisations across the world. This urge to permeate the prevailing systems with explanations paves the way for research studies on XAI centred on DSSs.

To this end, this thesis work primarily developed explainable models for the application domains of RS and ATFM. In particular, explainable models are developed for assessing drivers' in-vehicle mental workload and driving behaviour through classification and regression tasks. In addition, a novel method is proposed for generating a hybrid feature set from vehicular and electroencephalography (EEG) signals using mutual information (MI). The use of this feature set is successfully demonstrated to reduce the effort required for the complex computation of EEG feature extraction. The concept of MI was further utilized in generating human-understandable explanations of mental workload classification. For the domain of ATFM, an explainable model for flight take-off time delay prediction from historical flight data is developed and presented in this thesis. The insights gained through the development and evaluation of the explainable applications for the two domains underscore the need for further research on the advancement of XAI methods.
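As a purely hypothetical illustration of how MI scores could be turned into a human-understandable explanation (the thesis's own method is not detailed in this record), the short Python snippet below ranks features by their mutual information with toy workload labels and phrases the result in plain language; the feature names and the wording are assumptions.

# Hypothetical illustration only: ranking features by mutual information with
# toy high/low workload labels and phrasing the result as a short explanation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
feature_names = ["steering_entropy", "speed_variance", "theta_power", "alpha_power"]
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # toy workload labels

mi = mutual_info_classif(X, y, random_state=1)
ranked = sorted(zip(feature_names, mi), key=lambda pair: pair[1], reverse=True)

top = ", ".join(f"{name} (MI = {score:.2f})" for name, score in ranked[:2])
print(f"The predicted workload level is driven mainly by: {top}.")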

In this doctoral research, the explainable applications for the DSSs are developed with additive feature attribution (AFA) methods, a class of XAI methods that are popular in current XAI research. Nevertheless, several sources in the literature assert that feature attribution methods often yield inconsistent results that need plausible evaluation. However, the existing body of literature on evaluation techniques is still immature, offering numerous suggested approaches without a standardized consensus on their optimal application in various scenarios. To address this issue, comprehensive evaluation criteria are also developed for AFA methods, as the literature on XAI suggests. The proposed evaluation process considers the underlying characteristics of the data and utilizes an additive form of Case-based Reasoning, namely AddCBR. AddCBR is proposed in this thesis and is demonstrated to complement the evaluation process as the baseline against which to compare the feature attributions produced by the AFA methods. Apart from generating explanations with feature attribution, this thesis work also proposes iXGB (interpretable XGBoost). iXGB generates decision rules and counterfactuals to support the output of an XGBoost model, thus improving its interpretability. The functional evaluation shows that iXGB has the potential to be used for interpreting arbitrary tree-ensemble methods.
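For reference, the Python sketch below shows the additive property that AFA methods share, using SHAP's TreeExplainer on a toy XGBoost regressor; it does not reproduce AddCBR or iXGB, and the data, feature count and model settings are assumptions made only to illustrate what a per-feature attribution looks like.

# Reference sketch of an additive feature attribution (AFA) explanation for an
# XGBoost model, using SHAP's TreeExplainer as a stand-in for the thesis methods.
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=200)

model = xgboost.XGBRegressor(n_estimators=100, random_state=0).fit(X, y)

# Additivity: base value + per-feature attributions approximates the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

i = 0
print("model prediction:         ", model.predict(X[i:i + 1])[0])
print("base value + attributions:", explainer.expected_value + shap_values[i].sum())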

In essence, this doctoral thesis initially contributes to the development of ideally evaluated explainable models tailored for two distinct safety-critical domains. The aim is to augment transparency within the corresponding DSSs. Additionally, the thesis introduces novel methods for generating more comprehensible explanations in different forms, surpassing existing approaches. It also showcases a robust evaluation approach for XAI methods.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2024
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 397
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-64909 (URN), 978-91-7485-626-2 (ISBN)
Public defence
2024-01-30, Gamma, Mälardalens universitet, Västerås, 13:15 (English)
Available from: 2023-12-04 Created: 2023-12-01 Last updated: 2024-01-09 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text, PubMed, Scopus

Authority records

Islam, Mir Riyanul; Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina
