Publications (10 of 81)
Rahman, H., Ahmed, M. U. & Begum, S. (2020). Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration. IEEE Transactions on Biomedical Engineering, 67(1), 88-98, Article ID 8715455.
Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration
2020 (English). In: IEEE Transactions on Biomedical Engineering, ISSN 0018-9294, E-ISSN 1558-2531, Vol. 67, no. 1, p. 88-98, article id 8715455. Article in journal (Refereed). Published.
Abstract [en]

Objective: In this paper, four physiological parameters, i.e., heart rate (HR), inter-beat interval (IBI), heart rate variability (HRV), and oxygen saturation (SpO2), are extracted from facial video recordings. Methods: Facial videos were recorded for 10 min each in 30 test subjects while driving a simulator. Four regions of interest (ROIs) are automatically selected in each facial image frame based on 66 facial landmarks. Red-green-blue color signals are extracted from the ROIs, and the four physiological parameters are derived from the color signals. For the evaluation, physiological parameters are also recorded simultaneously using a traditional sensor system, 'cStress,' attached to the hands and fingers of the test subjects. Results: The Bland-Altman plots show 95% agreement between the camera system and 'cStress,' with the highest correlation coefficient R = 0.96 for both HR and SpO2. The quality index is estimated for IBI considering a 100 ms R-peak error; the accumulated percentage achieved is 97.5%. HRV features in both time and frequency domains are compared, and the highest correlation coefficient achieved is 0.93. A one-way analysis of variance test shows no statistically significant differences between the measurements by the camera and the reference sensors. Conclusion: These results demonstrate a high degree of accuracy in extracting HR, IBI, HRV, and SpO2 from facial image sequences. Significance: The proposed non-contact approach could broaden the dimensionality of physiological parameter extraction using cameras, and could be applied to driver monitoring under realistic conditions, i.e., illumination, motion, movement, and vibration.
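As a rough illustration of the kind of processing this abstract describes, the sketch below recovers heart rate from the dominant frequency of a mean-colour ROI trace. It is a minimal stand-in on synthetic data, not the authors' method; the frequency band and signal model are assumptions.

```python
import numpy as np

def estimate_heart_rate(green_signal, fps):
    """Estimate heart rate (bpm) from a mean-green-channel ROI trace
    by locating the dominant frequency in the 0.7-4 Hz band."""
    sig = green_signal - np.mean(green_signal)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 bpm, plausible HR range
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 30 fps trace pulsing at 1.2 Hz (72 bpm) plus small noise
rng = np.random.default_rng(0)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(round(estimate_heart_rate(trace, fps), 1))  # ≈ 72 bpm
```

A real pipeline would first derive `trace` by averaging an ROI's green channel over each frame, and would need to suppress illumination and motion artefacts before the spectral step.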

Place, publisher, year, edition, pages
IEEE Computer Society, 2020
Keywords
Ambient illumination, driver monitoring, motion, movement, non-contact, physiological parameters, vibration, Cameras, Extraction, Heart, Video recording, Physiological models
National Category
Control Engineering
Identifiers
urn:nbn:se:mdh:diva-46689 (URN)
10.1109/TBME.2019.2908349 (DOI)
000505526300009 ()
2-s2.0-85077175941 (Scopus ID)
Available from: 2020-01-09. Created: 2020-01-09. Last updated: 2020-01-23. Bibliographically approved.
Rahman, H., Ahmed, M. U., Barua, S. & Begum, S. (2020). Non-contact-based driver's cognitive load classification using physiological and vehicular parameters. Biomedical Signal Processing and Control, 55, Article ID 101634.
Non-contact-based driver's cognitive load classification using physiological and vehicular parameters
2020 (English). In: Biomedical Signal Processing and Control, ISSN 1746-8094, E-ISSN 1746-8108, Vol. 55, article id 101634. Article in journal (Refereed). Published.
Abstract [en]

Classification of cognitive load for vehicular drivers is a complex task due to the underlying challenges of the dynamic driving environment. Many previous works have shown that physiological sensor signals or vehicular data can be a reliable source for quantifying cognitive load. However, in driving situations, one of the biggest challenges is to use a sensor source that provides accurate information without interrupting the driving task. In this paper, instead of traditional wire-based sensors, a non-contact camera and vehicle data are used, which require no physical contact with the driver and do not interrupt driving. Four machine learning algorithms, logistic regression (LR), support vector machine (SVM), linear discriminant analysis (LDA) and neural networks (NN), are investigated to classify cognitive load using data collected in a driving simulator study. Physiological parameters are extracted from facial video images, and vehicular parameters are collected from the controller area network (CAN). The data collection was performed in close collaboration with industrial partners in two separate studies: study-1 was designed with a 1-back task, and study-2 with both 1-back and 2-back tasks. The goal of the experiment is to investigate how accurately the machine learning algorithms can classify drivers' cognitive load based on the extracted features in complex, dynamic driving environments. According to the results, for the physiological parameters extracted from the facial videos, the LR model with logistic function outperforms the other three classification methods: in study-1 the achieved average accuracy for the LR classifier is 94%, and in study-2 the average accuracy is 82%. In addition, the classification accuracy for the camera-based physiological parameters was compared with reference wire-sensor signals. The classification accuracies of the sensor and the camera are very similar; however, better accuracy is achieved with the camera data, which contain fewer artefacts than the sensor data.
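The four classifiers named in this abstract can be compared in a few lines. The sketch below uses scikit-learn on a synthetic stand-in for the physiological/vehicular features; the dataset, hyperparameters, and 5-fold protocol are illustrative assumptions, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in features: in the paper these would be physiological (facial-video)
# and vehicular (CAN) parameters; here a synthetic binary task replaces them.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "NN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
results = {}
for name, clf in classifiers.items():
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.2f}")
```

On real data, the relative ranking (LR best on the facial-video features, per the abstract) would of course depend on the extracted features and the study protocol.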

Place, publisher, year, edition, pages
Elsevier Ltd, 2020
Keywords
Non-contact, Physiological parameters, Vehicular parameters, Cognitive load, Classification, Logistic regression, Support vector machine, Decision tree
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-46634 (URN)
10.1016/j.bspc.2019.101634 (DOI)
000502893200022 ()
Available from: 2020-01-02. Created: 2020-01-02. Last updated: 2020-01-02. Bibliographically approved.
Altarabichi, M. G., Ahmed, M. U., Begum, S., Ciceri, M. R., Balzarotti, S., Biassoni, F., . . . Perego, P. (2020). Reaction Time Variability Association with Unsafe Driving. In: Transport Research Arena TRA2020: . Paper presented at Transport Research Arena TRA2020, 27 Apr 2020, Helsinki, Finland. Helsinki, Finland
Reaction Time Variability Association with Unsafe Driving
2020 (English). In: Transport Research Arena TRA2020, Helsinki, Finland, 2020. Conference paper, Published paper (Refereed).
Abstract [en]

This paper investigates several human factors, including visual field, reaction speed, driving behavior and personality traits, based on the results of a cognitive assessment test targeting drivers in a Naturalistic Driving Study (NDS). The Frequency of being involved in a Near Miss event (fnm) and the Frequency of committing a Traffic Violation (ftv) are defined as indexes of safe driving in this work. Inference of association shows a statistically significant correlation between the Standard Deviation of Reaction Time (σRT) and both safe-driving indexes fnm and ftv. Causal relationship analysis excludes age as a confounding factor, since variations in behavioral responses are observed in both younger and older drivers in this study.

Place, publisher, year, edition, pages
Helsinki, Finland, 2020
Keywords
Road Safety, Naturalistic Driving, Vienna Test, Cognitive Assessment, Reaction Time Variability.
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-45497 (URN)
Conference
Transport Research Arena TRA2020, 27 Apr 2020, Helsinki, Finland
Projects
SimuSafe : Simulator of Behavioural Aspects for Safer Transport
Available from: 2019-10-28 Created: 2019-10-28 Last updated: 2020-02-03
Ahmed, M. U., Altarabichi, M. G., Begum, S., Ginsberg, F., Glaes, R., Östgren, M., . . . Sorensen, M. (2019). A vision-based indoor navigation system for individuals with visual impairment. International Journal of Artificial Intelligence, 17(2), 188-201
A vision-based indoor navigation system for individuals with visual impairment
2019 (English). In: International Journal of Artificial Intelligence, ISSN 0974-0635, E-ISSN 0974-0635, Vol. 17, no. 2, p. 188-201. Article in journal (Refereed). Published.
Abstract [en]

Navigation and orientation in an indoor environment are challenging tasks for visually impaired people. This paper proposes a portable vision-based system to support visually impaired persons in their daily activities. Machine learning algorithms are used for obstacle avoidance and object recognition, and the system is intended to be used independently, easily and comfortably, without human assistance. The system assists in obstacle avoidance using cameras and gives voice-message feedback, using a pre-trained YOLO neural network for object recognition. In addition, a floor-plane estimation algorithm is proposed for obstacle avoidance, and fuzzy logic is used to prioritize the detected objects in a frame and alert the user to possible risks. The system is implemented using the Robot Operating System (ROS) for communication, on an Nvidia Jetson TX2 with a ZED stereo camera for depth calculations and headphones for user feedback, and can accommodate different hardware setups. The individual parts of the system give varying results when evaluated; a large-scale evaluation is therefore needed before the system can be developed into a commercial product.
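The fuzzy-logic prioritization step described in this abstract might look roughly like the following toy sketch, where nearer and more central detections score higher. The membership ramps, thresholds, and object list are invented for illustration and are not taken from the paper.

```python
def risk_priority(distance_m: float, lateral_offset_m: float) -> float:
    """Toy fuzzy-style priority: nearer and more central obstacles score higher.
    Memberships are simple ramps into [0, 1]; priority is their product."""
    near = max(0.0, min(1.0, (5.0 - distance_m) / 5.0))            # 1 at 0 m, 0 beyond 5 m
    central = max(0.0, min(1.0, (1.5 - abs(lateral_offset_m)) / 1.5))
    return near * central

# Hypothetical detections: (label, distance in m, lateral offset in m)
detections = [("chair", 1.0, 0.2), ("door", 4.0, 1.0), ("table", 2.5, 0.1)]
ranked = sorted(detections, key=lambda d: risk_priority(d[1], d[2]), reverse=True)
print([name for name, *_ in ranked])  # → ['chair', 'table', 'door']
```

In the actual system the distances would come from the ZED stereo depth map and the labels from the YOLO detector; the voice feedback would then announce the highest-priority obstacle first.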

Place, publisher, year, edition, pages
CESER Publications, 2019
Keywords
Deep learning, Depth estimation, Indoor navigation, Object detection, Object recognition
National Category
Robotics; Computer Sciences; Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:mdh:diva-45835 (URN)
2-s2.0-85073347243 (Scopus ID)
Available from: 2019-10-25 Created: 2019-10-25 Last updated: 2020-02-03
Barua, S., Ahmed, M. U., Ahlström, C. & Begum, S. (2019). Automatic driver sleepiness detection using EEG, EOG and contextual information. Expert systems with applications, 115, 121-135
Automatic driver sleepiness detection using EEG, EOG and contextual information
2019 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 115, p. 121-135. Article in journal (Refereed). Published.
Abstract [en]

The many vehicle crashes caused by driver sleepiness each year motivate the development of automated driver sleepiness detection (ADSD) systems. This study proposes an automatic sleepiness classification scheme designed using data from 30 drivers who repeatedly drove in a high-fidelity driving simulator, both in alert and in sleep-deprived conditions. Driver sleepiness classification was performed using four separate classifiers: k-nearest neighbours, support vector machines, case-based reasoning, and random forest, where physiological signals and contextual information were used as sleepiness indicators. The subjective Karolinska sleepiness scale (KSS) was used as the target value. An extensive evaluation of multiclass and binary classification was carried out using 10-fold cross-validation and leave-one-out validation. With 10-fold cross-validation, the support vector machine showed better performance than the other classifiers (79% accuracy for multiclass and 93% accuracy for binary classification). The effect of individual differences was also investigated, showing a 10% increase in accuracy when data from the individual being evaluated was included in the training dataset. Overall, the support vector machine was found to be the most stable classifier. Adding contextual information to the physiological features improved the classification accuracy by 4% in multiclass classification and by 5% in binary classification.
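The individual-differences effect reported in this abstract — higher accuracy when the evaluated driver's own data appears in training — can be illustrated by contrasting leave-one-subject-out validation with subject-mixed 10-fold cross-validation. The toy data and SVM settings below are assumptions, not the study's EEG/EOG features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy data: 6 "drivers", each with a subject-specific feature offset standing
# in for individual physiology; labels alternate alert (0) / sleepy (1).
n_per, n_subj = 40, 6
X, y, groups = [], [], []
for s in range(n_subj):
    offset = rng.normal(0, 2.0, size=4)          # individual difference
    for label in (0, 1):
        X.append(rng.normal(label, 1.0, size=(n_per, 4)) + offset)
        y.append(np.full(n_per, label))
        groups.append(np.full(n_per, s))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

clf = SVC(kernel="rbf")
loso = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
mixed = cross_val_score(clf, X, y, cv=10).mean()
print(f"leave-one-subject-out: {loso:.2f}, 10-fold (subjects mixed): {mixed:.2f}")
```

With subject-specific offsets like these, the subject-mixed protocol typically scores higher, mirroring the roughly 10% gap the study reports.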

Place, publisher, year, edition, pages
Elsevier Ltd, 2019
Keywords
Contextual information, Driver sleepiness, Electroencephalography, Electrooculography, Machine learning, Accidents, Case based reasoning, Decision trees, Electrophysiology, Fisher information matrix, Learning systems, Nearest neighbor search, Support vector machines, 10-fold cross-validation, Binary classification, Classification accuracy, Individual Differences, Multi-class classification, Physiological features, Classification (of information)
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-40526 (URN)
10.1016/j.eswa.2018.07.054 (DOI)
000448097700009 ()
2-s2.0-85051410923 (Scopus ID)
Available from: 2018-08-23. Created: 2018-08-23. Last updated: 2019-01-10. Bibliographically approved.
Islam, M. R., Barua, S., Ahmed, M. U., Begum, S. & Flumeri, G. D. (2019). Deep Learning for Automatic EEG Feature Extraction: An Application in Drivers' Mental Workload Classification. In: Communications in Computer and Information Science, Volume 1107: . Paper presented at The 3rd International Symposium on Human Mental Workload: Models and Applications H-WORKLOAD 2019, 14 Nov 2019, Rome, Italy (pp. 121-135).
Deep Learning for Automatic EEG Feature Extraction: An Application in Drivers' Mental Workload Classification
2019 (English). In: Communications in Computer and Information Science, Volume 1107, 2019, p. 121-135. Conference paper, Published paper (Refereed).
Abstract [en]

In the pursuit of reducing traffic accidents, drivers' mental workload (MWL) has been considered one of the vital aspects. To measure MWL in different driving situations, electroencephalography (EEG) of the drivers has been studied intensely. In the literature, however, mostly manual analytic methods are applied to extract and select features from the EEG signals to quantify drivers' MWL, and the time and effort required by these prevailing feature extraction techniques motivate the need for automated alternatives. This work investigates deep learning (DL) algorithms to extract and select features from EEG signals recorded during naturalistic driving. To compare the DL-based and traditional feature extraction techniques, a number of classifiers have been deployed. Results show that the highest area under the receiver operating characteristic curve (AUC-ROC) is 0.94, achieved using the features extracted by a CNN autoencoder (CNN-AE) and a support vector machine, whereas, using the features extracted by the traditional method, the highest AUC-ROC is 0.78, with a multi-layer perceptron. Thus, the outcome of this study shows that automatic feature extraction based on a CNN-AE can outperform manual techniques in terms of classification accuracy.
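A CNN-AE itself needs a deep learning framework; as a framework-free stand-in, the sketch below trains a tied-weight linear autoencoder by gradient descent, illustrating the underlying idea of learning compressed features by minimising reconstruction error. The toy "EEG" data, dimensions, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for EEG feature windows: 200 samples x 32 dims,
# generated from 3 latent sources plus noise (assumed, not real EEG).
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 32)) + 0.1 * rng.normal(size=(200, 32))

# Tied-weight linear autoencoder: encode z = X W, decode X_hat = z W^T,
# trained by gradient descent on the reconstruction error ||X W W^T - X||^2.
W = 0.01 * rng.normal(size=(32, 3))
for _ in range(1000):
    err = X @ W @ W.T - X
    grad = (X.T @ err @ W + err.T @ X @ W) / len(X)  # gradient, up to a factor of 2
    W -= 1e-3 * grad

features = X @ W  # 32-dim windows compressed to 3 learned features
print(X.shape, "->", features.shape)
```

A real CNN-AE replaces the single linear map with stacked convolutional encoder/decoder layers, but the training objective — reconstruct the input through a bottleneck, then feed the bottleneck activations to a classifier — is the same.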

Keywords
Autoencoder, Convolutional Neural Networks, Electroencephalography, Feature Extraction, Mental Workload
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-45059 (URN)
10.1007/978-3-030-32423-0_8 (DOI)
2-s2.0-85075680380 (Scopus ID)
9783030324223 (ISBN)
Conference
The 3rd International Symposium on Human Mental Workload: Models and Applications H-WORKLOAD 2019, 14 Nov 2019, Rome, Italy
Projects
BRAINSAFEDRIVE: A Technology to detect Mental States During Drive for improving the Safety of the road
Available from: 2019-08-22. Created: 2019-08-22. Last updated: 2019-12-16. Bibliographically approved.
Islam, M. R., Barua, S., Begum, S. & Ahmed, M. U. (2019). Hypothyroid Disease Diagnosis with Causal Explanation using Case-based Reasoning and Domain-specific Ontology. In: Workshop on CBR in the Health Science WS-HealthCBR: . Paper presented at Workshop on CBR in the Health Science WS-HealthCBR, 09 Sep 2019, Otzenhausen, Germany.
Hypothyroid Disease Diagnosis with Causal Explanation using Case-based Reasoning and Domain-specific Ontology
2019 (English). In: Workshop on CBR in the Health Science WS-HealthCBR, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

Explainability of intelligent systems in the health-care domain is still at an early stage. Recently, more effort has been made to leverage machine learning in solving causal inference problems of disease diagnosis, prediction and treatment. This work presents an ontology-based causal inference model for hypothyroid disease diagnosis using case-based reasoning. The effectiveness of the proposed method is demonstrated with an example from the hypothyroid disease domain. The domain knowledge is mapped into an ontology, and causal inference is performed based on this domain-specific ontology. The goal is to incorporate this causal inference model into the traditional case-based reasoning cycle, enabling an explanation for each solved problem. Finally, a mechanism is defined to deduce the explanation for a solution to a problem case from the combined causal statements of similar cases. Initial results show that case-based reasoning can retrieve relevant cases with 95% accuracy.
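The retrieve step of the case-based reasoning cycle this abstract builds on can be sketched as a similarity ranking over stored cases. The feature names, values, and similarity function below are purely hypothetical, not drawn from the paper's case base.

```python
import numpy as np

# Toy case base: each case is (feature vector of lab values, diagnosis).
# Features (e.g. z-scored TSH, T4, age) are hypothetical placeholders.
case_base = [
    (np.array([2.1, -1.5, 0.3]), "hypothyroid"),
    (np.array([1.8, -1.2, 0.5]), "hypothyroid"),
    (np.array([-0.2, 0.1, -0.4]), "negative"),
    (np.array([0.1, -0.1, 1.2]), "negative"),
]

def retrieve(query, k=3):
    """CBR 'retrieve' step: rank stored cases by similarity to the query.
    Similarity = 1 / (1 + Euclidean distance)."""
    scored = [(1.0 / (1.0 + np.linalg.norm(query - f)), label)
              for f, label in case_base]
    return sorted(scored, reverse=True)[:k]

query = np.array([2.0, -1.4, 0.4])
print(retrieve(query, k=2))  # two most similar cases, both hypothyroid here
```

The paper's contribution sits on top of this step: the causal statements attached to the retrieved similar cases are combined, via the domain ontology, into an explanation for the proposed solution.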

Keywords
Case-based Reasoning, Causal Model, Explainability, Explainable Artificial Intelligence, Hypothyroid Diagnosis, Ontology
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-45058 (URN)
Conference
Workshop on CBR in the Health Science WS-HealthCBR, 09 Sep 2019, Otzenhausen, Germany
Available from: 2019-08-22. Created: 2019-08-22. Last updated: 2019-08-22. Bibliographically approved.
Altarabichi, M. G., Ahmed, M. U. & Begum, S. (2019). Supervised Learning for Road Junctions Identification using IMU. In: First International Conference on Advances in Signal Processing and Artificial Intelligence ASPAI' 2019: . Paper presented at First International Conference on Advances in Signal Processing and Artificial Intelligence ASPAI' 2019, 20 Mar 2019, Barcelona, Spain.
Supervised Learning for Road Junctions Identification using IMU
2019 (English). In: First International Conference on Advances in Signal Processing and Artificial Intelligence ASPAI' 2019, 2019. Conference paper, Published paper (Refereed).
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-43910 (URN)
Conference
First International Conference on Advances in Signal Processing and Artificial Intelligence ASPAI' 2019, 20 Mar 2019, Barcelona, Spain
Projects
SimuSafe : Simulator of Behavioural Aspects for Safer Transport
Available from: 2019-06-17 Created: 2019-06-17 Last updated: 2020-02-03
Ahmed, M. U., Begum, S., Catalina, C. A., Limonad, L., Hök, B. & Flumeri, G. D. (2018). Cloud-based Data Analytics on Human Factor Measurement to Improve Safer Transport. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225: . Paper presented at 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France (pp. 101-106).
Cloud-based Data Analytics on Human Factor Measurement to Improve Safer Transport
2018 (English). In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225, 2018, p. 101-106. Conference paper, Published paper (Refereed).
Abstract [en]

Improving transport safety involves individual and collective behavioural aspects and their interaction. A system that can monitor and evaluate human cognitive and physical capacities based on human factor measurements is often beneficial for improving safety in driving conditions. However, analysing and evaluating human factor measurements, i.e., demographic, behavioural and physiological data, in real time is challenging. This paper presents a methodology for cloud-based data analysis, categorization and metrics correlation in real time, developed through the H2020 project SimuSafe. An initial implementation of this methodology shows a step-by-step approach that can handle large amounts of data with variation and variety in the cloud.

Keywords
SimuSafe, safer transport, data-analysis, big data, human factor
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37085 (URN)
10.1007/978-3-319-76213-5_14 (DOI)
000476922000014 ()
2-s2.0-85042536073 (Scopus ID)
9783319762128 (ISBN)
Conference
4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France
Projects
SimuSafe : Simulator of Behavioural Aspects for Safer Transport
Funder
EU, Horizon 2020, 723386
Available from: 2017-10-27. Created: 2017-10-27. Last updated: 2019-08-08. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2018). Deep Learning based Person Identification using Facial Images. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225: . Paper presented at 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France (pp. 111-115).
Deep Learning based Person Identification using Facial Images
2018 (English). In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225, 2018, p. 111-115. Conference paper, Published paper (Refereed).
Abstract [en]

Person identification is an important task for many applications, for example in security. A person can be identified using fingerprints, voice, facial images or even a DNA test. Person identification using facial images is one of the most popular techniques, since it is non-contact and easy to implement, and it is a research hotspot in the fields of pattern recognition and machine vision. In this paper, a deep learning based person identification system using facial images is proposed, which shows higher accuracy than a traditional machine learning method, i.e., a support vector machine.

Keywords
Face recognition, Person Identification, Deep Learning.
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37091 (URN)
10.1007/978-3-319-76213-5_17 (DOI)
000476922000017 ()
2-s2.0-85042545019 (Scopus ID)
9783319762128 (ISBN)
Conference
4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France
Projects
SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2017-10-26. Created: 2017-10-26. Last updated: 2019-08-08. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1212-7637