https://www.mdu.se/

mdu.se Publications
Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-1547-4386
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0003-3802-4721
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-1212-7637
2020 (English) In: IEEE Transactions on Biomedical Engineering, ISSN 0018-9294, E-ISSN 1558-2531, Vol. 67, no. 1, p. 88-98, article id 8715455. Article in journal (Refereed). Published.
Abstract [en]

Objective: In this paper, four physiological parameters, i.e., heart rate (HR), inter-beat interval (IBI), heart rate variability (HRV), and oxygen saturation (SpO2), are extracted from facial video recordings.
Methods: Facial videos were recorded for 10 min each in 30 test subjects while driving a simulator. Four regions of interest (ROIs) are automatically selected in each facial image frame based on 66 facial landmarks. Red-green-blue color signals are extracted from the ROIs, and the four physiological parameters are extracted from the color signals. For the evaluation, physiological parameters are also recorded simultaneously using a traditional sensor system, 'cStress', which is attached to the hands and fingers of the test subjects.
Results: The Bland-Altman plots show 95% agreement between the camera system and 'cStress', with the highest correlation coefficient R = 0.96 for both HR and SpO2. The quality index is estimated for IBI considering a 100 ms R-peak error; the accumulated percentage achieved is 97.5%. HRV features in both the time and frequency domains are compared, and the highest correlation coefficient achieved is 0.93. A one-way analysis of variance test shows that there are no statistically significant differences between the measurements by the camera and the reference sensors.
Conclusion: These results demonstrate a high degree of accuracy for HR, IBI, HRV, and SpO2 extraction from facial image sequences.
Significance: The proposed non-contact approach could broaden the dimensionality of physiological parameter extraction using cameras. The proposed method could be applied for driver monitoring applications under realistic conditions, i.e., illumination, motion, movement, and vibration.
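The core step the abstract describes, recovering a cardiac frequency from ROI color traces, can be illustrated with a minimal sketch. This is not the paper's implementation: the sampling rate, band limits, and the use of a single channel are assumptions standing in for the full landmark-driven, four-ROI method.

```python
import numpy as np

def estimate_hr_from_roi_trace(signal, fs=30.0, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) from a mean ROI colour trace.

    Remove the DC baseline, take the power spectrum, restrict it to a
    plausible cardiac band (lo..hi Hz, i.e. ~42-240 bpm), and convert
    the dominant frequency to beats per minute.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                              # drop the skin-tone baseline
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    f_peak = freqs[band][np.argmax(power[band])]  # dominant cardiac frequency
    return f_peak * 60.0

# Synthetic "colour channel" pulse at 1.2 Hz (72 bpm), 10 s at 30 fps
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / 30.0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
hr = estimate_hr_from_roi_trace(trace, fs=30.0)
```

Real rPPG pipelines add detrending, channel combination (e.g. chrominance methods) and tracking to cope with the illumination, motion and vibration conditions the paper targets; the spectral-peak step above is only the final stage.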

Place, publisher, year, edition, pages
IEEE Computer Society, 2020. Vol. 67, no. 1, p. 88-98, article id 8715455
Keywords [en]
Ambient illumination, driver monitoring, motion, movement, non-contact, physiological parameters, vibration, Cameras, Extraction, Heart, Video recording, Physiological models
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:mdh:diva-46689
DOI: 10.1109/TBME.2019.2908349
ISI: 000505526300009
Scopus ID: 2-s2.0-85077175941
OAI: oai:DiVA.org:mdh-46689
DiVA id: diva2:1384236
Available from: 2020-01-09. Created: 2020-01-09. Last updated: 2021-02-25. Bibliographically approved.
In thesis
1. Artificial Intelligence for Non-Contact-Based Driver Health Monitoring
2021 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In clinical situations, a patient’s physical state is often monitored by sensors attached to the patient, and medical staff are alerted if the patient’s status changes in an undesirable or life-threatening direction. In unsupervised situations, however, such as when driving a vehicle, attaching sensors to the driver is often impractical, and wired sensors may not produce signals of sufficient quality due to factors such as movement and electrical disturbance. Using a camera as a non-contact sensor to extract physiological parameters from video images offers a new paradigm for monitoring a driver’s health and mental state. Thanks to the advanced technical features in modern vehicles, driving is now faster, safer and more comfortable than before. To further enhance transport safety (i.e. to avoid unexpected traffic accidents), it is necessary to consider the vehicle driver as a part of the traffic environment and thus monitor the driver’s health and mental state. Such a monitoring system is commonly developed based on two approaches: driving-behaviour-based and physiological-parameters-based.

This research work demonstrates a non-contact approach that classifies a driver’s cognitive load based on physiological parameters obtained through a camera system and vehicular data collected from controller area networks (CAN), using image processing, computer vision, machine learning (ML) and deep learning (DL). In this research, a camera is used as a non-contact, pervasive sensor for measuring and monitoring the physiological parameters. The contribution of this research study is four-fold: 1) a feature extraction approach to extract physiological parameters (i.e. heart rate [HR], respiration rate [RR], inter-beat interval [IBI], heart rate variability [HRV] and oxygen saturation [SpO2]) using a camera system under several challenging conditions (i.e. illumination, motion, vibration and movement); 2) feature extraction based on eye-movement parameters (i.e. saccade and fixation); 3) identification of key vehicular parameters and extraction of useful features from lateral speed (SP), steering wheel angle (SWA), steering wheel reversal rate (SWRR), steering wheel torque (SWT), yaw rate (YR), lanex (LAN) and lateral position (LP); 4) investigation of ML and DL algorithms for driver cognitive load classification. Here, ML algorithms (i.e. logistic regression [LR], linear discriminant analysis [LDA], support vector machine [SVM], neural networks [NN], k-nearest neighbours [k-NN], decision tree [DT]) and DL algorithms (i.e. convolutional neural networks [CNN], long short-term memory [LSTM] networks and autoencoders [AE]) are used.
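As one concrete illustration of the vehicular features listed above, a steering-wheel reversal rate (SWRR) can be computed from the steering-angle signal by counting changes of steering direction. The sketch below uses a simplified, generic definition (sign changes of the angle's first difference); published SWRR definitions typically add a gap threshold, and the sampling rate here is an assumption, not taken from the thesis.

```python
import numpy as np

def steering_wheel_reversal_rate(angle_deg, fs):
    """Reversals per minute: how often the steering direction flips.

    Simplified SWRR: a reversal is a sign change in the first
    difference of the steering-wheel angle. Practical definitions
    often add an angular gap threshold to ignore micro-corrections.
    """
    rate = np.diff(np.asarray(angle_deg, dtype=float))
    signs = np.sign(rate)
    signs = signs[signs != 0]                      # ignore flat segments
    reversals = int(np.sum(signs[1:] != signs[:-1]))
    minutes = len(angle_deg) / fs / 60.0
    return reversals / minutes

# 60 s of sinusoidal steering at 0.2 Hz, sampled at 10 Hz:
# two direction changes per cycle -> about 24 reversals per minute
t = np.arange(0, 60, 0.1)
swrr = steering_wheel_reversal_rate(np.sin(2 * np.pi * 0.2 * t), fs=10.0)
```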

One of the major contributions of this research work is that physiological parameters were extracted using a camera. According to the results, feature extraction of physiological parameters using a camera achieved the highest correlation coefficient of .96 for both HR and SpO2 compared to a reference system. The Bland-Altman plots showed 95% agreement between the camera and the reference wired sensors. For IBI, the achieved quality index was 97.5% considering a 100 ms R-peak error. The correlation coefficients for 13 eye-movement features between the non-contact approach and the reference eye-tracking system ranged from .82 to .95.
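The agreement statistics quoted above (Bland-Altman limits of agreement and the Pearson correlation coefficient) can be computed from any paired camera/reference series. The sketch below uses simulated data; the error model and variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def bland_altman_limits(camera, reference):
    """Bias, 95% limits of agreement, and Pearson r for two methods.

    These are the quantities behind a Bland-Altman plot: the mean
    difference between methods (bias) and bias +/- 1.96 SD of the
    differences, within which ~95% of differences are expected to fall.
    """
    a = np.asarray(camera, dtype=float)
    b = np.asarray(reference, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    r = np.corrcoef(a, b)[0, 1]                 # Pearson correlation
    return bias, loa, r

# Simulated paired HR readings: reference plus a small random camera error
rng = np.random.default_rng(0)
ref = rng.uniform(55, 100, 200)
cam = ref + rng.normal(0.0, 2.0, ref.size)
bias, loa, r = bland_altman_limits(cam, ref)
```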

For cognitive load classification using both the physiological and vehicular parameters, two separate studies were conducted: Study 1 with the 1-back task and Study 2 with the 2-back task. The highest average accuracy achieved in terms of cognitive load classification was 94% for Study 1 and 82% for Study 2, using the LR algorithm with HRV parameters. The highest average classification accuracy of cognitive load using saccade and fixation parameters was 92%, obtained with SVM. In both cases, k-fold cross-validation was used for validation, with k = 10. The classification accuracies using CNN, LSTM and autoencoder were 91%, 90% and 90.3%, respectively.
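The 10-fold cross-validation protocol mentioned above can be sketched in a few lines. A nearest-centroid classifier on synthetic two-class data stands in for the LR/SVM/DL models and the real HRV or eye-movement features, which are not available here; only the validation scheme itself (k = 10) matches the text.

```python
import numpy as np

def kfold_accuracy(X, y, k=10, seed=0):
    """Mean accuracy over k-fold cross-validation.

    Shuffle the indices once, split them into k folds, and in turn hold
    each fold out as the test set while training on the remaining k-1.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Nearest-centroid: predict the class with the closest mean vector
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Two well-separated synthetic classes (stand-in for cognitive-load features)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
acc = kfold_accuracy(X, y, k=10)
```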

This research study shows that such a non-contact approach, based on ML, DL, image processing and computer vision, is suitable for monitoring a driver’s cognitive state.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2021
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 330
Keywords
Driver Monitoring, Artificial Intelligence, Machine Learning, Deep Learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-53529
ISBN: 978-91-7485-499-2
Public defence
2021-04-07, Delta + digitally via Zoom, Mälardalens högskola, Västerås, 13:15 (English)
Opponent
Supervisors
Available from: 2021-02-26. Created: 2021-02-25. Last updated: 2021-04-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Rahman, Hamidur; Ahmed, Mobyen Uddin; Begum, Shahina
