Publications (10 of 13)
Rahman, H., Ahmed, M. U. & Begum, S. (2020). Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration. IEEE Transactions on Biomedical Engineering, 67(1), 88-98, Article ID 8715455.
Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration
2020 (English). In: IEEE Transactions on Biomedical Engineering, ISSN 0018-9294, E-ISSN 1558-2531, Vol. 67, no 1, p. 88-98, article id 8715455. Article in journal (Refereed). Published.
Abstract [en]

Objective: In this paper, four physiological parameters, i.e., heart rate (HR), inter-beat interval (IBI), heart rate variability (HRV), and oxygen saturation (SpO2), are extracted from facial video recordings. Methods: Facial videos were recorded for 10 min each in 30 test subjects while driving a simulator. Four regions of interest (ROIs) are automatically selected in each facial image frame based on 66 facial landmarks. Red-green-blue color signals are extracted from the ROIs, and the four physiological parameters are extracted from the color signals. For the evaluation, physiological parameters are also recorded simultaneously using a traditional sensor system, 'cStress,' which is attached to the hands and fingers of the test subjects. Results: The Bland-Altman plots show 95% agreement between the camera system and 'cStress', with the highest correlation coefficient R = 0.96 for both HR and SpO2. The quality index is estimated for IBI considering a 100 ms R-peak error; the accumulated percentage achieved is 97.5%. HRV features in both the time and frequency domains are compared, and the highest correlation coefficient achieved is 0.93. A one-way analysis of variance test shows that there are no statistically significant differences between the measurements by the camera and the reference sensors. Conclusion: These results show a high degree of accuracy of HR, IBI, HRV, and SpO2 extraction from facial image sequences. Significance: The proposed non-contact approach could broaden the range of physiological parameters that can be extracted using cameras, and could be applied to driver monitoring under realistic conditions, i.e., illumination, motion, movement, and vibration.
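The core of the method can be illustrated with a minimal sketch (not the authors' full pipeline, which uses 66 landmarks, four ROIs and several parameters): given the mean green-channel trace of one facial ROI and the camera frame rate, heart rate can be estimated from the dominant FFT peak in the plausible pulse band. The inputs `roi_green` and `fps` are assumptions for illustration.

```python
# Minimal sketch (not the authors' full pipeline): estimate heart rate from the
# mean green-channel trace of a single facial ROI using an FFT peak search.
# `roi_green` and the sampling rate `fps` are assumed inputs.
import numpy as np

def estimate_hr_bpm(roi_green: np.ndarray, fps: float) -> float:
    """Return the dominant pulse frequency in beats per minute."""
    x = roi_green - roi_green.mean()          # remove the DC component
    x = x * np.hanning(len(x))                # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)    # plausible HR range: 42-180 bpm
    hr_freq = freqs[band][np.argmax(spectrum[band])]
    return hr_freq * 60.0

# Example with a synthetic 75 bpm pulse sampled at 30 fps for 10 s
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
roi_green = 0.5 * np.sin(2 * np.pi * 1.25 * t) + 0.05 * np.random.randn(t.size)
print(round(estimate_hr_bpm(roi_green, fps)))   # ~75
```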

Place, publisher, year, edition, pages
IEEE Computer Society, 2020
Keywords
Ambient illumination, driver monitoring, motion, movement, non-contact, physiological parameters, vibration, Cameras, Extraction, Heart, Video recording, Physiological models
National Category
Control Engineering
Identifiers
urn:nbn:se:mdh:diva-46689 (URN), 10.1109/TBME.2019.2908349 (DOI), 000505526300009 (), 2-s2.0-85077175941 (Scopus ID)
Available from: 2020-01-09. Created: 2020-01-09. Last updated: 2020-01-23. Bibliographically approved.
Rahman, H., Ahmed, M. U., Barua, S. & Begum, S. (2020). Non-contact-based driver's cognitive load classification using physiological and vehicular parameters. Biomedical Signal Processing and Control, 55, Article ID 101634.
Non-contact-based driver's cognitive load classification using physiological and vehicular parameters
2020 (English). In: Biomedical Signal Processing and Control, ISSN 1746-8094, E-ISSN 1746-8108, Vol. 55, article id 101634. Article in journal (Refereed). Published.
Abstract [en]

Classification of cognitive load for vehicular drivers is a complex task due to the underlying challenges of the dynamic driving environment. Many previous works have shown that physiological sensor signals or vehicular data can be a reliable source for quantifying cognitive load. However, in driving situations, one of the biggest challenges is to use a sensor source that can provide accurate information without interrupting the driving task. In this paper, instead of traditional wire-based sensors, a non-contact camera and vehicle data are used, which have no physical contact with the driver and do not interrupt driving. Here, four machine learning algorithms, logistic regression (LR), support vector machine (SVM), linear discriminant analysis (LDA) and neural networks (NN), are investigated to classify cognitive load using the data collected in a driving simulator study. The physiological parameters are extracted from facial video images, and the vehicular parameters are collected from the controller area network (CAN). The data collection was performed in close collaboration with industrial partners in two separate studies, in which study-1 was designed with a 1-back task and study-2 with both 1-back and 2-back tasks. The goal of the experiment is to investigate how accurately the machine learning algorithms can classify drivers' cognitive load based on the extracted features in complex, dynamic driving environments. According to the results, for the physiological parameters extracted from the facial videos, the LR model outperforms the other three classification methods. In study-1, the achieved average accuracy for the LR classifier is 94%, and in study-2 the average accuracy is 82%. In addition, the classification accuracy for the camera-based physiological parameters was compared with reference wire-sensor signals. The classification accuracies of the sensor and the camera are very similar; however, better accuracy is achieved with the camera data because they contain fewer artefacts than the sensor data.
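A minimal sketch of the classification step, assuming a pre-extracted feature matrix `X` and binary cognitive-load labels `y` derived from the n-back conditions; the placeholder features below are random, not the study's actual physiological or vehicular features. It cross-validates a logistic regression pipeline, one of the four classifiers compared in the paper.

```python
# Sketch of the LR classifier under assumed inputs; X and y are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # placeholder feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```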

Place, publisher, year, edition, pages
ELSEVIER SCI LTD, 2020
Keywords
Non-contact, Physiological parameters, Vehicular parameters, Cognitive load, Classification, Logistic regression, Support vector machine, Decision tree
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-46634 (URN), 10.1016/j.bspc.2019.101634 (DOI), 000502893200022 ()
Available from: 2020-01-02. Created: 2020-01-02. Last updated: 2020-01-02. Bibliographically approved.
Ahmed, M. U., Altarabichi, M. G., Begum, S., Ginsberg, F., Glaes, R., Östgren, M., . . . Sorensen, M. (2019). A vision-based indoor navigation system for individuals with visual impairment. International Journal of Artificial Intelligence, 17(2), 188-201
A vision-based indoor navigation system for individuals with visual impairment
2019 (English). In: International Journal of Artificial Intelligence, ISSN 0974-0635, E-ISSN 0974-0635, Vol. 17, no 2, p. 188-201. Article in journal (Refereed). Published.
Abstract [en]

Navigation and orientation in an indoor environment are challenging tasks for visually impaired people. This paper proposes a portable vision-based system to provide support for visually impaired persons in their daily activities. Machine learning algorithms are used for obstacle avoidance and object recognition. The system is intended to be used independently, easily and comfortably, without human assistance. It assists in obstacle avoidance using cameras and gives voice-message feedback, using a pre-trained YOLO neural network for object recognition. In other parts of the system, a floor plane estimation algorithm is proposed for obstacle avoidance, and fuzzy logic is used to prioritize the detected objects in a frame and to alert the user to possible risks. The system is implemented using the Robot Operating System (ROS) for communication, running on an Nvidia Jetson TX2 with a ZED stereo camera for depth calculations and headphones for user feedback, and it can accommodate different hardware setups. The individual parts of the system give varying results when evaluated, so a large-scale evaluation is needed before the system can be developed into a commercial product.
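A minimal sketch of the prioritisation idea only, not the paper's actual fuzzy rule base: each detected object is scored by a triangular "near" membership on its stereo depth multiplied by a per-class risk weight, and the highest-scoring object would be announced first. The class labels and weights below are hypothetical.

```python
# Illustrative prioritisation of detected objects by depth; weights are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    depth_m: float          # distance from the stereo camera, in metres

RISK = {"person": 1.0, "chair": 0.8, "door": 0.5}   # hypothetical risk weights

def near_membership(depth_m: float, full: float = 1.0, zero: float = 4.0) -> float:
    """1.0 when closer than `full`, 0.0 beyond `zero`, linear in between."""
    if depth_m <= full:
        return 1.0
    if depth_m >= zero:
        return 0.0
    return (zero - depth_m) / (zero - full)

def prioritise(detections):
    return sorted(detections,
                  key=lambda d: near_membership(d.depth_m) * RISK.get(d.label, 0.5),
                  reverse=True)

dets = [Detection("chair", 3.2), Detection("person", 1.4), Detection("door", 0.9)]
print([d.label for d in prioritise(dets)])    # ['person', 'door', 'chair']
```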

Place, publisher, year, edition, pages
CESER Publications, 2019
Keywords
Deep learning, Depth estimation, Indoor navigation, Object detection, Object recognition
National Category
Robotics; Computer Sciences; Computer Systems; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:mdh:diva-45835 (URN), 2-s2.0-85073347243 (Scopus ID)
Available from: 2019-10-25 Created: 2019-10-25 Last updated: 2020-02-03
Rahman, H., Ahmed, M. U. & Begum, S. (2018). Deep Learning based Person Identification using Facial Images. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225: . Paper presented at 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France (pp. 111-115).
Deep Learning based Person Identification using Facial Images
2018 (English). In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225, 2018, p. 111-115. Conference paper, Published paper (Refereed).
Abstract [en]

Person identification is an important task for many applications, for example in security. A person can be identified using fingerprints, voice, facial images or even a DNA test. Person identification using facial images is one of the most popular techniques, since it is non-contact and easy to implement, and it is a research hotspot in the fields of pattern recognition and machine vision. In this paper, a deep learning based person identification system using facial images is proposed, which shows higher accuracy than a traditional machine learning method, i.e., a Support Vector Machine.
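For illustration only (the paper does not specify this architecture), a small convolutional network that maps a grey-scale face crop to one of `num_ids` identity classes might look like the following sketch.

```python
# Illustrative sketch of a CNN face-identification classifier; the architecture,
# input size (64x64) and number of identities are assumptions, not the paper's model.
import torch
import torch.nn as nn

class FaceIdCNN(nn.Module):
    def __init__(self, num_ids: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_ids)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = FaceIdCNN(num_ids=10)
logits = model(torch.randn(4, 1, 64, 64))      # dummy batch of 4 face crops
print(logits.shape)                            # torch.Size([4, 10])
```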

Keywords
Face recognition, Person Identification, Deep Learning.
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37091 (URN), 10.1007/978-3-319-76213-5_17 (DOI), 000476922000017 (), 2-s2.0-85042545019 (Scopus ID), 9783319762128 (ISBN)
Conference
4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France
Projects
SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2017-10-26. Created: 2017-10-26. Last updated: 2019-08-08. Bibliographically approved.
Ahmed, M. U., Rahman, H. & Begum, S. (2018). Quality index analysis on camera-based R-peak identification considering movements and light illumination. In: Studies in Health Technology and Informatics, vol 249: . Paper presented at 15th International Conference on Wearable Micro and Nano Technologies for Personalized Health, pHealth 2018; Gjovik; Norway; 12 June 2018 through 14 June 2018 (pp. 84-92). IOS Press
Quality index analysis on camera-based R-peak identification considering movements and light illumination
2018 (English). In: Studies in Health Technology and Informatics, vol 249, IOS Press, 2018, p. 84-92. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a quality index (QI) analysis of R-peaks extracted by a camera system, considering movements and light illumination. The proposed camera system is compared with a reference system, the Shimmer PPG sensor. The study considers five test subjects with a 15-minute measurement protocol consisting of several conditions: normal sitting; head movements, i.e., up/down/left/right/forward/backward; light on/off; and moving flash on/off. The percentage of correctly identified R-peaks is calculated based on the time difference in milliseconds (ms) between the R-peaks extracted by the camera-based and sensor-based systems. Comparison results between the normal, movement and lighting conditions are presented both individually and group-wise. Furthermore, the comparison is extended to consider the gender and origin of the subjects. According to the results, more than 90% of R-peaks are correctly identified by the camera system within a ±200 ms time difference; however, this decreases when the light is off compared to when it is on. At the same time, the camera system shows higher accuracy (more than 95%) for European men than for Asian men.
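The quality-index computation can be sketched as follows, assuming arrays of R-peak times in seconds from the camera and reference systems; this illustrates the matching idea, not the paper's exact implementation.

```python
# Minimal sketch: percentage of camera-detected R-peaks that fall within a given
# tolerance (e.g. 200 ms) of a reference peak. Peak times in seconds are assumed.
import numpy as np

def r_peak_quality_index(camera_peaks, reference_peaks, tol_s=0.200) -> float:
    camera_peaks = np.asarray(camera_peaks)
    reference_peaks = np.asarray(reference_peaks)
    # distance from each camera peak to its nearest reference peak
    nearest = np.min(np.abs(camera_peaks[:, None] - reference_peaks[None, :]), axis=1)
    return 100.0 * np.mean(nearest <= tol_s)

ref = np.arange(0.0, 10.0, 0.8)                                   # one peak every 0.8 s
cam = ref + np.random.default_rng(1).normal(0, 0.05, ref.size)    # jittered camera peaks
print(f"QI = {r_peak_quality_index(cam, ref):.1f}%")              # close to 100%
```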

Place, publisher, year, edition, pages
IOS Press, 2018
National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-40196 (URN), 10.3233/978-1-61499-868-6-84 (DOI), 000492875900009 (), 2-s2.0-85049018248 (Scopus ID), 9781614998679 (ISBN)
Conference
15th International Conference on Wearable Micro and Nano Technologies for Personalized Health, pHealth 2018; Gjovik; Norway; 12 June 2018 through 14 June 2018
Available from: 2018-07-05. Created: 2018-07-05. Last updated: 2019-11-14. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2018). Vision-Based Remote Heart Rate Variability Monitoring using Camera. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225: . Paper presented at 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France (pp. 10-18).
Vision-Based Remote Heart Rate Variability Monitoring using Camera
2018 (English). In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225, 2018, p. 10-18. Conference paper, Published paper (Refereed).
Abstract [en]

Heart Rate Variability (HRV) is one of the important physiological parameters used for the early detection of many fatal diseases. In this paper, a non-contact remote Heart Rate Variability (HRV) monitoring system is developed using facial video, based on the color variation of facial skin caused by the cardiac pulse. The Lab color space of the facial video is used to extract skin color values, and signal processing algorithms, i.e., Fast Fourier Transform (FFT), Independent Component Analysis (ICA) and Principal Component Analysis (PCA), are applied to monitor HRV. First, R-peaks are detected from the color variation of the skin, and then the Inter-Beat Interval (IBI) is calculated for every pair of consecutive R-peaks. HRV features are then calculated from the IBI in both the time and frequency domains. MySQL and the PHP programming language are used to store, monitor and display the HRV parameters remotely. In this study, HRV is quantified and compared with a reference measurement, and a high degree of similarity is achieved. This technology has significant potential for advancing personal health care, especially telemedicine.
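A minimal sketch of the feature step, assuming R-peak times (in seconds) have already been detected from the facial colour signal: IBI is derived from consecutive peaks and two standard time-domain HRV features (SDNN, RMSSD) are computed. The paper's full feature set also covers the frequency domain.

```python
# Sketch of time-domain HRV features from assumed R-peak times; not the paper's code.
import numpy as np

def hrv_time_domain(r_peak_times_s):
    ibi_ms = np.diff(np.asarray(r_peak_times_s)) * 1000.0   # inter-beat intervals
    sdnn = np.std(ibi_ms, ddof=1)                           # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))          # short-term variability
    return {"mean_ibi_ms": ibi_ms.mean(), "sdnn_ms": sdnn, "rmssd_ms": rmssd}

# Synthetic rhythm around 70 bpm standing in for detected R-peak times
peaks = np.cumsum(np.random.default_rng(0).normal(0.85, 0.04, 60))
print(hrv_time_domain(peaks))
```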

Keywords
Physiological signals, Heart Rate, Inter-beat-Interval, Heart-Rate-Variability, Non-contact, Remote Monitoring.
National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-37072 (URN), 10.1007/978-3-319-76213-5_2 (DOI), 000476922000002 (), 2-s2.0-85042538568 (Scopus ID), 9783319762128 (ISBN)
Conference
4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France
Projects
SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2017-10-31. Created: 2017-10-31. Last updated: 2019-08-08. Bibliographically approved.
Tomasic, I., Rahman, H., Erdem, I., Andersson, A. & Funk, P. (2016). Input-output Mapping and Sources of Variation Analysis in Fixtures for Sheet Metal Assembly Processes. In: Swedish Production Symposium 2016 SPS 2016: . Paper presented at 7th Swedish Production Symposium 2016 SPS 2016, 25 Oct 2016, Lund, Sweden.
Input-output Mapping and Sources of Variation Analysis in Fixtures for Sheet Metal Assembly Processes
2016 (English). In: Swedish Production Symposium 2016 SPS 2016, 2016. Conference paper, Published paper (Refereed).
Abstract [en]

Assembly quality is affected by various factors, of which fixture variations are the most important. For that reason, extensive research on fixture variations has already been done. In this work we propose applying linear models (LMs) to analyze sources of variation in the fixture and to establish a model between the positions of ingoing parts and the measured geometrical characteristics of the assemblies. Objectives: (1) To estimate the strengths of different sources of variation on the assembled parts. (2) To estimate a regression model between the positions of ingoing parts as inputs (defined by the positions of the pins holding them) and the measured geometrical characteristics as outputs, which can be used to determine which measured characteristics are influenced by which input variable. Methods: The data were experimentally collected in a laboratory environment by intentionally changing the positions of the ingoing parts, assembling the parts and subsequently measuring their geometrical characteristics. We use a linear model to establish the relation between the geometrical characteristics measured on the assembled parts and the input variables of interest. Results: We present a modeling technique that can be used to establish which measured geometrical characteristics are influenced by which input variables (i.e., pins' positions). The natural variation in the system (i.e., unmodeled variation) is quite high. The time passed between measurements has a significant influence on the measured values.
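The input-output mapping can be sketched as a multi-output linear fit, under assumed data shapes: `X` holds pin-position offsets and `Y` the measured geometrical characteristics, and large entries in the fitted coefficient matrix indicate which characteristic responds to which pin. The numbers below are synthetic, not the experimental data.

```python
# Sketch of the LM input-output mapping on synthetic data; shapes are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                          # 3 pin positions, 40 assemblies
true_coef = np.array([[1.2, 0.0, 0.0],                # characteristic 1 follows pin 1
                      [0.0, 0.8, 0.3]])               # characteristic 2 follows pins 2, 3
Y = X @ true_coef.T + 0.1 * rng.normal(size=(40, 2))  # plus unmodelled variation

model = LinearRegression().fit(X, Y)
print(np.round(model.coef_, 2))                       # rows: characteristics, columns: pins
```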

Keywords
Manufacturing process chain, Quality Control, Variations in Assembly Processes, Corrective actions
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-33805 (URN)
Conference
7th Swedish Production Symposium 2016 SPS 2016, 25 Oct 2016, Lund, Sweden
Projects
AproC, Automated Process Control
Available from: 2016-11-21. Created: 2016-11-21. Last updated: 2017-10-16. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2016). Non-contact heart rate monitoring using lab color space. In: Studies in Health Technology and Informatics: . Paper presented at 13th International Conference on Wearable Micro and Nano Technologies for Personalised Health, pHealth 2016; Heraklion, Crete; Greece; 29 May 2016 through 31 May 2016; Code 121852 (pp. 46-53). , 224
Non-contact heart rate monitoring using lab color space
2016 (English). In: Studies in Health Technology and Informatics, 2016, Vol. 224, p. 46-53. Conference paper, Published paper (Refereed).
Abstract [en]

Research during the last decade has focused increasingly on non-contact systems to monitor Heart Rate (HR), which are simple, low-cost and comfortable to use. Most non-contact systems use RGB video, which is suitable for a lab environment; however, they need to progress considerably before they can be applied in real life. As luminance (light) has a significant influence on RGB video, HR monitoring using RGB video is not efficient enough for real-life applications in outdoor environments. This paper presents an HR monitoring method using Lab-color facial video captured by the webcam of a laptop computer. The Lab color space is device independent, and HR can be extracted through the facial skin color variation caused by blood circulation while accounting for variable environmental light. Here, three different signal processing methods, i.e., Fast Fourier Transform (FFT), Independent Component Analysis (ICA) and Principal Component Analysis (PCA), have been applied to the color channels of the video recordings, and the blood volume pulse (BVP) has been extracted from the facial regions. HR is subsequently quantified and compared with a reference measurement. The results show that a high degree of accuracy is achieved compared to the reference measurements. Thus, this technology has significant potential for advancing personal health care, telemedicine and many real-life applications such as driver monitoring.
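A minimal sketch of the colour-space step only, assuming `frames` is a sequence of RGB facial-ROI crops: each frame is converted to Lab and the mean a* (green-red opponent) value forms a one-dimensional pulse trace, which would then be fed to an FFT/ICA/PCA stage such as the heart-rate sketch earlier on this page.

```python
# Sketch of the RGB-to-Lab step; `frames` (uint8 HxWx3 ROI crops) is an assumption.
import numpy as np
from skimage.color import rgb2lab

def lab_a_trace(frames):
    """Mean a* value of each frame, forming a 1-D pulse signal."""
    trace = []
    for frame in frames:
        lab = rgb2lab(frame / 255.0)     # L* in [0,100], a*/b* roughly in [-128,127]
        trace.append(lab[..., 1].mean()) # channel 1 is a* (green-red opponent axis)
    return np.asarray(trace)

# Random frames standing in for webcam ROI crops (3 seconds at 30 fps)
frames = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(90)]
print(lab_a_trace(frames).shape)          # (90,)
```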

Keywords
Heart rate, Lab color space, Signal processing, blood volume, circulation, driver, face, Fourier transformation, human, independent component analysis, luminance, monitoring, principal component analysis, quantitative study, skin color, telemedicine, video recording
National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-32175 (URN), 10.3233/978-1-61499-653-8-46 (DOI), 000385238500008 (), 2-s2.0-84973483708 (Scopus ID), 9781614996521 (ISBN)
Conference
13th International Conference on Wearable Micro and Nano Technologies for Personalised Health, pHealth 2016; Heraklion, Crete; Greece; 29 May 2016 through 31 May 2016; Code 121852
Available from: 2016-06-23. Created: 2016-06-23. Last updated: 2017-01-25. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2016). Non-Contact Physiological Parameters Extraction Using Camera. In: Internet of Things. IoT Infrastructures: Second International Summit, IoT 360° 2015 Rome, Italy, October 27–29, 2015. Revised Selected Papers, Part I. Paper presented at Second International Summit, IoT 360° 2015 Rome, Italy, October 27–29, 2015. 1st Workshop on Embedded Sensor Systems for Health. (pp. 448-453). , 169
Non-Contact Physiological Parameters Extraction Using Camera
2016 (English). In: Internet of Things. IoT Infrastructures: Second International Summit, IoT 360° 2015 Rome, Italy, October 27–29, 2015. Revised Selected Papers, Part I, 2016, Vol. 169, p. 448-453. Conference paper, Published paper (Refereed).
Abstract [en]

Physiological parameters such as Heart Rate (HR), Inter-Beat Interval (IBI) and Respiration Rate (RR) are vital indicators of people's physiological state and are important to monitor. However, most measurement methods are contact based, i.e., sensors are connected to the body, which is often complicated and requires personal assistance. This paper proposes a simple, low-cost and non-contact approach for measuring multiple physiological parameters in real time using a web camera. Here, the heart rate and respiration rate are obtained through the facial skin colour variation caused by blood circulation. Three different signal processing methods, i.e., Fast Fourier Transform (FFT), Independent Component Analysis (ICA) and Principal Component Analysis (PCA), have been applied to the colour channels of the video recordings, and the blood volume pulse (BVP) is extracted from the facial regions. HR, IBI and RR are subsequently quantified and compared to corresponding reference measurements. A high degree of agreement is achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine.
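The ICA route can be sketched as follows, assuming `rgb_traces` is an (n_frames, 3) array of mean R, G, B values from the facial region; the independent component with the strongest spectral peak in the plausible pulse band is taken as the blood volume pulse. This is an illustration, not the paper's exact processing chain.

```python
# Sketch of ICA-based BVP selection from assumed mean RGB traces.
import numpy as np
from sklearn.decomposition import FastICA

def bvp_from_rgb(rgb_traces: np.ndarray, fps: float) -> np.ndarray:
    sources = FastICA(n_components=3, random_state=0).fit_transform(rgb_traces)
    freqs = np.fft.rfftfreq(len(sources), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)                 # plausible pulse band
    power = [np.abs(np.fft.rfft(s))[band].max() for s in sources.T]
    return sources[:, int(np.argmax(power))]               # best candidate BVP

# Synthetic example: a 72 bpm pulse mixed into all three channels with noise
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)
pulse = np.sin(2 * np.pi * 1.2 * t)
rgb = np.column_stack([0.3 * pulse, pulse, 0.5 * pulse]) + 0.2 * np.random.randn(t.size, 3)
print(bvp_from_rgb(rgb, fps).shape)                         # (600,)
```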

Series
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, ISSN 1867-8211 ; 169
Keywords
Autonomous Car, Driver Monitoring, Physiological signals, Camera, Non-contact
National Category
Other Engineering and Technologies
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-30021 (URN), 10.1007/978-3-319-47063-4_47 (DOI), 000398616500047 (), 2-s2.0-85000783926 (Scopus ID), 978-331947062-7 (ISBN)
Conference
Second International Summit, IoT 360° 2015 Rome, Italy, October 27–29, 2015. 1st Workshop on Embedded Sensor Systems for Health.
Projects
ESS-H - Embedded Sensor Systems for Health Research Profile; VDM - Vehicle Driver Monitoring; SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2015-12-20. Created: 2015-12-18. Last updated: 2017-01-25. Bibliographically approved.
Rahman, H., Iyer, S., Meusburger, C., Dobrovoljski, K., Stoycheva, M., Turkulov, V., . . . Ahmed, M. U. (2016). SmartMirror: An Embedded Non-contact System for Health Monitoring at Home. In: : . Paper presented at The 3rd EAI International Conference on IoT Technologies for HealthCare HealthyIoT'16, 18 Oct 2016, Västerås, Sweden (pp. 133-137). , 187
SmartMirror: An Embedded Non-contact System for Health Monitoring at Home
2016 (English). Conference paper, Published paper (Refereed).
Abstract [en]

The ‘Smart Mirror’ project introduces non-contact technological innovations into our homes, where its usage can be as ubiquitous as ‘looking at a mirror’ while providing critical actionable insights, thereby leading to improved care and outcomes. The key objective is to detect key physiological markers such as Heart Rate (HR), Respiration Rate (RR), Inter-beat Interval (IBI) and Blood Pressure (BP), as well as drowsiness, using the video input of the individual standing in front of the mirror, and to display the results in real time. A satisfactory level of accuracy has been attained with respect to the reference sensor signals.
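For the respiration-rate part, a minimal sketch under the assumption that `signal` is a pulse or chest-motion trace extracted from the mirror's camera: the approach mirrors the FFT heart-rate sketch earlier on this page but searches the 0.1-0.5 Hz band.

```python
# Sketch of FFT-based respiration-rate estimation; `signal` and `fps` are assumed inputs.
import numpy as np

def estimate_rr_bpm(signal: np.ndarray, fps: float) -> float:
    x = (signal - signal.mean()) * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.5)        # 6-30 breaths per minute
    return freqs[band][np.argmax(spectrum[band])] * 60.0

fps = 30.0
t = np.arange(0, 60, 1.0 / fps)                    # a longer window is needed for RR
sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_rr_bpm(sig, fps)))            # ~15 breaths per minute
```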

Series
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, ISSN 1867-8211
Keywords
Physiological Parameters, Photo plethysmography, Heart Rate, Respiration Rate, Inter-beat-interval, Blood Pressure and Drowsiness.
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-33802 (URN), 10.1007/978-3-319-51234-1_21 (DOI), 000428954100021 (), 2-s2.0-85011269719 (Scopus ID)
Conference
The 3rd EAI International Conference on IoT Technologies for HealthCare HealthyIoT'16, 18 Oct 2016, Västerås, Sweden
Projects
SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2016-11-21. Created: 2016-11-21. Last updated: 2018-12-05. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1547-4386
