mdu.se Publications
Rahman, Hamidur, Doctoral Student. ORCID iD: orcid.org/0000-0002-1547-4386
Publications (10 of 20)
Degas, A., Islam, M. R., Hurter, C., Barua, S., Rahman, H., Poudel, M., . . . Aricó, P. (2022). A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management: Current Trends and Development with Future Research Trajectory. Applied Sciences, 12(3), Article ID 1295.
2022 (English). In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no 3, article id 1295. Article, review/survey (Refereed). Published.
Abstract [en]

Air Traffic Management (ATM) will become more complex in the coming decades due to the growth and increasing complexity of aviation, and it must be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved, and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decision-making in ATM. Nonetheless, Artificial Intelligence (AI), one of the most researched topics in computer science, has not quite reached end users in the ATM domain. In this paper, we analyse the state of the art with regard to the usefulness of AI within the aviation/ATM domain. This includes research work of the last decade on AI in ATM, the extraction of relevant trends and features, and the extraction of representative dimensions. We analysed how eXplainable Artificial Intelligence (XAI) works in general and in ATM, examining where and why XAI is needed, how it is currently provided, and its limitations, and then synthesised the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, illustrated with an example of its application in a scenario in 2030. We conclude that AI systems within ATM need further research to gain acceptance by end users. The development of appropriate XAI methods, including validation by appropriate authorities and end users, is a key issue that needs to be addressed.

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
Air traffic management (ATM), Artificial intelligence (AI), Explainable artificial intelligence (XAI), User-centric XAI (UCXAI)
National Category
Computer Sciences
Identifiers
urn:nbn:se:mdh:diva-57255 (URN), 10.3390/app12031295 (DOI), 000756561800001, 2-s2.0-85123696145 (Scopus ID)
Available from: 2022-02-09. Created: 2022-02-09. Last updated: 2024-04-10. Bibliographically approved.
Rahman, H., D'Cruze, R. S., Ahmed, M. U., Sohlberg, R., Sakao, T. & Funk, P. (2022). Artificial Intelligence-Based Life Cycle Engineering in Industrial Production: A Systematic Literature Review. IEEE Access, 10, 133001-133015
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 133001-133015. Article, review/survey (Refereed). Published.
Abstract [en]

Over the last few years, cases of applying artificial intelligence (AI) to engineering activities aimed at sustainability have been reported. Life Cycle Engineering (LCE) offers the potential to systematically reach higher productivity levels, owing to its holistic perspective and its consideration of economic and environmental targets. To address the current gap towards a more systematic deployment of AI with LCE (AI-LCE), we performed a systematic literature review emphasising three aspects: (1) the most prevalent AI techniques, (2) the LCE subfields currently improved by AI, and (3) the subfields most enhanced by AI. A specific set of inclusion and exclusion criteria was used to identify and select academic papers from several fields, i.e. production, logistics, marketing and supply chain; after the selection process described in the paper, we ended up with 42 scientific papers. The study and analysis show that many AI-LCE papers address Sustainable Development Goals, mainly: Industry, Innovation, and Infrastructure; Sustainable Cities and Communities; and Responsible Consumption and Production. Overall, the papers give a picture of the diverse AI techniques used in LCE. Production design and Maintenance and Repair are the most explored LCE subfields, whereas Logistics and Procurement are the least explored subareas. Research in AI-LCE is concentrated in a few dominating countries, especially countries with strong research funding and a focus on Industry 4.0; Germany stands out in number of publications. The in-depth analysis of selected and relevant scientific papers helps to give a more accurate picture of the area, which enables a more systematic approach to AI-LCE in the future.

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Artificial intelligence, life cycle engineering, machine learning, sustainable development, sustainable development goal
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-62977 (URN), 10.1109/ACCESS.2022.3230637 (DOI), 000905683300001, 2-s2.0-85146250639 (Scopus ID)
Available from: 2023-06-08. Created: 2023-06-08. Last updated: 2023-06-08. Bibliographically approved.
Rahman, H. (2021). Artificial Intelligence for Non-Contact-Based Driver Health Monitoring. (Doctoral dissertation). Västerås: Mälardalen University
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

In clinical situations, a patient's physical state is often monitored by sensors attached to the patient, and medical staff are alerted if the patient's status changes in an undesirable or life-threatening direction. However, in unsupervised situations, such as when driving a vehicle, attaching sensors to the driver is often troublesome, and wired sensors may not produce sufficient signal quality due to factors such as movement and electrical disturbance. Using a camera as a non-contact sensor to extract physiological parameters from video images offers a new paradigm for monitoring a driver's health and mental state. Due to the advanced technical features in modern vehicles, driving is now faster, safer and more comfortable than before. To enhance transport safety (i.e. to avoid unexpected traffic accidents), it is necessary to consider the vehicle driver as part of the traffic environment and thus to monitor the driver's health and mental state. Such a monitoring system is commonly developed based on two approaches: driving-behaviour-based and physiological-parameters-based.

This research work demonstrates a non-contact approach that classifies a driver's cognitive load based on physiological parameters acquired through a camera system and on vehicular data collected from controller area networks (CAN), using image processing, computer vision, machine learning (ML) and deep learning (DL). In this research, a camera is used as a non-contact and pervasive sensor for measuring and monitoring the physiological parameters. The contribution of this research study is four-fold: 1) a feature extraction approach to extract physiological parameters (i.e. heart rate [HR], respiration rate [RR], inter-beat interval [IBI], heart rate variability [HRV] and oxygen saturation [SpO2]) using a camera system under several challenging conditions (i.e. illumination, motion, vibration and movement); 2) feature extraction based on eye-movement parameters (i.e. saccade and fixation); 3) identification of key vehicular parameters and extraction of useful features from lateral speed (SP), steering wheel angle (SWA), steering wheel reversal rate (SWRR), steering wheel torque (SWT), yaw rate (YR), lanex (LAN) and lateral position (LP); 4) investigation of ML and DL algorithms for driver cognitive load classification. Here, ML algorithms (i.e. logistic regression [LR], linear discriminant analysis [LDA], support vector machine [SVM], neural networks [NN], k-nearest neighbours [k-NN] and decision tree [DT]) and DL algorithms (i.e. convolutional neural networks [CNN], long short-term memory [LSTM] networks and autoencoders [AE]) are used.

One of the major contributions of this research work is that physiological parameters were extracted using a camera. According to the results, feature extraction of physiological parameters using a camera achieved the highest correlation coefficient of .96 for both HR and SpO2 compared to a reference system. The Bland-Altman plots showed 95% agreement for the correlation between the camera and the reference wired sensors. For IBI, the achieved quality index was 97.5% considering a 100 ms R-peak error. The correlation coefficients for 13 eye-movement features between the non-contact approach and the reference eye-tracking system ranged from .82 to .95.

For cognitive load classification using both the physiological and vehicular parameters, two separate studies were conducted: Study 1 with a 1-back task and Study 2 with a 2-back task. The highest average accuracy achieved in cognitive load classification was 94% for Study 1 and 82% for Study 2, using the LR algorithm with the HRV parameter. Using the saccade and fixation parameters, the highest average classification accuracy was 92%, achieved with SVM. In both cases, k-fold cross-validation with k = 10 was used for validation. The classification accuracies using CNN, LSTM and autoencoder were 91%, 90% and 90.3%, respectively.
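As an illustration of the evaluation protocol described above, the sketch below runs 10-fold cross-validation of a logistic regression classifier on a synthetic 13-dimensional feature matrix (standing in for the HRV or eye-movement features; the data, labels and model settings are invented for the example, not the thesis code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# synthetic feature matrix: 200 windows x 13 features, binary load label
X = rng.normal(size=(200, 13))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# 10-fold cross-validation, the validation scheme used in the thesis
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"10-fold mean accuracy: {scores.mean():.2f}")
```

The reported per-fold scores make it easy to see whether one fold behaves very differently from the rest, which a single train/test split would hide.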

This research study shows that such a non-contact-based approach, using ML, DL, image processing and computer vision, is suitable for monitoring a driver's cognitive state.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2021
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 330
Keywords
Driver Monitoring, Artificial Intelligence, Machine Learning, Deep Learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-53529 (URN), 978-91-7485-499-2 (ISBN)
Public defence
2021-04-07, Delta and digitally via Zoom, Mälardalens högskola, Västerås, 13:15 (English)
Available from: 2021-02-26. Created: 2021-02-25. Last updated: 2021-04-13. Bibliographically approved.
Rahman, H., Ahmed, M. U., Begum, S., Fridberg, M. & Hoflin, A. (2021). Deep Learning in Remote Sensing: An Application to Detect Snow and Water in Construction Sites. In: Proceedings - 2021 4th International Conference on Artificial Intelligence for Industries, AI4I 2021: . Paper presented at 2021 4th International Conference on Artificial Intelligence for Industries (AI4I), 20-22 Sept. 2021 (pp. 52-56).
2021 (English). In: Proceedings - 2021 4th International Conference on Artificial Intelligence for Industries, AI4I 2021, 2021, p. 52-56. Conference paper, Published paper (Refereed).
Abstract [en]

It is important for a construction and property development company to know the weather conditions in its daily operations. In this paper, a deep learning-based approach is investigated to detect snow and rain conditions at construction sites using drone imagery. A Convolutional Neural Network (CNN) is developed for feature extraction, and classification is performed on those features using machine learning (ML) algorithms. The well-known AlexNet and VGG16 deep learning models are also deployed and tested on the dataset. Results show that a smaller CNN architecture with three convolutional layers was sufficient for extracting features relevant to the classification task compared to the larger state-of-the-art architectures. The proposed model reached a top accuracy of 97.3% in binary classification and 96.5% when also taking rain conditions into consideration. It was also found that ML algorithms, i.e., support vector machine (SVM), logistic regression and k-nearest neighbours, can be used as classifiers on feature maps extracted from CNNs; a top accuracy of 90% was obtained using the SVM algorithm.
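The pipeline summarised above, a small three-convolution CNN used as a feature extractor with an SVM classifying the extracted feature maps, can be sketched as follows. The layer widths, image size and data here are illustrative stand-ins, not the paper's actual architecture or dataset:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallCNN(nn.Module):
    """Three-convolution feature extractor (illustrative layer widths)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        # flatten the final feature maps into one vector per image
        return self.features(x).flatten(1)

torch.manual_seed(0)
images = torch.rand(20, 3, 64, 64)   # stand-in drone image patches
labels = [0] * 10 + [1] * 10         # synthetic snow / no-snow labels

with torch.no_grad():
    feats = SmallCNN()(images).numpy()   # 20 x (32 * 8 * 8) feature matrix

# an SVM then classifies the CNN feature maps, as in the paper
clf = SVC(kernel="linear").fit(feats, labels)
print(feats.shape)
```

In the paper the CNN is trained on the imagery first; here an untrained network only demonstrates the shape of the hand-off between the feature extractor and the SVM.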

Keywords
Support vector machines, Deep learning, Training, Rain, Machine learning algorithms, Snow, Feature extraction, Classification, Convolutional neural networks
National Category
Computer Sciences
Identifiers
urn:nbn:se:mdh:diva-56384 (URN), 10.1109/AI4I51902.2021.00021 (DOI), 2-s2.0-85126069912 (Scopus ID), 978-1-6654-3410-2 (ISBN)
Conference
2021 4th International Conference on Artificial Intelligence for Industries (AI4I), 20-22 Sept. 2021
Available from: 2021-11-09. Created: 2021-11-09. Last updated: 2022-03-18. Bibliographically approved.
Rahman, H., Ahmed, M. U., Barua, S., Funk, P. & Begum, S. (2021). Vision-based driver’s cognitive load classification considering eye movement using machine learning and deep learning. Sensors, 21(23), Article ID 8019.
2021 (English). In: Sensors, E-ISSN 1424-8220, Vol. 21, no 23, article id 8019. Article in journal (Refereed). Published.
Abstract [en]

Due to the advancement of science and technology, modern cars are highly technical, more activity occurs inside the car, and driving is faster; however, statistics show that the number of road fatalities has increased in recent years because of drivers' unsafe behaviour. Therefore, to make the traffic environment safe, it is important to keep the driver alert and awake in both human-driven and autonomous cars. A driver's cognitive load is considered a good indication of alertness, but determining cognitive load is challenging, and wired-sensor solutions are not well accepted in real-world driving scenarios. The recent development of non-contact approaches through image processing, together with decreasing hardware prices, enables new solutions, and several interesting features related to the driver's eyes are currently being explored in research. This paper presents a vision-based method to extract useful parameters from a driver's eye-movement signals, with manual feature extraction based on domain knowledge as well as automatic feature extraction using deep learning architectures. Five machine learning models and three deep learning architectures are developed to classify a driver's cognitive load. The results show that the highest classification accuracies achieved are 92% by the support vector machine model with a linear kernel function and 91% by the convolutional neural network model. This non-contact technology can be a potential contributor to advanced driver assistance systems.
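The manual eye-movement feature extraction mentioned above presumably starts from fixation/saccade segmentation. A standard way to obtain fixations from raw gaze samples is the dispersion-threshold (I-DT) algorithm, sketched below on synthetic gaze data; the thresholds and the choice of I-DT are assumptions for illustration, not necessarily the paper's exact method:

```python
import numpy as np

def idt_fixations(x, y, t, dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection: a window of gaze
    samples is a fixation if it lasts at least min_duration seconds and
    its combined x+y dispersion stays at or below the threshold.
    Returns a list of (start_index, end_index) fixation windows."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        # grow the window to the minimum fixation duration
        j = i
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        def disp(k):
            return (x[i:k + 1].max() - x[i:k + 1].min() +
                    y[i:k + 1].max() - y[i:k + 1].min())
        if disp(j) <= dispersion:
            # extend the window while dispersion stays under the threshold
            while j < n and disp(j) <= dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# 100 Hz gaze trace: 0.5 s steady fixation, then a saccade-like sweep
t = np.arange(100) / 100.0
x = np.concatenate([np.zeros(50), np.linspace(0, 50, 50)])
y = np.zeros(100)
print(idt_fixations(x, y, t))
```

Saccade segments then fall out as the gaps between detected fixations, from which amplitude and duration features can be computed.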

Place, publisher, year, edition, pages
MDPI, 2021
Keywords
Cognitive load, Eye-movement, Machine learning, Non-contact
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-56825 (URN), 10.3390/s21238019 (DOI), 000735129000001, 2-s2.0-85120864501 (Scopus ID)
Available from: 2021-12-23. Created: 2021-12-23. Last updated: 2022-02-10. Bibliographically approved.
Ahmed, M. U., Begum, S., Gestlöf, R., Rahman, H. & Sörman, J. (2020). Machine learning for cognitive load classification: A case study on contact-free approach. In: IFIP Advances in Information and Communication Technology: . Paper presented at 5 June 2020 through 7 June 2020 (pp. 31-42). Springer
2020 (English). In: IFIP Advances in Information and Communication Technology, Springer, 2020, p. 31-42. Conference paper, Published paper (Refereed).
Abstract [en]

The most common way of measuring Cognitive Load (CL) is to use physiological sensor signals, e.g., Electroencephalography (EEG) or Electrocardiogram (ECG). However, these signals are problematic in situations such as dynamic moving environments, where the user cannot relax with all the sensors attached to the body and where significant noise enters the signals. This paper presents a case study using a contact-free approach for CL classification based on Heart Rate Variability (HRV) collected from the ECG signal. Here, a contact-free approach, i.e., a camera-based system, is compared with a contact-based approach, i.e., the Shimmer GSR+ system, in detecting CL. To classify CL, two different Machine Learning (ML) algorithms, Support Vector Machine (SVM) and k-Nearest-Neighbour (k-NN), were applied. Based on the Inter-Beat Interval (IBI) values gathered from both systems, 13 different HRV features were extracted in a controlled study to determine three levels of CL, i.e., S0: low CL, S1: normal CL and S2: high CL. To obtain the best classification accuracy with the ML algorithms, different optimisations such as kernel functions were chosen with different feature matrices for both binary and combined-class classification. According to the results, the highest average classification accuracy achieved was 84% on the binary classification, i.e. S0 vs S2, using k-NN. The highest F1 score, 88%, was achieved using SVM for the combined class considering S0 vs (S1 and S2) with the contact-free approach, i.e. the camera system. Thus, the ML algorithms achieved higher classification accuracy with the contact-free approach than with the contact-based approach.
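The 13 HRV features used in the study are not enumerated in the abstract, but standard time-domain HRV features computed from IBI values look like the following sketch (the feature names and the example IBI series are generic illustrations, not the paper's exact feature set):

```python
import numpy as np

def hrv_time_features(ibi_ms):
    """Common time-domain HRV features from a series of inter-beat
    intervals in milliseconds."""
    ibi = np.asarray(ibi_ms, dtype=float)
    d = np.diff(ibi)  # successive IBI differences
    return {
        "mean_hr_bpm": 60000.0 / ibi.mean(),           # mean heart rate
        "sdnn_ms": float(ibi.std(ddof=1)),             # overall variability
        "rmssd_ms": float(np.sqrt(np.mean(d ** 2))),   # beat-to-beat variability
        "pnn50_pct": float(np.mean(np.abs(d) > 50) * 100.0),
    }

print(hrv_time_features([800, 810, 790, 805, 795, 800]))
```

Feature vectors like this, computed per analysis window from both the camera and the Shimmer IBI streams, are what the SVM and k-NN classifiers would consume.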

Place, publisher, year, edition, pages
Springer, 2020
Keywords
Cognitive Load (CL), Contact-free approach, k-Nearest-Neighbor (k-NN), Machine Learning (ML), Support Vector Machines (SVM), Cameras, Electrocardiography, Electroencephalography, Electrophysiology, Learning systems, Nearest neighbor search, Psychophysiology, Support vector machines, Binary classification, Classification accuracy, Cognitive loads, Feature matrices, Heart rate variability, K-nearest neighbors, Kernel function, Physiological sensors, Biomedical signal processing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-48938 (URN), 10.1007/978-3-030-49161-1_3 (DOI), 2-s2.0-85086243083 (Scopus ID), 9783030491604 (ISBN)
Conference
5 June 2020 through 7 June 2020
Available from: 2020-06-18. Created: 2020-06-18. Last updated: 2020-06-18. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2020). Non-Contact Physiological Parameters Extraction Using Facial Video Considering Illumination, Motion, Movement and Vibration. IEEE Transactions on Biomedical Engineering, 67(1), 88-98, Article ID 8715455.
2020 (English). In: IEEE Transactions on Biomedical Engineering, ISSN 0018-9294, E-ISSN 1558-2531, Vol. 67, no 1, p. 88-98, article id 8715455. Article in journal (Refereed). Published.
Abstract [en]

Objective: In this paper, four physiological parameters, i.e., heart rate (HR), inter-beat interval (IBI), heart rate variability (HRV), and oxygen saturation (SpO2), are extracted from facial video recordings. Methods: Facial videos were recorded for 10 min each in 30 test subjects while driving a simulator. Four regions of interest (ROIs) are automatically selected in each facial image frame based on 66 facial landmarks. Red-green-blue color signals are extracted from the ROIs, and the four physiological parameters are extracted from the color signals. For the evaluation, physiological parameters are also recorded simultaneously using a traditional sensor, 'cStress,' attached to the hands and fingers of the test subjects. Results: The Bland-Altman plots show 95% agreement between the camera system and 'cStress,' with the highest correlation coefficient R = 0.96 for both HR and SpO2. The quality index is estimated for IBI considering a 100 ms R-peak error; the accumulated percentage achieved is 97.5%. HRV features in both time and frequency domains are compared, and the highest correlation coefficient achieved is 0.93. A one-way analysis of variance test shows that there are no statistically significant differences between the measurements by the camera and by the reference sensors. Conclusion: These results show a high degree of accuracy of HR, IBI, HRV, and SpO2 extraction from facial image sequences. Significance: The proposed non-contact approach could broaden the dimensionality of physiological parameter extraction using cameras. The proposed method could be applied in driver monitoring applications under realistic conditions, i.e., illumination, motion, movement, and vibration.
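The abstract does not spell out how the ROI color signals are turned into HR, but a common baseline for camera-based (rPPG) heart-rate estimation is to take the dominant spectral peak of the mean green-channel ROI trace within the plausible cardiac band. A minimal sketch under that assumption:

```python
import numpy as np

def estimate_hr_bpm(green_trace, fs):
    """Estimate heart rate from a mean green-channel ROI trace by locating
    the dominant spectral peak in the 0.7-4.0 Hz (42-240 bpm) cardiac band."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# 10 s synthetic pulse trace at 30 fps with a 1.2 Hz (~72 bpm) component
t = np.arange(300) / 30.0
print(estimate_hr_bpm(np.sin(2 * np.pi * 1.2 * t), fs=30))
```

The frequency resolution here is fs/N, so longer windows sharpen the HR estimate; real facial traces would additionally need motion and illumination compensation, which is the hard part the paper addresses.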

Place, publisher, year, edition, pages
IEEE Computer Society, 2020
Keywords
Ambient illumination, driver monitoring, motion, movement, non-contact, physiological parameters, vibration, Cameras, Extraction, Heart, Video recording, Physiological models
National Category
Control Engineering
Identifiers
urn:nbn:se:mdh:diva-46689 (URN), 10.1109/TBME.2019.2908349 (DOI), 000505526300009, 2-s2.0-85077175941 (Scopus ID)
Available from: 2020-01-09. Created: 2020-01-09. Last updated: 2021-02-25. Bibliographically approved.
Rahman, H., Ahmed, M. U., Barua, S. & Begum, S. (2020). Non-contact-based driver's cognitive load classification using physiological and vehicular parameters. Biomedical Signal Processing and Control, 55, Article ID 101634.
2020 (English). In: Biomedical Signal Processing and Control, ISSN 1746-8094, E-ISSN 1746-8108, Vol. 55, article id 101634. Article in journal (Refereed). Published.
Abstract [en]

Classification of cognitive load for vehicular drivers is a complex task due to the underlying challenges of the dynamic driving environment. Many previous works have shown that physiological sensor signals or vehicular data can be a reliable source for quantifying cognitive load. However, in driving situations, one of the biggest challenges is to use a sensor source that can provide accurate information without interrupting the driving task. In this paper, instead of traditional wire-based sensors, non-contact camera and vehicle data are used, which have no physical contact with the driver and do not interrupt driving. Here, four machine learning algorithms, logistic regression (LR), support vector machine (SVM), linear discriminant analysis (LDA) and neural networks (NN), are investigated to classify cognitive load using data collected in a driving simulator study. Physiological parameters are extracted from facial video images, and vehicular parameters are collected from controller area networks (CAN). The data collection was performed in close collaboration with industrial partners in two separate studies, in which study 1 was designed with a 1-back task and study 2 with both 1-back and 2-back tasks. The goal of the experiment is to investigate how accurately the machine learning algorithms can classify drivers' cognitive load based on the extracted features in complex, dynamic driving environments. According to the results, for the physiological parameters extracted from the facial videos, the LR model with a logistic function outperforms the other three classification methods. In study 1, the achieved average accuracy for the LR classifier is 94%, and in study 2 the average accuracy is 82%. In addition, the classification accuracy for the collected physiological parameters was compared with reference wired-sensor signals. The classification accuracies of the sensor and the camera are very similar; however, better accuracy is achieved with the camera data, which contain fewer artefacts than the sensor data.

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Non-contact, Physiological parameters, Vehicular parameters, Cognitive load, Classification, Logistic regression, Support vector machine, Decision tree
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-46634 (URN), 10.1016/j.bspc.2019.101634 (DOI), 000502893200022, 2-s2.0-85071533851 (Scopus ID)
Available from: 2020-01-02. Created: 2020-01-02. Last updated: 2021-02-25. Bibliographically approved.
Ahmed, M. U., Altarabichi, M. G., Begum, S., Ginsberg, F., Glaes, R., Östgren, M., . . . Sorensen, M. (2019). A vision-based indoor navigation system for individuals with visual impairment. International Journal of Artificial Intelligence, 17(2), 188-201
2019 (English). In: International Journal of Artificial Intelligence, E-ISSN 0974-0635, Vol. 17, no 2, p. 188-201. Article in journal (Refereed). Published.
Abstract [en]

Navigation and orientation in an indoor environment are challenging tasks for visually impaired people. This paper proposes a portable vision-based system to support visually impaired persons in their daily activities. Machine learning algorithms are used for obstacle avoidance and object recognition. The system is intended to be used independently, easily and comfortably, without human assistance. It assists in obstacle avoidance using cameras and gives voice-message feedback, using a pre-trained YOLO neural network for object recognition. In other parts of the system, a floor-plane estimation algorithm is proposed for obstacle avoidance, and fuzzy logic is used to prioritise the detected objects in a frame and generate alerts to the user about possible risks. The system is implemented using the Robot Operating System (ROS) for communication, on an Nvidia Jetson TX2 with a ZED stereo camera for depth calculation and headphones for user feedback, and it can accommodate different setups of hardware components. The parts of the system give varying results when evaluated, so a large-scale evaluation is needed in the future to mature the system towards a commercialised product in this area.
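As a toy illustration of the fuzzy prioritisation idea described above, the sketch below scores detected objects by two ramp-shaped memberships, proximity (from the stereo depth) and centrality in the frame. The membership shapes, thresholds and detections are invented for the example and are not the paper's rule base:

```python
def risk_score(depth_m, center_offset):
    """Toy fuzzy-style prioritisation: closer objects and objects nearer
    the centre of the frame get a higher risk score in [0, 1]."""
    # "near" ramps from 1 at 0 m down to 0 at 3 m or beyond
    near = max(0.0, min(1.0, (3.0 - depth_m) / 3.0))
    # "central" ramps from 1 at the frame centre to 0 at the edges
    central = max(0.0, 1.0 - abs(center_offset))  # offset in [-1, 1]
    return near * central

# (label, depth in metres, horizontal offset from frame centre)
detections = [("door", 4.0, 0.1), ("chair", 1.0, 0.0), ("person", 1.5, 0.8)]
ranked = sorted(detections, key=lambda d: risk_score(d[1], d[2]), reverse=True)
print(ranked[0][0])  # the chair, 1 m ahead and dead centre, ranks first
```

The top-ranked detection is what the voice feedback would announce first, keeping the audio channel from being flooded by every object YOLO detects in a frame.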

Place, publisher, year, edition, pages
CESER Publications, 2019
Keywords
Deep learning, Depth estimation, Indoor navigation, Object detection, Object recognition
National Category
Robotics, Computer Sciences, Computer Systems, Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:mdh:diva-45835 (URN), 2-s2.0-85073347243 (Scopus ID)
Available from: 2019-10-25. Created: 2019-10-25. Last updated: 2023-12-13. Bibliographically approved.
Rahman, H., Ahmed, M. U. & Begum, S. (2018). Deep Learning based Person Identification using Facial Images. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225: . Paper presented at 4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France (pp. 111-115).
2018 (English). In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Volume 225, 2018, p. 111-115. Conference paper, Published paper (Refereed).
Abstract [en]

Person identification is an important task for many applications, for example in security. A person can be identified using fingerprints, voice, facial images or even a DNA test. However, person identification using facial images is one of the most popular techniques, as it is non-contact and easy to implement, and it is a research hotspot in the fields of pattern recognition and machine vision. In this paper, a deep learning-based person identification system using facial images is proposed, which shows higher accuracy than a traditional machine learning method, i.e. the Support Vector Machine.

Keywords
Face recognition, Person identification, Deep learning
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37091 (URN), 10.1007/978-3-319-76213-5_17 (DOI), 000476922000017, 2-s2.0-85042545019 (Scopus ID), 9783319762128 (ISBN)
Conference
4th EAI International Conference on IoT Technologies for HealthCare HealthyIOT'17, 24 Oct 2017, Angers, France
Projects
SafeDriver: A Real Time Driver's State Monitoring and Prediction System
Available from: 2017-10-26. Created: 2017-10-26. Last updated: 2019-08-08. Bibliographically approved.