Publications (10 of 190)
Abdullah, S., Hafid, A., Folke, M., Lindén, M. & Kristoffersson, A. (2023). A Novel Fiducial Point Extraction Algorithm to Detect C and D Points from the Acceleration Photoplethysmogram (CnD). Electronics, 12(5), Article ID 1174.
2023 (English). In: Electronics, E-ISSN 2079-9292, Vol. 12, no. 5, article id 1174. Article in journal (Refereed). Published.
Abstract [en]

The extraction of relevant features from the photoplethysmography signal for estimating certain physiological parameters is a challenging task, and various feature extraction methods have been proposed in the literature. In this study, we present CnD, a novel fiducial point extraction algorithm that detects the c and d points in the acceleration photoplethysmogram (APG). The algorithm allows for the application of various pre-processing techniques, such as filtering, smoothing, and baseline drift removal; the calculation of the first, second, and third photoplethysmography derivatives; and the implementation of algorithms for detecting and highlighting APG fiducial points. An evaluation of CnD indicated a high level of accuracy in identifying fiducial points: out of 438 APG c and d points, the algorithm correctly identified 434, an accuracy rate of 99%. This level of accuracy was consistent across all test cases, with low error rates. These findings indicate that the algorithm has high potential for practical use as a reliable method for detecting fiducial points. It thereby provides a valuable new resource for researchers and healthcare professionals working in the analysis of photoplethysmography signals.
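The c and d points lie on the second PPG derivative, so the first processing step the abstract describes is successive differentiation. A minimal sketch of that step, with illustrative names only (this is not CnD's actual code, which is not reproduced here):

```python
import numpy as np

def ppg_derivatives(ppg, fs):
    """Return the first (velocity), second (acceleration) and third
    derivatives of a PPG signal sampled at fs Hz."""
    dt = 1.0 / fs
    vpg = np.gradient(ppg, dt)   # velocity photoplethysmogram (VPG)
    apg = np.gradient(vpg, dt)   # acceleration photoplethysmogram (APG)
    jpg = np.gradient(apg, dt)   # third derivative
    return vpg, apg, jpg

# Toy stand-in for a PPG beat: a 1 Hz sinusoid sampled at 100 Hz
fs = 100
t = np.arange(0.0, 2.0, 1.0 / fs)
ppg = np.sin(2 * np.pi * t)
vpg, apg, jpg = ppg_derivatives(ppg, fs)
```

Fiducial detection then amounts to locating extrema and zero crossings on `apg`; the c and d points are, respectively, a local maximum and the following local minimum between the APG's b and e waves.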

National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-62004 (URN) · 10.3390/electronics12051174 (DOI) · 000947098400001 () · 2-s2.0-85149747017 (Scopus ID)
Available from: 2023-03-03. Created: 2023-03-03. Last updated: 2023-04-12. Bibliographically approved.
Abdelakram, H., Abdullah, S., Lindén, M., Kristoffersson, A. & Folke, M. (2023). Impact of Activities in Daily Living on Electrical Bioimpedance Measurements for Bladder Monitoring. In: : . Paper presented at 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), 22-24 June 2023, L'Aquila, Italy.
2023 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Accurate bladder monitoring is critical in the management of conditions such as urinary incontinence, voiding dysfunction, and spinal cord injuries. Electrical bioimpedance (EBI) has emerged as a cost-effective and non-invasive approach to monitoring bladder activity in daily life, with particular relevance to patient groups who require measurement of bladder urine volume (BUV) to prevent urinary leakage. However, the impact of activities in daily living (ADLs) on EBI measurements remains incompletely characterized. In this study, we investigated the impact of normal ADLs such as sitting, standing, and walking on EBI measurements using the MAX30009evkit system with four electrodes placed on the lower abdominal area. We developed an algorithm to identify artifacts caused by the different activities from the EBI signals. Our findings demonstrate that various physical activities clearly affected the EBI measurements, indicating the necessity of considering them during bladder monitoring with EBI technology performed during physical activity (or normal ADLs). We also observed that several specific activities could be distinguished based on their impedance values and waveform shapes. Thus, our results provide a better understanding of the impact of physical activity on EBI measurements and highlight the importance of considering such physical activities during EBI measurements in order to enhance the reliability and effectiveness of EBI technology for bladder monitoring.
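The abstract does not spell out the artifact-identification algorithm itself. As a hedged illustration of the general idea only, a sliding-window variance threshold is one common way to flag motion-corrupted segments in an impedance trace:

```python
import numpy as np

def flag_motion_artifacts(z, win=50, k=3.0):
    """Flag samples whose local variance exceeds k times the median
    window variance -- a simple motion-artifact heuristic, not the
    authors' published algorithm."""
    var = np.array([np.var(z[max(0, i - win):i + win]) for i in range(len(z))])
    return var > k * np.median(var)

# Synthetic impedance trace: quiet baseline with one movement burst
rng = np.random.default_rng(0)
z = 50.0 + 0.1 * rng.standard_normal(1000)   # e.g. sitting still
z[400:450] += 5.0 * rng.standard_normal(50)  # e.g. standing up / walking
mask = flag_motion_artifacts(z)
```

Flagged segments can then be excluded from, or corrected before, bladder-volume estimation.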

National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-64033 (URN) · 10.1109/CBMS58004.2023.00316 (DOI) · 001037777900135 () · 2-s2.0-85166470920 (Scopus ID) · 979-8-3503-1224-9 (ISBN)
Conference
2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), 22-24 June 2023, L'Aquila, Italy
Available from: 2023-08-16. Created: 2023-08-16. Last updated: 2023-10-25. Bibliographically approved.
Abdullah, S., Abdelakram, H., Lindén, M., Folke, M. & Kristoffersson, A. (2023). Machine Learning-Based Classification of Hypertension using CnD Features from Acceleration Photoplethysmography and Clinical Parameters. In: Proceedings - IEEE Symposium on Computer-Based Medical Systems: . Paper presented at 36th IEEE International Symposium on Computer-Based Medical Systems, CBMS 2023, Aquila, 22 June 2023 through 24 June 2023 (pp. 923-924). Institute of Electrical and Electronics Engineers Inc.
2023 (English). In: Proceedings - IEEE Symposium on Computer-Based Medical Systems, Institute of Electrical and Electronics Engineers Inc., 2023, p. 923-924. Conference paper, Published paper (Refereed).
Abstract [en]

Cardiovascular diseases (CVDs) are a leading cause of death worldwide, and hypertension is a major risk factor for developing CVDs. Early detection and treatment of hypertension can significantly reduce the risk of developing CVDs and related complications. In this study, a linear SVM machine learning model was used to classify subjects as normal or at different stages of hypertension. The features combined statistical parameters derived from acceleration photoplethysmography waveforms with clinical parameters extracted from a publicly available dataset. The model achieved an overall accuracy of 87.50% on the validation dataset and 95.35% on the test dataset. The model's true positive rate and positive predictivity were high in all classes, indicating high accuracy and precision. This study represents the first attempt to classify cardiovascular conditions using a combination of acceleration photoplethysmogram (APG) features and clinical parameters, and it demonstrates the potential of APG analysis as a valuable tool for the early detection of hypertension.
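As a rough sketch of the classification setup described above (not the authors' pipeline: the feature values are synthetic, and the from-scratch hinge-loss trainer below is an illustrative stand-in for a library linear SVM):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM by sub-gradient descent on the hinge loss.
    Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: push
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # margin met: only shrink w
                w -= lr * lam * w
    return w, b

# Hypothetical feature vectors: two APG-derived statistics concatenated
# with two clinical values per subject (the real feature set is in the paper)
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((50, 4)) + 2.0,    # "hypertensive"
               rng.standard_normal((50, 4)) - 2.0])   # "normal"
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

The multi-stage classification reported in the paper would extend this to a one-vs-rest scheme over the hypertension stages.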

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Keywords
acceleration photoplethysmography, cardiovascular, fiducial points, hypertension, PPG, Acceleration, Classification (of information), Learning systems, Statistical tests, Support vector machines, Cardiovascular disease, Causes of death, Clinical parameters, Machine-learning, Photoplethysmogram, Photoplethysmography
National Category
Cardiac and Cardiovascular Systems; Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-63964 (URN) · 10.1109/CBMS58004.2023.00344 (DOI) · 001037777900162 () · 2-s2.0-85166469701 (Scopus ID) · 9798350312249 (ISBN)
Conference
36th IEEE International Symposium on Computer-Based Medical Systems, CBMS 2023, Aquila, 22 June 2023 through 24 June 2023
Available from: 2023-08-16. Created: 2023-08-16. Last updated: 2023-10-25. Bibliographically approved.
Abdullah, S., Hafid, A., Folke, M., Lindén, M. & Kristoffersson, A. (2023). PPGFeat: a novel MATLAB toolbox for extracting PPG fiducial points. Frontiers in Bioengineering and Biotechnology, 11, Article ID 1199604.
2023 (English). In: Frontiers in Bioengineering and Biotechnology, E-ISSN 2296-4185, Vol. 11, article id 1199604. Article in journal (Refereed). Published.
Abstract [en]

Photoplethysmography is a non-invasive technique used for measuring several vital signs and for identifying individuals at increased risk of disease. Its working principle is to detect changes in blood volume in the microvasculature of the skin through the absorption of light. The extraction of relevant features from the photoplethysmography signal for estimating certain physiological parameters is a challenging task, and various feature extraction methods have been proposed in the literature. In this work, we present PPGFeat, a novel MATLAB toolbox supporting the analysis of raw photoplethysmography waveform data. PPGFeat allows for the application of various preprocessing techniques, such as filtering, smoothing, and removal of baseline drift; the calculation of photoplethysmography derivatives; and the implementation of algorithms for detecting and highlighting photoplethysmography fiducial points. PPGFeat includes a graphical user interface allowing users to perform various operations on photoplethysmography signals and to identify, and if required adjust, the fiducial points. Evaluating PPGFeat's performance in identifying the fiducial points in the publicly available PPG-BP dataset resulted in an overall accuracy of 99%, with 3038/3066 fiducial points correctly identified. PPGFeat significantly reduces the risk of inaccurately identified fiducial points, providing a valuable new resource for researchers analyzing photoplethysmography signals.
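One of the preprocessing steps the toolbox offers, baseline drift removal, can be sketched with a simple moving-average detrend. This is an illustrative stand-in in Python; PPGFeat itself is a MATLAB toolbox and its actual implementation may differ:

```python
import numpy as np

def remove_baseline_drift(ppg, fs, win_s=1.0):
    """Estimate the slow baseline with a moving average over win_s
    seconds and subtract it from the raw signal."""
    win = max(1, int(win_s * fs))
    baseline = np.convolve(ppg, np.ones(win) / win, mode="same")
    return ppg - baseline

# Toy signal: a 1.5 Hz "pulse" riding on a large 0.1 Hz respiratory drift
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 2.0 * np.sin(2 * np.pi * 0.1 * t)
clean = remove_baseline_drift(ppg, fs)
```

The 1 s window passes the slow drift into the baseline estimate while largely averaging out the pulse, so subtracting it leaves the pulsatile component for derivative and fiducial-point analysis.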

Keywords
photoplethysmography, PPG features, fiducial points, MATLAB, toolbox, signal processing, acceleration photoplethysmography, velocity photoplethysmography
National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-63035 (URN) · 10.3389/fbioe.2023.1199604 (DOI) · 001020124900001 () · 2-s2.0-85163601193 (Scopus ID)
Available from: 2023-06-09. Created: 2023-06-09. Last updated: 2023-07-26. Bibliographically approved.
Abdelakram, H., Difallah, S., Alves, C., Abdullah, S., Folke, M., Lindén, M. & Kristoffersson, A. (2023). State of the Art of Non-Invasive Technologies for Bladder Monitoring: A Scoping Review. Sensors, 23(5), Article ID 2758.
2023 (English). In: Sensors, E-ISSN 1424-8220, Vol. 23, no. 5, article id 2758. Article, review/survey (Refereed). Published.
Abstract [en]

Bladder monitoring, including urinary incontinence management and bladder urinary volume monitoring, is a vital part of urological care. Urinary incontinence is a common medical condition affecting the quality of life of more than 420 million people worldwide, and bladder urinary volume is an important indicator of bladder function and health. Previous studies have examined non-invasive techniques for urinary incontinence management, bladder activity monitoring, and bladder urine volume monitoring. This scoping review outlines the state of the art in bladder monitoring, focusing on recent developments in smart wearable incontinence-care devices and the latest technologies for non-invasive bladder urine volume monitoring using ultrasound, optical, and electrical bioimpedance techniques. The results are promising, and their application will improve the well-being of people suffering from neurogenic bladder dysfunction and the management of urinary incontinence. The latest research advances in bladder urinary volume monitoring and urinary incontinence management have significantly improved existing market products and solutions and will enable the development of more effective future solutions.

National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-62006 (URN) · 10.3390/s23052758 (DOI) · 000947664800001 () · 36904965 (PubMedID) · 2-s2.0-85149769899 (Scopus ID)
Available from: 2023-03-03. Created: 2023-03-03. Last updated: 2023-05-10. Bibliographically approved.
Kristoffersson, A. & Lindén, M. (2022). A Systematic Review of Wearable Sensors for Monitoring Physical Activity. Sensors, 22(2), Article ID 573.
2022 (English). In: Sensors, E-ISSN 1424-8220, Vol. 22, no. 2, article id 573. Article, review/survey (Refereed). Published.
Abstract [en]

This article reviews the use of wearable sensors for the monitoring of physical activity (PA) for different purposes, including assessment of gait and balance, prevention and/or detection of falls, recognition of various PAs, conduction and assessment of rehabilitation exercises and monitoring of neurological disease progression. The article provides in-depth information on the retrieved articles and discusses study shortcomings related to demographic factors, i.e., age, gender, healthy participants vs patients, and study conditions. It is well known that motion patterns change with age and the onset of illnesses, and that the risk of falling increases with age. Yet, studies including older persons are rare. Gender distribution was not even provided in several studies, and others included only, or a majority of, men. Another shortcoming is that none of the studies were conducted in real-life conditions. Hence, there is still important work to be done in order to increase the usefulness of wearable sensors in these areas. The article highlights flaws in how studies based on previously collected datasets report on study samples and the data collected, which makes the validity and generalizability of those studies low. Exceptions exist, such as the promising recently reported open dataset FallAllD, wherein a longitudinal study with older adults is ongoing.

National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-57024 (URN) · 10.3390/s22020573 (DOI) · 000758446500001 () · 2-s2.0-85122671604 (Scopus ID)
Funder
Knowledge Foundation, 20180158
Note

This research was conducted within the scope of the ESS-H+ project (Embedded Sensor Systems for Health Plus), funded by the Swedish Knowledge Foundation (project number: 20180158).

Available from: 2022-01-21. Created: 2022-01-21. Last updated: 2022-03-09. Bibliographically approved.
Åkerberg, A., Arwald, J., Söderlund, A. & Lindén, M. (2022). An Approach to a Novel Device Agnostic Model Illustrating the Relative Change in Physical Behavior Over Time to Support Behavioral Change. Journal of Technology in Behavioral Science, 7(2), 240-251.
2022 (English). In: Journal of Technology in Behavioral Science, ISSN 2366-5963, Vol. 7, no. 2, p. 240-251. Article in journal (Refereed). Published.
Abstract [en]

Today, there is a lack of useful visual presentations of data showing progress over long time periods for users of physical activity self-monitoring devices. The aim of this paper was to present a novel theoretical model that describes the relative change in physical behavior over time and to provide examples of model application with previously collected data. Physical behavior, which includes both sedentary behavior and physical activity, was categorized into four dimensions and further processed and adjusted to fit the novel model. The model was visualized both theoretically and by using example data for two out of 20 participants, illustrating the relative change compared to baseline and trendlines for all dimensions. This approach to a novel device agnostic model can visualize the data over time and is intended to be used on an individual basis by users that need support for physical behavioral change. The model, which is based on earlier research, has flexibility and was developed to be used as a complement for data processing, to future and currently available self-monitoring devices within this arena. In the future, the novel model should be studied to see if it is valid, tested with larger samples over longer study periods, and tested for use with other self-monitoring devices to ensure its usefulness and trustworthiness.
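The core computation behind such a model, each dimension's relative change versus a baseline measurement, reduces to a one-liner. Names and numbers below are illustrative, not taken from the paper:

```python
def relative_change(series, baseline):
    """Percent change of each observation relative to the baseline value,
    suitable for plotting a per-dimension trendline."""
    return [100.0 * (v - baseline) / baseline for v in series]

# Hypothetical weekly step counts for one participant
steps = [5000, 5500, 6000, 4500]
trend = relative_change(steps, baseline=5000)   # [0.0, 10.0, 20.0, -10.0]
```

Because the output is a percentage rather than a raw count, the same scale works regardless of which self-monitoring device produced the data, which is what makes the model device agnostic.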

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Behavioral change, Dimension, Model, Physical activity, Physical behavior, Sedentary behavior
National Category
Medical and Health Sciences
Identifiers
urn:nbn:se:mdh:diva-60390 (URN) · 10.1007/s41347-022-00246-6 (DOI) · 2-s2.0-85139797621 (Scopus ID)
Available from: 2022-10-26. Created: 2022-10-26. Last updated: 2022-10-26. Bibliographically approved.
Kaur, R., GholamHosseini, H., Sinha, R. & Lindén, M. (2022). Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Medical Imaging, 22(1), Article ID 103.
2022 (English). In: BMC Medical Imaging, ISSN 1471-2342, E-ISSN 1471-2342, Vol. 22, no. 1, article id 103. Article in journal (Refereed). Published.
Abstract [en]

Background: Melanoma is the most dangerous and aggressive form of skin cancer, exhibiting a high mortality rate worldwide. Biopsy and histopathological analysis are standard procedures for skin cancer detection and prevention in clinical settings. A significant step in the diagnosis process is developing a deep understanding of the patterns, size, color, and structure of lesions from dermatoscope images of the affected area. However, manual segmentation of the lesion region is time-consuming because the lesion evolves and changes shape over time, making its prediction challenging. Moreover, melanoma is difficult to predict at an early stage because it closely resembles other skin lesion types that are not malignant; automatic segmentation techniques are therefore required to build a computer-aided system for accurate and timely detection. Methods: As deep learning approaches have gained significant attention in recent years due to their remarkable performance, in this work we propose a novel convolutional neural network (CNN) framework based on atrous convolutions for automatic lesion segmentation. The architecture builds on atrous/dilated convolutions, which are effective for semantic segmentation. A deep neural network is designed from scratch from building blocks consisting of convolutional, batch normalization, and leaky ReLU layers, with fine-tuned hyperparameters contributing to higher performance. Results: The network was tested on three benchmark datasets provided by the International Skin Imaging Collaboration (ISIC): ISIC 2016, ISIC 2017, and ISIC 2018. The proposed network achieved average Jaccard indices of 90.4% on ISIC 2016, 81.8% on ISIC 2017, and 89.1% on ISIC 2018, higher than the scores of the top three winners of the ISIC challenge and other state-of-the-art methods.
The model also successfully extracts lesions from the whole image in a single pass, in less time and with no pre-processing step. Conclusion: The network performs accurate lesion segmentation on the adopted datasets.
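The atrous/dilated convolutions at the heart of the proposed architecture enlarge the receptive field without adding weights. A toy 1-D version to illustrate the dilation mechanic only (the paper's network uses 2-D convolutional layers, not this function):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D dilated (atrous) filtering, cross-correlation form:
    the k taps are spaced `dilation` samples apart, so the effective
    receptive field is (k - 1) * dilation + 1."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
taps = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, taps, dilation=1)   # receptive field 3
y2 = dilated_conv1d(x, taps, dilation=2)   # receptive field 5, same 3 weights
```

Stacking layers with growing dilation rates lets a segmentation network see large context around each pixel while keeping the parameter count low.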

Place, publisher, year, edition, pages
BMC, 2022
Keywords
Skin cancer, Lesion segmentation, CNN, Deep learning
National Category
Clinical Medicine
Identifiers
urn:nbn:se:mdh:diva-58622 (URN) · 10.1186/s12880-022-00829-y (DOI) · 000802078700003 () · 2-s2.0-85130898555 (Scopus ID)
Available from: 2022-06-08. Created: 2022-06-08. Last updated: 2022-06-15. Bibliographically approved.
Lindén, M., Kristoffersson, A. & Björkman, M. (2022). Embedded Sensor Systems for Health – experiences from over nine years research in collaboration with industry and healthcare. In: Medicinteknikdagarna 2022: Abstracts. Paper presented at Medicinteknikdagarna 2022, Luleå, Sweden, 4-6 October, 2022 (pp. 25-25).
2022 (English). In: Medicinteknikdagarna 2022: Abstracts, 2022, p. 25-25. Conference paper, Oral presentation with published abstract (Refereed).
National Category
Medical Engineering
Identifiers
urn:nbn:se:mdh:diva-60576 (URN)
Conference
Medicinteknikdagarna 2022, Luleå, Sweden, 4-6 October, 2022
Available from: 2022-11-07. Created: 2022-11-07. Last updated: 2022-11-07. Bibliographically approved.
Kaur, R., Gholamhosseini, H., Sinha, R. & Lindén, M. (2022). Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. Sensors, 22(3), Article ID 1134.
2022 (English). In: Sensors, E-ISSN 1424-8220, Vol. 22, no. 3, article id 1134. Article in journal (Refereed). Published.
Abstract [en]

Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, using a deep learning approach as a machine vision tool can overcome some challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers that are responsible for extracting low- to high-level features of the skin images in a unique fashion. Other vital criteria in the design of the DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC 2017, and ISIC 2020). We evaluated the model on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with other state-of-the-art networks. This approach could therefore provide a less complex and advanced framework for automating the melanoma diagnostic process and expediting identification, helping to save lives.
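The evaluation metrics listed above follow directly from binary confusion-matrix counts. A quick reference implementation (the counts below are made up for illustration, not the paper's results):

```python
def clf_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall (sensitivity), specificity and F1-score
    from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, spec, f1

# Hypothetical counts for a melanoma-vs-benign test split
acc, prec, rec, spec, f1 = clf_metrics(tp=80, fp=10, fn=20, tn=90)
```

Reporting specificity alongside recall matters here because the two classes (malignant vs. benign) carry very different misclassification costs.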

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
Classification, Deep convolutional neural networks, Melanoma, Skin cancer, Classification (of information), Complex networks, Convolution, Convolutional neural networks, Deep neural networks, Dermatology, Diseases, Image classification, Dermoscopic images, High-level features, Learning approach, Low-to-high, Machine-vision, Melanoma detection, Skin cancers, Skin images, Skin samples, Vision tools, Oncology
National Category
Computer Sciences
Identifiers
urn:nbn:se:mdh:diva-57254 (URN) · 10.3390/s22031134 (DOI) · 000760185700001 () · 2-s2.0-85123859617 (Scopus ID)
Available from: 2022-02-09. Created: 2022-02-09. Last updated: 2022-03-09. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1940-1747
