The aim of this work is to apply and evaluate different chemometric approaches employing several machine learning techniques in order to characterize the moisture content of biomass from data obtained by Near Infrared (NIR) spectroscopy. The approaches comprise three main parts: a) data pre-processing, b) wavelength selection and c) development of a regression model enabling moisture content measurement. Standard Normal Variate (SNV), Multiplicative Scatter Correction (MSC), Savitzky-Golay first (SG1) and second (SG2) derivatives, and their combinations were applied for data pre-processing. A genetic algorithm (GA) and iterative PLS (iPLS) were used for wavelength selection. Artificial Neural Network (ANN), Gaussian Process Regression (GPR), Support Vector Regression (SVR) and traditional Partial Least Squares (PLS) regression were employed as machine learning regression methods. The results show that SNV combined with the SG1 derivative performs best in data pre-processing, that the GA is the most effective method for variable selection, and that GPR achieves high accuracy in regression modeling while placing low demands on computation time. Overall, the machine learning techniques demonstrate great potential for future NIR spectroscopy applications.
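The best-performing pre-processing combination (SNV followed by a Savitzky-Golay first derivative) can be sketched as follows. This is a generic illustration, not the thesis implementation: the function names, window length and polynomial order are placeholder choices, and the spectra are synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    # Standard Normal Variate: center and scale each spectrum individually,
    # removing additive baseline shifts and multiplicative scatter effects
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def snv_sg1(spectra, window=11, polyorder=2):
    # SNV followed by a Savitzky-Golay first derivative (SG1);
    # window and polyorder are illustrative defaults
    return savgol_filter(snv(spectra), window_length=window,
                         polyorder=polyorder, deriv=1, axis=1)

# hypothetical batch of 5 NIR spectra with 200 wavelength channels
rng = np.random.default_rng(0)
spectra = rng.normal(size=(5, 200)).cumsum(axis=1)
processed = snv_sg1(spectra)
```

After SNV, each spectrum has zero mean and unit standard deviation, and the derivative step then emphasizes slope changes over absolute intensity.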
This thesis delves into the use of Artificial Intelligence (AI) for quality control in manufacturing systems, with a particular focus on anomaly detection through the analysis of torque measurements in rotating mechanical systems. The research specifically examines the effectiveness of torque measurements in quality control of locks, challenging the traditional method that relies on the human tactile sense for detecting mechanical anomalies. This conventional approach, while widely used, has been found to yield inconsistent results and places physical strain on operators. A key aspect of this study involves conducting experiments on locks using torque measurements to identify mechanical anomalies. This method represents a shift from the subjective and physically demanding practice of manually testing each lock. The research aims to demonstrate that an automated, AI-driven approach can offer more consistent and reliable results, thereby improving overall product quality. The development of a machine learning model for this purpose starts with the collection of training data, a process that can be costly and disruptive to normal workflow. Therefore, this thesis also investigates strategies for predicting and minimizing the sample size used for training. Additionally, it addresses the critical need for trustworthiness in AI systems used for final quality control. The research explores how to utilize machine learning models that are not only effective in detecting anomalies but also offer a level of interpretability, avoiding the pitfalls of black-box AI models. Overall, this thesis contributes to advancing automated quality control by exploring state-of-the-art machine learning algorithms for mechanical fault detection, focusing on sample size prediction and minimization as well as model interpretability.
To the best of the author’s knowledge, it is the first study that evaluates an AI-driven solution for quality control of mechanical locks, marking an innovation in the field.
Artificial intelligence in manufacturing systems is currently most used for quality control and predictive maintenance. In the lock industry, quality control of the final assembled cylinder locks is still done by hand, wearing out the operators' wrists and introducing subjectivity, which negatively affects reliability. Studies have shown that quality control can be automated using machine learning to analyse torque measurements from the locks. The resulting performance of the approach depends on the dimensionality and size of the training dataset; unfortunately, gathering data can be expensive, so the amount of collected data should be minimized with respect to an acceptable performance measure. The dimensionality can be reduced with Principal Component Analysis, and the training dataset size can be estimated by repeatedly testing the algorithms on smaller datasets of different sizes, from which the expected performance for larger datasets can be extrapolated. The purpose of this study is to evaluate state-of-the-art methods to predict and minimize the sample size needed for commonly used machine-learning algorithms to reach an acceptable anomaly detection accuracy using torque measurements from locks. The results show that the learning curve with the best fit to the training data does not always give the best predictions. Instead, prediction quality depends on the amount of data used to create the curve and on the particular machine-learning algorithm. Overall, the exponential and power-law functions gave the most reliable predictions, and the use of Principal Component Analysis greatly reduced the learning effort of the machine-learning algorithms. With torque measurements from 50-150 locks, we predicted a detection accuracy of over 95%, while the current method of using the human tactile sense gives only 16% accuracy.
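The learning-curve extrapolation idea can be sketched as follows: fit a parametric curve to accuracies measured on small training subsets, then evaluate it at a larger, untested size. This is a minimal illustration under assumed conditions — the power-law and exponential forms match the two families named above, but the measured accuracies and parameter initializations here are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two common learning-curve forms: accuracy saturates toward an
# asymptote `a` as the training-set size n grows
def power_law(n, a, b, c):
    return a - b * n ** (-c)

def exponential(n, a, b, c):
    return a - b * np.exp(-c * n)

# hypothetical accuracies measured on small training subsets
sizes = np.array([10, 20, 30, 40, 50], dtype=float)
acc = np.array([0.70, 0.80, 0.85, 0.88, 0.90])

# fit the power-law curve and extrapolate to a larger dataset size
params, _ = curve_fit(power_law, sizes, acc, p0=[1.0, 1.0, 0.5],
                      maxfev=10000)
predicted_at_150 = power_law(150.0, *params)
```

The same fit-and-extrapolate step would be repeated for each candidate curve form and each algorithm, comparing which curve predicts held-out performance most reliably.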
This paper presents an interpretable machine learning model for anomaly detection in door locks using torque data. The model aims to replace the human tactile sense in the quality control process, reducing repetitive tasks and improving reliability. The model achieved an accuracy of 96%; however, to gain social acceptance and operators' trust, interpretability of the model is crucial. The purpose of this study was to evaluate an approach that can improve the interpretability of anomalous classifications obtained from an anomaly detection model. We evaluate four instance-based counterfactual explanators: three employ optimization techniques, and one uses a less complex weighted nearest neighbor approach, which serves as our baseline. The former approaches leverage a latent representation of the data, obtained through a weighted principal component analysis, which improves the plausibility of the counterfactual explanations and reduces computational cost. The explanations are presented together with the 5-50-95th percentile range of the training data, acting as a frame of reference to improve interpretability. All approaches successfully presented valid and plausible counterfactual explanations. However, the instance-based approaches employing optimization techniques yielded explanations with greater similarity to the observations and were therefore concluded to be preferable despite their higher execution times (4-16 s) compared to the baseline approach (0.1 s). The findings of this study hold significant value for the lock industry and can potentially be extended to other industrial settings using time-series data, serving as a valuable point of departure for further research.
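The weighted nearest-neighbor baseline can be illustrated with a short sketch: given an anomalous observation, return the closest training instance that carries the desired (normal) label. All names here are illustrative, and the example operates in a plain feature space — the paper's optimization-based approaches additionally work in a weighted-PCA latent representation, which this sketch does not reproduce.

```python
import numpy as np

def nn_counterfactual(x, X_train, y_train, target_label, weights=None):
    # Baseline counterfactual explanation: the nearest training instance
    # (weighted Euclidean distance) having the desired target label
    if weights is None:
        weights = np.ones(X_train.shape[1])
    candidates = X_train[y_train == target_label]
    d = np.sqrt((((candidates - x) ** 2) * weights).sum(axis=1))
    return candidates[np.argmin(d)]

# toy example: for an anomalous point, find the nearest 'normal' instance
X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y = np.array([0, 0, 1])  # 0 = normal, 1 = anomalous
cf = nn_counterfactual(np.array([4.0, 4.0]), X, y, target_label=0)
# cf is [1.0, 1.0]: "the lock would be classified as normal if its
# features looked like this"
```

Because the counterfactual is an actual training instance, it is valid and plausible by construction, which is what makes this approach a natural baseline.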
Historically, the quality of cylinder locks has been tested manually by human operators after full assembly. The frequency and the characteristics of the testing procedure for these locks wear out the operators’ wrists and lead to varying results in the quality control. Consistency in the quality control is an important factor for the expected lifetime of the locks, which is why the industry seeks an automated solution. This study evaluates how consistently the operators can classify a collection of locks, using their tactile sense, compared to a more objective approach using torque measurements and Machine Learning (ML). These locks were deliberately chosen because they are prone to receiving inconsistent classifications, which means that there is no ground truth for how to classify them. The ML algorithms were therefore evaluated with two different labeling approaches: one based on the results from the operators, using their tactile sense to classify into ‘working’ or ‘faulty’ locks, and a second approach of letting an unsupervised learner create two clusters of the data, which were then labeled by an expert using visual inspection of the torque diagrams. The results show that an ML solution, trained with the second approach, can classify mechanical anomalies, based on torque data, more consistently than operators using their tactile sense. These findings are a crucial milestone for the further development of a fully automated test procedure that has the potential to increase the reliability of the quality control and remove an injury-prone task from the operators.
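The second labeling approach — letting an unsupervised learner split the torque curves into two clusters for an expert to name — can be sketched as below. The torque curves here are synthetic and the clustering method (plain k-means with k=2) is a generic stand-in for whatever unsupervised learner the study used.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Synthetic stand-ins for measured torque curves (100 samples each):
# 'working' locks give smooth curves, 'faulty' ones show a torque spike
rng = np.random.default_rng(1)
working = rng.normal(0.5, 0.05, size=(20, 100))
faulty = rng.normal(0.5, 0.05, size=(10, 100))
faulty[:, 40:60] += 0.8  # simulated mechanical anomaly
curves = np.vstack([working, faulty])

# unsupervised split into two clusters; an expert would then inspect
# representative torque diagrams from each cluster and assign the
# 'working' / 'faulty' labels
centroids, labels = kmeans2(curves, k=2, minit='++', seed=2)
```

The expert only needs to inspect and name two clusters instead of labeling every lock individually, which is what makes this labeling route cheaper than per-instance manual classification.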
A common problem for autonomous vehicles is to define a coherent road boundary on unstructured roads. To solve this problem, an evolutionary approach was evaluated, using a modified ant colony optimization algorithm to define a coherent road edge along the unstructured road in night conditions. The work presented in this paper involved pre-processing, refining the edges in an autonomous fashion, and developing an algorithm to find the best candidate starting positions for the ant colonies. Together, these efforts enable ant colony optimization (ACO) to perform successfully in this application scenario. The experimental results show that the best paths closely followed the edges and that the mid-points between the paths stayed centered on the road.
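The core ACO mechanism — ants build paths probabilistically, shorter paths deposit more pheromone, and pheromone biases later ants — can be illustrated on a tiny weighted graph. This is a generic textbook-style sketch, not the paper's modified road-edge variant; all parameter values are illustrative.

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iter=30,
                      evaporation=0.5, seed=0):
    rng = random.Random(seed)
    # one pheromone value per directed edge, initialized uniformly
    pheromone = {(node, nbr): 1.0 for node in graph for nbr in graph[node]}
    best_path, best_len = None, float('inf')
    for _ in range(n_iter):
        completed = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                options = [n for n in graph[node] if n not in visited]
                if not options:
                    break  # dead end: this ant is discarded
                # prefer edges with high pheromone and low cost
                weights = [pheromone[(node, n)] / graph[node][n]
                           for n in options]
                node = rng.choices(options, weights)[0]
                path.append(node)
                visited.add(node)
            if node == goal:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # evaporate, then deposit pheromone inversely to path length
        for edge in pheromone:
            pheromone[edge] *= (1 - evaporation)
        for path, length in completed:
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] += 1.0 / length
    return best_path, best_len

# toy graph: A-B-D (cost 2) is shorter than A-C-D (cost 5)
graph = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'D': 1},
         'C': {'A': 4, 'D': 1}, 'D': {'B': 1, 'C': 1}}
path, length = aco_shortest_path(graph, 'A', 'D')
```

In the road-edge setting, the "graph" would instead be candidate edge pixels and the path cost a measure of edge strength and continuity, but the pheromone feedback loop is the same.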