[Context and Motivation] Content-based recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a new requirement is proposed by a stakeholder, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements and, in turn, identify previously developed code. [Question/problem] Several NLP approaches for similarity computation are available, but there is little empirical evidence on which technique is effective in recommender systems specifically oriented to requirements-based code reuse. [Principal ideas/results] This study compares different state-of-the-art NLP approaches and correlates the similarity among requirements with the similarity of their source code. The evaluation is conducted on real-world requirements from two industrial projects in the railway domain. Results show that requirements similarity computed with the traditional tf-idf approach has the highest correlation with the actual software similarity in the considered context. Furthermore, results indicate a moderate positive correlation, with a Spearman's rank correlation coefficient of more than 0.5. [Contribution] Our work is among the first to explore the relationship between requirements similarity and software similarity. In addition, we identify a suitable approach for computing requirements similarity that reflects software similarity well in an industrial context. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and categorization.
Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements and, in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available. However, there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical ones to semantic, deep-learning techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects of a railway company. Specifically, the most similar pairs of requirements across the two projects are automatically identified using six language models. Then, the trace links between requirements and software are used to identify the software pairs associated with each requirements pair. The software similarity between pairs is then automatically computed with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus most appropriate for code retrieval. In addition, we perform a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained deep learning-based BERT language model with preprocessing outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity. However, they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements and software similarity from both a quantitative and a qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
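To make the quantitative pipeline concrete, here is a minimal sketch (not the papers' code) that computes tf-idf cosine similarities between requirement pairs and correlates them with hypothetical software-similarity scores of the kind JPLag would produce, using Spearman's rank coefficient. The requirement texts and scores are invented, and scikit-learn/SciPy usage is an assumption.

```python
# Minimal sketch: correlate tf-idf requirements similarity with
# software-similarity scores (e.g., produced by JPLag on traced code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.stats import spearmanr

requirements = [
    "The system shall apply the brakes when speed exceeds the limit.",
    "Braking shall engage if the speed limit is exceeded.",
    "The HMI shall display the current line voltage.",
]
# Hypothetical software-similarity scores for the requirement pairs
# (0,1), (0,2), (1,2), e.g., computed on the software traced to each pair.
software_sim = [0.81, 0.10, 0.12]

tfidf = TfidfVectorizer().fit_transform(requirements)
sim = cosine_similarity(tfidf)
req_sim = [sim[0, 1], sim[0, 2], sim[1, 2]]

rho, p = spearmanr(req_sim, software_sim)
print(f"Spearman rho={rho:.2f} (p={p:.2f})")
```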
Requirements prioritization plays an important role in driving project success during software development. Literature reveals that existing requirements prioritization approaches ignore vital factors such as interdependencies between requirements. Existing requirements prioritization approaches are also generally time-consuming and involve substantial manual effort. Moreover, these approaches show substantial limitations in terms of the number of requirements they can consider. There is some evidence suggesting that models could have a useful role in the analysis of requirements interdependencies and their visualization, contributing towards the improvement of the overall requirements prioritization process. However, to date, only a handful of studies focus on model-based strategies for requirements prioritization, considering only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst model the requirements, stakeholders, and interdependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted, comparing our modified PageRank algorithm's efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
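As an illustration of the core idea only (the paper's modified variant and its meta-model inputs are not detailed in this abstract), the sketch below runs standard PageRank over a small requirements dependency graph using networkx; requirement names and edges are made up.

```python
# Minimal sketch: prioritize requirements by ranking them in their
# dependency graph with (standard, unmodified) PageRank.
import networkx as nx

g = nx.DiGraph()
# Edge A -> B reads "requirement A depends on requirement B":
# requirements that many others depend on accumulate rank.
g.add_edges_from([("R1", "R3"), ("R2", "R3"), ("R3", "R4"), ("R2", "R4")])

ranks = nx.pagerank(g, alpha=0.85)
for req, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(req, round(score, 3))
```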
The use of requirements’ information in testing is a well-recognized practice in the software development life cycle. Literature reveals that existing test prioritization and selection approaches neglect vital factors affecting test priorities, such as interdependencies between requirement specifications. We believe that models may play a positive role in specifying these interdependencies and prioritizing tests based on them. However, to date, few studies make use of requirements interdependencies for test case prioritization. This paper uses a meta-model to aid in modeling requirements, their related tests, and the interdependencies between them. The instance of this meta-model is then processed by our modified PageRank algorithm to prioritize the requirements. The requirement priorities are then propagated to related test cases in the test model, and test cases are selected based on coverage of extra-functional properties. We demonstrate the applicability of our proposed approach on a small example case.
The software system controlling a train is typically deployed on various hardware architectures and is required to process various signals across those deployments. The increase in such customization scenarios, as well as the need for the software to adhere to various safety standards in different application domains, has led to the adoption of product line engineering within the railway domain. This paper explores the current state of practice of software product line development within a team developing industrial embedded software for a train propulsion control system. Evidence is collected by means of a focus group session with several engineers and through inspection of archival data. We report several benefits and challenges experienced during product line adoption and deployment. Furthermore, we identify and discuss research opportunities, focusing in particular on the areas of product line evolution and test automation.
Categorizing existing test specifications can provide insights into the test suite's coverage of extra-functional properties. Manual approaches for test categorization can be time-consuming and prone to error. In this short paper, we propose a semi-automated approach for semantic keyword-based textual test categorization for extra-functional properties. The approach is a first step towards coverage-based test case selection based on extra-functional properties. We report a preliminary evaluation on industrial data for categorizing tests with respect to safety aspects. Results show that keyword-based approaches can be used to categorize tests for extra-functional properties and can be improved by considering the contextual information of keywords.
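A minimal sketch of the keyword-based idea, assuming an illustrative safety keyword list (the actual keywords and the contextual refinement from the paper are not reproduced here):

```python
# Minimal sketch: flag test specifications as safety-related when they
# contain keywords from an (illustrative) safety keyword set.
SAFETY_KEYWORDS = {"emergency", "brake", "failsafe", "watchdog", "interlock"}

def categorize(test_text: str) -> bool:
    """Return True if any safety keyword occurs in the test text.

    A context-aware variant could additionally require co-occurring
    terms (e.g., 'emergency' near 'stop') to reduce false positives."""
    tokens = {t.strip(".,;:()").lower() for t in test_text.split()}
    return bool(tokens & SAFETY_KEYWORDS)

print(categorize("Verify the emergency brake engages within 200 ms."))  # True
print(categorize("Check the HMI renders the speed gauge."))             # False
```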
This tutorial explores requirements-based reuse recommendation for product line assets in the context of clone-and-own product lines.
Software product lines (SPLs) are based on a reuse rationale to aid the quick, high-quality delivery of complex products at scale. Deriving a new product from a product line requires reuse analysis to avoid redundancy and support a high degree of asset reuse. In this paper, we propose and evaluate automated support for recommending SPL assets that can be reused to realize new customer requirements. Using the existing customer requirements as input, the approach applies natural language processing and clustering to generate reuse recommendations for unseen customer requirements in new projects. The approach is evaluated both quantitatively and qualitatively in the railway industry. Results show that our approach can recommend reuse with 74% accuracy and 57.4% exact match. The evaluation further indicates that the recommendations are relevant to engineers and can support the product derivation and feasibility analysis phase of the projects. The results encourage further study on automated reuse analysis at other levels of abstraction.
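As a rough sketch of the NLP-and-clustering idea, the code below clusters existing requirements with tf-idf and KMeans and recommends the nearest cluster's requirements for an unseen one; the data, cluster count, and scikit-learn pipeline are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: cluster existing customer requirements, then recommend
# reuse candidates for a new requirement from its nearest cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

existing = [
    "Door control shall close doors before departure.",
    "Doors shall reopen on obstacle detection.",
    "Propulsion shall limit acceleration to 1.1 m/s2.",
    "Traction shall cut power on wheel slip.",
]
vec = TfidfVectorizer()
X = vec.fit_transform(existing)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

new_req = "Doors shall stay closed while the train is moving."
cluster = km.predict(vec.transform([new_req]))[0]
reuse_candidates = [r for r, c in zip(existing, km.labels_) if c == cluster]
print(reuse_candidates)  # requirements (and their traced assets) to inspect
```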
With the development of multicore hardware, concurrent, parallel, and multicore software is becoming increasingly popular. Software companies spend a huge amount of time and resources to find and debug bugs. Among all types of software bugs, concurrency bugs are particularly important and troublesome, and they are increasingly becoming an issue due to the growing prevalence of multicore hardware. In this position paper, we propose a model for monitoring and debugging starvation bugs, a type of concurrency bug, in multicore software. The model is composed of three phases: monitoring, detecting, and debugging. The monitoring phase supports the detecting phase by storing data collected from the system execution. The detecting phase supports the debugging phase by comparing the stored data with the properties of starvation bugs, and the debugging phase helps in reproducing and removing the starvation bug from the multicore software. Our intention is for this model to serve as the basis for developing tools that solve starvation bugs in software for multicore platforms.
Despite the availability of static analysis methods to achieve a correct-by-construction design for different systems in terms of timing behavior, violations of timing constraints can still occur at run-time for different reasons. The aim of monitoring system performance with respect to timing constraints is to detect violations of timing specifications, or to predict them based on current system performance data. Considerable work has been dedicated to efficient performance monitoring approaches over the past years. This paper presents a survey and classification of those approaches in order to help researchers gain a better view of the different methods and developments in monitoring the timing behavior of systems. The approaches are classified according to aspects considered important in developing a monitoring system, e.g., the use of additional hardware and the data collection approach. Moreover, we describe how these different methods work, along with the advantages and downsides of each.
[Context and Motivation] Requirements in tender documents are often mixed with other supporting information. Identifying requirements in large tender documents could aid the bidding process and help estimate the risk associated with the project. [Question/problem] Manual identification of requirements in large documents is a resource-intensive activity that is prone to human error and limits scalability. This study compares various state-of-the-art approaches for requirements identification in an industrial context. For generalizability, we also present an evaluation on a real-world public dataset. [Principal ideas/results] We formulate the requirements identification problem as a binary text classification problem. Various state-of-the-art classifiers based on traditional machine learning, deep learning, and few-shot learning are evaluated for requirements identification based on accuracy, precision, recall, and F1 score. Results from the evaluation show that the transformer-based BERT classifier performs best, with an average F1 score of 0.82 and 0.87 on the industrial and public datasets, respectively. Our results also confirm that few-shot classifiers can achieve comparable results, with an average F1 score of 0.76, using significantly fewer samples, i.e., only 20% of the data. [Contribution] There is little empirical evidence on the use of large language models and few-shot classifiers for requirements identification. This paper fills this gap by presenting an industrial empirical evaluation of the state-of-the-art approaches for requirements identification in large tender documents. We also provide a working tool and a replication package for further experimentation to support future research in this area.
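For illustration, here is a minimal baseline for the binary formulation using tf-idf and logistic regression (the paper's best-performing classifier is BERT-based, which is not shown here); the example sentences and labels are invented.

```python
# Minimal baseline sketch: requirement-vs-supporting-text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The supplier shall provide 24/7 support.",
    "The system must log all operator actions.",
    "This chapter describes the tendering schedule.",
    "Appendix B lists the contact persons.",
]
labels = [1, 1, 0, 0]  # 1 = requirement, 0 = supporting information

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["The contractor shall deliver source code."]))  # [1]
```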
Software product line engineering has emerged as an effective approach for the development of families of software-intensive systems in several industries. Although its use has been widely discussed and researched, there are still several open challenges for its industrial adoption and application. One of these is how to efficiently develop and reuse shared software artifacts, which have dependencies on the underlying electrical and hardware systems of products in a family. In this work, we report on our experience in tackling such a challenge in the railway industry and present a model-based approach for the automatic generation of test scripts for product variants in software product lines. The proposed approach is the result of an effort leveraging the experiences and results from the technology transfer activities with our industrial partner Alstom SA in Sweden. We applied and evaluated the proposed approach on the Aventra software product line from Alstom SA. The evaluation showed that the proposed approach mitigates the development effort, development time, and consistency drawbacks associated with the traditional, manual creation of test scripts. We performed an online survey involving 37 engineers from Alstom SA to collect feedback on the approach. The results of the survey further confirm the aforementioned benefits.
In this work, we report on our experience in defining and applying a model-based approach for the automatic generation of test scripts for product variants in software product lines. The proposed approach is the result of an effort leveraging the experiences and results from the technology transfer activities with our industrial partner Bombardier Transportation. The proposed approach employs metamodelling and model transformations for representing different testing artefacts and making their generation automatic. We demonstrate the industrial applicability and efficiency of the proposed approach using the Bombardier Transportation Aventra software product line. We observe that the proposed approach mitigates the development effort, time consumption, and consistency drawbacks typical of traditional strategies.
Component-based development is a software engineering paradigm that can facilitate the construction of embedded systems and tackle their complexities. Modern embedded systems have more and more demanding requirements. One way to cope with such a versatile and growing set of requirements is to employ heterogeneous processing power, i.e., CPU-GPU architectures. The new CPU-GPU embedded boards deliver increased performance but also introduce additional complexity and challenges. In this work, we address component-to-hardware allocation for CPU-GPU embedded systems. The allocation for such systems is much more complex due to the increased amount of GPU-related information. For example, while in traditional embedded systems the allocation mechanism may consider only the CPU memory usage of components to find an appropriate allocation scheme, in heterogeneous systems the GPU memory usage also needs to be taken into account in the allocation process. This paper aims at decreasing the component-to-hardware allocation complexity by introducing a 2-layer component-based architecture for heterogeneous embedded systems. The detailed CPU-GPU information of the system is abstracted at a high layer by compacting connected components into single units that behave as regular components. The allocator, based on the compacted information received from the high-level layer, computes feasible allocation schemes with decreased complexity. In the last part of the paper, the 2-layer allocation method is evaluated using an existing embedded system demonstrator, namely an underwater robot.
Traditional embedded systems are evolving into heterogeneous systems in order to address new and more demanding software requirements. Modern embedded systems are constructed by combining different computation units, such as traditional CPUs with Graphics Processing Units (GPUs). Adding GPUs to conventional CPU-based embedded systems enhances the computation power but also increases the complexity of developing software applications. A method that can help tackle and address the software complexity issue of heterogeneous systems is component-based development. The allocation of the software application onto the appropriate computation node is greatly influenced by the system information load. The allocation process becomes more difficult when complex CPU-GPU systems are used instead of common CPU-based systems. This paper presents a 2-layer component-based architecture for heterogeneous embedded systems, whose purpose is to ease the software-to-hardware allocation process. The solution abstracts the detailed CPU-GPU component-based design into single software components in order to decrease the amount of information delivered to the allocator. The last part of the paper describes the activities of the allocation process using our proposed solution, when applied to a real system demonstrator.
Nowadays, many modern embedded applications, such as vehicles and robots, interact with the environment and receive huge amounts of data through various sensors such as cameras and radars. The challenge of processing large amounts of data within an acceptable performance is solved by employing embedded systems that incorporate the complementary attributes of CPUs and Graphics Processing Units (GPUs), i.e., sequential and parallel execution models. Component-based development (CBD) is a software engineering methodology that augments application development through reuse of software blocks known as components. In developing a CPU-GPU embedded application using CBD, the allocation of components to different processing units of the platform is an important activity that can affect the overall performance of the system. In this context, there is also often the need to support and achieve run-time component allocation due to various factors and situations that can happen during system execution, such as switching off parts of the system to save energy. In this paper, we provide a solution that dynamically allocates components using various system information, such as the available resources (e.g., available GPU memory) and the software behavior (e.g., in terms of GPU memory usage). The novelty of our work is a formal allocation model that considers GPU system characteristics computed on-the-fly through software monitoring solutions. For the presentation and validation of our solution, we utilize an existing underwater robot demonstrator.
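The following is a minimal sketch of the kind of feasibility reasoning involved: a greedy allocator that checks both CPU and GPU memory budgets. The component demands, node capacities, and greedy strategy are illustrative placeholders, not the paper's formal allocation model.

```python
# Minimal sketch: greedy component-to-node allocation that accounts for
# both CPU and GPU memory (values would come from run-time monitoring).
components = {            # demands in MB: (cpu_mem, gpu_mem)
    "vision":  (64, 512),
    "fusion":  (128, 0),
    "planner": (96, 128),
}
nodes = {                 # remaining capacity in MB: [cpu_mem, gpu_mem]
    "cpu_node": [256, 0],
    "gpu_node": [256, 1024],
}

allocation = {}
for comp, (cpu, gpu) in components.items():
    for node, cap in nodes.items():
        if cap[0] >= cpu and cap[1] >= gpu:   # both budgets must fit
            cap[0] -= cpu
            cap[1] -= gpu
            allocation[comp] = node
            break
    else:
        raise RuntimeError(f"no feasible node for {comp}")
print(allocation)
```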
Synergies between model-driven and component-based software engineering have been indicated as promising to mitigate complexity in the development of embedded systems. In this work we evaluate the usefulness of a model-driven round-trip approach to aid deployment optimization in the development of embedded component-based systems. The round-trip approach is composed of the following steps: modelling the system, generating full code from the models, executing and monitoring the code execution, and finally back-propagating the monitored values to the models. We illustrate the usefulness of the round-trip approach exploiting an industrial case study from the telecom domain. We use a code generator that can realise different deployment strategies, as well as special monitoring code injected into the generated code, and monitoring primitives defined at the operating system level. Given this infrastructure we can evaluate extra-functional properties of the system and thus compare different deployment strategies.
This paper presents an extended version of Deeper, a search-based simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system. In the newly proposed version, we utilize a new set of bio-inspired search algorithms, genetic algorithm (GA), (μ+λ) and (μ,λ) evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored for the representation model used for modeling the test scenarios. In order to demonstrate the capabilities of the new test generators within Deeper, we carry out an empirical evaluation and comparison with regard to the results of five participating tools in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows that the newly proposed test generators in Deeper not only represent a considerable improvement over the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system. They can trigger several failures while promoting test scenario diversity, under a limited test time budget, high target failure severity, and strict speed limit constraints.
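To illustrate the search loop at its simplest, the sketch below evolves a toy scenario representation (a list of curvature values) with selection, crossover, and mutation toward a made-up fitness; the actual tool uses simulator-based fitness and tailored operators not shown here.

```python
# Minimal sketch of a genetic algorithm evolving road scenarios toward
# failure-revealing ones; the fitness is a stand-in for a simulator oracle.
import random

def fitness(scenario):                 # higher = closer to provoking a failure
    return sum(abs(c) for c in scenario)  # hypothetical: sharper curves

def mutate(s):
    child = list(s)
    child[random.randrange(len(s))] += random.uniform(-0.2, 0.2)
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))  # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.uniform(-0.5, 0.5) for _ in range(6)] for _ in range(20)]
for _ in range(50):                    # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]
print(round(fitness(max(pop, key=fitness)), 2))
```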
Timing requirements such as constraints on response time are key characteristics of real-time systems, and violations of these requirements might cause a total failure, particularly in hard real-time systems. Runtime monitoring of the system properties is of great importance for detecting and mitigating such failures. Thus, runtime control to preserve system properties could improve the robustness of the system with respect to timing violations. Common control approaches may require a precise analytical model of the system, which is difficult to provide at design time. Reinforcement learning is a promising technique to provide adaptive model-free control when the environment is stochastic and the control problem can be formulated as a Markov Decision Process. In this paper, we propose an adaptive runtime control using reinforcement learning for real-time programs based on Programmable Logic Controllers (PLCs), to meet the response time requirements. We demonstrate through multiple experiments that our approach can control the response time efficiently to satisfy the timing requirements.
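A minimal sketch of the MDP/learning idea, using tabular Q-learning over invented states, actions, and rewards (the paper's PLC environment and exact formulation are abstracted away):

```python
# Minimal sketch: tabular Q-learning for an adaptive response-time controller.
import random
from collections import defaultdict

states = ["ok", "near_deadline", "violation"]
actions = ["keep", "shed_load", "boost_priority"]
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2

def observe():                       # hypothetical response-time monitor
    return random.choice(states)

def reward(state):                   # penalize (near-)violations
    return {"ok": 1.0, "near_deadline": -0.5, "violation": -2.0}[state]

state = observe()
for _ in range(1000):
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda x: Q[(state, x)]))
    nxt = observe()                  # environment step (simulated here)
    best_next = max(Q[(nxt, x)] for x in actions)
    Q[(state, a)] += alpha * (reward(nxt) + gamma * best_next - Q[(state, a)])
    state = nxt
print(max(actions, key=lambda a: Q[("near_deadline", a)]))
```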
Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to automated generation of performance test cases mainly involve source code or system model analysis, or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process could instead be learnt by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learnt policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated environment, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process, and performs adaptively without access to source code and performance models.
Response time analysis is an essential task to verify the behavior of real-time systems. Several response time analysis methods have been proposed to address this challenge, particularly for real-time systems with different levels of complexity. Static analysis is a popular approach in this context, but its practical applicability is limited due to the high complexity of industrial real-time systems, as well as the many unpredictable runtime events in these systems. In this work-in-progress paper, we propose a simulation-based response time analysis approach using reinforcement learning to find the execution scenarios leading to the worst-case response time. The approach learns how to provide a practical estimation of the worst-case response time by simulating the program without performing static analysis. Our initial study suggests that the proposed approach could be applicable in the simulation environments of industrial real-time control systems to provide a practical estimation of the execution scenarios leading to the worst-case response time.
Providing an adaptive runtime assurance technique to meet the performance requirements of a real-time system without the need for a precise model is challenging. Adaptive performance assurance based on monitoring the status of timing properties can bring more robustness to the underlying platform. At the same time, the results or the achieved policy of this adaptive procedure could be used as feedback to update the initial model, and consequently to produce proper test cases. In recent years, reinforcement learning has been considered a promising adaptive technique for assuring the satisfaction of the performance properties of software-intensive systems. In this work-in-progress paper, we propose an adaptive runtime timing assurance procedure based on reinforcement learning to satisfy the performance requirements in terms of response time. The timing control problem is formulated as a Markov Decision Process, and the details of applying the proposed learning-based timing assurance technique are described.
Satisfying performance requirements is of great importance for performance-critical software systems. Performance analysis, which provides an estimation of performance indices and ascertains whether the requirements are met, is essential for achieving this target. Model-based analysis as a common approach might provide useful information, but inferring a precise performance model is challenging, especially for complex systems. Performance testing is considered a dynamic approach to performance analysis. In this work-in-progress paper, we propose a self-adaptive learning-based test framework that learns how to apply stress testing, as one aspect of performance testing, to various software systems in order to find the performance breaking point. It learns the optimal policy for generating stress test cases for different types of software systems, then replays the learned policy to generate the test cases with less required effort. Our study indicates that the proposed learning-based framework could be applied to different types of software systems and guides towards autonomous performance testing.
Performance testing involving performance test case generation and execution remains a challenge, particularly for complex systems. Different application-, platform- and workload-based factors can influence the performance of the software under test. Common approaches for generating the platform-based and workload-based test conditions are often based on system model or source code analysis, real usage modelling, and use-case-based design techniques. Nonetheless, those artifacts might not always be available during testing. Moreover, creating a detailed performance model is often difficult. On the other hand, test automation solutions such as automated test case generation can enable effort and cost reduction, with the potential to improve coverage of the intended test criteria. Furthermore, if the optimal way (policy) to generate the test cases can be learnt by the testing system, then the learnt policy can be reused in further testing situations, such as testing variants or evolved versions of the software, and under changing factors of the testing process. This capability can lead to additional cost and computation time savings in the testing process. In this research, we have developed an autonomous performance testing framework using model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. It is able to learn the optimal policy to generate different platform-based and workload-based test conditions without access to the system model and source code. The use of fuzzy logic and the self-adaptive strategy helps to tackle the issue of uncertainty and improve the accuracy and adaptivity of the proposed learning. Our evaluation experiments showed that the proposed autonomous performance testing framework is able to generate the test conditions efficiently and in a way that adapts to varying testing situations.
Load testing with the aim of generating an effective workload to identify performance issues is a time-consuming and complex challenge, particularly for evolving software systems. Current automated approaches mainly rely on analyzing system models and source code, or on modeling the real system usage. However, that information might not be available all the time, or obtaining it might require considerable effort. On the other hand, if the optimal policy for generating the proper test workload that meets the objectives of the testing can be learned by the testing system, testing would be possible without access to system models or source code. We propose a self-adaptive reinforcement learning-driven load testing agent that learns the optimal policy for test workload generation. The agent can reuse the learned policy in subsequent testing activities, such as meeting different types of testing targets. It generates an efficient test workload that meets the objective of the testing adaptively, without access to system models or source code. Our experimental evaluation shows that the proposed self-adaptive intelligent load testing can reach the testing objective with lower cost in terms of the workload size, i.e., the number of generated users, compared to a typical load testing process, and results in productivity benefits in terms of higher efficiency.
Architectural models, such as those described in the EAST-ADL language, represent convenient abstractions to reason about automotive embedded software systems. To enjoy the fully-fledged advantages of reasoning, EAST-ADL models could benefit from a component-aware analysis framework that provides, ideally, both verification and model-based test-case generation capabilities. While different verification techniques have been developed for architectural models, only a few target EAST-ADL. In this paper, we present a methodology for code validation, starting from EAST-ADL artifacts. The methodology relies on: (i) automated model-based test-case generation for functional requirements criteria based on the EAST-ADL model extended with timed automata semantics, and (ii) validation of system implementation by generating Python test scripts based on the abstract test-cases, which represent concrete test-cases that are executable on the system implementation. We apply our methodology to analyze the ABS function implementation of a Brake-by-Wire system prototype.
Architectural models, such as those described in the EAST-ADL language, represent convenient abstractions to reason about embedded software systems. To enjoy the fully-fledged advantages of reasoning, EAST-ADL models require a component-aware analysis framework that provides, ideally, both verification and model-based test-case generation capabilities. In this paper, we extend ViTAL, our recently developed tool-supported framework for model-checking EAST-ADL models in Uppaal Port, with automated model-based test-case generation for functional requirements criteria. To validate the actual system implementation and exercise the feasibility of the abstract test-cases, we also show how to generate Python test scripts from the ViTAL-generated abstract test-cases. The scripts define the concrete test-cases that are executable on the system implementation within the Farkle testing environment. Tool interoperability between ViTAL and Farkle is ensured by implementing a corresponding interface, compliant with the Open Services for Lifecycle Collaboration (OSLC) standard. We apply our methodology to validate the ABS function implementation of a Brake-by-Wire system prototype.
Plagiarism is one of the leading problems in academic and industrial environments; plagiarism detection aims to find similar items in a typical document or source code. This paper proposes an architecture based on a Long Short-Term Memory (LSTM) network and an attention mechanism, called LSTM-AM-ABC, boosted by a population-based approach for parameter initialization. Gradient-based optimization algorithms such as back-propagation (BP) are widely used in the literature for the learning process in LSTMs, attention mechanisms, and feed-forward neural networks, but they suffer from problems such as getting stuck in local optima. To tackle this problem, population-based metaheuristic (PBMH) algorithms can be used. To this end, this paper employs a PBMH algorithm, artificial bee colony (ABC), to mitigate the problem. Our proposed algorithm can find the initial values for model learning in the LSTM, attention mechanism, and feed-forward neural network simultaneously. In other words, the ABC algorithm finds a promising point for starting the BP algorithm. For evaluation, we compare our proposed algorithm with both conventional and population-based methods. The results clearly show that the proposed method can provide competitive performance.
Differential evolution (DE) is widely used for global optimisation problems due to its simplicity and efficiency. L-SHADE is a state-of-the-art variant of the DE algorithm that incorporates an external archive, success-history-based parameter adaptation, and linear population size reduction. L-SHADE uses a current-to-pbest/1/bin strategy for its mutation operator, in which all individuals have the same probability of being selected. In this paper, we propose a novel L-SHADE algorithm, RWS-L-SHADE, based on a roulette wheel selection strategy so that better individuals have a higher priority and worse individuals are less likely to be selected. Our extensive experiments on the CEC-2017 benchmark functions and dimensionalities of 30, 50 and 100 indicate that RWS-L-SHADE outperforms L-SHADE.
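For illustration, here is a small sketch of rank-based roulette wheel selection for a minimisation setting, the selection idea behind RWS-L-SHADE (the full L-SHADE machinery is omitted and all values are made up):

```python
# Minimal sketch: roulette wheel selection where fitter individuals get
# larger wheel slices, but worse ones keep a nonzero chance.
import random

def roulette_select(fitness):
    """Return an index, with probability proportional to inverse rank
    (rank 1 = best, for minimisation)."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    n = len(fitness)
    weights = {idx: n - rank for rank, idx in enumerate(order)}  # best gets n
    r = random.uniform(0, sum(weights.values()))
    acc = 0.0
    for idx, w in weights.items():
        acc += w
        if r <= acc:
            return idx
    return order[0]

fitness = [3.2, 0.7, 1.5, 5.9]      # minimisation: 0.7 is best
picks = [roulette_select(fitness) for _ in range(1000)]
print({i: picks.count(i) for i in range(4)})  # index 1 picked most often
```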
The human mental search (HMS) algorithm is a relatively recent population-based metaheuristic algorithm, which has shown competitive performance in solving complex optimisation problems. It is based on three main operators: mental search, grouping, and movement. In the original HMS algorithm, a clustering algorithm is used to group the current population in order to identify a promising region in the search space, while candidate solutions then move towards the best candidate solution in the promising region. In this paper, we propose a novel HMS algorithm, HMS-OS, which is based on clustering in both objective and search space, where clustering in objective space finds a set of best candidate solutions whose centroid is then also used in updating the population. For further improvement, HMS-OS benefits from an adaptive selection of the number of mental processes in the mental search operator. Experimental results on CEC-2017 benchmark functions with dimensionalities of 50 and 100, in comparison to other optimisation algorithms, indicate that HMS-OS yields excellent performance, superior to that of the other methods.
Software systems typically consist of various interacting components and units. While these components can be tested and shown to work correctly in isolation, when integrated and interacting with each other they may fail to produce the desired behaviors and results. Integration testing plays an important role in revealing issues in interactions among cooperating components. Identifying different interaction scenarios, however, is not a trivial task when performing integration testing. Moreover, most of the integration testing solutions proposed in the literature are manual, which hinders their scalability and applicability when it comes to large industrial systems. In this paper we introduce IntegrationDistiller, an automated solution and tool to identify integration scenarios and generate test cases (in the form of method call sequences) for .NET applications. It works by analyzing the code and automatically identifying class couplings, interacting methods, and invocation points. Moreover, the tool supports testers in identifying timing issues at the integration level through automatic code instrumentation at invocation points. The code analysis engine of IntegrationDistiller is built and automated using the .NET compiler platform, known as Roslyn. Hence, this work is the first to utilize Roslyn features for automatic integration analysis and integration test case generation. This work has been done as part of our collaboration with ABB Industrial Automation Control Technologies (IACT) in Västerås, Sweden, to address the integration testing challenges of the software part of the ABB Ability™ 800xA distributed control systems.
The interaction of embedded systems with their environments and their resource limitations make it important to take into account properties such as timing, security, and resource consumption in designing such systems. These so-called Extra-Functional Properties (EFPs) capture and describe the quality and characteristics of a system, and they need to be taken into account from the early phases of development and throughout the system's lifecycle. An important challenge in this context is to ensure that the EFPs defined at early design phases are actually preserved throughout detailed design phases as well as during the execution of the system on its platform. In this thesis, we provide solutions to help with the preservation of EFPs, targeting both system design phases and system execution on the platform. Starting from requirements, which form the constraints of EFPs, we propose an approach for modeling Non-Functional Requirements (NFRs) and evaluating different design alternatives with respect to the satisfaction of the NFRs. Considering the relationships and trade-offs among EFPs, an approach for balancing timing versus security properties is introduced. Our approach enables balancing in two ways: in a static way, resulting in a fixed set of components in the design model that are analyzed and thus verified to be balanced with respect to the timing and security properties, and also in a dynamic way during the execution of the system through runtime adaptation. Considering the role of the platform in the preservation of EFPs and in mitigating possible violations of them, an approach is suggested to enrich the platform with the necessary mechanisms to enable monitoring and enforcement of timing properties. In the thesis, we also identify and demonstrate the issues related to accuracy in monitoring EFPs, show how accuracy can affect the decisions that are made based on the collected information, and propose a technique to tackle this problem. As another contribution, we also show how runtime monitoring information collected about EFPs can be used to fine-tune design models until a desired set of EFPs is achieved. We have also developed a testing framework which enables automatic generation of test cases in order to verify the actual behavior of a system against its desired behavior. On a high level, the contributions of the thesis are thus twofold: proposing methods and techniques to 1) improve the maintenance of EFPs within their correct range of values during system design, and 2) identify and mitigate possible violations of EFPs at runtime.
Design of real-time embedded systems is a complex and challenging task. Part of this complexity originates from their limited resources, which entails handling a wide range of Non-Functional Requirements (NFRs). Therefore, satisfaction of NFRs plays an important role in the correctness of the design of these systems. Model-driven development has the potential to reduce the design complexity of real-time embedded systems by increasing the abstraction level, enabling analysis at earlier phases of development, and enabling code generation. In this thesis, we identify some of the challenges that exist in model-driven development of real-time embedded systems with respect to NFRs, and provide techniques and solutions that aim to help with the satisfaction of NFRs. Our end goal is to ensure that the set of NFRs defined for a system is not violated at runtime.
First, we identify and highlight the challenges of modeling NFRs in telecommunication systems and discuss the application of a UML-based approach for modeling them. Since NFRs have dependencies, and the design decisions to satisfy them cannot be considered in isolation, we propose a model-based approach for trade-off analysis of NFRs to help with the comparison of different design models with respect to the satisfaction level of their NFRs. Following the issue of evaluating the interdependencies of NFRs, we also propose solutions for establishing and maintaining balance between different NFRs. In this regard, we categorize our suggested solutions into static and dynamic. The former refers to a static design and set of features which ensures and guarantees the balance of NFRs, while the latter establishes balance at runtime through system reconfiguration and runtime adaptation. Finally, we discuss the role of the execution platform in the preservation and monitoring of timing properties in real-time embedded systems and propose an approach to enrich the platform with the necessary mechanisms for monitoring them.
The increasing complexity and size of software products, combined with pressure for shorter time-to-market, is making manual testing techniques too costly and unscalable. This is particularly observed in industrial systems where continuous integration and deployment are applied. Therefore, there is a growing need to automate the testing process and make it scalable with respect to the context of real-world and large industrial applications. While there are already some solutions for the generation of unit-level test cases, automatic generation of integration-level test cases to verify the interaction of software components poses specific challenges, especially in object-oriented applications. In this paper, we describe our ongoing work on a solution to automate the generation of integration test cases for C# applications by exploiting the code analysis capabilities of the Microsoft .NET compiler platform, known as Roslyn. This is done in collaboration with ABB Process Automation Control Technologies (PACT) in Västerås, Sweden, where the software for the 800xA distributed control system is developed.
OSLC serves as a new standard for the integration of tools used in different phases of software development. It enables establishing relationships among different data artifacts throughout the life cycle of an application. OSLC aims to provide seamless integration of life cycle management tools and enables explicit relationships among data artifacts from the early development phases, i.e., requirements. This helps to gain a better holistic view of the development of software as a system development activity. Systems engineering is in essence an interdisciplinary approach to understand, design, and manage the complexity of different projects and phenomena throughout their life cycle. In this context, a holistic view of the system is not merely desirable, but a fundamental prerequisite. In this work, we i) investigate how OSLC can strengthen a systemic view in tool integration scenarios and ii) discuss how systems engineering concepts and principles can be relevant to describe such scenarios. This is done by identifying the relationships among systems engineering and OSLC key concepts. Finally, we show, as a proof of concept, a concrete application of OSLC in building an integrated tool chain.
Addressing non-functional requirements in Real-Time Embedded Systems (RTES) is of critical importance. Proper functionality of the whole system is heavily dependent on satisfying these requirements. In model-based approaches for the development of systems in the RTES domain, there are several methods and languages for modeling and analyzing non-functional requirements. However, this domain includes different types of systems that have different sets of non-functional requirements. The problem is that general modeling approaches for RTES may not cover all the needs of these subdomains, such as telecommunication. In this poster paper, we suggest an approach to complement and apply general RTES modeling languages to better cover the different non-functional requirements of telecommunication systems.
Bringing security aspects into earlier phases of development is one of the major shifts in the software development trend. Model-driven development, which helps with raising the abstraction level and facilitating earlier analysis and verification, is a promising approach in this regard, and there have been several efforts on modeling security aspects. However, when it comes to embedded systems, non-functional requirements such as security are so interconnected that, in order to satisfy one, trade-off analysis with the others is necessary. Energy consumption is one of these requirements, and it is of great importance in the embedded systems domain due to the resource limitations of these systems. In this paper, focusing on security and energy consumption, we propose a new methodology for model-driven design of embedded systems that brings energy measurements and estimations into earlier development phases and thus identifies security design decisions that cause violations of specified energy requirements.
Introducing security features in a system is not free and brings along costs and impacts. Considering this fact is essential in the design of real-time embedded systems, which have limited resources. To ensure a correct design of these systems, it is important to also take into account the impacts of security features on other non-functional requirements, such as performance and energy consumption. Therefore, it is necessary to perform trade-off analysis among non-functional requirements to establish balance among them. In this paper, we target the timing requirements of real-time embedded systems and introduce an approach for choosing appropriate encryption algorithms at runtime, to satisfy timing requirements in an adaptive way by monitoring and keeping a log of their behaviors. The approach enables the system to adopt a less or more time-consuming (but presumably stronger) encryption algorithm, based on feedback from previous executions of encryption processes. This is particularly important for systems with a high degree of complexity, which are hard to analyze statically.
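A minimal sketch of the runtime selection idea: choose the strongest algorithm whose logged execution times still fit the current time budget. Algorithm names, strengths, and timing values are hypothetical placeholders.

```python
# Minimal sketch: adaptive selection of an encryption algorithm based on
# a log of observed execution times and a remaining time budget.
from statistics import mean

timing_log = {                      # observed per-run durations in ms
    "weak_cipher":   [2.1, 2.3, 2.0],
    "medium_cipher": [5.0, 5.4, 4.9],
    "strong_cipher": [11.8, 12.5, 12.1],
}
strength = {"weak_cipher": 1, "medium_cipher": 2, "strong_cipher": 3}

def choose_cipher(budget_ms: float) -> str:
    feasible = [a for a, log in timing_log.items() if mean(log) <= budget_ms]
    if not feasible:
        return "weak_cipher"        # fall back to the cheapest option
    return max(feasible, key=strength.get)  # strongest that fits the budget

print(choose_cipher(13.0))  # strong_cipher
print(choose_cipher(6.0))   # medium_cipher
```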
Satisfaction of Non-Functional Requirements (NFRs) is a key factor in the successful design of embedded systems. This is mainly due to the constraints and resource limitations of these systems. A design that cannot achieve the functionality of the system under these limitations is effectively a failure. Therefore, NFRs in the design of embedded systems deserve special attention. However, one big issue is that NFRs are interconnected and cannot be considered in isolation, especially since they can have direct impacts on each other, as is the case with security and performance. This means that a careful balance and trade-off analysis among NFRs is necessary. In this paper, we focus on this need and identify what information about NFRs is required in order to perform trade-off analysis. We propose and explain our in-progress approach to incorporate this information into system models in order to enable trade-off analysis. Our approach is based on the UML profiling method to annotate model elements with the necessary information.
One common goal followed by software engineers is to deliver a product which satisfies the requirements of different stakeholders. Software requirements are generally categorized into functional and Non-Functional Requirements (NFRs). While NFRs may not be the main focus in developing some applications, there are systems and domains, notably embedded systems, where the satisfaction of NFRs is critical and one of the main factors that can determine the success or failure of the delivered product. While the satisfaction of functional requirements can be decomposed and determined locally, NFRs are interconnected and have impacts on each other. For this reason, they cannot be considered in isolation, and a careful balance and trade-off among them needs to be established. We provide a generic model-based approach to evaluate the satisfaction of NFRs, taking into account their mutual impacts and dependencies. By providing indicators regarding the satisfaction level of NFRs in the system, the approach enables comparing different system design models and identifying parts of the system that are good candidates for modification in order to achieve better satisfaction levels.
In this paper we introduce a generic approach to analyze system design models with regard to the satisfaction of their Non-Functional Requirements (NFRs) to enable the evaluation of their NFRs' trade-offs. NFRs and their satisfaction become especially critical and deserve more attention in certain application domains such as real-time and embedded systems. This is mainly due to the constraints and resource limitations of these systems. A design that cannot achieve the functionality of the system under these limitations can mean a failure. However, one big issue is that NFRs are interconnected and cannot be considered in isolation, as they can have direct impacts on each other, as is the case with security and performance. This means that a careful balance and trade-off analysis among NFRs is necessary. In doing so, the role of the functional parts that contribute to and are implemented to satisfy an NFR should also be taken into account. We focus on these needs and identify what information about NFRs is required in order to perform trade-off analysis and comparison of design models. We propose and explain our approach to incorporate this information into system models, using the UML profiling method to annotate model elements with the necessary information, and then calculate satisfaction values of NFRs using model transformation techniques.
Successful design of real-time embedded systems relies heavily on the successful satisfaction of their non-functional requirements. Model-driven engineering is a promising approach for coping with the design complexity of embedded systems. However, when it comes to modeling non-functional requirements and covering specific aspects of different domains and types of embedded systems, general modeling languages for real-time embedded systems may not be able to cover all of these aspects. One solution is to use a combination of modeling languages for modeling different non-functional requirements, as is done in the definition of the EAST-ADL modeling language for the automotive domain. In this paper, we propose a UML-based solution, consisting of different modeling languages, to model non-functional requirements in the telecommunication domain, and discuss different challenges and issues in the design of telecommunication systems that are related to these requirements.
Model Driven Engineering (MDE) and Component Based Software Development (CBSD) are promising approaches to deal with the increasing complexity of Distributed Real-Time Critical Embedded Systems. On one hand, the functionality complexity of embedded systems is rapidly growing. On the other hand, extra-functional properties (EFP) must be taken into account and resource consumption must be optimized due to limited resources. However, EFP are not independent and impact each other. This paper introduces concepts and mechanisms that allow modeling security specifications and automatically deriving the corresponding security implementations by transforming the original component model into a secured one, taking into account sensitive data flow in the system. The resulting architecture ensures security requirements by construction and is expressed in the original metamodel; therefore, it enables using the same timing analysis and synthesis as with the original component model.
Considering security as an afterthought and adding security aspects to a system late in the development process has now been realized to be an inefficient and bad approach to security. The trend is to bring security considerations as early as possible into the design of systems. This is especially critical in certain domains such as real-time embedded systems. Due to the different constraints and resource limitations that these systems have, the costs and implications of security features should be carefully evaluated in order to find appropriate ones which respect the constraints of the system. Model-Driven Development (MDD) and Component-Based Development (CBD) are two software engineering disciplines which help to cope with the increasing complexity of real-time embedded systems. While CBD enables the reuse of functionality and analysis results by building systems out of already existing components, MDD helps to increase the abstraction level, perform analysis at earlier phases of development, and also promotes automatic code generation. By using these approaches and including security aspects in the design models, it becomes possible to consider security from early phases of development and also identify the implications of security features. Timing issues are one of the most important factors for successful design of real-time embedded systems. In this paper, we provide an approach using MDD and CBD methods to make it easier for system designers to include security aspects in the design of systems and identify and manage their timing implications and costs. Among different security mechanisms to satisfy security requirements, our focus in this paper is mainly on using encryption and decryption algorithms and consideration of their timing costs to design secure systems.
In this paper we introduce a method for runtime verification of the behavior of a system against state machine models in order to identify inconsistencies between the two. This is achieved by tracking states and transitions at runtime and comparing them with the expected behavior of the system captured in the form of state machine models. The goal is to increase our confidence that the order of states at runtime matches what is specified by the models. The method also provides for defect localization by identifying between which states a deviation from the expected behavior has occurred. The necessity and importance of the method lie in the fact that in model-based development, models are also used to perform analysis. Therefore, if there is any discrepancy between the behavior of the system at runtime and the models, then the results of the model-based analyses that have been performed may also be invalid and no longer applicable to the system. For this purpose, in our method we create executable test cases from state machine models to test the runtime behavior of the system.
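The core check can be sketched as follows: compare an observed state trace against the transitions allowed by the model and report the first deviating transition (the states and transitions here are invented):

```python
# Minimal sketch: verify an observed run-time state trace against the
# transitions allowed by a state machine model, localizing the first
# deviation between two states.
allowed = {("Idle", "Running"), ("Running", "Paused"),
           ("Paused", "Running"), ("Running", "Idle")}

def verify(trace):
    """Return None if the trace conforms, else the offending transition."""
    for src, dst in zip(trace, trace[1:]):
        if (src, dst) not in allowed:
            return (src, dst)       # deviation localized between these states
    return None

print(verify(["Idle", "Running", "Paused", "Running", "Idle"]))  # None
print(verify(["Idle", "Paused"]))  # ('Idle', 'Paused') - unexpected jump
```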
Testing a computer system is a challenging task, both due to the large number of possible test cases and the limited resources allocated for testing activities. This means that only a subset of all possible test cases can be chosen to test a system, and therefore the decision on the selection of test cases becomes important. The results of static analysis of a system can be used to help with this decision. In the context of model-based development, this means that the analysis performed on a system model can be used to prioritize and guide the testing efforts. Furthermore, since models allow the expression of non-functional requirements (such as performance, timing, and security), model-guided testing can be used to direct testing towards specific parts of the system which have a large impact on such requirements. In this paper, we focus on modeling and trade-off analysis of non-functional requirements and how static analysis helps to identify problematic parts of a system and thus guide the selection of test cases to target such parts.