Machine Learning-Assisted Performance Assurance
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. RISE Research Institutes of Sweden. ORCID iD: 0000-0003-3354-1463
2020 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

With the growing involvement of software systems in our lives, assurance of performance, as an important quality characteristic, rises to prominence for the success of software products. Performance testing, preservation, and improvement all contribute to the realization of performance assurance. Common approaches to the challenges in testing, preserving, and improving performance mainly rely on performance models, system models, or source code. Although modeling provides deep insight into system behavior, building a sufficiently detailed model is challenging, and artifacts such as models and source code are not always available. These issues motivate the use of model-free machine learning techniques, such as model-free reinforcement learning, to address the related challenges in performance assurance.

The idea behind reinforcement learning is that if the optimal policy (way of acting) for achieving the intended objective in a performance assurance process could instead be learnt by the acting system (e.g., the tester system), then the intended objective could be accomplished without advanced performance models. Furthermore, the learnt policy could later be reused in similar situations, which improves efficiency by saving computation time while reducing the dependency on models and source code.
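The policy-learning loop described above can be illustrated with a minimal tabular Q-learning sketch on a toy task (the environment, states, actions, and rewards below are illustrative placeholders, not the formulation used in the thesis):

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn a state-action value table by interacting with `step`,
    a function mapping (state, action) -> (next_state, reward, done)."""
    # Optimistic initialization encourages trying every action at least once.
    Q = [[1.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # standard Q-learning update; no bootstrapping on terminal states
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy task: action 1 advances toward a large final reward, action 0 quits
# early for a small one, so the optimal policy is to keep advancing.
def step(s, a):
    if a == 0:
        return s, 0.2, True
    if s == 3:
        return s, 1.0, True
    return s + 1, 0.0, False

random.seed(0)
Q = q_learning(n_states=4, n_actions=2, step=step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

Once learnt, `policy` can be replayed in similar situations without exploring from scratch, which is the reuse idea the thesis builds on.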

In this thesis, our research goal is to develop adaptive and efficient performance assurance techniques that meet the intended objectives without access to models and source code. We propose three model-free learning-based approaches to tackle the challenges: efficient generation of performance test cases, runtime performance (response time) preservation, and performance improvement in terms of makespan (completion time) reduction. We demonstrate the efficiency and adaptivity of our approaches through experimental evaluations conducted on research prototype tools, i.e., simulation environments that we developed or tailored for our problems, in different application areas.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2020.
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 289
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-47501
ISBN: 978-91-7485-463-3 (print)
OAI: oai:DiVA.org:mdh-47501
DiVA, id: diva2:1423434
Presentation
2020-06-02, Online/Zoom, Västerås, 09:15 (English)
Available from: 2020-04-17 Created: 2020-04-14 Last updated: 2020-04-30 Bibliographically approved
List of papers
1. Machine Learning to Guide Performance Testing: An Autonomous Test Framework
2019 (English). In: ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems (ITEQS'19), 2019, p. 164-167. Conference paper, Published paper (Refereed)
Abstract [en]

Satisfying performance requirements is of great importance for performance-critical software systems. Performance analysis, which estimates performance indices and ascertains whether the requirements are met, is essential for achieving this target. Model-based analysis, a common approach, can provide useful information, but inferring a precise performance model is challenging, especially for complex systems. Performance testing is a dynamic approach to performance analysis. In this work-in-progress paper, we propose a self-adaptive learning-based test framework that learns how to apply stress testing, as one aspect of performance testing, to various software systems in order to find the performance breaking point. It learns the optimal policy for generating stress test cases for different types of software systems, then replays the learned policy to generate the test cases with less effort. Our study indicates that the proposed learning-based framework can be applied to different types of software systems and guides towards autonomous performance testing.

Keywords
performance requirements, performance testing, test case generation, reinforcement learning, autonomous testing
National Category
Engineering and Technology Computer Systems
Identifiers
urn:nbn:se:mdh:diva-43918 (URN)
10.1109/ICSTW.2019.00046 (DOI)
000477742600022 ()
2-s2.0-85068406208 (Scopus ID)
Conference
ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems ITEQS'19, 22 Apr 2019, Xi’an, China
Available from: 2019-06-14 Created: 2019-06-14 Last updated: 2020-04-14 Bibliographically approved
2. An Autonomous Performance Testing Framework using Self-Adaptive Fuzzy Reinforcement Learning
(English). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367. Article in journal (Refereed), Submitted
Abstract [en]

Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to automated generation of performance test cases mainly involve analysis of source code or system models, or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective could instead be learnt by the testing system, then test automation without advanced performance models would be possible. Furthermore, the learnt policy could later be reused for similar software systems under test, leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy for generating performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated environment, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process, and performs adaptively without access to source code and performance models.
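The policy-reuse idea above, where the policy learnt on one system under test warm-starts learning on another, can be sketched as follows (the Q-table keys, i.e. the state and action names, are invented for illustration and are not SaFReL's actual state/action space):

```python
import copy

def warm_start(q_source):
    """Initialize learning for a new system under test (SUT) from a
    previously learnt Q-table instead of from scratch; a deep copy keeps
    the continued updates for the new SUT separate from the source."""
    return copy.deepcopy(q_source)

# Policy learnt on SUT A: (observed resource condition, stress action)
# pairs mapped to learnt values -- illustrative names only.
q_sut_a = {("low_memory", "reduce_memory"): 0.8,
           ("low_memory", "reduce_cpu"): 0.3}

q_sut_b = warm_start(q_sut_a)                # transfer phase: reuse A's values
q_sut_b[("low_memory", "reduce_cpu")] = 0.5  # learning keeps running on B
```

The deep copy matters: the framework keeps updating the policy in the long term, so each SUT's continued learning must not overwrite the transferred source table.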

Place, publisher, year, edition, pages
Springer
Keywords
Performance testing, Stress testing, Test case generation, Reinforcement learning, Autonomous testing
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-47471 (URN)
Available from: 2020-04-06 Created: 2020-04-06 Last updated: 2020-05-08 Bibliographically approved
3. Intelligent Load Testing: Self-adaptive Reinforcement Learning-driven Load Runner
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Load testing, with the aim of generating an effective workload to identify performance issues, is a time-consuming and complex challenge, particularly for evolving software systems. Current automated approaches mainly rely on analyzing system models and source code, or on modeling real system usage. However, that information might not always be available, or obtaining it might require considerable effort. On the other hand, if the optimal policy for generating a test workload that meets the testing objectives can be learned by the testing system, testing becomes possible without access to system models or source code. We propose a self-adaptive reinforcement learning-driven load testing agent that learns the optimal policy for test workload generation. The agent can reuse the learned policy in subsequent testing activities, such as meeting different types of testing targets. It generates an efficient test workload that meets the testing objective adaptively, without access to system models or source code. Our experimental evaluation shows that the proposed self-adaptive intelligent load testing reaches the testing objective at lower cost in terms of workload size, i.e., the number of generated users, compared to a typical load testing process, and yields productivity benefits in terms of higher efficiency.

Keywords
performance testing, load testing, workload generation, reinforcement learning, autonomous testing
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-47500 (URN)
Available from: 2020-04-13 Created: 2020-04-13 Last updated: 2020-04-17 Bibliographically approved
4. Adaptive Runtime Response Time Control in PLC-based Real-Time Systems using Reinforcement Learning
2018 (English). In: ACM/IEEE 13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2018), co-located with the International Conference on Software Engineering (ICSE 2018), Gothenburg, Sweden, 28-29 May 2018, p. 217-223. Conference paper, Published paper (Refereed)
Abstract [en]

Timing requirements, such as constraints on response time, are key characteristics of real-time systems, and violations of these requirements might cause a total failure, particularly in hard real-time systems. Runtime monitoring of system properties is of great importance for detecting and mitigating such failures. Thus, a runtime controller that preserves the system properties could improve the robustness of the system with respect to timing violations. Common control approaches may require a precise analytical model of the system, which is difficult to provide at design time. Reinforcement learning is a promising technique for adaptive model-free control when the environment is stochastic and the control problem can be formulated as a Markov decision process. In this paper, we propose adaptive runtime control using reinforcement learning for real-time programs based on programmable logic controllers (PLCs), to meet response time requirements. We demonstrate through multiple experiments that our approach controls the response time efficiently to satisfy the timing requirements.
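A single-state simplification of such model-free control can be sketched as follows; the plant model, target, reward, and control knob below are invented for illustration (the paper's actual formulation is a full MDP, not this bandit-style reduction):

```python
import random

def adaptive_controller(measure, settings, rounds=2000, epsilon=0.1,
                        target=70.0):
    """Model-free tuning of a control knob: estimate each setting's average
    reward online and converge to the one that does the most work while
    keeping the measured response time under `target`."""
    value = {s: 0.0 for s in settings}
    count = {s: 0 for s in settings}
    for _ in range(rounds):
        # epsilon-greedy over control settings
        if random.random() < epsilon:
            s = random.choice(settings)
        else:
            s = max(settings, key=lambda x: value[x])
        rt = measure(s)  # no analytical model: just observe the plant
        # reward: prefer higher settings, heavily penalize target violations
        reward = s - (100.0 if rt > target else 0.0)
        count[s] += 1
        value[s] += (reward - value[s]) / count[s]  # incremental mean
    return max(settings, key=lambda x: value[x])

# Illustrative plant: response time grows with the number of admitted tasks.
random.seed(1)
measure = lambda tasks: 50.0 + 10.0 * tasks + random.uniform(-2, 2)
best = adaptive_controller(measure, settings=[0, 1, 2, 3, 4])
```

Because the controller only observes rewards, it adapts if the plant drifts at runtime, which is what motivates model-free control over a fixed design-time analytical model.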

Series
Proceedings - International Conference on Software Engineering, ISSN 0270-5257
Keywords
Adaptive response time control, PLC-based real-time programs, Runtime monitoring, Reinforcement learning
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-38955 (URN)
10.1145/3194133.3194153 (DOI)
000458799600029 ()
2-s2.0-85051555083 (Scopus ID)
Conference
13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems SEAMS 18, 28 May 2018, Gothenburg, Sweden
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2020-04-14 Bibliographically approved
5. Makespan reduction for dynamic workloads in cluster-based data grids using reinforcement-learning based scheduling
2018 (English). In: Journal of Computational Science, ISSN 1877-7503, E-ISSN 1877-7511, Vol. 24, p. 402-412. Article in journal (Refereed), Published
Abstract [en]

Scheduling is one of the important problems within the scope of control and management in grid- and cloud-based systems. Data grids, still a primary solution for processing data-intensive tasks, deal with managing large amounts of distributed data across multiple nodes. In this paper, a two-phase learning-based scheduling algorithm is proposed for scheduling data-intensive tasks in cluster-based data grids. In the proposed algorithm, a hierarchical multi-agent system, consisting of one global broker agent and several local agents, carries out the scheduling procedure. In the first phase, the global broker agent selects the cluster with the minimum data cost based on the data communication cost measure; in the second phase, an adaptive policy based on Q-learning is used by the local agent of the selected cluster to schedule the task to the proper node of the cluster. The impact of three action selection strategies has been investigated, and the performance of the corresponding versions of the scheduling algorithm has been evaluated under three types of workloads with heterogeneous tasks. Experimental results show that for dynamic workloads with varying task submission patterns, the proposed learning-based scheduling algorithm outperforms four common scheduling algorithms, Shortest Queue (Queue Length), Access Cost, Queue Access Cost (QAC), and HCS, which use regular combinations of primary parameters such as data communication cost and queue length. Applying a learning-based strategy gives the scheduling algorithm more adaptability to changing conditions in the environment.
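The two-phase procedure described above can be sketched as follows (the cluster names, costs, and the stateless per-node Q-values are invented for illustration; the paper's actual state and reward definitions differ):

```python
import random

# Illustrative cluster model: per-cluster data-transfer cost for the task's
# required files, plus that cluster's worker nodes.
clusters = {
    "c1": {"data_cost": 5.0, "nodes": ["n1", "n2"]},
    "c2": {"data_cost": 2.0, "nodes": ["n3", "n4"]},
}
# One Q-table per local agent: estimated value of sending a task to each node.
q_tables = {"c1": {"n1": 0.0, "n2": 0.0}, "c2": {"n3": 0.0, "n4": 0.0}}

def schedule(clusters, q_tables, epsilon=0.1):
    """Phase 1: the global broker picks the cluster with minimum data
    communication cost. Phase 2: the selected cluster's local agent picks
    a node epsilon-greedily from its Q-table."""
    cluster = min(clusters, key=lambda c: clusters[c]["data_cost"])
    nodes = clusters[cluster]["nodes"]
    if random.random() < epsilon:
        node = random.choice(nodes)                            # explore
    else:
        node = max(nodes, key=lambda n: q_tables[cluster][n])  # exploit
    return cluster, node

def update(q_tables, cluster, node, completion_time, alpha=0.5):
    """After the task finishes, reward shorter completion times so the
    local agent adapts to changing node load."""
    q = q_tables[cluster][node]
    q_tables[cluster][node] = q + alpha * (-completion_time - q)

cluster, node = schedule(clusters, q_tables, epsilon=0.0)  # greedy for demo
update(q_tables, cluster, node, completion_time=4.0)
```

The epsilon parameter stands in for the action selection strategy, which is exactly the design choice whose variants the paper compares.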

Place, publisher, year, edition, pages
Netherlands: Elsevier, 2018
National Category
Engineering and Technology Computer Systems
Identifiers
urn:nbn:se:mdh:diva-46607 (URN)
10.1016/j.jocs.2017.09.016 (DOI)
Available from: 2019-12-20 Created: 2019-12-20 Last updated: 2020-04-14 Bibliographically approved

Open Access in DiVA

fulltext (FULLTEXT02.pdf, 1654 kB, application/pdf)

Authority records

Helali Moghadam, Mahshid
