Performance Testing Using a Smart Reinforcement Learning-Driven Test Agent
RISE Research Institutes of Sweden, Västerås, Sweden.
Mälardalen University.
RISE Research Institutes of Sweden, Västerås, Sweden.
RISE Research Institutes of Sweden, Västerås, Sweden.
2021 (English). In: 2021 IEEE Congress on Evolutionary Computation (CEC), 2021, p. 2385-2394. Conference paper, Published paper (Refereed)
Abstract [en]

Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many automated approaches rely mainly on analyzing system models, source code, or extracting the usage pattern of the system during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload affect the performance of the system in the same way; a finely tuned workload could accomplish the test objective more efficiently. Model-free reinforcement learning is widely used for finding the optimal behavior to accomplish an objective in many decision-making problems without relying on a model of the system. This paper proposes that if a test agent can learn the optimal policy (way) for generating a test workload that meets a test objective, then efficient test automation becomes possible without relying on system models or source code. We present a self-adaptive reinforcement learning-driven load testing agent, RELOAD, that learns the optimal policy for test workload generation and efficiently generates an effective workload to meet the test objective. Once the agent learns the optimal policy, it can reuse the learned policy in subsequent testing activities. Our experiments show that the proposed intelligent load test agent can accomplish the test objective at a lower test cost than common load testing procedures, resulting in higher test efficiency.
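Neither the record nor the abstract above includes code; purely as a hedged illustration of the general idea, the sketch below shows what a tabular Q-learning loop for tuning a test workload could look like. The environment interface, action set, state discretization, and hyperparameters are hypothetical placeholders, not the authors' RELOAD implementation.

```python
# Illustrative sketch only: a tabular Q-learning agent that adjusts the
# transaction mix of a load test until a performance test objective is met.
# The env object (a load-test harness) and all settings are hypothetical.
import random
from collections import defaultdict

ACTIONS = ["inc_tx_A", "dec_tx_A", "inc_tx_B", "dec_tx_B"]  # adjust per-transaction load
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2                       # learning rate, discount, exploration

# Q-values per state; states could be, e.g., discretized response-time errors.
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def run_episode(env, q_table):
    """One learning episode against a hypothetical load-test harness."""
    state = env.reset()
    done = False
    while not done:
        if random.random() < EPSILON:                       # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(q_table[state], key=q_table[state].get)
        next_state, reward, done = env.step(action)         # apply workload, observe metrics
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                           - q_table[state][action])
        state = next_state
```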

Place, publisher, year, edition, pages
2021. p. 2385-2394
Keywords [en]
Analytical models; Automation; Transfer learning; Decision making; Reinforcement learning; Knowledge representation; Evolutionary computation; performance testing; load testing; workload generation; reinforcement learning; autonomous testing
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-56402
DOI: 10.1109/CEC45853.2021.9504763
ISI: 000703866100301
Scopus ID: 2-s2.0-85124600414
ISBN: 978-1-7281-8393-0 (electronic)
OAI: oai:DiVA.org:mdh-56402
DiVA, id: diva2:1610017
Conference
2021 IEEE Congress on Evolutionary Computation (CEC 2021), 28 June - 1 July 2021, Krakow, Poland.
Available from: 2021-11-09 Created: 2021-11-09 Last updated: 2023-09-13 Bibliographically approved
In thesis
1. Intelligence-Driven Software Performance Assurance
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Software performance assurance is of great importance for the success of software products, which are nowadays involved in many aspects of our lives. Performance evaluation approaches such as performance modeling and testing, as well as runtime performance control methods, can all contribute to the realization of software performance assurance. Many common approaches to tackling challenges in this area rely on performance models or use system models and source code. Although modeling provides deep insight into system behavior, developing a detailed model is challenging. Furthermore, software artifacts such as models and source code might not be readily available at all times in the development lifecycle. This thesis focuses on leveraging the potential of machine learning (ML) and evolutionary search-based techniques to provide viable solutions for addressing the challenges in different aspects of software performance assurance efficiently and effectively.

In this thesis, we first investigate the capabilities of model-free reinforcement learning to address the objectives in robustness testing problems. We develop two self-adaptive reinforcement learning-driven test agents called SaFReL and RELOAD. They generate effective platform-based test scenarios and test workloads, respectively. The output scenarios and workloads help testers and software engineers meet their objectives efficiently without relying on models or source code. SaFReL and RELOAD learn the optimal policies (ways) to meet the test objectives and can reuse the learned policies adaptively in other testing settings. Policy reuse can lead to higher test efficiency and cost savings, for example, when testing similar test objectives or software systems with comparable performance sensitivity.
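As a hedged illustration of the policy-reuse idea described above, and not the agents' actual mechanism, the fragment below sketches how a learned Q-table could be persisted after one testing campaign and used to seed a similar one. The file format, function names, and the assumption of string-valued states are all hypothetical.

```python
# Hypothetical sketch of policy reuse: persist learned Q-values and seed a
# new testing campaign with them instead of learning from scratch.
import json

def save_policy(q_table, path="learned_policy.json"):
    """Store the Q-table after a campaign (assumes JSON-serializable, string states)."""
    with open(path, "w") as f:
        json.dump(dict(q_table), f)

def load_policy(path="learned_policy.json"):
    """Reuse a previously learned policy if one exists, otherwise start cold."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```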

Next, we leverage the potential of evolutionary computation algorithms, namely genetic algorithms, evolution strategies, and particle swarm optimization, to generate failure-revealing test scenarios for robustness testing of AI systems. In this part, we choose autonomous driving systems as a prevailing example of contemporary AI systems. We study the efficacy of the proposed evolutionary search-based test generation techniques and primarily evaluate to what extent they can trigger failures. Moreover, we investigate the diversity of those failures and compare them with existing baseline solutions.
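Again purely illustrative and not the thesis' actual tooling: the sketch below uses a minimal genetic algorithm that evolves scenario parameters (for example, pedestrian speed or fog density) towards failure-revealing test cases. The scenario encoding, the simulate() hook, and the GA settings are assumptions.

```python
# Minimal GA sketch: evolve numeric scenario parameters towards failures.
# simulate() is a hypothetical simulator hook returning the minimum distance
# to collision for a scenario (smaller means closer to a failure).
import random

POP_SIZE, GENERATIONS, MUT_RATE, N_PARAMS = 20, 30, 0.2, 4

def fitness(scenario):
    return -simulate(scenario)                 # maximize closeness to a collision

def mutate(scenario):
    return [p + random.gauss(0, 0.1) if random.random() < MUT_RATE else p
            for p in scenario]

def crossover(a, b):
    cut = random.randint(1, N_PARAMS - 1)      # single-point crossover
    return a[:cut] + b[cut:]

def evolve():
    population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)        # most failure-revealing scenario found
```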

Finally, we again use the potential of model-free reinforcement learning to develop adaptive ML-driven runtime performance control approaches. We present a response time preservation method for a sample type of industrial application and a resource allocation technique for dynamic workloads in a data grid application. The proposed ML-driven techniques learn how to adjust tunable parameters and resource configurations at runtime to keep performance continually compliant with the requirements and to further optimize runtime performance. We evaluate the efficacy of the approaches and show how effectively they can improve performance and keep the performance requirements satisfied under varying conditions, such as dynamic workloads and runtime events that cause substantial response time deviations.
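As with the other sketches, the fragment below only indicates the general shape of an ML-driven runtime controller that picks a scaling action when response times drift from a requirement; the target value, action set, and state discretization are hypothetical, not the thesis' implementation.

```python
# Hypothetical control-loop step: discretize the response-time deviation and
# choose a resource-scaling action epsilon-greedily from learned Q-values.
import random

TARGET_MS = 200                                # assumed response-time requirement
ACTIONS = ["scale_up", "scale_down", "hold"]

def state_of(observed_ms):
    """Discretize the relative deviation from the response-time target."""
    error = (observed_ms - TARGET_MS) / TARGET_MS
    if error > 0.1:
        return "over"
    if error < -0.1:
        return "under"
    return "ok"

def control_step(q_table, observed_ms, epsilon=0.1):
    """Return the next scaling action for the observed response time."""
    state = state_of(observed_ms)
    if random.random() < epsilon:
        return random.choice(ACTIONS)          # keep exploring at runtime
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))
```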

Place, publisher, year, edition, pages
Västerås: Mälardalens universitet, 2022
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 358
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-58065
ISBN: 978-91-7485-549-4
Public defence
2022-06-03, Alfa, Mälardalens universitet, Västerås, 14:00 (English)
Available from: 2022-04-20 Created: 2022-04-20 Last updated: 2022-11-08 Bibliographically approved

Open Access in DiVA

No full text in DiVA
