Adaptive Runtime Response Time Control in PLC-based Real-Time Systems using Reinforcement Learning
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
RISE SICS, Sweden. ORCID iD: 0000-0002-1512-0844
RISE SICS, Sweden. ORCID iD: 0000-0003-1597-6738
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0001-5297-6548
2018 (English). In: ACM/IEEE 13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2018), co-located with the International Conference on Software Engineering (ICSE 2018), Gothenburg, Sweden, 28–29 May 2018, p. 217–223. Conference paper, Published paper (Refereed).
Abstract [en]

Timing requirements, such as constraints on response time, are key characteristics of real-time systems, and violations of these requirements can cause a total failure, particularly in hard real-time systems. Runtime monitoring of system properties is of great importance for detecting and mitigating such failures; thus, a runtime controller that preserves the system properties can improve the robustness of the system with respect to timing violations. Common control approaches may require a precise analytical model of the system, which is difficult to provide at design time. Reinforcement learning is a promising technique for adaptive, model-free control when the environment is stochastic and the control problem can be formulated as a Markov Decision Process. In this paper, we propose an adaptive runtime controller based on reinforcement learning for real-time programs running on Programmable Logic Controllers (PLCs), with the aim of meeting response time requirements. We demonstrate through multiple experiments that our approach can control the response time efficiently so as to satisfy the timing requirements.
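The idea in the abstract — a model-free agent that observes the measured response time and adjusts a control knob to keep it below its bound — can be sketched with tabular Q-learning. This is a minimal illustrative sketch under assumed names and a toy simulated plant; it is not the paper's actual state/action formulation or reward function.

```python
import random

# Hypothetical sketch: a Q-learning agent tunes a control knob (e.g., a load-
# shedding or delay-compensation level) so that a PLC program's simulated
# response time stays below its requirement. All constants are assumptions.

REQUIREMENT_MS = 100.0          # assumed response-time bound
ACTIONS = [-1, 0, +1]           # decrease / keep / increase the knob
N_STATES = 5                    # coarse discretisation of response time

def discretise(rt_ms):
    """Map a measured response time to a coarse state index."""
    return min(N_STATES - 1, int(rt_ms / (REQUIREMENT_MS / 2)))

def simulate_response_time(knob, rng):
    """Toy stochastic plant: a higher knob value lowers the response time."""
    return max(10.0, 150.0 - 20.0 * knob + rng.gauss(0, 5))

def train(episodes=200, alpha=0.3, gamma=0.9, epsilon=0.2, seed=0):
    """Model-free control loop: observe, act, get reward, update Q-table."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    knob = 0
    rt = simulate_response_time(knob, rng)
    for _ in range(episodes):
        s = discretise(rt)
        # Epsilon-greedy action selection.
        a = (rng.randrange(len(ACTIONS)) if rng.random() < epsilon
             else max(range(len(ACTIONS)), key=lambda i: q[s][i]))
        knob = max(0, min(5, knob + ACTIONS[a]))
        rt = simulate_response_time(knob, rng)
        # Reward: +1 when the requirement is met, scaled penalty otherwise.
        reward = 1.0 if rt <= REQUIREMENT_MS else -(rt - REQUIREMENT_MS) / 10.0
        s2 = discretise(rt)
        q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
    return q, knob, rt
```

No analytical model of the plant is used anywhere: the agent learns purely from observed response times and rewards, which is the "model-free" property the abstract refers to.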

Place, publisher, year, edition, pages
2018, p. 217–223
Series
Proceedings - International Conference on Software Engineering, ISSN 0270-5257
Keywords [en]
Adaptive response time control, PLC-based real-time programs, Runtime monitoring, Reinforcement learning
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-38955
DOI: 10.1145/3194133.3194153
ISI: 000458799600029
Scopus ID: 2-s2.0-85051555083
OAI: oai:DiVA.org:mdh-38955
DiVA id: diva2:1205964
Conference
13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2018), 28–29 May 2018, Gothenburg, Sweden
Available from: 2018-05-15. Created: 2018-05-15. Last updated: 2020-04-14. Bibliographically approved.
In thesis
1. Machine Learning-Assisted Performance Assurance
2020 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

With the growing involvement of software systems in our lives, assurance of performance, as an important quality characteristic, rises to prominence for the success of software products. Performance testing, preservation, and improvement all contribute to the realization of performance assurance. Common approaches to the challenges of testing, preserving, and improving performance mainly rely on performance models or make use of system models or source code. Although modeling provides deep insight into system behavior, drawing a well-detailed model is challenging, and artifacts such as models and source code are not always available. These issues motivate the use of model-free machine learning techniques, such as model-free reinforcement learning, to address the related challenges in performance assurance.

With reinforcement learning, the optimal policy for achieving the intended objective in a performance assurance process can be learnt by the acting system itself (e.g., the tester system), so that the objective is accomplished without advanced performance models. Furthermore, the learnt policy can later be reused in similar situations, which improves efficiency by saving computation time while reducing the dependency on models and source code.

In this thesis, our research goal is to develop adaptive and efficient performance assurance techniques that meet the intended objectives without access to models and source code. We propose three model-free learning-based approaches to tackle the challenges: efficient generation of performance test cases, runtime preservation of performance (response time), and performance improvement in terms of makespan (completion time) reduction. We demonstrate the efficiency and adaptivity of our approaches through experimental evaluations conducted on research prototype tools, i.e., simulation environments that we developed or tailored for our problems, in different application areas.
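The policy-reuse idea in the thesis abstract — a learnt policy saved after training and applied later in a similar situation without retraining — can be sketched as simple serialisation of a Q-table followed by greedy action selection. The file format and function names here are assumptions for illustration only.

```python
import json

# Hypothetical sketch of policy reuse: a learnt Q-table (a list of per-state
# action values) is serialised to disk and later loaded to act greedily in a
# similar environment, skipping the training phase entirely.

def save_policy(q_table, path):
    """Persist the learnt Q-table as JSON."""
    with open(path, "w") as f:
        json.dump(q_table, f)

def load_greedy_policy(path):
    """Load a Q-table and reduce it to one greedy action per state.

    At reuse time no further exploration is needed, so the policy is just
    the argmax action for each state.
    """
    with open(path) as f:
        q_table = json.load(f)
    return [max(range(len(row)), key=lambda a: row[a]) for row in q_table]
```

Reusing a serialised policy this way is what saves computation time: the expensive learning loop runs once, and subsequent deployments only perform cheap table lookups.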

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2020
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 289
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-47501
ISBN: 978-91-7485-463-3
Presentation
2020-06-02, Online/Zoom, Västerås, 09:15 (English)
Opponent
Supervisors
Available from: 2020-04-17. Created: 2020-04-14. Last updated: 2020-04-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

By author/editor
Helali Moghadam, Mahshid; Saadatmand, Mehrdad; Bohlin, Markus; Lisper, Björn
By organisation
Embedded Systems; Computer Systems
