PEAS: A Performance Evaluation Framework for Auto-Scaling Strategies in Cloud Applications
Lund University. ORCID iD: 0000-0002-1364-8127
Umeå University.
Lund University.
Umeå University.
2016 (English). In: ACM Trans. Model. Perform. Eval. Comput. Syst. (TOMPECS), ISSN 2376-3639, Vol. 1, no. 4, p. 1-31. Article in journal (Refereed). Published.
Abstract [en]

Numerous auto-scaling strategies have been proposed in recent years to improve various Quality of Service (QoS) indicators of cloud applications, for example, response time and throughput, by adapting the amount of resources assigned to the application to meet the workload demand. However, a proposed auto-scaler is usually evaluated through experiments under specific conditions, and such evaluations seldom include extensive testing that accounts for uncertainties in the workloads and unexpected behaviors of the system. Such tests cannot provide guarantees about the behavior of the system under general conditions. In this article, we present a Performance Evaluation framework for Auto-Scaling (PEAS) strategies in the presence of uncertainties. The evaluation is formulated as a chance-constrained optimization problem, which is solved using scenario theory. The adoption of this technique allows one to give probabilistic guarantees on the obtainable performance. Six auto-scaling strategies have been selected from the literature for extensive evaluation and are compared using the proposed framework. We build a discrete-event simulator and parameterize it based on real experiments. Using the simulator, each auto-scaler's performance is evaluated on 796 distinct real workload traces from projects hosted on the Wikimedia Foundation's servers, and their performance is compared using PEAS. The evaluation is carried out using different performance metrics, highlighting the flexibility of the framework, while providing probabilistic bounds on the evaluation and on the performance of the algorithms. Our results highlight the problem of generalizing the conclusions of the original published studies and show that, depending on the chosen evaluation criteria, a controller can be shown to be better than other controllers.
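As a rough illustration of the formulation described above, the chance-constrained evaluation problem and its scenario-based counterpart can be sketched as follows; the notation (cost J, random workload δ, risk level ε, bound h) is illustrative and not taken verbatim from the article.

\[
\min_{h \in \mathbb{R}} \; h
\quad \text{s.t.} \quad
\Pr\{\, J(\delta) \le h \,\} \ge 1 - \varepsilon ,
\]

where $J(\delta)$ is the chosen performance metric of an auto-scaler on a random workload $\delta$ and $\varepsilon$ is the accepted risk. The scenario program replaces the probabilistic constraint with $N$ sampled workload traces $\delta^{(1)}, \dots, \delta^{(N)}$ (here, the 796 Wikimedia traces):

\[
\min_{h \in \mathbb{R}} \; h
\quad \text{s.t.} \quad
J\bigl(\delta^{(i)}\bigr) \le h , \quad i = 1, \dots, N ,
\]

whose optimum is simply $h^{\star} = \max_i J(\delta^{(i)})$. By the standard scenario-theory bound for a single scalar decision variable (Campi and Garatti), $h^{\star}$ is exceeded by a new, unseen workload with probability at most $\varepsilon$, with confidence at least $1 - (1-\varepsilon)^{N}$. Purely as a numerical illustration, not the article's parameters: with $N = 796$ and confidence $1 - 10^{-6}$, this bound gives $\varepsilon \approx 0.017$, i.e., the worst cost observed over the traces upper-bounds the cost on an unseen trace with probability of roughly 98%.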

Place, publisher, year, edition, pages
United States: ACM, 2016. Vol. 1, no. 4, p. 1-31
Keywords [en]
Performance evaluation, auto-scaling, cloud computing, elasticity, randomized optimization
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:mdh:diva-33786
DOI: 10.1145/2930659
Scopus ID: 2-s2.0-85074674305
OAI: oai:DiVA.org:mdh-33786
DiVA id: diva2:1048547
Available from: 2016-11-21. Created: 2016-11-21. Last updated: 2020-10-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Papadopoulos, Alessandro