PEAS: A Performance Evaluation Framework for Auto-Scaling Strategies in Cloud Applications
Lund University. ORCID iD: 0000-0002-1364-8127
Umeå University.
Lund University.
Umeå University.
2016 (English) In: ACM Trans. Model. Perform. Eval. Comput. Syst. (TOMPECS), ISSN 2376-3639, Vol. 1, no. 4, p. 1-31. Article in journal (Refereed) Published
Abstract [en]

Numerous auto-scaling strategies have been proposed in the past few years for improving various Quality of Service (QoS) indicators of cloud applications, for example, response time and throughput, by adapting the amount of resources assigned to the application to meet the workload demand. However, the evaluation of a proposed auto-scaler is usually achieved through experiments under specific conditions and seldom includes extensive testing to account for uncertainties in the workloads and unexpected behaviors of the system. These tests can by no means provide guarantees about the behavior of the system in general conditions. In this article, we present a Performance Evaluation framework for Auto-Scaling (PEAS) strategies in the presence of uncertainties. The evaluation is formulated as a chance-constrained optimization problem, which is solved using scenario theory. The adoption of such a technique allows one to give probabilistic guarantees of the obtainable performance. Six different auto-scaling strategies have been selected from the literature for extensive test evaluation and compared using the proposed framework. We build a discrete event simulator and parameterize it based on real experiments. Using the simulator, each auto-scaler's performance is evaluated using 796 distinct real workload traces from projects hosted on the Wikimedia Foundation's servers, and their performance is compared using PEAS. The evaluation is carried out using different performance metrics, highlighting the flexibility of the framework, while providing probabilistic bounds on the evaluation and the performance of the algorithms. Our results highlight the problem of generalizing the conclusions of the original published studies and show that, depending on the evaluation criteria, a controller can be shown to be better than other controllers.

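The scenario-theory step mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual formulation; it only shows the simplest one-dimensional case, where the empirical worst case over N independently sampled workload traces violates the chance constraint with probability at most eps, with confidence 1 - beta, and beta = (1 - eps)^N. The simulate and traces identifiers in the usage comment are hypothetical.

def scenario_bound(per_trace_metric, beta=1e-3):
    """Empirical worst case over N simulated traces, plus the violation
    level eps that scenario theory guarantees with confidence 1 - beta
    for a single decision variable: (1 - eps)**N <= beta, so
    eps = 1 - beta**(1/N)."""
    n = len(per_trace_metric)
    gamma = max(per_trace_metric)   # worst observed metric, e.g. mean response time
    eps = 1.0 - beta ** (1.0 / n)   # violation level backed by the N scenarios
    return gamma, eps

# Hypothetical usage with per-trace results from a discrete event simulator:
# metrics = [simulate(autoscaler, trace) for trace in traces]   # e.g. 796 traces
# gamma, eps = scenario_bound(metrics)
# With confidence 1 - beta, a new random trace exceeds gamma with probability
# at most eps; for 796 traces and beta = 1e-3, eps is roughly 0.009.

Under this reading, more traces tighten the guarantee (smaller eps) for the same confidence level, which is why evaluating against hundreds of real traces rather than a handful of test cases matters.
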
Place, publisher, year, edition, pages
United States: ACM, 2016. Vol. 1, no. 4, p. 1-31
Keywords [en]
Performance evaluation, auto-scaling, cloud computing, elasticity, randomized optimization
National subject category
Electrical Engineering and Electronics
Identifiers
URN: urn:nbn:se:mdh:diva-33786
DOI: 10.1145/2930659
OAI: oai:DiVA.org:mdh-33786
DiVA, id: diva2:1048547
Available from: 2016-11-21 Created: 2016-11-21 Last updated: 2016-12-01 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text

Search further in DiVA

By the author/editor
Papadopoulos, Alessandro
Electrical Engineering and Electronics
