Similarity-Based Prioritization of Test Case Automation
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. (Software Testing Laboratory) ORCID iD: 0000-0001-8096-3592
RISE.
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. (Software Testing Laboratory)
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. (Software Testing Laboratory)
(English) In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367. Article in journal (Refereed). Published.
Abstract [en]

The importance of efficient software testing procedures is driven by ever-increasing system complexity as well as global competition. In the particular case of manual test cases at the system integration level, where thousands of test cases may be executed before release, time must be well spent in order to test the system as completely and efficiently as possible. Automating a subset of the manual test cases, i.e., translating the manual instructions into automatically executable code, is one way of decreasing the test effort. It is also common that test cases exhibit similarities, which can be exploited through reuse when automating a test suite. In this paper, we investigate the potential for reducing test effort by ordering the test cases before such automation, given that already automated parts of test cases can be reused. In our analysis, we investigate several approaches to prioritization in a case study at a large Swedish vehicle manufacturer. The study analyzes the effects on test effort for four projects with a total of 3,919 integration test cases, comprising 35,180 test steps, written in natural language. The results show that, for the four projects considered, the difference in expected manual effort between the best and the worst order found is on average 12 percentage points. The results also show that our proposed prioritization method is nearly as good as more resource-demanding meta-heuristic approaches at a fraction of the computational time. Based on our results, we conclude that the order of automation is important when the set of test cases contains similar steps (instructions) that cannot be removed but can be reused. More precisely, the order matters for how quickly the manual test execution effort decreases for a set of test cases that are being automated.
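The core idea can be illustrated with a small sketch. This is an illustrative approximation, not the paper's exact method: test cases are modeled as lists of step strings, steps that have already been automated can be reused at no extra automation cost, and a greedy heuristic picks as the next test case to automate the one that needs the fewest new steps.

```python
# Illustrative sketch (not the paper's exact algorithm): greedily choose an
# automation order so that manual execution effort drops as fast as possible,
# assuming script code for already automated steps can be reused by later
# test cases.

def greedy_automation_order(test_cases):
    """test_cases: dict mapping test-case id -> list of step strings."""
    automated_steps = set()          # steps for which script code already exists
    remaining = dict(test_cases)
    order = []

    while remaining:
        def score(tc_id):
            steps = remaining[tc_id]
            new_steps = [s for s in steps if s not in automated_steps]
            # Prefer test cases that need few new steps relative to their size:
            # they become fully automated (no manual effort) at a low cost.
            return (len(new_steps), -len(steps))

        best = min(remaining, key=score)
        automated_steps.update(remaining.pop(best))
        order.append(best)

    return order


if __name__ == "__main__":
    suite = {
        "TC1": ["set speed 50", "check lamp on", "reset"],
        "TC2": ["set speed 50", "check lamp off", "reset"],
        "TC3": ["open door", "check alarm"],
    }
    print(greedy_automation_order(suite))  # e.g. ['TC1', 'TC2', 'TC3']
```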

National Category
Software Engineering
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-35055
DOI: 10.1007/s11219-017-9401-7
Scopus ID: 2-s2.0-85043389019
OAI: oai:DiVA.org:mdh-35055
DiVA, id: diva2:1083850
Projects
IMPRINT
Funder
VINNOVA, 2014-03397
Knowledge Foundation, 20130085
Knowledge Foundation, 20160139
Available from: 2017-03-22 Created: 2017-03-22 Last updated: 2018-05-08 Bibliographically approved
In thesis
1. Similarity-Based Test Effort Reduction
2017 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Embedded computer systems are all around us. We find them in everything from dishwashers to cars and airplanes. They must always work correctly, and often within certain time constraints. The software of such a system can be very large and complex, e.g., in the case of a car or a train. Hence, we develop the software for embedded systems in smaller, manageable parts. These parts can be successively integrated, possibly at different levels, until they form the complete software for the embedded system. This phase of the development process is called the system integration phase and is one of the most critical phases in the development of embedded systems. In this phase, substantial effort is spent on testing activities.

Studies have found that a considerable amount of test effort is wasteful because people, unknowingly or by necessity, perform similar (or even overlapping) test activities. Consequently, test cases may end up being similar, partially or wholly. We identified such test similarities in a case study of 2,500 test cases, written in natural language, from four different projects in the embedded vehicular domain. Such information can be used to reduce effort when maintaining or automating similar test cases.
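As a rough illustration of how such step-level similarities could be detected (the thesis does not prescribe this particular measure), the natural-language steps can be normalized into token sets and compared pairwise, for example with a Jaccard similarity and a cut-off threshold; both are assumptions made here for the sketch.

```python
# Hypothetical sketch of step-level similarity detection; the token-based
# Jaccard measure and the 0.8 threshold are illustrative choices only.
import re
from itertools import combinations

def normalize(step):
    """Lower-case a natural-language test step and split it into word tokens."""
    return set(re.findall(r"[a-z0-9]+", step.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def similar_step_pairs(steps, threshold=0.8):
    """Return index pairs of steps whose token sets overlap above the threshold."""
    tokens = [normalize(s) for s in steps]
    return [(i, j) for i, j in combinations(range(len(steps)), 2)
            if jaccard(tokens[i], tokens[j]) >= threshold]

steps = [
    "Set the vehicle speed to 50 km/h",
    "Set vehicle speed to 50 km/h",
    "Verify that the warning lamp is lit",
]
print(similar_step_pairs(steps))  # [(0, 1)]
```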

In another case study in the same domain, we investigated several approaches for prioritizing test cases to automate, with the objective of reducing manual test effort as quickly as possible, given that similar automated tests could be reused (similarity-based reuse). We analyzed how the automation order affects the test effort for four projects with a total of 3,919 integration test cases, written in natural language. The results showed that similarity-based reuse of automated test case script code, together with the best-performing automation order, can reduce the expected manual test effort by 20 percentage points.
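The effect of an automation order can be made concrete by plotting remaining manual execution effort against the automation work spent. The sketch below uses a deliberately simplified effort model, assumed for illustration rather than taken from the thesis: every step of a not-yet-automated test case costs one unit of manual execution effort, and only steps without existing script code cost automation work.

```python
# Simplified effort model (assumed for illustration): automating a test case
# costs one unit per step that has no reusable script code yet; manual effort
# is the number of steps of all test cases that are still executed by hand.

def manual_effort_vs_automation_cost(test_cases, order):
    """For each automated test case, return (cumulative new steps translated,
    remaining manual execution effort in steps)."""
    automated_steps = set()
    not_automated = set(test_cases)
    points, cost = [], 0
    for tc_id in order:
        new_steps = [s for s in test_cases[tc_id] if s not in automated_steps]
        cost += len(new_steps)            # reused steps are free to automate
        automated_steps.update(test_cases[tc_id])
        not_automated.discard(tc_id)
        remaining = sum(len(test_cases[t]) for t in not_automated)
        points.append((cost, remaining))
    return points

suite = {
    "TC1": ["a", "b", "c"],
    "TC2": ["a", "b", "d"],
    "TC3": ["e", "f"],
}
print(manual_effort_vs_automation_cost(suite, ["TC1", "TC2", "TC3"]))
# [(3, 5), (4, 2), (6, 0)] -- TC2 is nearly free to automate after TC1
```

Comparing such curves for different orders shows why ordering matters: an order that maximizes reuse reaches a given reduction in manual effort with less automation work.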

Another way of reducing test effort is to reuse test artifacts from one level of integration to another, instead of duplicating them. We studied such reuse methods, which we denote vertical reuse, in a systematic mapping study. While the results of the systematic mapping study showed the viability of vertical test reuse methods, our industrial case studies showed that keeping track of similarities and test overlaps for test effort reduction is both possible and feasible. We further conclude that the test case automation order affects the manual test execution effort when there are similar steps that cannot be removed but whose test script code can be reused.

Abstract [sv]

Embedded computer systems are all around us today, in everything from dishwashers to cars and airplanes. There are often strict requirements on both correct function and response times. Since the software of an embedded system, for example in a car, is very large and complex, the software is developed in smaller parts that are then successively integrated into the finished computer system. This part of the development process is called the system integration phase and is one of the largest and most critical phases in the development of embedded systems. The integration can be carried out at a number of levels, with thorough testing of the software at each level. This means that considerable effort is required to test the software. If this effort can be reduced, naturally without compromising the quality of the testing, it is expected to have a large effect on the total development effort.

Studies have shown that a non-negligible part of the test effort is waste, because people, unknowingly or out of necessity, perform similar (or even overlapping) test activities. One consequence is that test cases may end up partially or wholly alike. We identified such similarities in a case study of 2,500 manual test cases, written in natural language, from four projects in the vehicular industry. Information about similarities and overlaps can be used, for example, to reduce the effort of maintenance or of translating test cases into code so that they can be executed automatically by a computer (automation). In another study in the same domain, we investigated several methods for prioritizing the order of work when automating test cases where similar test cases could be reused. In that study, we analyzed how this order affects the amount of work for four industrial projects with a total of 3,919 integration test cases written in natural language. The results show that the best order can reduce the test effort considerably. A precondition for this is that already translated parts of test cases can be reused, so that similar test cases do not have to be translated again.

Another way to reduce the test effort is to reuse test cases and information between integration levels. We have mapped methods and motives for such reuse in a systematic mapping study. Our case studies further show that keeping track of similarities in the test cases is both feasible and worthwhile. Finally, we conclude that the effort of manual testing is affected by the automation order when already translated parts of similar test cases can be reused.

Place, publisher, year, edition, pages
Västerås: Mälardalen University Press, 2017
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 257
Keyword
Software Testing
National Category
Software Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-35053 (URN)
978-91-7485-318-6 (ISBN)
Presentation
2017-04-21, Pi, Mälardalen University, Västerås, 13:15 (English)
Projects
IMPRINT
Funder
VINNOVA, 2014-03397
Available from: 2017-03-23 Created: 2017-03-22 Last updated: 2018-01-13 Bibliographically approved

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 3386 kB, application/pdf)

Other links

Publisher's full text
Scopus
