Multi-Criteria Optimization of System Integration Testing
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. RISE SICS Västerås. ORCID iD: 0000-0002-8724-9049
2018 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Optimizing the software testing process has received much attention over the last few decades. Test optimization is typically seen as a multi-criteria decision-making problem. One aspect of test optimization involves test selection, prioritization and execution scheduling. An efficient test process can satisfy many objectives, such as cost and time minimization, and can also lead to on-time delivery and a better-quality final software product. To achieve test efficiency, a set of criteria that have an impact on the test cases needs to be identified. The analysis of several industrial case studies, together with the state of the art reviewed in this thesis, indicates that the dependency between integration test cases is one such criterion, with a direct impact on test execution results. Other criteria of interest include requirement coverage and test execution time. In this doctoral thesis, we introduce, apply and evaluate a set of approaches and tools for test execution optimization at the industrial integration testing level in embedded software development. Furthermore, ESPRET (Estimation and Prediction of Execution Time) and sOrTES (Stochastic Optimizing of Test Case Scheduling) are our proposed supportive tools for predicting the execution time and scheduling manual integration test cases, respectively. All proposed methods and tools in this thesis have been evaluated in industrial testing projects at Bombardier Transportation (BT) in Sweden. As a result of the scientific contributions made in this doctoral thesis, employing the proposed approaches has reduced redundant test execution failures by up to 40% compared with the current test execution approach at BT. Moreover, an increase in requirements coverage of up to 9.6% has been observed at BT.
In summary, the application of the proposed approaches in this doctoral thesis has been shown to give considerable gains by optimizing test schedules in system integration testing of embedded software development.
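The multi-criteria view described in the abstract can be illustrated with a small sketch (not the thesis's actual algorithm): test cases are ranked by a weighted combination of the criteria mentioned above, i.e. requirement coverage, execution time and dependency. All names, numbers and weights below are invented for illustration.

```python
# Illustrative sketch: rank test cases by a weighted score over
# normalized criteria (coverage and dependents raise priority,
# execution time lowers it). Weights are hypothetical.

def priority_score(test, weights):
    """Higher score = schedule earlier. All criteria normalized to [0, 1]."""
    return (weights["coverage"] * test["coverage"]
            + weights["dependents"] * test["dependents"]
            - weights["time"] * test["time"])

tests = [
    {"name": "TC1", "coverage": 0.9, "time": 0.2, "dependents": 0.5},
    {"name": "TC2", "coverage": 0.4, "time": 0.9, "dependents": 0.1},
    {"name": "TC3", "coverage": 0.6, "time": 0.3, "dependents": 0.9},
]
weights = {"coverage": 0.5, "time": 0.2, "dependents": 0.3}

schedule = sorted(tests, key=lambda t: priority_score(t, weights), reverse=True)
print([t["name"] for t in schedule])
```

Any real scheduler would also respect the dependency ordering itself; this sketch only shows the scoring step.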

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2018.
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 281
National Category
Embedded Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-41273
ISBN: 978-91-7485-414-5 (print)
OAI: oai:DiVA.org:mdh-41273
DiVA, id: diva2:1260297
Public defence
2018-12-21, Lambda, Mälardalens högskola, Västerås, 13:15 (English)
Opponent
Supervisors
Available from: 2018-11-02 Created: 2018-11-01 Last updated: 2018-11-20 Bibliographically approved
List of papers
1. Dynamic Integration Test Selection Based on Test Case Dependencies
2016 (English). In: 2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Chicago, United States, 2016, p. 277-286. Conference paper, Published paper (Refereed)
Abstract [en]

Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses choosing a subset of an existing set of test cases, typically by discarding test cases that do not add value in improving the quality of the software under test. Most existing approaches for test case prioritization and selection suffer from one or several drawbacks. For example, they largely rely on static analysis of code, making them unfit for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows for discarding test cases that do not need to be executed and are thus redundant. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses the above issues. We also present the results of an industrial case study where initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.
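The dynamic idea in this abstract can be sketched as follows, under invented assumptions: after each execution, test cases that depend (directly or transitively) on a failed test are discarded from the remaining schedule. The test names, dependency graph and pass/fail outcomes are all hypothetical.

```python
# Hypothetical sketch of dynamic, dependency-aware test selection:
# dependents of a failed (or skipped) test case are not executed.

def run_dynamically(order, depends_on, execute):
    """order: test names in schedule order;
    depends_on: test -> set of prerequisite tests;
    execute: test -> bool (pass/fail). Returns (results, skipped)."""
    failed, results, skipped = set(), {}, []
    for test in order:
        # Skip if any prerequisite failed or was itself skipped.
        if depends_on.get(test, set()) & failed:
            skipped.append(test)
            failed.add(test)  # so transitive dependents are skipped too
            continue
        ok = execute(test)
        results[test] = ok
        if not ok:
            failed.add(test)
    return results, skipped

deps = {"TC2": {"TC1"}, "TC3": {"TC2"}, "TC4": set()}
results, skipped = run_dynamically(
    ["TC1", "TC2", "TC3", "TC4"], deps,
    execute=lambda t: t != "TC1")  # simulate TC1 failing
print(results, skipped)  # TC2 and TC3 are skipped, TC4 still runs
```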

Place, publisher, year, edition, pages
Chicago, United States, 2016
Keywords
Software testing, Integration testing, Test selection, Test prioritization, Fuzzy, AHP, Optimization
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-33116 (URN)
10.1109/ICSTW.2016.14 (DOI)
000382490200038 ()
2-s2.0-84992215253 (Scopus ID)
978-1-5090-3674-5 (ISBN)
Conference
9th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
TOCSYC - Testing of Critical System Characteristics (KKS)
IMPRINT - Innovative Model-Based Product Integration Testing (Vinnova)
Available from: 2016-09-08 Created: 2016-09-08 Last updated: 2018-11-01 Bibliographically approved
2. Cost-Benefit Analysis of Using Dependency Knowledge at Integration Testing
2016 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2016, Vol. 10027, p. 268-284. Conference paper, Published paper (Refereed)
Abstract [en]

In software system development, testing can take considerable time and resources, and there are numerous examples in the literature of how to improve the testing process. In particular, methods for selection and prioritization of test cases can play a critical role in efficient use of testing resources. This paper focuses on the problem of selection and ordering of integration-level test cases. Integration testing is performed to evaluate the correctness of several units in composition. Further, for reasons of both effectiveness and safety, many embedded systems are still tested manually. To this end, we propose a process, supported by an online decision support system, for ordering and selection of test cases based on the test result of previously executed test cases. To analyze the economic efficiency of such a system, a customized return on investment (ROI) metric tailored for system integration testing is introduced. Using data collected from the development process of a large-scale safety-critical embedded system, we perform Monte Carlo simulations to evaluate the expected ROI of three variants of the proposed new process. The results show that our proposed decision support system is beneficial in terms of ROI at system integration testing and thus qualifies as an important element in improving the integration testing process.
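A minimal Monte Carlo sketch of the kind of ROI analysis described above: draw savings and costs from assumed distributions and average the resulting ROI over many runs. The distributions and cost figures here are entirely invented; the paper's actual ROI metric is customized for system integration testing and based on real project data.

```python
# Hypothetical Monte Carlo estimate of expected ROI for a decision
# support system: ROI = (benefit - cost) / cost per simulated run.
import random

def simulate_roi(n_runs=10_000, seed=1):
    random.seed(seed)
    rois = []
    for _ in range(n_runs):
        avoided = random.gauss(40, 10)        # avoided redundant executions
        cost_per_exec = random.uniform(2, 4)  # person-hours per execution
        system_cost = random.uniform(60, 100) # person-hours to run the system
        benefit = max(avoided, 0.0) * cost_per_exec
        rois.append((benefit - system_cost) / system_cost)
    return sum(rois) / len(rois)

print(f"expected ROI over simulated runs: {simulate_roi():.2f}")
```

With these assumed distributions the mean benefit (~120 person-hours) exceeds the mean system cost (~80), so the expected ROI comes out positive; changing the assumptions changes the verdict, which is exactly what such a simulation is for.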

Keywords
Process improvement, Software testing, Decision support system, Integration testing, Test case selection, Prioritization, Optimization, Return on investment
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-32887 (URN)
10.1007/978-3-319-49094-6_17 (DOI)
2-s2.0-84998880972 (Scopus ID)
Conference
The 17th International Conference on Product-Focused Software Process Improvement (PROFES'16), 22-24 Nov 2016, Trondheim, Norway
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
TOCSYC - Testing of Critical System Characteristics (KKS)
IMPRINT - Innovative Model-Based Product Integration Testing (Vinnova)
Available from: 2016-08-29 Created: 2016-08-24 Last updated: 2018-11-01 Bibliographically approved
3. ESPRET: A Tool for Execution Time Estimation of Manual Test Cases
2018 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 146, p. 26-41. Article in journal (Refereed). Published
Abstract [en]

Manual testing is still a predominant and important approach for validation of computer systems, particularly in certain domains such as safety-critical systems. Knowing the execution time of test cases is important for test scheduling, prioritization and progress monitoring. In this work, we present, apply and evaluate ESPRET (EStimation and PRediction of Execution Time) as our tool for estimating and predicting the execution time of manual test cases based on their test specifications. Our approach works by extracting timing information for various steps in a manual test specification. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations and to check whether timing information on various test steps is already available. Since executing test cases on different machines may take different amounts of time, we predict the actual execution time for test cases with a set of regression models. Finally, an empirical evaluation of the approach and tool has been performed on a railway use case at Bombardier Transportation (BT) in Sweden.
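The prediction step can be sketched as a simple linear regression of measured execution time on the estimated maximum time extracted from the specifications. ESPRET's real models and data differ; the numbers below are invented, and the ordinary-least-squares fit is written out by hand to stay standard-library only.

```python
# Hedged sketch: fit y = a + b*x by ordinary least squares, where
# x = estimated max time from the test spec, y = measured time (minutes).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical calibration data for one machine.
estimated = [10, 20, 30, 40]
measured  = [ 9, 19, 28, 41]
a, b = fit_line(estimated, measured)

def predict(x):
    return a + b * x

print(predict(25))
```

A per-machine model like this absorbs systematic speed differences between test environments, which is the motivation the abstract gives for using a set of regression models rather than one.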

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-40905 (URN)
10.1016/j.jss.2018.09.003 (DOI)
2-s2.0-85053193472 (Scopus ID)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
MegaMaRt2 - Megamodelling at Runtime (ECSEL/Vinnova)
TESTOMAT Project - The Next Level of Test Automation
Available from: 2018-09-11 Created: 2018-09-11 Last updated: 2018-11-01 Bibliographically approved
4. Functional Dependency Detection for Integration Test Cases
2018 (English). In: Proceedings - 2018 IEEE 18th International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 207-214. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a natural language processing (NLP) based approach that, given a software requirements specification, allows functional dependency detection between integration test cases. We analyze a set of signals internal to the implemented modules to detect dependencies between requirements, and thereby identify dependencies between test cases, such that module 2 depends on module 1 if an internal output signal of module 1 enters as an internal input signal to module 2. Consequently, all requirements (and thereby test cases) for module 2 depend on all requirements (and test cases) designed for module 1. The dependency information between requirements (and thus corresponding test cases) can be utilized for test case prioritization and scheduling. We have implemented our approach as a tool, and its feasibility is evaluated through an industrial use case in the railway domain at Bombardier Transportation (BT), Sweden.
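The dependency rule stated in this abstract can be sketched directly: module B depends on module A whenever an internal output signal of A is an internal input signal of B. The module names and signals below are invented for illustration; the paper derives the signal sets from requirements specifications via NLP.

```python
# Sketch of the signal-overlap dependency rule from the abstract.

def detect_dependencies(modules):
    """modules: name -> {"in": set of input signals, "out": set of outputs}.
    Returns the set of (upstream, downstream) dependency pairs."""
    deps = set()
    for a, sig_a in modules.items():
        for b, sig_b in modules.items():
            # b depends on a if some output of a is an input of b.
            if a != b and sig_a["out"] & sig_b["in"]:
                deps.add((a, b))
    return deps

modules = {
    "M1": {"in": {"s0"}, "out": {"s1"}},
    "M2": {"in": {"s1"}, "out": {"s2"}},
    "M3": {"in": {"s1", "s2"}, "out": set()},
}
print(sorted(detect_dependencies(modules)))
```

Per the abstract, every test case of a downstream module then inherits the dependencies of its module, which is what feeds the prioritization and scheduling step.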

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Dependency, Internal Signals, NLP, Optimization, Software Requirement, Software Testing, C (programming language), Computer software selection and evaluation, Integral equations, Natural language processing systems, Requirements engineering, Software reliability, Testing, Bombardier Transportation, Dependency informations, Functional dependency, Software requirements, Software requirements specifications, Test case prioritization
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-40742 (URN)
10.1109/QRS-C.2018.00047 (DOI)
2-s2.0-85052305334 (Scopus ID)
9781538678398 (ISBN)
Conference
18th IEEE International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018, 16 July 2018 through 20 July 2018
Available from: 2018-09-07 Created: 2018-09-07 Last updated: 2018-11-01 Bibliographically approved
5. Automated Functional Dependency Detection Between Test Cases Using Text Semantic Similarity
(English). Manuscript (preprint) (Other academic)
National Category
Embedded Systems
Identifiers
urn:nbn:se:mdh:diva-41272 (URN)
Available from: 2018-11-01 Created: 2018-11-01 Last updated: 2018-12-05 Bibliographically approved
6. sOrTES: A Supportive Tool for Stochastic Scheduling of Manual Integration Test Cases
(English). Manuscript (preprint) (Other academic)
National Category
Embedded Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-41271 (URN)
Available from: 2018-11-01 Created: 2018-11-01 Last updated: 2018-12-05 Bibliographically approved

Open Access in DiVA

fulltext (981 kB)
File information
File name: FULLTEXT03.pdf
File size: 981 kB
Checksum (SHA-512): 44e15eac975d227bf90c61fb698d645adba55cebfae17d7349850f83d7f58a47b46d6093f96cda80c576842e836523e85dfcb0766a9d08cfab96af627362993d
Type: fulltext. Mimetype: application/pdf

Authority records
Tahvili, Sahar