Publications (6 of 6)
Tahvili, S., Hatvani, L., Felderer, M., Afzal, W. & Bohlin, M. (2019). Automated Functional Dependency Detection Between Test Cases Using Text Semantic Similarity. In: 2019 IEEE International Conference On Artificial Intelligence Testing (AITest): . Paper presented at 2019 IEEE International Conference On Artificial Intelligence Testing (AITest), 4-9 April 2019, Newark, CA, USA (pp. 19-26). , Article ID 8718215.
Automated Functional Dependency Detection Between Test Cases Using Text Semantic Similarity
2019 (English). In: 2019 IEEE International Conference On Artificial Intelligence Testing (AITest), 2019, p. 19-26, article id 8718215. Conference paper, Published paper (Other academic)
Abstract [en]

Knowing about dependencies and similarities between test cases is beneficial for prioritizing them for cost-effective test execution. This holds especially true for the time-consuming, manual execution of integration test cases written in natural language. Test case dependencies are typically derived from requirements and design artifacts. However, such artifacts are not always available, and the derivation process can be very time-consuming. In this paper, we propose, apply and evaluate a novel approach that derives test cases' similarities and functional dependencies directly from the test specification documents written in natural language, without requiring any other data source. Our approach uses an implementation of the Doc2Vec algorithm to detect text-semantic similarities between test cases and then groups them using two clustering algorithms, HDBSCAN and FCM. The correlation between test case text-semantic similarities and their functional dependencies is evaluated in the context of an on-board train control system from Bombardier Transportation AB in Sweden. For this system, the dependencies between the test cases were previously derived and are compared to the results of our approach. The results show that of the two evaluated clustering algorithms, HDBSCAN has better performance than FCM or a dummy classifier. The classification methods' results are of reasonable quality and especially useful from an industrial point of view. Finally, performing a random undersampling approach to correct the imbalanced data distribution results in an F1 score of up to 75% when applying the HDBSCAN clustering algorithm.
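The paper's pipeline is not reproduced as code here; the following is a minimal, self-contained sketch of the idea, with term-frequency vectors standing in for Doc2Vec embeddings and a naive greedy threshold grouping standing in for HDBSCAN/FCM clustering (the test-case names and texts are hypothetical):

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Term-frequency vector; a crude stand-in for a Doc2Vec embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def group_similar(test_cases, threshold=0.5):
    """Greedy grouping by pairwise similarity; a stand-in for real clustering."""
    vectors = {name: tf_vector(text) for name, text in test_cases.items()}
    groups = []
    for name, vec in vectors.items():
        for group in groups:
            if any(cosine(vec, vectors[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

tests = {
    "TC1": "activate brake system and verify brake pressure",
    "TC2": "verify brake pressure after brake system activation",
    "TC3": "check passenger door opening signal",
}
print(group_similar(tests))  # TC1 and TC2 land in one group, TC3 in its own
```

Test cases whose specifications share enough vocabulary end up in the same group, which is the (simplified) signal the paper correlates with functional dependency.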

National Category
Embedded Systems
Identifiers
urn:nbn:se:mdh:diva-41272 (URN)
10.1109/AITest.2019.00-13 (DOI)
000470916100004 ()
2-s2.0-85067096441 (Scopus ID)
9781728104928 (ISBN)
Conference
2019 IEEE International Conference On Artificial Intelligence Testing (AITest), 4-9 April 2019, Newark, CA, USA
Available from: 2018-11-01 Created: 2018-11-01 Last updated: 2019-06-27. Bibliographically approved
Tahvili, S., Afzal, W., Saadatmand, M., Bohlin, M. & Hasan Ameerjan, S. (2018). ESPRET: A Tool for Execution Time Estimation of Manual Test Cases. Journal of Systems and Software, 146, 26-41
ESPRET: A Tool for Execution Time Estimation of Manual Test Cases
2018 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 146, p. 26-41. Article in journal (Refereed) Published
Abstract [en]

Manual testing is still a predominant and important approach for the validation of computer systems, particularly in certain domains such as safety-critical systems. Knowing the execution time of test cases is important to perform test scheduling, prioritization and progress monitoring. In this work, we present, apply and evaluate ESPRET (EStimation and PRediction of Execution Time), our tool for estimating and predicting the execution time of manual test cases based on their test specifications. Our approach works by extracting timing information for various steps in a manual test specification. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations and check whether timing information on various test steps is already available or not. Since executing test cases on several machines may take different amounts of time, we predict the actual execution time for test cases using a set of regression models. Finally, an empirical evaluation of the approach and tool has been performed on a railway use case at Bombardier Transportation (BT) in Sweden.
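ESPRET itself is not reproduced here; as a hedged illustration of its regression step, the sketch below fits an ordinary least-squares line mapping specified maximum times to measured actual times (the history values are invented for the example, and the tool uses a set of regression models rather than this single one):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b; a sketch of one regression model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical history: (specified maximum time, measured actual time) in minutes.
history_max = [10.0, 20.0, 30.0, 40.0]
history_actual = [8.0, 15.0, 23.0, 30.0]
slope, intercept = fit_linear(history_max, history_actual)

# Predict the actual execution time for a test case with a 25-minute estimate.
predicted = slope * 25.0 + intercept
```

With the toy history above, the fitted line is y = 0.74x + 0.5, so a 25-minute specification maps to a 19-minute predicted actual time.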

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-40905 (URN)
10.1016/j.jss.2018.09.003 (DOI)
000451488900004 ()
2-s2.0-85053193472 (Scopus ID)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
MegaMaRt2 - Megamodelling at Runtime (ECSEL/Vinnova)
TESTOMAT Project - The Next Level of Test Automation
Available from: 2018-09-11 Created: 2018-09-11 Last updated: 2019-01-16. Bibliographically approved
Tahvili, S. (2018). Multi-Criteria Optimization of System Integration Testing. (Doctoral dissertation). Västerås: Mälardalen University
Multi-Criteria Optimization of System Integration Testing
2018 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Optimizing the software testing process has received much attention over the last few decades. Test optimization is typically seen as a multi-criteria decision making problem. One aspect of test optimization involves test selection, prioritization and execution scheduling. Having an efficient test process can result in the satisfaction of many objectives, such as cost and time minimization. It can also lead to on-time delivery and a better quality of the final software product. To achieve the goal of test efficiency, a set of criteria that have an impact on the test cases needs to be identified. The analysis of several industrial case studies, as well as the state of the art, in this thesis indicates that the dependency between integration test cases is one such criterion, with a direct impact on the test execution results. Other criteria of interest include requirement coverage and test execution time. In this doctoral thesis, we introduce, apply and evaluate a set of approaches and tools for test execution optimization at the industrial integration testing level in embedded software development. Furthermore, ESPRET (Estimation and Prediction of Execution Time) and sOrTES (Stochastic Optimizing of Test Case Scheduling) are our proposed supportive tools for predicting the execution time and for the scheduling of manual integration test cases, respectively. All proposed methods and tools in this thesis have been evaluated in industrial testing projects at Bombardier Transportation (BT) in Sweden. As a result of the scientific contributions made in this doctoral thesis, employing the proposed approaches has led to a reduction in redundant test execution failures of up to 40% with respect to the current test execution approach at BT. Moreover, an increase in requirements coverage of up to 9.6% is observed at BT.
In summary, the application of the proposed approaches in this doctoral thesis has been shown to give considerable gains by optimizing test schedules in system integration testing of embedded software development.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2018
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 281
National Category
Embedded Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-41273 (URN)
978-91-7485-414-5 (ISBN)
Public defence
2018-12-21, Lambda, Mälardalens högskola, Västerås, 13:15 (English)
Available from: 2018-11-02 Created: 2018-11-01 Last updated: 2018-11-20. Bibliographically approved
Tahvili, S., Saadatmand, M., Bohlin, M., Afzal, W. & Hasan Ameerjan, S. (2017). Towards Execution Time Prediction for Test Cases from Test Specification. In: 2017 43RD EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS (SEAA): . Paper presented at 43rd Euromicro Conference on Software Engineering and Advanced Applications SEAA'17, 30 Aug 2017, Vienna, Austria (pp. 421-425). Vienna, Austria
Towards Execution Time Prediction for Test Cases from Test Specification
2017 (English). In: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Vienna, Austria, 2017, p. 421-425. Conference paper, Published paper (Refereed)
Abstract [en]

Knowing the execution time of test cases is important to perform test scheduling, prioritization and progress monitoring. This short paper presents a novel approach for predicting the execution time of test cases based on test specifications and available historical data on previously executed test cases. Our approach works by extracting timing information (measured and maximum execution time) for various steps in manual test cases. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations and check whether timing information on various test activities already exists or not. Finally, linear regression is used to predict the actual execution time for test cases. A proof-of-concept use case at Bombardier Transportation serves to evaluate the proposed approach.
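The parsing-and-lookup step can be illustrated by a small sketch. The step phrases, the `KNOWN_STEPS` timing database and the default fallback value are all hypothetical, not from the paper; real parsing of word combinations is reduced to normalized exact matching:

```python
# Hypothetical timing database: step phrase -> (measured, maximum) time in minutes.
KNOWN_STEPS = {
    "apply emergency brake": (1.0, 2.0),
    "verify brake pressure": (2.0, 3.0),
}

def estimate_step(step, default_max=5.0):
    """Look up a parsed step phrase; fall back to a default for unseen steps."""
    measured, maximum = KNOWN_STEPS.get(step.lower().strip(), (None, default_max))
    return maximum

def estimate_test_case(steps):
    """Estimate a test case as the sum of the maximum times of its steps."""
    return sum(estimate_step(s) for s in steps)

total = estimate_test_case(
    ["Apply emergency brake", "verify brake pressure", "reset system"]
)  # 2.0 + 3.0 + 5.0 (unknown step falls back to the default)
```

Known steps contribute their recorded maximum time; a step with no recorded timing ("reset system" here) falls back to the default, which is the gap the paper's regression step then refines.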

Place, publisher, year, edition, pages
Vienna, Austria, 2017
Keywords
Software Testing, Optimization, Execution Time, Linear Regression, NLP, Test Specification, Estimation
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35512 (URN)
10.1109/SEAA.2017.10 (DOI)
000426074600062 ()
978-1-5386-2141-7 (ISBN)
Conference
43rd Euromicro Conference on Software Engineering and Advanced Applications SEAA'17, 30 Aug 2017, Vienna, Austria
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
TOCSYC - Testing of Critical System Characteristics (KKS)
MegaMaRt2 - Megamodelling at Runtime (ECSEL/Vinnova)
Available from: 2017-06-05 Created: 2017-06-05 Last updated: 2018-03-15. Bibliographically approved
Tahvili, S. (2016). A Decision Support System for Integration Test Selection. (Licentiate dissertation). Västerås: Mälardalen University
A Decision Support System for Integration Test Selection
2016 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Software testing generally suffers from time and budget limitations. Indiscriminately executing all available test cases leads to sub-optimal exploitation of testing resources. Selecting too few test cases for execution, on the other hand, might leave a large number of faults undiscovered. Test case selection and prioritization techniques can lead to more efficient usage of testing resources and also early detection of faults. Test case selection addresses the problem of selecting a subset of an existing set of test cases, typically by discarding test cases that do not add any value in improving the quality of the software under test. Test case prioritization schedules test cases for execution in an order that increases their effectiveness at achieving performance goals such as earlier fault detection, optimal allocation of testing resources and reduced overall testing effort. In practice, prioritized selection of test cases requires the evaluation of different test case criteria, and therefore this problem can be formulated as a multi-criteria decision making problem. As the number of decision criteria grows, the application of a systematic decision making solution becomes a necessity. In this thesis, we propose a tool-supported framework, using a decision support system, for prioritizing and selecting integration test cases in embedded system development. The framework provides a complete loop for selecting the best candidate test case for execution based on a finite set of criteria. The results of multiple case studies, performed on a train control management subsystem from Bombardier Transportation AB in Sweden, demonstrate how our approach helps to select test cases in a systematic way. This can lead to early detection of faults while respecting various criteria. We have also evaluated a customized return-on-investment metric to quantify the economic benefits of optimizing system integration testing using our framework.
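The decision support system itself is not reproduced here; a plain weighted-sum ranking conveys the multi-criteria flavor in a few lines. The criteria names, scores and weights below are invented for illustration, and the thesis's framework uses more elaborate multi-criteria methods than a weighted sum:

```python
def rank_test_cases(scores, weights):
    """Rank test cases by a weighted sum over per-criterion scores.
    A simple stand-in for a multi-criteria decision support step;
    all criteria are scored so that higher is better."""
    def utility(name):
        return sum(weights[c] * v for c, v in scores[name].items())
    return sorted(scores, key=utility, reverse=True)

# Hypothetical per-criterion scores on a 0-1 scale (higher is better).
scores = {
    "TC1": {"fault_likelihood": 0.9, "req_coverage": 0.4, "low_cost": 0.3},
    "TC2": {"fault_likelihood": 0.5, "req_coverage": 0.8, "low_cost": 0.6},
    "TC3": {"fault_likelihood": 0.3, "req_coverage": 0.3, "low_cost": 0.9},
}
weights = {"fault_likelihood": 0.5, "req_coverage": 0.3, "low_cost": 0.2}
order = rank_test_cases(scores, weights)  # best execution candidate first
```

With these numbers TC1 scores 0.63, TC2 scores 0.61 and TC3 scores 0.42, so TC1 is selected as the next candidate for execution; re-running the ranking after each execution closes the selection loop.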

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2016
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 242
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-33118 (URN)
978-91-7485-282-0 (ISBN)
Presentation
2016-10-25, Omega, Mälardalens högskola, Västerås, 13:15 (English)
Available from: 2016-09-12 Created: 2016-09-09 Last updated: 2016-10-04. Bibliographically approved
Tahvili, S., Saadatmand, M., Larsson, S., Afzal, W., Bohlin, M. & Sundmark, D. (2016). Dynamic Integration Test Selection Based on Test Case Dependencies. In: 2016 IEEE NINTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW): . Paper presented at 9th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) (pp. 277-286). Chicago, United States
Dynamic Integration Test Selection Based on Test Case Dependencies
2016 (English). In: 2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Chicago, United States, 2016, p. 277-286. Conference paper, Published paper (Refereed)
Abstract [en]

Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with the problem of ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses the problem of selecting a subset of an existing set of test cases, typically by discarding test cases that do not add any value in improving the quality of the software under test. Most existing approaches for test case prioritization and selection suffer from one or several drawbacks. For example, they rely to a large extent on static analysis of code, making them unfit for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows for discarding test cases that do not need to be executed and are thus redundant. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses the above issues. We also present the results of an industrial case study where initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.
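The paper's method is richer than this, but the core dynamic idea (skip a test case when one it depends on has already failed) can be sketched in a few lines. The test names, dependency graph and outcomes below are hypothetical:

```python
def dynamic_select(order, dependencies, run):
    """Execute test cases in the given order, skipping any whose prerequisite
    failed. `dependencies[t]` lists the test cases t depends on; `run(t)`
    executes a test case and returns True (pass) or False (fail)."""
    results = {}
    for t in order:
        if any(results.get(d) is False for d in dependencies.get(t, [])):
            results[t] = None  # skipped: a prerequisite failed, so t is redundant
        else:
            results[t] = run(t)
    return results

# Hypothetical outcomes: TC1 fails, so TC3 (which depends on it) is skipped.
outcomes = {"TC1": False, "TC2": True, "TC3": True}
deps = {"TC3": ["TC1"]}
results = dynamic_select(["TC1", "TC2", "TC3"], deps, outcomes.get)
```

Here TC3 is never executed because its prerequisite TC1 has already failed; this is the execution-result-driven pruning that static, code-analysis-based selection cannot perform.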

Place, publisher, year, edition, pages
Chicago, United States, 2016
Keywords
Software testing, Integration testing, Test selection, Test prioritization, Fuzzy, AHP, Optimization
National Category
Engineering and Technology
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-33116 (URN)
10.1109/ICSTW.2016.14 (DOI)
000382490200038 ()
2-s2.0-84992215253 (Scopus ID)
978-1-5090-3674-5 (ISBN)
Conference
9th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems
TOCSYC - Testing of Critical System Characteristics (KKS)
IMPRINT - Innovative Model-Based Product Integration Testing (Vinnova)
Available from: 2016-09-08 Created: 2016-09-08 Last updated: 2018-11-01. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-8724-9049