Framework for Comparing Efficiency, Effectiveness and Applicability of Software Testing Techniques
Mälardalen University, Department of Computer Science and Electronics. ORCID iD: 0000-0002-5070-9312
Mälardalen University, Department of Computer Science and Electronics. ORCID iD: 0000-0002-7235-6888
Mälardalen University, Department of Computer Science and Electronics. ORCID iD: 0000-0001-5269-3900
Mälardalen University, Department of Computer Science and Electronics.
2006 (English). In: Proceedings - Testing: Academic and Industrial Conference - Practice and Research Techniques, TAIC PART 2006, 2006, p. 159-170, article id 1691683. Conference paper, published paper (refereed).
Abstract [en]

Software testing is expensive for industry and always constrained by time and effort. Although there is a multitude of test techniques, there are currently no scientifically based guidelines for selecting appropriate techniques for different domains and contexts. For large complex systems, some techniques are more efficient at finding failures than others, and some are easier to apply than others. From an industrial perspective, it is important to find the most effective and efficient test design technique that can be automated and applied. In this paper, we propose an experimental framework for comparing test techniques with respect to efficiency, effectiveness, and applicability. We also plan to evaluate ease of automation, which has not been addressed by previous studies. We highlight some of the problems of evaluating or comparing test techniques in an objective manner. We describe our planned process for this multi-phase experimental study, including a presentation of the important measurements to be collected, with the dual goals of analyzing the properties of the test techniques and validating our experimental framework.
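The framework compares techniques on efficiency, effectiveness, and applicability. A minimal sketch of how the first two measures might be tabulated for candidate techniques — all names, numbers, and the exact metric definitions here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical comparison of test techniques by effectiveness
# (fraction of known faults exposed) and efficiency (faults
# exposed per hour of effort). All data is invented.

from dataclasses import dataclass

@dataclass
class TechniqueResult:
    name: str
    faults_found: int    # known faults the technique exposed
    faults_total: int    # faults known to be present in the system
    effort_hours: float  # effort spent designing and executing tests

    @property
    def effectiveness(self) -> float:
        """Fraction of known faults exposed."""
        return self.faults_found / self.faults_total

    @property
    def efficiency(self) -> float:
        """Faults exposed per hour of effort."""
        return self.faults_found / self.effort_hours

results = [
    TechniqueResult("random testing", 12, 30, 8.0),
    TechniqueResult("equivalence partitioning", 18, 30, 12.0),
    TechniqueResult("boundary value analysis", 21, 30, 15.0),
]

# Rank techniques by efficiency, highest first.
for r in sorted(results, key=lambda r: r.efficiency, reverse=True):
    print(f"{r.name:26s} effectiveness={r.effectiveness:.2f} "
          f"efficiency={r.efficiency:.2f}/h")
```

Note that the two measures can rank techniques differently: a thorough technique may expose more faults overall yet find fewer faults per hour, which is exactly why the framework evaluates them separately.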

Place, publisher, year, edition, pages
2006. p. 159-170, article id 1691683
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-4145
DOI: 10.1109/TAIC-PART.2006.1
Scopus ID: 2-s2.0-80053523364
OAI: oai:DiVA.org:mdh-4145
DiVA id: diva2:120995
Conference
1st Testing: Academic and Industrial Conference - Practice and Research Techniques, TAIC PART 2006; Windsor; United Kingdom; 29 August 2006 through 31 August 2006
Available from: 2007-12-04. Created: 2007-12-04. Last updated: 2015-06-01. Bibliographically approved.
In thesis
1. On Evaluating Test Techniques in an Industrial Setting
2007 (English). Licentiate thesis, comprehensive summary (other scientific).
Abstract [en]

Testing is a costly and important activity in the software industry today. Systems are becoming more complex and the amount of code is constantly increasing. The majority of systems rely on testing to show that they work, are reliable, and perform according to user expectations and specifications.

Testing is performed in a multitude of ways, using different test approaches. How testing is conducted becomes essential when time is limited, since exhaustive testing is not an option in large complex systems. Therefore, the design of the individual test case, and what part and aspect of the system it exercises, is the main focus of testing. Not only do we need to create and execute test cases efficiently, but we also want them to expose important faults in the system. This main topic of testing has long been a focus of practitioners in industry, and there exist over 70 test techniques that aim to describe how to design a test case. Unfortunately, despite the industrial need, research on test techniques is seldom performed on large complex systems.

The main purpose of this licentiate thesis is to create an environment and framework where it is possible to evaluate test techniques. Our overall goal is to investigate suitable test techniques for different levels (e.g. component, integration, and system level) and to provide guidelines to industry on what is effective, efficient, and applicable to test, based on knowledge of the failure-fault distribution in a particular domain. In this thesis, our research is described through four papers that start from a broad overview of typical industrial systems and arrive at a specific focus on how to set up a controlled experiment in an industrial environment. Our initial paper surveyed the state of testing in industry, helped identify specific issues, and underlined the need for further research. We then experimented with component-test improvements through straightforward use of known approaches (e.g. static analysis, code reviews, and statement coverage). This resulted in a substantial cost reduction and increased quality, and gave us a better understanding of the difficulties of deploying known test techniques in practice, as described in our second paper. This work led us to our third paper, which describes the framework and process for evaluating test techniques. The first sub-process in this framework deals with how to prepare the experiment with a known set of faults. We investigated fault classifications to obtain a useful set of faults of different types to inject. In addition, we investigated real faults reported in an industrial system and performed controlled experiments; the results were published in our fourth paper.
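Preparing an experiment with a known set of faults typically means seeding faulty variants of the code under test, in the style of mutation-based fault injection. A minimal sketch of the idea — the mutation operators, target snippet, and function names are illustrative assumptions, not the thesis's actual tooling:

```python
# Minimal sketch of seeding known faults into source code to prepare
# a test-technique experiment (mutation-style fault injection).
# Each generated variant contains exactly one injected fault, so a
# technique's effectiveness can be scored by how many variants its
# test cases detect.

import re

# Two classic mutation operators: relational operator replacement
# and arithmetic operator replacement.
MUTATIONS = [
    (r"<=", "<"),
    (r"\+", "-"),
]

def inject_faults(source: str):
    """Yield (description, mutated_source) for each single-fault variant."""
    for pattern, replacement in MUTATIONS:
        for match in re.finditer(pattern, source):
            mutated = (source[:match.start()]
                       + replacement
                       + source[match.end():])
            yield (f"{pattern!r} -> {replacement!r} "
                   f"at offset {match.start()}", mutated)

original = "def clamp(x, hi):\n    return x if x <= hi else hi\n"
variants = list(inject_faults(original))
print(f"{len(variants)} single-fault variant(s) generated")
```

A variant counts as "killed" when at least one test case distinguishes it from the original program; the set of surviving variants then points at faults the technique under evaluation fails to expose.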

The main contributions of this licentiate thesis are valuable insights into the evaluation of test techniques, specifically the problems of creating a useful experiment in an industrial setting, together with a survey of the state of practice of software testing in industry. We want to better understand, first, what needs to be done to create efficient evaluations of test techniques and, second, the relation between faults/failures and test techniques. Though our experiments have not yet produced 'the ultimate' classification for this aim, the results indicate the appropriateness of the approach. With these insights, we believe we will be able to direct our future research towards better evaluations that have a larger potential to generalize and scale.

Place, publisher, year, edition, pages
Institutionen för datavetenskap och elektronik, 2007. p. 116
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 78
Keywords
Fault, Failure, Fault injection, Test Techniques
National Category
Computer Sciences
Research subject
Datavetenskap
Identifiers
URN: urn:nbn:se:mdh:diva-470
ISBN: 978-91-85485-68-0
Presentation
2007-12-18, Delta, Mälardalens Högskola, Rosenhill, Högskoleplan 1, Västerås, 14:00
Available from: 2007-12-04. Created: 2007-12-04. Last updated: 2018-01-13.

Open Access in DiVA

No full text in DiVA


Authority records

Eldh, Sigrid; Hansson, Hans; Punnekkat, Sasikumar; Sundmark, Daniel
