On Evaluating Test Techniques in an Industrial Setting
Mälardalen University, Department of Computer Science and Electronics. ORCID iD: 0000-0002-5070-9312
2007 (English). Licentiate thesis, comprehensive summary (Other scientific)
Abstract [en]

Testing is a costly and important activity in the software industry today. Systems are becoming more complex and the amount of code is constantly increasing. Most systems must rely on testing to show that they work, are reliable, and perform according to user expectations and specifications.

Testing is performed in a multitude of ways, using different test approaches. How testing is conducted becomes essential when time is limited, since exhaustive testing is not an option in large complex systems. Therefore, the design of the individual test case, and what part and aspect of the system it exercises, is the main focus of testing. Not only do we need to create and execute test cases efficiently, but we also want them to expose important faults in the system. This topic has long been a focus of practitioners in industry, and there exist over 70 test techniques that aim to describe how to design a test case. Unfortunately, despite the industrial need, research on test techniques is seldom performed in large complex systems.

The main purpose of this licentiate thesis is to create an environment and framework in which test techniques can be evaluated. Our overall goal is to investigate suitable test techniques for different levels (e.g. component, integration and system level) and to provide guidelines to industry on what is effective, efficient and applicable to test, based on knowledge of the failure-fault distribution in a particular domain. In this thesis, our research is described through four papers that start from a broad overview of typical industrial systems and arrive at a specific focus on how to set up a controlled experiment in an industrial environment. Our initial paper surveyed the state of testing in industry, helped identify specific issues, and underlined the need for further research. We then experimented with component test improvements through straightforward application of known approaches (e.g. static analysis, code reviews and statement coverage). This resulted in a substantial cost reduction and increased quality, and gave us a better understanding of the difficulties of deploying known test techniques in practice, as described in our second paper. This work led us to our third paper, which describes the framework and process for evaluating test techniques. The first sub-process in this framework deals with how to prepare the experiment with a known set of faults. We investigated fault classifications in order to obtain a useful set of faults of different types to inject. In addition, we investigated real faults reported in an industrial system and performed controlled experiments; the results were published in our fourth paper.
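The fault-seeding step described above can be illustrated with a toy example. The following Python sketch is purely hypothetical (the thesis does not prescribe an implementation): it seeds a few classified faulty variants of a function whose correct behaviour is f(x) = x + 1, then records which test technique's test cases expose which fault classes.

```python
# Hypothetical sketch of fault injection for evaluating test techniques:
# seed classified faulty variants of a function whose correct behaviour
# is f(x) = x + 1, then record which technique's test cases expose them.
# All names, fault classes and test sets here are invented for illustration.

faults = [
    ("wrong-operator", lambda x: x - 1),                   # '-' instead of '+'
    ("off-by-one",     lambda x: x + 2),                   # wrong constant
    ("missing-path",   lambda x: x + 1 if x >= 0 else x),  # negative branch dropped
]

def detects(tests, buggy):
    """A seeded fault is detected if any test case fails on the buggy variant."""
    return any(buggy(arg) != expected for arg, expected in tests)

# Test cases (input, expected) designed according to two techniques.
techniques = {
    "statement-coverage": [(0, 1)],
    "boundary-values":    [(0, 1), (1, 2), (-1, 0)],
}

for name, tests in techniques.items():
    exposed = sorted(cls for cls, buggy in faults if detects(tests, buggy))
    print(f"{name}: exposes {exposed}")
```

In this toy setup the single statement-coverage test case misses the dropped negative branch, while the boundary-value set exposes all three seeded faults; scaling this idea to a realistic fault set in a large system is precisely the difficulty the thesis addresses.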

The main contributions of this licentiate thesis are valuable insights into the evaluation of test techniques, specifically the problems of creating a useful experiment in an industrial setting, in addition to the survey of the state of practice of software testing in industry. We want to better understand, first, what needs to be done to create efficient evaluations of test techniques and, second, what the relation is between faults/failures and test techniques. Though our experiments have not yet produced 'the ultimate' classification for such an aim, the results indicate the appropriateness of this approach. With these insights, we believe that we will be able to direct our future research towards better evaluations that have a larger potential to generalize and scale.

Place, publisher, year, edition, pages
Institutionen för datavetenskap och elektronik, 2007. p. 116
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 78
Keywords [en]
Fault, Failure, Fault injection, Test Techniques
National Category
Computer Sciences
Research subject
Computer Science (Datavetenskap)
Identifiers
URN: urn:nbn:se:mdh:diva-470
ISBN: 978-91-85485-68-0 (print)
OAI: oai:DiVA.org:mdh-470
DiVA, id: diva2:120997
Presentation
2007-12-18, Delta, Mälardalens Högskola, Rosenhill, Högskoleplan 1, Västerås, 14:00
Opponent
Supervisors
Available from: 2007-12-04 Created: 2007-12-04 Last updated: 2018-01-13
List of papers
1. How to Save on Quality Assurance – Challenges in Software Testing
2006 (English). In: Jornadas sobre Testeo de Software. Article in journal (Refereed). Published
Identifiers
urn:nbn:se:mdh:diva-4143 (URN)
Available from: 2007-12-04 Created: 2007-12-04 Last updated: 2015-03-05. Bibliographically approved
2. Experiments with Component Test to Improve Software Quality
(English). Manuscript (preprint) (Other academic)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-4144 (URN)
Available from: 2007-12-04 Created: 2007-12-04 Last updated: 2018-01-13. Bibliographically approved
3. Framework for Comparing Efficiency, Effectiveness and Applicability of Software Testing Techniques
2006 (English). In: Proceedings - Testing: Academic and Industrial Conference - Practice and Research Techniques, TAIC PART 2006, 2006, p. 159-170, article id 1691683. Conference paper, Published paper (Refereed)
Abstract [en]

Software testing is expensive for industry, and always constrained by time and effort. Although there is a multitude of test techniques, there are currently no scientifically based guidelines for selecting appropriate techniques for different domains and contexts. For large complex systems, some techniques are more efficient at finding failures than others, and some are easier to apply than others. From an industrial perspective, it is important to find the most effective and efficient test design technique that can be automated and applied. In this paper, we propose an experimental framework for comparing test techniques with respect to efficiency, effectiveness and applicability. We also plan to evaluate ease of automation, which has not been addressed by previous studies. We highlight some of the problems of evaluating or comparing test techniques in an objective manner. We describe our planned process for this multi-phase experimental study. This includes a presentation of some of the important measurements to be collected, with the dual goals of analyzing the properties of the test technique and validating our experimental framework.
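As a concrete illustration of the kind of measurements such a comparison rests on (effectiveness as the fraction of seeded faults a technique exposes, efficiency as faults exposed per unit of effort), here is a hypothetical Python sketch; the technique names and figures are invented, not taken from the paper:

```python
# Hypothetical per-technique experiment data: faults found out of those
# seeded, and person-hours spent designing and executing the tests.
# All figures are invented for illustration, not taken from the paper.
data = {
    "equivalence-partitioning": {"found": 18, "seeded": 30, "hours": 12.0},
    "random-testing":           {"found": 11, "seeded": 30, "hours": 4.0},
}

for technique, d in data.items():
    effectiveness = d["found"] / d["seeded"]  # fraction of seeded faults exposed
    efficiency = d["found"] / d["hours"]      # faults exposed per person-hour
    print(f"{technique}: effectiveness={effectiveness:.2f}, "
          f"efficiency={efficiency:.2f} faults/hour")
```

Note that the two measures can rank techniques differently (here the less effective technique is the more efficient one), which is one reason an objective comparison needs both to be collected under controlled conditions.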

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-4145 (URN)
10.1109/TAIC-PART.2006.1 (DOI)
2-s2.0-80053523364 (Scopus ID)
Conference
1st Testing: Academic and Industrial Conference - Practice and Research Techniques, TAIC PART 2006; Windsor; United Kingdom; 29 August 2006 through 31 August 2006
Available from: 2007-12-04 Created: 2007-12-04 Last updated: 2015-06-01. Bibliographically approved
4. Component Testing is Not Enough - A Study of Software Faults in Telecom Middleware
2007 (English). In: Lecture Notes in Computer Science, vol. 4581, Springer, 2007, p. 74-89. Chapter in book (Refereed)
Abstract [en]

The interrelationship between software faults and failures is quite intricate, and obtaining a meaningful characterization of it would help the testing community decide on efficient and effective test strategies. Towards this objective, we have investigated and classified failures observed in a large complex telecommunication industry middleware system during 2003-2006. In this paper, we describe the process used in our study for tracking faults from failures, along with the details of the failure data. We present the distribution and frequency of the failures, along with some interesting findings unravelled while analyzing their origins. First, though "simple" faults happen, together they account for less than 10%. The majority of faults come from either missing code or paths, or superfluous code, which are all faults that manifest themselves for the first time at integration/system level, not at component level. These faults are more frequent in the early versions of the software, and could well be attributed to the difficulty of comprehending and specifying the context (and adjacent code) and its dependencies well enough in a large complex system under time-to-market pressure. This exposes the limitations of component testing in such complex systems and underlines the need to allocate more resources to higher-level integration and system testing.
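The distribution analysis described above can be mimicked in a few lines. The following Python sketch is illustrative only: the fault classes echo those named in the abstract, but the counts are invented and are not the paper's data.

```python
from collections import Counter

# Tally classified fault records and report each class's share of the
# total, as in the distribution analysis described above. The classes
# echo the abstract; the counts are invented for illustration.
fault_records = (
    ["missing code or path"] * 9
    + ["superfluous code"] * 6
    + ["simple (e.g. wrong constant)"] * 1
    + ["other"] * 4
)

counts = Counter(fault_records)
total = sum(counts.values())
for fault_class, n in counts.most_common():
    print(f"{fault_class}: {n}/{total} ({100 * n / total:.0f}%)")
```

With these invented counts the "simple" class stays below 10% and missing code or path dominates, mirroring the shape of the distribution the abstract reports.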

Place, publisher, year, edition, pages
Springer, 2007
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 4581
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-4146 (URN)
10.1007/978-3-540-73066-8_6 (DOI)
978-3-540-73065-1 (ISBN)
Note

19th IFIP TC6/WG6.1 International Conference on Testing of Communicating Systems, TestCom 2007, and 7th International Workshop on Formal Approaches to Testing Software, FATES 2007; Tallinn; 26 June 2007 through 29 June 2007

Available from: 2007-12-04 Created: 2007-12-04 Last updated: 2015-02-03. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Eldh, Sigrid
