Test Case Quality in Test Driven Development: A Study Design and a Pilot Experiment
Mälardalen University, School of Innovation, Design and Engineering (IS). ORCID iD: 0000-0001-8009-9052
Mälardalen University, School of Innovation, Design and Engineering (IS). ORCID iD: 0000-0002-5032-2310
Mälardalen University, School of Innovation, Design and Engineering (IS). ORCID iD: 0000-0001-5269-3900
2012 (English). In: EASE 2012, Proceedings, 2012, pp. 223-227. Conference paper, published paper (refereed).
Abstract [en]

Background: Test-driven development, as a side effect of developing software, produces a set of accompanying test cases which can protect implemented features during code refactoring. However, recent research results indicate that successful adoption of test-driven development may be limited by the testing skills of the developers using it.

Aim: The main goal of this paper is to investigate whether there is a difference between the quality of test cases created using test-first and test-last approaches. An additional goal is to measure the quality of the code produced using each approach.

Method: A pilot study was conducted during the master-level course on Software Verification & Validation at Mälardalen University. Students worked individually on the problem implementation and were randomly assigned to a test-first or a test-last (control) group. The source code and test cases created by each participant during the study, as well as their answers to a survey questionnaire afterwards, were collected and analysed. The quality of the test cases was analysed from three perspectives: (i) code coverage, (ii) mutation score, and (iii) the total number of failing assertions.

Results: The total number of test cases with failing assertions (test cases revealing an error in the code) was nearly the same for the test-first and test-last groups. This can be interpreted as "test cases created by test-first developers were as good (or as bad) as test cases created by test-last developers". By contrast, solutions created by test-first developers had, on average, 27% fewer failing assertions than solutions created by the test-last group.

Conclusions: Although the study provided some interesting observations, it needs to be repeated as a fully controlled experiment with a larger number of participants in order to validate the statistical significance of the presented results.

Place, publisher, year, edition, pages
2012, pp. 223-227.
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:mdh:diva-17256
DOI: 10.1049/ic.2012.0029
Scopus ID: 2-s2.0-84865507236
ISBN: 978-184919541-6 (print)
OAI: oai:DiVA.org:mdh-17256
DiVA: diva2:579587
Conference
16th International Conference on Evaluation and Assessment in Software Engineering (EASE 2012), Ciudad Real, 14-15 May 2012
Available from: 2012-12-20. Created: 2012-12-20. Last updated: 2013-12-03. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text (Scopus)

By author/editor
Adnan Causevic, Daniel Sundmark, Sasikumar Punnekkat
By organisation
School of Innovation, Design and Engineering
