Background: Test-driven development produces, as a side-effect of developing software, a set of accompanying test cases that can protect implemented features during code refactoring. However, recent research suggests that successful adoption of test-driven development might be limited by the testing skills of the developers using it. Aim: The main goal of this paper is to investigate whether there is a difference in the quality of test cases created using test-first and test-last approaches. An additional goal is to measure the quality of the code produced with each approach. Method: A pilot study was conducted during a master-level course on Software Verification & Validation at Mälardalen University. Students worked individually on the implementation of a given problem and were randomly assigned to either a test-first or a test-last (control) group. Source code and test cases created by each participant during the study, as well as their answers to a post-study survey questionnaire, were collected and analysed. The quality of the test cases was analysed from three perspectives: (i) code coverage, (ii) mutation score and (iii) the total number of failing assertions. Results: The total number of test cases with failing assertions (test cases revealing an error in the code) was nearly the same for the test-first and test-last groups. This can be interpreted as "test cases created by test-first developers were as good as (or as bad as) test cases created by test-last developers". In contrast, solutions created by test-first developers had, on average, 27% fewer failing assertions than solutions created by the test-last group. Conclusions: Although the study provided some interesting observations, it needs to be replicated as a fully controlled experiment with a larger number of participants in order to validate the statistical significance of the presented results.