https://www.mdu.se/

mdu.se Publications
1 - 6 of 6
  • 1.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ghazi, Nauman
    Blekinge Institute of Technology.
    Itkonen, Juha
    Aalto University, Espoo, Finland.
    Torkar, Richard
    Chalmers University of Technology.
    Andrews, Anneliese
    University of Denver, USA.
    Bhatti, Khurram
    Blekinge Institute of Technology.
    An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 844-878. Article in journal (Refereed)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.
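Significance comparisons of per-session defect counts like those described above are typically made with a non-parametric test. The abstract does not name the test used, so the sketch below is only a minimal, stdlib-only illustration of the Mann-Whitney U statistic, with hypothetical session counts that are not figures from the paper.

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for sample `a` versus sample `b`.

    Ranks the pooled samples (average ranks for ties) and converts the
    rank sum of `a` into U = R_a - n_a(n_a + 1)/2.
    """
    pooled = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        # Find the run of tied values starting at position i.
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            if pooled[k][1] == "a":
                rank_sum_a += avg_rank
        i = j
    n_a = len(a)
    return rank_sum_a - n_a * (n_a + 1) / 2

# Hypothetical defect counts per 90-minute session (illustration only).
et_defects = [7, 9, 5, 8, 6, 10, 7]
tct_defects = [4, 5, 3, 6, 4, 5, 4]
print(mann_whitney_u(et_defects, tct_defects))  # 46.5, near the max of 49
```

A U value close to the maximum (len(a) * len(b)) means nearly every ET count outranks every TCT count; a real analysis would also derive a p-value from U.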

  • 2.
    Bate, Iain
    et al.
    University of York.
    Khan, Usman
    University of Cambridge.
    WCET analysis of modern processors using multi-criteria optimisation (2011). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 16, no 1, p. 5-28. Article in journal (Refereed)
    Abstract [en]

    The Worst-Case Execution Time (WCET) is an important execution metric for real-time systems, and an accurate estimate for this increases the reliability of subsequent schedulability analysis. Performance enhancing features on modern processors, such as pipelines and caches, however, make it difficult to accurately predict the WCET. One technique for finding the WCET is to use test data generated using search algorithms. Existing search-based approaches have been used successfully in both industry and academia with a single criterion function, the WCET, but only for simple processors. This paper investigates how effective this strategy is for more complex processors and to what extent other criteria help guide the search, e.g. the number of cache misses. Not unexpectedly, the work shows that no single choice of criteria works best across all problems. Based on the findings, recommendations are proposed on which criteria are useful in particular situations.
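The search-based idea described above can be illustrated with a toy single-criterion random search. Everything here is hypothetical: `program_cost` is a deterministic stand-in cost model (loop iterations of a toy program), where a real WCET search would execute or simulate the target processor, and a multi-criteria variant would fold secondary criteria such as cache-miss counts into the fitness.

```python
import random

def program_cost(x):
    """Deterministic stand-in for measured execution cost: the number of
    loop iterations the toy program performs for input x."""
    cost, i = 0, x
    while i > 1:
        i = i // 2 if i % 2 == 0 else 3 * i + 1
        cost += 1
    return cost

def search_worst_case(lo, hi, iterations=500, seed=1):
    """Random search over the input range, keeping whichever input
    produced the highest observed cost so far."""
    rng = random.Random(seed)
    best_x, best_cost = lo, program_cost(lo)
    for _ in range(iterations):
        x = rng.randint(lo, hi)
        c = program_cost(x)
        if c > best_cost:
            best_x, best_cost = x, c
    return best_x, best_cost

worst_input, worst_cost = search_worst_case(1, 10_000)
print(worst_input, worst_cost)
```

The returned cost is only a lower bound on the true worst case, which is exactly why the choice of search guidance criteria matters.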

  • 3.
    Bell, R
    et al.
    AT&T Labs Research, United States.
    Ostrand, Thomas J.
    AT&T Labs Research, United States.
    Weyuker, Elaine
    AT&T Labs Research, United States.
    The limited impact of individual developer data on software defect prediction (2013). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 18, no 3, p. 478-505. Article in journal (Refereed)
    Abstract [en]

    Previous research has provided evidence that a combination of static code metrics and software history metrics can be used to predict with surprising success which files in the next release of a large system will have the largest numbers of defects. In contrast, very little research exists to indicate whether information about individual developers can profitably be used to improve predictions. We investigate whether files in a large system that are modified by an individual developer consistently contain either more or fewer faults than the average of all files in the system. The goal of the investigation is to determine whether information about which particular developer modified a file is able to improve defect predictions. We also extend earlier research evaluating use of counts of the number of developers who modified a file as predictors of the file's future faultiness. We analyze change reports filed for three large systems, each containing 18 releases, with a combined total of nearly 4 million LOC and over 11,000 files. A buggy file ratio is defined for programmers, measuring the proportion of faulty files in Release R out of all files modified by the programmer in Release R-1. We assess the consistency of the buggy file ratio across releases for individual programmers both visually and within the context of a fault prediction model. Buggy file ratios for individual programmers often varied widely across all the releases that they participated in. A prediction model that takes account of the history of faulty files that were changed by individual developers shows improvement over the standard negative binomial model of less than 0.13% according to one measure, and no improvement at all according to another measure. In contrast, augmenting a standard model with counts of cumulative developers changing files in prior releases produced up to a 2% improvement in the percentage of faults detected in the top 20% of predicted faulty files.
    The cumulative number of developers interacting with a file can be a useful variable for defect prediction. However, the study indicates that adding information to a model about which particular developer modified a file is not likely to improve defect predictions.
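The buggy file ratio defined in the abstract is straightforward to compute. In the sketch below, the file lists are hypothetical: `modified_prev` stands for the files a developer changed in Release R-1, and `faulty_next` for the files found faulty in Release R.

```python
def buggy_file_ratio(modified_prev, faulty_next):
    """Proportion of the files a developer modified in Release R-1 that
    turned out to be faulty in Release R."""
    modified = set(modified_prev)
    if not modified:
        return 0.0  # developer touched nothing in the prior release
    return len(modified & set(faulty_next)) / len(modified)

# Hypothetical data: 2 of the 4 files this developer touched went buggy.
ratio = buggy_file_ratio(["a.c", "b.c", "c.c", "d.c"], ["b.c", "d.c", "e.c"])
print(ratio)  # 0.5
```

The paper's finding is that this per-developer ratio varies too widely across releases to sharpen predictions, whereas the simple count of distinct developers per file does help.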

  • 4.
    Shin, Y
    et al.
    North Carolina State University, United States.
    Bell, R
    AT&T Labs Research, United States.
    Ostrand, T
    AT&T Labs Research, United States.
    Weyuker, Elaine
    Mälardalen University, School of Innovation, Design and Engineering. AT&T Labs Research, United States.
    On the use of calling structure information to improve fault prediction (2012). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 17, no 4-5, p. 390-423. Article in journal (Refereed)
    Abstract [en]

    Previous studies have shown that software code attributes, such as lines of source code, and history information, such as the number of code changes and the number of faults in prior releases of software, are useful for predicting where faults will occur. In this study of two large industrial software systems, we investigate the effectiveness of adding information about calling structure to fault prediction models. Adding calling structure information to a model based solely on non-calling structure code attributes modestly improved prediction accuracy. However, the addition of calling structure information to a model that included both history and non-calling structure code attributes produced no improvement.

  • 5.
    Weyuker, Elaine
    et al.
    AT&T Labs, United States.
    Ostrand, T
    AT&T Labs, United States.
    Bell, R
    AT&T Labs, United States.
    Comparing the Effectiveness of Several Modeling Methods for Fault Prediction (2010). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 15, no 3, p. 277-295. Article in journal (Refereed)
    Abstract [en]

    We compare the effectiveness of four modeling methods (negative binomial regression, recursive partitioning, random forests and Bayesian additive regression trees) for predicting the files likely to contain the most faults for 28 to 35 releases of three large industrial software systems. Predictor variables included lines of code, file age, faults in the previous release, changes in the previous two releases, and programming language. To compare the effectiveness of the different models, we use two metrics: the percent of faults contained in the top 20% of files identified by the model, and a new, more general metric, the fault-percentile-average. The negative binomial regression and random forests models performed significantly better than recursive partitioning and Bayesian additive regression trees, as assessed by either of the metrics. For each of the three systems, the negative binomial and random forests models identified 20% of the files in each release that contained an average of 76% to 94% of the faults.
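The two metrics named above can be sketched as follows. The fault-percentile-average is implemented in one common formulation (the average, over all cutoffs m, of the proportion of actual faults contained in the top-m predicted files), which may differ in detail from the paper's exact definition; the per-file data is hypothetical.

```python
def fault_percentile_average(predicted, actual):
    """Average, over every cutoff m, of the fraction of actual faults in
    the top-m files ranked by predicted fault count. 1.0 = perfect order."""
    order = sorted(range(len(predicted)), key=lambda i: -predicted[i])
    total = sum(actual)
    cum, fpa = 0, 0.0
    for i in order:
        cum += actual[i]
        fpa += cum / total
    return fpa / len(predicted)

def top_fraction_faults(predicted, actual, frac=0.2):
    """Fraction of all faults contained in the top `frac` of files
    ranked by predicted fault count (the paper's top-20% metric)."""
    order = sorted(range(len(predicted)), key=lambda i: -predicted[i])
    k = max(1, round(frac * len(predicted)))
    return sum(actual[i] for i in order[:k]) / sum(actual)

# Hypothetical per-file predicted and actual fault counts.
files_pred = [10, 5, 1, 0, 0]
files_actual = [8, 1, 1, 0, 0]
print(fault_percentile_average(files_pred, files_actual))  # ≈ 0.94
print(top_fraction_faults(files_pred, files_actual))       # 0.8
```

The FPA rewards a model for ranking the faultiest files first at every cutoff, not just at the 20% mark, which is what makes it the more general of the two metrics.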

  • 6.
    Weyuker, Elaine
    et al.
    AT&T Labs Res, Florham Pk, NJ 07932 USA.
    Ostrand, T
    AT&T Labs Res, Florham Pk, NJ 07932 USA.
    Bell, R
    AT&T Labs Res, Florham Pk, NJ 07932 USA.
    Do Too Many Cooks Spoil the Broth? Using the Number of Developers to Enhance Defect Prediction Models (2008). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 13, no 5, p. 539-559. Article in journal (Refereed)
    Abstract [en]

    Fault prediction by negative binomial regression models is shown to be effective for four large production software systems from industry. A model developed originally with data from systems with regularly scheduled releases was successfully adapted to a system without releases to identify 20% of that system's files that contained 75% of the faults. A model with a pre-specified set of variables derived from earlier research was applied to three additional systems, and proved capable of identifying averages of 81, 94 and 76% of the faults in those systems. A primary focus of this paper is to investigate the impact on predictive accuracy of using data about the number of developers who access individual code units. For each system, including the cumulative number of developers who had previously modified a file yielded no more than a modest improvement in predictive accuracy. We conclude that while many factors can "spoil the broth" (lead to the release of software with too many defects), the number of developers is not a major influence.
