https://www.mdu.se/

mdu.se Publications
1 - 50 of 113
  • 1.
    Abbaspour Asadollah, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sundmark, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Eldh, Sigrid
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Ericsson AB, Kista, Sweden .
    Hansson, Hans
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    10 Years of research on debugging concurrent and multicore software: a systematic mapping study (2017). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 25, no 1, p. 49-82. Article in journal (Refereed)
    Abstract [en]

    Debugging – the process of identifying, localizing and fixing bugs – is a key activity in software development. Due to issues such as non-determinism and difficulties of reproducing failures, debugging concurrent software is significantly more challenging than debugging sequential software. A number of methods, models and tools for debugging concurrent and multicore software have been proposed, but the body of work partially lacks a common terminology and a more recent view of the problems to solve. This suggests the need for a classification, and an up-to-date comprehensive overview of the area. 

    This paper presents the results of a systematic mapping study in the field of debugging of concurrent and multicore software in the last decade (2005–2014). The study is guided by two objectives: (1) to summarize the recent publication trends and (2) to clarify current research gaps in the field.

    Through a multi-stage selection process, we identified 145 relevant papers. Based on these, we summarize the publication trend in the field by showing the distribution of publications with respect to year, publication venues, representation of academia and industry, and active research institutes. We also identify research gaps in the field based on attributes such as types of concurrency bugs, types of debugging processes, types of research and research contributions.

    The main observations from the study are that during the years 2005–2014: (1) there is no focal conference or venue to publish papers in this area, hence a large variety of conference and journal venues (90) are used to publish relevant papers in this area; (2) in terms of publication contribution, academia was more active in this area than industry; (3) most publications in the field address the data race bug; (4) bug identification is the most common stage of debugging addressed by articles in the period; (5) there are six types of research approaches found, with solution proposals being the most common one; and (6) the published papers essentially focus on four different types of contributions, with "methods" being the most common type.

    We can further conclude that quite a number of aspects are still not sufficiently covered in the field, most notably (1) exploring correction and fixing of bugs in terms of the debugging process; (2) order violation, suspension and starvation in terms of concurrency bugs; (3) validation and evaluation research in terms of research type; and (4) metrics in terms of research contribution. It is clear that the concurrent, parallel and multicore software community needs broader studies in debugging. This systematic mapping study can help direct such efforts.
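    Since most of the mapped publications address the data race bug, a small illustration may help make that bug class concrete. The following Python sketch is purely illustrative (it is not taken from the paper) and shows the classic lost-update race on a shared counter:

```python
import threading
import time

counter = 0  # shared state, intentionally unprotected


def increment_many(n):
    """Read-modify-write without a lock: a textbook data race."""
    global counter
    for _ in range(n):
        value = counter      # read
        time.sleep(0)        # yield to other threads, widening the race window
        counter = value + 1  # write back a possibly stale value


threads = [threading.Thread(target=increment_many, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the race the result would be 20000; lost updates make it smaller.
print(f"counter = {counter} (expected 20000)")
```

    The debugging approaches surveyed in such studies aim to detect exactly this kind of unsynchronized access, for example via dynamic race detectors or systematic schedule exploration.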

  • 2.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering. Embedded Systems.
    Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, such as when to stop testing, testing schedules and testing resource allocation, need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension to the application of search-based techniques within software verification and validation, this thesis further investigates the extent of application of search-based techniques for testing non-functional system properties. Based on the research findings in this thesis it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence where other search-based techniques are applied for testing of non-functional system properties, hence contributing towards the growing application of search-based techniques in diverse activities within software verification and validation.
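    To give a feel for the core technique, the sketch below shows symbolic regression on fault-count-style data in miniature. Simple random search over expression trees stands in for the evolutionary operators of genetic programming, and the data points are invented for illustration:

```python
import random

# Operators available to the evolved expressions.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}


def random_expr(depth=3):
    """Build a random expression tree over x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', round(random.uniform(-2, 2), 2)])
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))


def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, tuple):
        op, lhs, rhs = expr
        return OPS[op](evaluate(lhs, x), evaluate(rhs, x))
    return expr  # a numeric constant


def sse(expr, data):
    """Fitness: sum of squared errors against the observed data points."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)


# Hypothetical weekly cumulative fault counts: (week, faults).
data = [(1, 3), (2, 7), (3, 12), (4, 20), (5, 31)]

best = random_expr()
for _ in range(5000):
    candidate = random_expr()
    if sse(candidate, data) < sse(best, data):
        best = candidate

print('best expression:', best, ' SSE:', round(sse(best, data), 2))
```

    The attraction noted in the thesis is that the model structure itself is evolved from the data, rather than assumed up front as in parametric reliability growth models.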

  • 3. Afzal, Wasif
    Search-based prediction of software quality: Evaluations and comparisons (2011). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, like many real-world problems, software V&V tasks can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and doing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable techniques to use in supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

  • 4.
    Afzal, Wasif
    Blekinge Institute of Technology.
    Using faults-slip-through metric as a predictor of fault-proneness (2010). In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC, 2010, p. 412-422. Conference paper (Refereed)
    Abstract [en]

    Background: The majority of software faults are present in a small number of modules, therefore accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. Aims: This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. Method: We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, the faults-slip-through metric showed impressive results with the majority of the techniques for predicting fault-prone modules at both integration and system test levels. There were, however, no statistically significant differences between the performance of different techniques based on AUC, even though certain techniques were more consistent in the classification performance at the two test levels. Conclusions: We can conclude that the faults-slip-through metric is a potentially strong predictor of fault-proneness at integration and system test levels. The faults-slip-through measurements interact in ways that are conveniently accounted for by the majority of the data mining techniques.
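    The evaluation pattern used here, training per-module classifiers and scoring them by the area under the ROC curve, is easy to reproduce in miniature. The sketch below uses scikit-learn on synthetic data standing in for the industrial FST measurements; the feature construction is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for FST data: per-module metrics (X) and a binary
# fault-prone label (y). The paper's real data is industrial and confidential.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # e.g. FST counts per test phase
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)    # one of the paper's 8 techniques
scores = clf.predict_proba(X_te)[:, 1]
print('AUC:', round(roc_auc_score(y_te, scores), 3))
```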

  • 5.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Bahria University, Islamabad, Pakistan.
    Alone, Snehal
    Chalmers University of Technology, Sweden.
    Glocksien, Kerstin
    Chalmers University of Technology, Sweden.
    Torkar, Richard
    Chalmers University of Technology, Sweden.
    Software Test Process Improvement Approaches: A Systematic Literature Review and an Industrial Case Study (2016). In: Journal of Systems and Software (JSS), ISSN 0164-1212, Vol. 111, p. 1-33. Article in journal (Refereed)
    Abstract [en]

    Software test process improvement (STPI) approaches are frameworks that guide software development organizations to improve their software testing process. We have identified existing STPI approaches and their characteristics (such as completeness of development, availability of information and assessment instruments, and domain limitations of the approaches) using a systematic literature review (SLR). Furthermore, two selected approaches (TPI NEXT and TMMi) are evaluated with respect to their content and assessment results in industry. As a result of this study, we have identified 18 STPI approaches and their characteristics. A detailed comparison of the content of TPI NEXT and TMMi is provided. We found that many of the STPI approaches do not provide sufficient information or do not include assessment instruments. This makes it difficult to apply many approaches in industry. Between TPI NEXT and TMMi, we found greater similarities than differences. We conclude that numerous STPI approaches are available but not all are generally applicable in industry. One major difference between available approaches is their model representation. Even though the applied approaches generally show strong similarities, differences in the assessment results arise due to their different model representations.

  • 6.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bruneliere, H.
    IMT Atlantique – LS2N (CNRS) – ARMINES, France.
    Di Ruscio, D.
    Università degli Studi dell'Aquila - DISIM | Center of Excellence DEWS, Italy.
    Sadovykh, A.
    Softeam, France.
    Mazzini, S.
    Intecs, Italy.
    Cariou, E.
    Université de Pau et des Pays de l'Adour, LIUPPA, France.
    Truscan, D.
    Åbo Akademi University, Finland.
    Cabot, J.
    ICREA, Spain.
    Gómez, A.
    Internet Interdisciplinary Institute (IN3), Universitat Oberta de Catalunya (UOC), Spain.
    Gorroñogoitia, J.
    ATOS, Spain.
    Pomante, L.
    Università degli Studi dell'Aquila - DISIM | Center of Excellence DEWS, Italy.
    Smrz, P.
    Brno University of Technology, Czech Republic.
    The MegaM@Rt2 ECSEL project: MegaModelling at Runtime – Scalable model-based framework for continuous development and runtime validation of complex systems (2018). In: Microprocessors and microsystems, ISSN 0141-9331, E-ISSN 1872-9436, Vol. 61, p. 86-95. Article in journal (Refereed)
    Abstract [en]

    A major challenge for the European electronic industry is to enhance productivity by ensuring quality of development, integration and maintenance while reducing the associated costs. Model-Driven Engineering (MDE) principles and techniques have already shown promising capabilities, but they still need to scale up to support real-world scenarios implied by the full deployment and use of complex electronic components and systems. Moreover, maintaining efficient traceability, integration, and communication between two fundamental system life cycle phases (design time and runtime) is another challenge requiring the scalability of MDE. This paper presents an overview of the ECSEL project entitled “MegaModelling at runtime – Scalable model-based framework for continuous development and runtime validation of complex systems” (MegaM@Rt2), whose aim is to address the above-mentioned challenges facing MDE. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: 1) system engineering/design and continuous development, 2) related runtime analysis and 3) global models and traceability management. Diverse industrial use cases (covering strategic domains such as aeronautics, railway, construction and telecommunications) will integrate and demonstrate the validity of the MegaM@Rt2 solution. This paper provides an overview of the MegaM@Rt2 project with respect to its approach, mission, objectives as well as its implementation details. It further introduces the consortium and describes the work packages and a few already produced deliverables.

  • 7.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bruneliere, Hugo
    AtlanMod Team, Inria, France.
    Di Ruscio, Davide
    Univ. of L'Aquila, L'Aquila, Italy.
    Sadovykh, Andrey
    Softeam, France.
    Mazzini, Silvia
    Intecs, Italy.
    Cariou, Eric
    Univ. de Pau et des Pays de l'Adour, Pau, France.
    Truscan, Dragos
    Åbo Akademi Univ., Turku, Finland.
    Cabot, Jordi
    ICREA, Barcelona, Spain.
    Field, Daniel
    ATOS, Madrid, Spain.
    Pomante, Luigi
    Univ. of L'Aquila, L'Aquila, Italy.
    Smrz, Pavel
    Brno Univ. of Technol., Brno, Czech Republic.
    The MegaM@Rt2 ECSEL Project: MegaModelling at Runtime — Scalable Model-Based Framework for Continuous Development and Runtime Validation of Complex Systems (2017). In: The 2017 Euromicro Conference on Digital System Design, DSD'17, 2017. Conference paper (Refereed)
    Abstract [en]

    A major challenge for the European electronic industry is to enhance productivity while reducing costs and ensuring quality in development, integration and maintenance. Model-Driven Engineering (MDE) principles and techniques have already shown promising capabilities but still need to scale to support real-world scenarios implied by the full deployment and use of complex electronic components and systems. Moreover, maintaining efficient traceability, integration and communication between two fundamental system life-time phases (design time and runtime) is another challenge facing the scalability of MDE. This paper presents an overview of the ECSEL project entitled "MegaModelling at runtime -- Scalable model-based framework for continuous development and runtime validation of complex systems" (MegaM@Rt2), whose aim is to address the above-mentioned challenges facing MDE. Driven by both large and small industrial enterprises, with the support of research partners and technology providers, MegaM@Rt2 aims to deliver a framework of tools and methods for: 1) system engineering/design and continuous development, 2) related runtime analysis and 3) global model and traceability management. The diverse industrial use cases (covering domains such as aeronautics, railway, construction and telecommunications) will integrate and apply the framework, demonstrating the validity of the MegaM@Rt2 solution.

  • 8.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Caporuscio, M.
    Linnaeus University, Sweden.
    Conboy, H.
    University of Massachusetts Amherst, MA, United States.
    Di Marco, A.
    University of l'Aquila, Italy.
    Duchien, D. L.
    University of Lille, France.
    Pérez, D.
    University of British Columbia, Canada.
    Seceleanu, C.
    Kyushu University, Japan.
    Shahbazian, A.
    University of California, Berkeley, CA, United States.
    Spalazzese, R.
    Microsoft, WA, United States.
    Tivoli, M.
    Florida State University, FL, United States.
    Vasilescu, B.
    University College Dublin and Lero, Ireland.
    Washizaki, Hironori
    Mälardalen University.
    Weyns, D.
    University of Southern California, CA, United States.
    Pasquale, L.
    Malmö University, Sweden.
    Nistor, A.
    Malmö University, Sweden.
    Muşlu, K.
    Waseda University, Japan.
    Kamei, Y.
    Waseda University, Japan.
    Hanam, Q.
    Carnegie Mellon University, PA, United States.
    Ying, A. T. T.
    Katholieke Universiteit Leuven, Belgium.
    Program committee for ICSE 2018 posters track (2018). In: Proceedings - International Conference on Software Engineering, ISSN 0270-5257, E-ISSN 1558-1225, Vol. Part F137351. Article in journal (Other academic)
  • 9.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ghazi, Nauman
    Blekinge Institute of Technology.
    Itkonen, Juha
    Aalto University, Espoo, Finland.
    Torkar, Richard
    Chalmers University of Technology.
    Andrews, Anneliese
    University of Denver, USA.
    Bhatti, Khurram
    Blekinge Institute of Technology.
    An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, no 3, p. 844-878. Article in journal (Refereed)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 10.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Piadehbasmenj, Amirali
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Cloud-Based Architectures for Model-Based Simulation Testing of Embedded Software (2021). In: 2021 10th Mediterranean Conference on Embedded Computing, MECO 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    Model-based testing (MBT) generates many test cases for validating a system under test against the user-defined requirements. Cloud computing provides powerful resources that can be utilised to execute these test cases, which would otherwise consume substantial resources locally. Other benefits of utilizing cloud-based resources include elastic, on-demand and rapid provisioning and release of new, potentially value-adding services. Although cloud providers such as Amazon Web Services (AWS) have provided the necessary technologies for successful cloud-based operation, it remains difficult to migrate and hence achieve the realisation of MBT as a service for traditional in-house testing operations, especially for embedded software. In this paper, we present a series of cloud-based architectures powered by AWS and an open-source MBT tool, GraphWalker. These architectures are realized at the simulation testing stage for real-world embedded software and particularly cater for online MBT, whereby the model-based tool is deployed as a RESTful web service, accessible through a number of REST API commands. The presented architectures as well as their realization through AWS can be adopted in future for more advanced levels of simulation testing of embedded software.
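    For readers unfamiliar with online MBT, the sketch below shows what a test driver talking to such a RESTful GraphWalker service could look like. The host, port, endpoint paths and JSON field names follow GraphWalker's documented REST interface as we recall it, but should be treated as assumptions and verified against the installed GraphWalker version; the adapter function is hypothetical:

```python
import requests

# Assumed default address of a GraphWalker instance started as a RESTful
# service; adjust host/port to your deployment.
BASE = "http://localhost:8887/graphwalker"


def execute_step(element_name):
    # Hypothetical adapter: map each model element to a concrete action or
    # assertion against the system under test. Here we only log the step.
    print("executing", element_name)


def run_model():
    # Walk the model online: ask GraphWalker for the next element until the
    # path generator's stop condition is satisfied.
    while requests.get(f"{BASE}/hasNext").json().get("hasNext") == "true":
        step = requests.get(f"{BASE}/getNext").json()
        try:
            execute_step(step["currentElementName"])
        except AssertionError as err:
            # Report the failure back to GraphWalker and stop the walk.
            requests.put(f"{BASE}/fail/{err}")
            raise
    print(requests.get(f"{BASE}/getStatistics").json())


if __name__ == "__main__":
    run_model()
```

    In a cloud setting such as the paper's AWS-based architectures, the GraphWalker service and the simulated system under test would run on provisioned instances rather than localhost.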

  • 11.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering. Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    A comparative evaluation of using genetic programming for predicting fault count data (2008). In: Proceedings - The 3rd International Conference on Software Engineering Advances, ICSEA 2008, Includes ENTISY 2008: International Workshop on Enterprise Information Systems, 2008, p. 407-414. Conference paper (Refereed)
    Abstract [en]

    There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of the models’ assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.

  • 12.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Incorporating metrics in an organizational test strategy (2008). In: International Conference on Software Testing, Verification and Validation: Proceedings of the International Software Testing Standard Workshop, Collocated with 1st International Conference on Software Testing, Verification and Validation, 2008, p. 304-315. Conference paper (Refereed)
    Abstract [en]

    An organizational level test strategy needs to incorporate metrics to make the testing activities visible and available to process improvements. The majority of testing measurements that are done are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support software test planning and test design processes. We have assembled metrics in these two process types to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics as part of an organization level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of software test planning and test design processes along with metric(s) support for each of the identified attributes.

  • 13.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Lessons from applying experimentation in software engineering predictive modeling (2008). In: Proceedings of The 2nd International Workshop on Software Productivity Analysis and Cost Estimation (SPACE'08), Collocated with 15th Asia-Pacific Software Engineering Conference, Beijing, China, 2008. Conference paper (Refereed)
    Abstract [en]

    Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure/compare models’ accuracy. This paper discusses our experience and presents useful lessons/guidelines in experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that the typical experimentation process in software engineering is supportive in developing prediction systems and have highlighted issues more central to the domain of software engineering prediction systems.

  • 14.
    Afzal, Wasif
    et al.
    Blekinge Inst Technol.
    Torkar, Richard
    Blekinge Inst Technol.
    On the application of genetic programming for software engineering predictive modeling: A systematic review (2011). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 38, no 9, p. 11984-11997. Article in journal (Refereed)
    Abstract [en]

    The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995–2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

  • 15.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, Sweden.
    Torkar, Richard
    Blekinge Institute of Technology, Sweden.
    Suitability of genetic programming for software reliability growth modelling (2008). In: Proceedings - International Symposium on Computer Science and Its Applications, CSA 2008, 2008, p. 114-117. Conference paper (Refereed)
    Abstract [en]

    Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

  • 16.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Bahria University, Islamabad, Pakistan .
    Torkar, Richard
    Blekinge Institute of Technology, Karlskrona, Sweden; Chalmers University of Technology, Sweden.
    Towards benchmarking feature subset selection methods for software fault prediction (2016). In: Computational Intelligence and Quantitative Software Engineering / [ed] Witold Pedrycz, Giancarlo Succi and Alberto Sillitti, Springer-Verlag, 2016, p. 33-58. Chapter in book (Refereed)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve—the AUC value averaged over 10-fold cross-validation runs—was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries.
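    As a rough illustration of the before/after-FSS comparison, the sketch below ranks features by mutual information (a close cousin of the chapter's information gain ranking) and feeds the reduced set to naive Bayes, using synthetic data in place of the PROMISE datasets:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a PROMISE fault dataset: 20 module metrics,
# only a few of which are actually informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

# Information-gain-style ranking approximated with mutual information.
selector = SelectKBest(mutual_info_classif, k=4)
X_sel = selector.fit_transform(X, y)

for name, features in [('all 20 features', X), ('top 4 by MI', X_sel)]:
    auc = cross_val_score(GaussianNB(), features, y, cv=10,
                          scoring='roc_auc').mean()
    print(f'{name}: mean AUC over 10-fold CV = {auc:.3f}')
```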

  • 17.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    A systematic mapping study on non-functional search-based software testing (2008). In: 20th International Conference on Software Engineering and Knowledge Engineering, 2008. Conference paper (Refereed)
    Abstract [en]

    Automated software test generation has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional), grey-box (combination of structural and functional) and non-functional testing. In this paper, we undertake a systematic mapping study to present a broad review of primary studies on the application of search-based optimization techniques to non-functional testing. The motivation is to identify the evidence available on the topic and to identify gaps in the application of search-based optimization techniques to different types of non-functional testing. The study is based on a comprehensive set of 35 papers, obtained after using multi-stage selection criteria, that are published in workshops, conferences and journals in the time span 1996–2007. We conclude that the search-based software testing community needs to do more and broader studies on non-functional search-based software testing (NFSBST), and the results from our systematic map can help direct such efforts.

  • 18.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    A systematic review of search-based testing for non-functional system properties (2009). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no 6, p. 957-976. Article, review/survey (Refereed)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising due to the fact that exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in types of non-functional testing targeted using metaheuristic search techniques, different fitness functions used in different types of search-based non-functional testing and challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles obtained after a multi-stage selection process and published in the time span 1996–2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
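    The transformation of a non-functional adequacy criterion into a fitness function can be shown in miniature. The sketch below hill-climbs on the measured execution time of an invented system under test, the simplest form of the worst-case execution time testing surveyed here:

```python
import random
import time


def system_under_test(n):
    # Stand-in SUT whose running time depends on its input.
    total = 0
    for i in range(n * 100):
        total += i * i
    return total


def fitness(candidate):
    # Fitness = measured execution time; a longer run is a "fitter" input
    # when searching toward the worst case.
    start = time.perf_counter()
    system_under_test(candidate)
    return time.perf_counter() - start


best = random.randint(1, 1000)
for _ in range(50):
    neighbour = max(1, best + random.randint(-50, 50))
    if fitness(neighbour) > fitness(best):
        best = neighbour

print('input maximizing measured execution time:', best)
```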

  • 19.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technolog.
    Torkar, Richard
    Blekinge Institute of Technolog.
    Feldt, Robert
    Blekinge Institute of Technology.
    Prediction of fault count data using genetic programming (2008). In: IEEE INMIC 2008: 12th IEEE International Multitopic Conference - Conference Proceedings, 2008, p. 349-356. Conference paper (Refereed)
    Abstract [en]

    Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models’ inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model is measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.

  • 20.
    Afzal, Wasif
    et al.
    Bahria Univ, Islamabad, Pakistan.
    Torkar, Richard
    Chalmers University of Technology and the University of Gothenburg.
    Feldt, Robert
    Blekinge Institute of Technology.
    Resampling methods in software quality classification (2012). In: International journal of software engineering and knowledge engineering, ISSN 0218-1940, Vol. 22, no 2, p. 203-223. Article in journal (Refereed)
    Abstract [en]

    In the presence of a number of algorithms for classification and prediction in software engineering, there is a need to have a systematic way of assessing their performances. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on what modeling technique or set of predictor variables are the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. Location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: There can be certain data set properties responsible for insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
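    The resampling methods compared in the paper are easy to line up on a toy problem. The sketch below does so with scikit-learn, using logistic regression and synthetic data as stand-ins for the paper's GP/MLR models and public datasets; accuracy replaces AUC here since AUC is undefined on single-sample LOOCV folds:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import (KFold, LeaveOneOut, cross_val_score,
                                     train_test_split)
from sklearn.utils import resample

X, y = make_classification(n_samples=150, n_features=5, random_state=1)
clf = LogisticRegression()

# Hold-out validation (2/3 train, 1/3 test).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=1)
holdout = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

# 10-fold cross-validation and leave-one-out cross-validation.
cv10 = cross_val_score(clf, X, y, cv=KFold(10, shuffle=True, random_state=1)).mean()
loocv = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Non-parametric bootstrap: train on a resample, test on out-of-bag rows.
idx = resample(np.arange(len(y)), random_state=1)
oob = np.setdiff1d(np.arange(len(y)), idx)
boot = accuracy_score(y[oob], clf.fit(X[idx], y[idx]).predict(X[oob]))

print(f'hold-out={holdout:.3f} 10-fold={cv10:.3f} '
      f'LOOCV={loocv:.3f} bootstrap={boot:.3f}')
```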

  • 21.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    Search-based prediction of fault count data (2009). In: Proceedings - 1st International Symposium on Search Based Software Engineering, SSBSE 2009, 2009, p. 35-38. Conference paper (Refereed)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantages that the evolution is not dependent on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research aims at applying experiments targeting fault predictions using genetic programming and comparing the results with traditional approaches to assess efficiency gains.

  • 22.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    Gorschek, Tony
    Blekinge Institute of Technology.
    Genetic programming for cross-release fault count predictions in large and complex software projects (2009). In: Evolutionary Computation and Optimization Algorithms in Software Engineering / [ed] Monica Chis, IGI Global, 2009. Chapter in book (Other academic)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet achieved sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria while remaining average for most of the qualitative measures.

  • 23.
    Afzal, Wasif
    et al.
    Bahria Univ, Pakistan.
    Torkar, Richard
    Chalmers University of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    Gorschek, Tony
    Blekinge Institute of Technology.
    Prediction of faults-slip-through in large software projects: An empirical evaluation (2013). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, no 1, p. 51-86. Article in journal (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determination of which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to unit, function, integration, and system test phases of a large industrial project. The objective is to quantify improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are found to be useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization-based artificial neural network) consistently give better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that the human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques. A combination of human and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.

  • 24.
    Afzal, Wasif
    et al.
    Mälardalen University, School of Innovation, Design and Engineering. Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology.
    Wikstrand, Greger
    KnowIT YAHM Sweden AB.
    Search-based prediction of fault-slip-through in large software projects (2010). In: Proceedings - 2nd International Symposium on Search Based Software Engineering, SSBSE 2010, 2010, p. 79-88. Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determination of which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.

  • 25.
    Ahmed, B. S.
    et al.
    Karlstad University, Karlstad, Sweden.
    Enoiu, Eduard Paul
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Zamli, K. Z.
    University Malaysia Pahang, Pekan, Malaysia.
    An evaluation of Monte Carlo-based hyper-heuristic for interaction testing of industrial embedded software applications (2020). In: Soft Computing - A Fusion of Foundations, Methodologies and Applications, ISSN 1432-7643, E-ISSN 1433-7479, Vol. 24, no 18, p. 13929-13954. Article in journal (Refereed)
    Abstract [en]

    Hyper-heuristic is a new methodology for the adaptive hybridization of meta-heuristic algorithms to derive a general algorithm for solving optimization problems. This work focuses on the selection type of hyper-heuristic, called the exponential Monte Carlo with counter (EMCQ). Current implementations rely on memory-less selection, which can be counterproductive as the selected search operator may not (historically) be the best performing operator for the current search instance. Addressing this issue, we propose to integrate memory into EMCQ for combinatorial t-wise test suite generation using reinforcement learning based on the Q-learning mechanism, called Q-EMCQ. The limited application of combinatorial test generation on industrial programs can impact the use of such techniques as Q-EMCQ. Thus, there is a need to evaluate this kind of approach against relevant industrial software, with a purpose to show the degree of interaction required to cover the code as well as finding faults. We applied Q-EMCQ on 37 real-world industrial programs written in the Function Block Diagram (FBD) language, which is used for developing a train control management system at Bombardier Transportation Sweden AB. The results show that Q-EMCQ is an efficient technique for test case generation. Additionally, unlike the t-wise test suite generation, which deals with the minimization problem, we have also subjected Q-EMCQ to a maximization problem involving general module clustering to demonstrate the effectiveness of our approach. The results show that Q-EMCQ is also capable of outperforming the original EMCQ as well as several recent meta-/hyper-heuristics including modified choice function, Tabu high-level hyper-heuristic, teaching learning-based optimization, sine cosine algorithm, and symbiotic optimization search in clustering quality within comparable execution time.
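    The core mechanism, remembering how well each search operator has performed and selecting accordingly, can be illustrated with a toy Q-learning loop. The operators, rewards and "solution quality" below are invented stand-ins, not the paper's t-wise generation machinery:

```python
import random

# Candidate low-level search operators the hyper-heuristic chooses between.
operators = ['crossover', 'mutation', 'local_search']
q = {op: 0.0 for op in operators}   # learned value (memory) per operator
alpha, epsilon = 0.1, 0.2           # learning rate and exploration rate


def apply_operator(op, value):
    # Hypothetical effect: each operator nudges a numeric "solution quality".
    gain = {'crossover': 0.5, 'mutation': 0.2, 'local_search': 0.8}[op]
    return value + random.uniform(-0.5, gain)


quality = 0.0
for step in range(500):
    if random.random() < epsilon:       # explore a random operator
        op = random.choice(operators)
    else:                               # exploit the best-valued operator
        op = max(q, key=q.get)
    new_quality = apply_operator(op, quality)
    reward = 1.0 if new_quality > quality else -1.0
    q[op] += alpha * (reward - q[op])   # stateless Q-update with memory
    quality = max(quality, new_quality)

print('learned operator values:', {k: round(v, 2) for k, v in q.items()})
```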

  • 26.
    Ahmed, B. S.
    et al.
    Istituto Dalle Molle di Studi Sull'Intelligenza Artificiale (IDSIA), Manno-Lugano, Switzerland.
    Sahib, M. A.
    Software and Informatics Engineering Department, Engineering College, Salahaddin University - Erbil, Iraq.
    Gambardella, L. M.
    Istituto Dalle Molle di Studi Sull'Intelligenza Artificiale (IDSIA), Manno-Lugano, Switzerland.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Zamli, K. Z.
    IBM Centre of Excellence, Faculty of Computer Systems and Software Engineering, Universiti Malaysia Pahang Lebuhraya Tun Razak, Kuantan, Pahang Darul Makmur, Malaysia.
    Optimum design of PIλDμ controller for an automatic voltage regulator system using combinatorial test design (2016). In: PLOS ONE, E-ISSN 1932-6203, Vol. 11, no 11, article id e0166150. Article in journal (Refereed)
    Abstract [en]

    Combinatorial test design is a plan of test that aims to reduce the amount of test cases systematically by choosing a subset of the test cases based on the combination of input variables. The subset covers all possible combinations of a given strength and hence tries to match the effectiveness of the exhaustive set. This mechanism of reduction has been used successfully in software testing research with t-way testing (where t indicates the interaction strength of combinations). Potentially, other systems may exhibit many similarities with this approach. Hence, it could form an emerging application in different areas of research due to its usefulness. To this end, more recently it has been applied in a few research areas successfully. In this paper, we explore the applicability of the combinatorial test design technique to the parameter design of a Fractional Order Proportional-Integral-Derivative (FOPID) controller for an automatic voltage regulator (AVR) system. Throughout the paper, we justify this new application theoretically and practically through simulations. In addition, we report on first experiments indicating its practical use in this field. We design different algorithms and adapt other strategies to cover all the combinations with an optimum and effective test set. Our findings indicate that combinatorial test design can find the combinations that lead to optimum design. Besides this, we also found that by increasing the strength of combination, we can approach the optimum design, to the point that with only a 4-way combinatorial set we can get the effectiveness of an exhaustive test set. This significantly reduces the number of tests needed and thus leads to an approach that optimizes the design of parameters quickly.
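    To make the reduction concrete, here is a toy greedy generator for a pairwise (t = 2) covering array over a few invented controller parameters; the paper's actual AVR/FOPID parameter ranges and higher interaction strengths are not reproduced here:

```python
from itertools import combinations, product

# Invented parameter levels, loosely in the spirit of FOPID tuning knobs.
params = {
    'Kp': [0.5, 1.0, 1.5],
    'Ki': [0.1, 0.4],
    'Kd': [0.2, 0.6],
    'lam': [0.8, 1.0],
}
names = list(params)


def pairs(test):
    # All parameter-value pairs covered by one complete test case.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}


# Enumerate every pair that must be covered at strength t = 2.
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va, vb in product(params[a], params[b])}

suite = []
while uncovered:
    # Greedily pick the full assignment covering the most remaining pairs.
    best = max((dict(zip(names, values)) for values in product(*params.values())),
               key=lambda t: len(pairs(t) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

exhaustive = 1
for values in params.values():
    exhaustive *= len(values)
print(f'{len(suite)} pairwise tests vs {exhaustive} exhaustive combinations')
```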

  • 27.
    Ahmed, Bestoun
    et al.
    Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Switzerland.
    Gambardella, Luca
    Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Switzerland.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Zamli, Kamal
    University Malaysia Pahang, Gambang, Malaysia.
    Handling Constraints in Combinatorial Interaction Testing in the Presence of Multi Objective Particle Swarm and Multithreading (2017). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 86, no 01, p. 20-36. Article in journal (Refereed)
    Abstract [en]

    Combinatorial strategies have received a lot of attention lately as a result of their diverse applications in areas of research, particularly in software engineering. In its simple form, a combinatorial strategy can reduce several input parameters (configurations) of a system into a small set of these parameters based on their interaction (combination). However, in practice, the input configurations of software systems are subjected to constraints, especially in highly configurable systems. Implementing this feature within a strategy raises many construction difficulties. While there are many combinatorial interaction testing strategies nowadays, few of them support constraints. This paper presents a new strategy, called Octopus, to construct combinatorial interaction test suites in the presence of constraints. The design and algorithms are provided in the paper in detail. The strategy is inspired by the behaviour of the octopus, searching for the optimal solution using a multi-threading mechanism. To overcome the multiple judgement criteria for an optimal solution, multi-objective particle swarm optimisation is used. The strategy and its algorithms are evaluated extensively using different benchmarks and comparisons. The evaluation results showed the efficiency of each algorithm in the strategy. The benchmarking results also showed that Octopus can generate test suites efficiently compared to state-of-the-art strategies.

  • 28.
    Ahmed, Bestoun
    et al.
    Czech Technical University, Czech Republic.
    Zamli, Kamal
    University Malaysia Pahang, Gambang, Malaysia.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bures, Miroslav
    Czech Technical University, Czech Republic.
    Constrained Interaction Testing: A Systematic Literature Study. 2017. In: IEEE Access, E-ISSN 2169-3536, Vol. PP, no 99. Article, book review (Refereed)
    Abstract [en]

    Interaction testing can be used to effectively detect faults that are otherwise difficult to find by other testing techniques. However, in practice, the input configurations of software systems are subject to constraints, especially in the case of highly configurable systems. Handling constraints effectively and efficiently in combinatorial interaction testing is a challenging problem. Nevertheless, researchers have attacked this challenge through different techniques, and much progress has been achieved in the past decade. Thus, it is useful to reflect on the current achievements and shortcomings and to identify potential areas of improvement. This paper presents the first comprehensive and systematic literature study to structure and categorize the research contributions for constrained interaction testing. Following the guidelines for conducting a literature study, relevant data are extracted from a set of 103 research papers on constrained interaction testing. The topics addressed in constrained interaction testing research are classified into four categories: constrained test generation, application, generation & application, and model validation studies. The papers within each of these categories are extensively reviewed. Apart from answering several other research questions, this study also discusses the applications of constrained interaction testing in several domains, such as software product lines, fault detection & characterization, test selection, security and GUI testing. The study ends with a discussion of limitations, challenges and future work in the area.

  • 29.
    Aronsson Karlsson, Viktor
    et al.
    Mälardalen University.
    Almasri, Ahmed
    Mälardalen University.
    Enoiu, Eduard Paul
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Charbachi, P.
    Volvo Construction Equipment AB, Eskilstuna, Sweden.
    Automation of the creation and execution of system level hardware-in-loop tests through model-based testing. 2022. In: A-TEST - Proc. Int. Workshop Autom. Test Case Des., Select., Eval., co-located ESEC/FSE, Association for Computing Machinery, Inc., 2022, p. 9-16. Conference paper (Refereed)
    Abstract [en]

    In this paper, we apply model-based testing (MBT) to automate the creation of hardware-in-loop (HIL) test cases. To select MBT tools, the properties of different tools were compared through a literature study, resulting in the selection of GraphWalker and MoMuT for an industrial case study. The results show that the generated test cases perform similarly to their manual counterparts, with both achieving full requirements coverage. Regarding the effort needed to apply the methods, a comparable effort is required for the first iteration, while with every subsequent update MBT requires less effort than the manual process. Both methods achieve 100% requirements coverage; however, since manual tests are created and executed by humans, some requirements are favoured over others due to company demands, while MBT tests are generated randomly. In addition, a comparison between the tools showcased the differences in the models' design and their test case generation: GraphWalker has a more straightforward design method and is better suited for smaller systems, while MoMuT can handle more complex systems but has a more involved design method.
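
    As a rough illustration of how such tools derive tests, the sketch below performs a random walk over a state-transition model until every transition is exercised, broadly the generator/stop-condition style that GraphWalker-like tools use. The HIL model, states and actions here are hypothetical, and this is not either tool's actual API.

    ```python
    # Model-based test generation in miniature: random walk over a behavioural
    # model until all transitions (edges) are covered; the visited actions form
    # the test case. The machine model below is invented for illustration.
    import random

    transitions = {  # state -> list of (action, next_state)
        "Idle":    [("start_engine", "Running")],
        "Running": [("engage_load", "Loaded"), ("stop_engine", "Idle")],
        "Loaded":  [("release_load", "Running")],
    }

    def generate_test(start="Idle", seed=None):
        rng = random.Random(seed)
        remaining = {(s, a) for s, outs in transitions.items() for a, _ in outs}
        state, steps = start, []
        while remaining:
            action, nxt = rng.choice(transitions[state])
            steps.append(action)            # each action becomes a test step
            remaining.discard((state, action))
            state = nxt
        return steps

    print(generate_test(seed=1))
    ```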

  • 30.
    Arshad, I.
    et al.
    SRI, TUS, Athlone, Ireland.
    Alsamhi, S. H.
    SRI, TUS, Athlone, Ireland.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Big Data Testing Techniques: Taxonomy, Challenges and Future Trends. 2023. In: Computers, Materials and Continua, ISSN 1546-2218, E-ISSN 1546-2226, Vol. 74, no 2, p. 2739-2770. Article in journal (Refereed)
    Abstract [en]

    Big Data is reforming many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining performance and data quality. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing its testing techniques and challenges has not yet been available. Therefore, we have systematically reviewed the evidence on Big Data testing techniques published in the period 2010–2021. This paper discusses the testing of data processing by highlighting the techniques used in every processing phase. Furthermore, we discuss the challenges and future directions. Our findings show that diverse functional, non-functional and combined (functional and non-functional) testing techniques have been used to solve specific problems related to Big Data, and that most of the testing challenges are faced during the MapReduce validation phase. In addition, combinatorial testing is one of the most frequently applied techniques, often in combination with other techniques (i.e., random testing, mutation testing, input space partitioning and equivalence testing), to find various functional faults in Big Data systems.

  • 31.
    Ayerdi, J.
    et al.
    University of Mondragon, Spain.
    Garciandia, A.
    Ikerlan.
    Arrieta, A.
    University of Mondragon, Spain.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Enoiu, Eduard Paul
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Agirre, A.
    Ikerlan.
    Sagardui, G.
    University of Mondragon, Spain.
    Arratibel, M.
    Orona.
    Sellin, O.
    Bombardier Transportation.
    Towards a Taxonomy for Eliciting Design-Operation Continuum Requirements of Cyber-Physical Systems. 2020. In: Proceedings of the IEEE International Conference on Requirements Engineering, IEEE Computer Society, 2020, p. 280-290. Conference paper (Other academic)
    Abstract [en]

    Software systems embedded in autonomous Cyber-Physical Systems (CPSs) usually have a long life-cycle, both in development and in maintenance. This software evolves during its life-cycle to incorporate new requirements and bug fixes, and to deal with hardware obsolescence. The current process for developing and maintaining this software is very fragmented, which makes developing new software versions and deploying them in the CPSs extremely expensive. In other domains, such as web engineering, the phases of development and operation are tightly connected, making it possible to easily perform software updates and to obtain operational data that engineers can analyze at development time. However, although the rise of new communication technologies (e.g., 5G) provides an opportunity to adopt Design-Operation Continuum Engineering methods in the context of CPSs, many complex issues still need to be addressed, such as those related to hardware-software co-design. The process of Design-Operation Continuum Engineering for CPSs therefore requires substantial changes with respect to the current fragmented software development process. In this paper, we build a taxonomy for Design-Operation Continuum Engineering of CPSs based on case studies from two industrial domains involving CPSs (elevation and railway). This taxonomy is then used to elicit requirements from the two case studies in order to present a blueprint for adopting Design-Operation Continuum Engineering in any organization developing CPSs.

  • 32.
    Bajceta, Aleksandar
    et al.
    Mälardalen University.
    Leon, Miguel
    Mälardalen University, School of Innovation, Design and Engineering, Innovation and Product Realisation.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lindberg, P.
    Alstom Sweden, Västerås, Sweden.
    Bohlin, Markus
    Mälardalen University, School of Innovation, Design and Engineering, Innovation and Product Realisation.
    Using NLP Tools to Detect Ambiguities in System Requirements - A Comparison Study. 2022. In: CEUR Workshop Proceedings, CEUR-WS, 2022, Vol. 3122. Conference paper (Refereed)
    Abstract [en]

    Requirements engineering is a time-consuming process, and it can benefit significantly from automated tool support. Ambiguity detection in natural language requirements is a challenging problem in the requirements engineering community, and several Natural Language Processing tools and techniques have been developed to address it. However, there is a lack of empirical evaluation of these tools. We aim to contribute to the understanding of their empirical performance by evaluating four tools on a dataset of 180 system requirements from an electric train propulsion system, provided by our industrial partner Alstom. The tools selected for this study are Automated Requirements Measurement (ARM), Quality Analyzer for Requirement Specifications (QuARS), REquirements Template Analyzer (RETA), and Requirements Complexity Measurement (RCM). Our analysis showed that the selected tools can achieve high recall (two of them reached 0.85 and 0.98) but struggle to achieve high precision. RCM, a tool developed in-house by Alstom, achieved the highest precision in our study, 0.68.
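
    For illustration, the sketch below implements a toy lexical detector in the spirit of ARM/QuARS-style tools (flagging requirements that contain vague terms) and scores it with precision and recall against a hand-labelled ground truth. The word list, requirements and labels are invented; the four tools in the study use richer analyses.

    ```python
    # Toy ambiguity detector plus the precision/recall scoring used to compare
    # such tools. Everything below is illustrative, not any tool's actual rules.
    VAGUE = {"appropriate", "adequate", "as needed", "user-friendly", "fast"}

    def flag(requirement: str) -> bool:
        text = requirement.lower()
        return any(term in text for term in VAGUE)

    # (requirement text, ground-truth label: True = ambiguous)
    reqs = {
        "R1": ("The system shall respond within 200 ms.", False),
        "R2": ("The system shall respond fast.", True),
        "R3": ("Braking shall engage as needed.", True),
    }
    tp = sum(flag(t) and g for t, g in reqs.values())
    fp = sum(flag(t) and not g for t, g in reqs.values())
    fn = sum(not flag(t) and g for t, g in reqs.values())
    print("precision", tp / (tp + fp), "recall", tp / (tp + fn))
    ```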

  • 33.
    Barrett, Ayodele
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Enoiu, Eduard Paul
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    On the Current State of Academic Software Testing Education in Sweden. 2023. In: Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2023, Institute of Electrical and Electronics Engineers Inc., 2023, p. 397-404. Conference paper (Refereed)
    Abstract [en]

    Software development personnel who are well trained in the art and science of software testing will effectively and efficiently develop quality software products with potentially fewer and less critical defects. Software testing education is therefore considered an important part of the curriculum for a university degree in Computer Science or Information Systems. The objective of this paper is to determine how much dedicated knowledge in the field of software testing is taught within Swedish universities. To achieve this objective, a systematic search of syllabi for software testing-related courses was performed. Of 25 Swedish universities offering Computer Science (or related) degrees, 14 currently offer dedicated courses in software testing. Among the findings: 32% of the individual courses were offered at the undergraduate level; 28% of the universities offer courses for specialised testing training; and, for the vast majority of the universities, dedicated software testing courses account for about 5% of the total degree credits. While some universities fare better than others, the overall state of academic software testing education in Sweden is limited but promising.

  • 34.
    Bashir, Shariq
    et al.
    Mohammad Ali Jinnah University, Islamabad, Pakistan.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Baig, Rauf
    Al Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia.
    Opinion-based entity ranking using learning to rank. 2016. In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 38, no 1, p. 151-163. Article in journal (Refereed)
    Abstract [en]

    As social media and e-commerce on the Internet continue to grow, opinions have become one of the most important sources of information for users to base their decisions on. Unfortunately, the large quantity of opinions makes it difficult for an individual to comprehend and evaluate them all in a reasonable amount of time: users have to read a large number of opinions of different entities before making any decision. Recently a new retrieval task in information retrieval, known as Opinion-Based Entity Ranking (OpER), has emerged. OpER directly ranks relevant entities based on how well opinions on them match a user's preferences, which are given in the form of queries. With such a capability, users do not need to read the large number of opinions available for the entities. Previous research on OpER does not take into account the importance and subjectivity of query keywords in individual opinions of an entity; entity relevance scores are computed primarily on the basis of query-keyword occurrences, treating all opinions of an entity as a single field of text. Intuitively, entities that have positive judgments and strong relevance to the query keywords should be ranked higher than entities with poor relevance and negative judgments. This paper outlines several ranking features and develops an intuitive framework for OpER in which entities are ranked according to how well their individual opinions match the user's query keywords. As a useful ranking model may be constructed from many ranking features, we apply a learning-to-rank approach based on genetic programming (GP) to combine the features into an effective retrieval model for the OpER task. The proposed approach is evaluated on two collections and is found to be significantly more effective than the standard OpER approach.
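
    The sketch below illustrates the core OpER intuition with a hand-written scoring function: each entity is scored opinion by opinion, weighting query-keyword matches by the opinion's sentiment, so negatively judged entities sink in the ranking. The paper instead learns the feature combination with genetic programming; the entities, opinions and sentiment values here are invented.

    ```python
    # Opinion-level scoring for entity ranking: matches in positive opinions
    # raise an entity's score, matches in negative opinions lower it. A fixed
    # hand-written combination stands in for the paper's learned GP model.
    def score(entity_opinions, query_terms):
        total = 0.0
        for text, sentiment in entity_opinions:   # sentiment in [-1, 1]
            words = set(text.lower().split())
            matched = sum(t in words for t in query_terms)
            total += matched * sentiment          # negative opinions push scores down
        return total

    hotels = {
        "HotelA": [("very clean rooms", 0.9), ("clean but noisy location", -0.2)],
        "HotelB": [("rooms not clean", -0.8)],
    }
    query = ["clean", "rooms"]
    ranking = sorted(hotels, key=lambda h: score(hotels[h], query), reverse=True)
    print(ranking)
    ```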

  • 35.
    Betz, Stefanie
    et al.
    Blekinge Institute of Technology.
    Smite, Darja
    Blekinge Institute of Technology.
    Fricker, Samuel
    Blekinge Institute of Technology.
    Moss, Andrew
    Blekinge Institute of Technology.
    Afzal, Wasif
    Bahria University, Pakistan.
    Svahnberg, Mikael
    Blekinge Institute of Technology.
    Wohlin, Claes
    Blekinge Institute of Technology.
    Börstler, Jürgen
    Blekinge Institute of Technology.
    Gorschek, Tony
    Blekinge Institute of Technology.
    An evolutionary perspective on socio-technical congruence: The rubber band effect. 2013. In: International Symposium on Empirical Software Engineering and Measurement: Proceedings of the 3rd International Workshop on Replication in Empirical Software Engineering Research (RESER'13), Collocated with 7th International Symposium on Empirical Software Engineering and Measurement (ESEM'13), Baltimore, United States, 2013. Conference paper (Refereed)
    Abstract [en]

    Conway's law assumes a strong association between a system's architecture and the communication structure of the organization that designs it. In contemporary software development, when many companies rely on geographically distributed teams, which are often temporary in composition and thus have a frequently changing communication structure, the importance of Conway's law and the work it has inspired grows. In this paper, we examine empirical research related to Conway's law and its application to cross-site coordination. Based on the results obtained, we conjecture that changes in the communication structure alone sooner or later trigger changes in the design structure of the software products, returning the socio-technical system to a state of congruence. We use this to formulate the concept of a rubber band effect and propose a replication study that goes beyond the original idea of Conway's law by investigating the evolution of socio-technical congruence over time.

  • 36.
    Bilic, Damir
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Carlson, Jan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sundmark, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Wallin, Peter
    RISE SICS Västerås, Sweden.
    Detecting Inconsistencies in Annotated Product Line Models. 2020. In: ACM International Conference Proceeding Series, 2020, Vol. F164267-A, p. 252-262, article id 20. Conference paper (Refereed)
    Abstract [en]

    Model-based product line engineering combines the reuse practices of product line engineering with graphical modeling for the specification of software-intensive systems. Variability is usually described in separate variability models, while the implementation of the variable systems is specified in system models using modeling languages such as SysML. Most SysML modeling tools with variability support implement the annotation-based modeling approach. Annotated product line models tend to be error-prone, since the modeler implicitly describes every possible variant in a single system model. To identify variability-related inconsistencies, in this paper we first define restrictions on the use of SysML for annotative modeling, in order to avoid situations where resulting instances of the annotated model may contain ambiguous model constructs. Second, inter-feature constraints are extracted from the annotated model, based on relations between elements that are annotated with features. By analyzing the constraints, we can identify whether the combined variability and system model can result in incorrect or ambiguous instances. The evaluation of our prototype implementation shows the potential of the approach by identifying inconsistencies in the product line model of our industrial partner that had gone undetected through several iterations of the model.
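
    A minimal sketch of the constraint-extraction idea, under invented model content: a dependency from an element annotated with feature F to an element annotated with feature G yields the implication "F requires G", and every configuration the (here deliberately too permissive) feature model allows is checked against the derived implications.

    ```python
    # Deriving inter-feature constraints from an annotated model and checking
    # them against the variability model by enumeration. All model content is
    # hypothetical; the paper works on SysML models, not Python dictionaries.
    from itertools import product

    annotation = {"CruiseCtrl": "CRUISE", "RadarSensor": "RADAR"}
    dependencies = [("CruiseCtrl", "RadarSensor")]  # element x uses element y

    # Derived implication: CRUISE requires RADAR.
    implications = [(annotation[x], annotation[y]) for x, y in dependencies]

    features = ["CRUISE", "RADAR"]

    def allowed(cfg):
        # Hypothetical, deliberately unconstrained feature model: it permits
        # CRUISE without RADAR, which the derived implications will expose.
        return True

    for values in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, values))
        if allowed(cfg) and any(cfg[f] and not cfg[g] for f, g in implications):
            print("inconsistent variant:", cfg)
    ```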

  • 37.
    Bilic, Damir
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sundmark, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Wallin, Peter
    RISE SICS Västerås, Sweden.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Amlinger, C.
    Volvo Construction Equipment, Eskilstuna, Sweden.
    Barkah, D.
    Volvo Construction Equipment, Eskilstuna, Sweden.
    Towards a Model-Driven Product Line Engineering Process - An Industrial Case Study. 2020. In: ACM International Conference Proceeding Series, New York: Association for Computing Machinery, 2020, article id 3385043. Conference paper (Refereed)
    Abstract [en]

    Many organizations developing software-intensive systems face challenges with high product complexity and large numbers of variants. In order to effectively maintain and develop these product variants, Product-Line Engineering methods are often considered, while Model-based Systems Engineering practices are commonly utilized to tackle product complexity. In this paper, we report on an industrial case study concerning the ongoing adoption of Product Line Engineering in the Model-based Systems Engineering environment at Volvo Construction Equipment (Volvo CE) in Sweden. In the study, we identify and define a Product Line Engineering process that is aligned with Model-based Systems Engineering activities at the engines control department of Volvo CE. Furthermore, we discuss the implications of the migration from the current development process to a Model-based Product Line Engineering-oriented process. This process, and its implications, are derived by conducting and analyzing interviews with Volvo CE employees, inspecting artifacts and documents, and by means of participant observation. Based on the results of a first system model iteration, we were able to document how Model-based Systems Engineering and variability modeling will affect development activities, work products and stakeholders of the work products.

  • 38.
    Bilic, Damir
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sundmark, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Wallin, Peter
    RISE SICS, Västerås, Sweden.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Amlinger, Christoffer
    Volvo CE, Eskilstuna, Sweden.
    Model-Based Product Line Engineering in an Industrial Automotive Context: An Exploratory Case Study. 2018. In: 1st Intl. Workshop on Variability and Evolution of Software-intensive Systems (VariVolution'18), 2018. Conference paper (Refereed)
    Abstract [en]

    Product Line Engineering is an approach to reusing assets of complex systems by taking advantage of commonalities between product families. Reuse within complex systems usually means reuse of artifacts from different engineering domains, such as mechanical, electronics and software engineering. Model-based systems engineering is becoming a standard for systems engineering and for collaboration across these domains. This paper presents an exploratory case study on initial efforts to adopt Product Line Engineering practices within the model-based systems engineering process at Volvo Construction Equipment (Volvo CE), Sweden. We used SysML to create overloaded models of the engine systems at Volvo CE, capturing the variability within the engine systems with the Orthogonal Variability Modeling language. The case study showed us that overloaded SysML models tend to become complex even for small-scale systems, which in turn makes scalability of the approach a major challenge. For successful reuse, and possibly to tackle scalability, a database of reusable assets from which product variants can be derived is necessary.

  • 39.
    Brahneborg, Daniel
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Infoflex Connect AB.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A Lightweight Architecture Analysis of a Monolithic Messaging Gateway. 2020. In: Proceedings - 2020 IEEE International Conference on Software Architecture Companion, ICSA-C 2020, Salvador, Bahia, Brazil: IEEE, 2020, p. 25-32, article id 9095659. Conference paper (Refereed)
    Abstract [en]

    Background: The Enterprise Messaging Gateway (EMG) from Infoflex Connect (ICAB) is a monolithic system used to deliver mobile text messages (SMS) worldwide. The companies using it have diverse requirements on both functionality and quality attributes and would thus benefit from more versatile customizations, e.g. regarding authorization and data replication.

    Objective: ICAB needed help in assessing the current architecture of EMG in order to find candidates for architectural changes, as well as to fulfil the need for variability in meeting the wide range of customer requirements.

    Method: We analysed EMG using a lightweight version of ATAM (Architectural Trade-off Analysis Method) to get a better understanding of how different architectural decisions would affect the trade-offs between the quality requirements from the identified stakeholders.

    Result: Using the results of this structured approach, it was easy for ICAB to identify the functionality that needed to be improved. It also became clear that the selected component should be converted into a set of microservices, each one optimized for a specific set of customers.

    Limitation: The stakeholder requirements were gathered intermittently during a long period of continuous engagement, but there is a chance some of their requirements were still not communicated to us.

    Conclusion: Even though this ATAM study was performed internally at ICAB without direct involvement from any external stakeholders, documenting the elicited quality attribute requirements and relating them to the EMG architecture provided new, unexpected and valuable insights into the system with rather small effort.

  • 40.
    Brahneborg, Daniel
    et al.
    Infoflex Connect AB, Stockholm, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A Black-Box Approach to Latency and Throughput Analysis. 2017. In: Proceedings - 2017 IEEE International Conference on Software Quality, Reliability and Security Companion, QRS-C 2017, 2017, p. 603-604, article id 8004393. Conference paper (Refereed)
    Abstract [en]

    To enable fast and reliable delivery of mobile text messages (SMS), special bidirectional protocols are often used. Measuring the achieved throughput and the latency involved is, however, non-trivial due to the complexity of these protocols. Modifying an existing system would incur too much risk, so instead a new tool was created to analyse, in a black-box fashion, the log files containing information about this traffic. When the produced raw data was converted into graphs, these gave new insights into the behaviour of both the protocols and the remote systems involved.
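
    A conceptual sketch of such black-box log analysis: pair submit and response records by message id, derive per-message round-trip times, and count completed messages per second for throughput. The log format below is invented; the real EMG log layout is not given in the paper summary.

    ```python
    # Black-box latency/throughput analysis over a traffic log, in miniature.
    from collections import Counter

    log = [  # (timestamp_seconds, event, message_id) -- hypothetical records
        (10.00, "submit", "msg1"), (10.04, "resp", "msg1"),
        (10.01, "submit", "msg2"), (10.30, "resp", "msg2"),
    ]

    sent, rtts, per_second = {}, {}, Counter()
    for ts, event, msg_id in sorted(log):
        if event == "submit":
            sent[msg_id] = ts
        elif event == "resp" and msg_id in sent:
            rtts[msg_id] = ts - sent.pop(msg_id)   # round-trip time
            per_second[int(ts)] += 1               # completed messages per second

    print("RTTs:", rtts)
    print("throughput (msgs/s):", dict(per_second))
    ```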

  • 41.
    Brahneborg, Daniel
    et al.
    Infoflex Connect AB, Stockholm, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A Pragmatic Perspective on Regression Testing Challenges. 2017. In: Proceedings - 2017 IEEE International Conference on Software Quality, Reliability and Security Companion, QRS-C 2017, Prague, Czech Republic, 2017, p. 618-619, article id 8004401. Conference paper (Refereed)
    Abstract [en]

    Regression testing research has received significant focus during the past decades, acknowledging the benefits it can provide to organisations in terms of reduced development and maintenance costs as well as sustained end-user satisfaction. Several challenges remain to be overcome before industry can fully take advantage of the available research results in this area. To get a better overview of how current regression testing research fits with today's industrial practices, we read a selection of papers in the field and, based on our experience, critically examined their content. As a result, we present and discuss a taxonomy of regression testing challenges, from the perspectives of both methods and organisations, that we believe will foster the industrial uptake of regression testing.

  • 42.
    Brahneborg, Daniel
    et al.
    Infoflex Connect AB, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Björkman, Mats
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Superlinear and Bandwidth Friendly Geo-replication for Store-And-Forward Systems. 2020. In: ICSOFT 2020 - Proceedings of the 15th International Conference on Software Technologies, SciTePress, 2020, p. 328-338. Conference paper (Refereed)
    Abstract [en]

    To keep internet-based services available despite inevitable local internet and power outages, their data must be replicated to one or more other sites. For most systems using the store-and-forward architecture, data loss can also be prevented by using end-to-end acknowledgements. So far we have not found any sufficiently good solutions for replication of data in store-and-forward systems without acknowledgements and with geographically separated system nodes. We therefore designed a new replication protocol, which takes advantage of the lack of a global order between the messages and accepts a slightly higher risk of duplicated deliveries than existing protocols. We tested a proof-of-concept implementation of the protocol for throughput and latency in a controlled experiment using 7 nodes in 4 geographically separated areas, and observed the throughput increasing superlinearly with the number of nodes, up to almost 3500 messages per second. It is also, to the best of our knowledge, the first replication protocol whose bandwidth usage scales with the number of nodes allowed to fail rather than the total number of nodes in the system.

  • 43.
    Brahneborg, Daniel
    et al.
    Infoflex Connect AB, Stockholm, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sundmark, Daniel
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Björkman, Mats
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Round-Trip Time Anomaly Detection. 2018. In: ICPE '18 Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, 2018, p. 107-114. Conference paper (Refereed)
    Abstract [en]

    Mobile text messages (SMS) are sometimes used for authentication, which requires short and reliable delivery times. The observed round-trip times when sending an SMS message provide valuable information on the quality of the connection. In this industry paper, we propose a method for detecting round-trip time anomalies, where the exact distribution is unknown, the variance spans several orders of magnitude, and there are many shorter spikes that should be ignored. In particular, we show that an adaptation of Double Seasonal Exponential Smoothing, to reduce the content-dependent variations, followed by the Remedian, to find short-term and long-term medians, successfully identifies larger groups of outliers. As training data for our method we use log files from a live SMS gateway, and to verify the effectiveness of our approach we use simulated data. Our contributions are a description of how to isolate content-dependent variations, and the sequence of steps needed to find significant anomalies in big data.
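
    A simplified sketch of this pipeline: exponential smoothing tracks the slowly varying level, and points whose residual greatly exceeds a windowed median of recent residuals are flagged. The paper's Double Seasonal Exponential Smoothing and Remedian are replaced here by single smoothing and a plain running median; the thresholds and data are illustrative.

    ```python
    # Smoothing-plus-median anomaly detection, simplified from the approach
    # described above. Short isolated spikes barely move the median, while a
    # sustained shift produces residuals far above it and gets flagged.
    from statistics import median

    def detect(rtts, alpha=0.3, window=20, k=5.0):
        level, residuals, anomalies = rtts[0], [], []
        for i, x in enumerate(rtts):
            residuals.append(abs(x - level))
            med = median(residuals[-window:]) or 1e-9   # avoid divide-by-zero logic
            if residuals[-1] > k * med:
                anomalies.append(i)                     # large deviation: flag it
            else:
                level = alpha * x + (1 - alpha) * level # learn only from normal points
        return anomalies

    rtts = [1.0, 1.1, 0.9, 1.2, 1.0, 9.5, 9.8, 1.0, 1.1]
    print(detect(rtts))   # flags the sustained jump at indices 5 and 6
    ```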

  • 44.
    Brahneborg, Daniel
    et al.
    Braxo AB, Stockholm, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Resilient Conflict-free Replicated Data Types without Atomic Broadcast. 2022. In: Proceedings of the 17th International Conference on Software Technologies (ICSOFT) / [ed] Hans-Georg Fill; Marten van Sinderen; Leszek Maciaszek, 2022, p. 516-523. Conference paper (Refereed)
    Abstract [en]

    In a distributed system, applications can perform both reads and updates without costly synchronous network round-trips by using Conflict-free Replicated Data Types (CRDTs). Most CRDTs are based on some variant of atomic broadcast, as that enables them to support causal dependencies between updates of multiple objects. However, the overhead of atomic broadcast is unnecessary in systems handling only independent CRDT objects. We identified a set of use cases for tracking resource usage where there is a need for a replication mechanism with less complexity and network usage than atomic broadcast. In this paper, we present the design of such a replication protocol, which efficiently leverages the commutativity of CRDTs. The proposed protocol, CReDiT (CRDT enhanced with intelligence), uses up to four communication steps per update, but these steps can be batched as needed. It uses network resources only when updates need to be communicated. Furthermore, it is less sensitive to server failures than current state-of-the-art solutions, as other nodes can use new values already after the first communication step, instead of after two or more.
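
    For background, the sketch below shows a grow-only counter (G-Counter), a classic CRDT of the independent, commutative kind that CReDiT targets for resource-usage tracking. It illustrates why no atomic broadcast is needed (merges are idempotent and commutative), but it is not the CReDiT protocol itself.

    ```python
    # G-Counter CRDT: each node increments only its own slot; merge takes the
    # element-wise maximum, so replicas converge regardless of message order
    # or duplication, and no atomic broadcast is required.
    class GCounter:
        def __init__(self, node_id, n_nodes):
            self.node_id, self.slots = node_id, [0] * n_nodes

        def increment(self, amount=1):
            self.slots[self.node_id] += amount   # only touch our own slot

        def merge(self, other_slots):
            # Idempotent and commutative: reordered or repeated merges are harmless.
            self.slots = [max(a, b) for a, b in zip(self.slots, other_slots)]

        def value(self):
            return sum(self.slots)

    a, b = GCounter(0, 2), GCounter(1, 2)
    a.increment(3); b.increment(2)
    a.merge(b.slots); b.merge(a.slots)
    assert a.value() == b.value() == 5
    ```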

  • 45.
    Brahneborg, Daniel
    et al.
    Infoflex Connect AB, Stockholm, Sweden.
    Causevic, Adnan
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Björkman, Mats
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Towards a more reliable store-and-forward protocol for mobile text messages. 2018. In: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, Association for Computing Machinery, 2018, p. 13-20. Conference paper (Refereed)
    Abstract [en]

    Businesses often use mobile text messages (SMS) as a cost-effective and universal way of communicating concise information to their customers. Today, these messages are usually sent via SMS brokers, which forward them further to the next stakeholder, typically the various mobile operators, until the messages eventually reach the intended recipients. Infoflex Connect AB delivers an SMS gateway application to the brokers, with the main responsibility of reliable message delivery within set quality thresholds. However, the protocols used for SMS communication are not designed for reliability, and thus messages may be lost. In this position paper we deduce requirements for a new protocol for routing messages through the SMS gateway application running at a set of broker nodes, in order to increase reliability. The requirements cover important topics for the required communication protocol, such as event ordering, message handling and system membership. The specification of these requirements sets the foundation for the forthcoming design, implementation and evaluation of such a protocol.

  • 46.
    Brahneborg, Daniel
    et al.
    Braxo AB, S-11864 Stockholm, Sweden.
    Duvignau, Romaric
    Chalmers University of Technology, Department of Computer Science and Engineering, S-41296 Gothenburg, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    GeoRep: Resilient Storage for Wide Area Networks. 2022. In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 75772-75788. Article in journal (Refereed)
    Abstract [en]

    Embedded systems typically have limited processing and storage capabilities, and may only intermittently be powered on. After sending data from its sensors upstream, the system must therefore be able to trust that the data, once acknowledged, is not lost. The purpose of this work is to propose a novel solution for replicating data between the upstream nodes in such systems, with a minimal effect on the software architecture. On the assumption that there is no relative order between replicated data tuples, we designed a new replication protocol based on partial replication. Our protocol uses only 2 communication steps per data tuple, instead of the 3 to 12 used by other solutions. We verified its failover mechanism in a proof-of-concept implementation of the protocol using simulated network failures, and evaluated the implementation on throughput and latency in several controlled experiments using up to 7 nodes in up to 5 geographically separated areas, with up to 1000 data producers per node. The recorded system throughput increased linearly relative to both the number of nodes and the number of data producers. For comparison, Paxos showed a performance similar to our protocol when using 3 nodes, but got slower as more nodes were added. The lack of a relative order, in combination with partial replication, enables our system to continue working during network partitions, not only in the part containing the majority of the nodes, but also in any sufficiently large minority partitions.

  • 47.
    Doganay, Kivanc
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. SICS Swedish ICT AB, Kista, Sweden.
    Eldh, Sigrid
    Ericsson AB, Kista, Sweden; Karlstad University, Karlstad, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bohlin, Markus
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. SICS Swedish ICT AB, Kista, Sweden.
    Search-Based Testing for Embedded Telecom Software with Complex Input Structures. 2014. In: Testing Software and Systems (ICTSS 2014) / [ed] Merayo, M.G.; De Oca, E.M., Springer-Verlag Berlin, 2014, p. 205-210. Conference paper (Refereed)
    Abstract [en]

    In this paper, we discuss the application of search-based software testing techniques for unit level testing of a real-world telecommunication middleware at Ericsson. Our current implementation analyzes the existing test cases to handle non-trivial variables such as uninitialized pointers, and to discover any setup code that needs to run before the actual test case, such as setting global system parameters. Hill climbing (HC) and (1+1) evolutionary algorithm (EA) metaheuristic search algorithms are used to generate input data for branch coverage. We compare HC, (1+1) EA, and random search with respect to effectiveness, measured as branch coverage, and efficiency, measured as the number of executions needed. Difficulties arising from the specialized execution environment, and the adaptations made to handle them, are also discussed.
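
    A compact sketch of the (1+1) EA in this setting: mutate a candidate input vector and keep the mutant whenever it does not worsen the branch distance of a target branch. The function under test and its distance function are invented stand-ins for the Ericsson middleware.

    ```python
    # (1+1) evolutionary algorithm for test-data generation: one parent, one
    # mutated child per iteration, survival of the equal-or-better candidate.
    import random

    def branch_distance(x, y):
        # Hypothetical target branch: x == 2 * y. Classic distance |x - 2y|,
        # which reaches 0 exactly when the branch is covered.
        return abs(x - 2 * y)

    def one_plus_one_ea(seed=0, max_evals=10_000):
        rng = random.Random(seed)
        parent = [rng.randint(-1000, 1000), rng.randint(-1000, 1000)]
        best = branch_distance(*parent)
        for _ in range(max_evals):
            if best == 0:
                break                        # target branch covered
            child = [v + rng.choice((-1, 1)) * rng.randint(1, 10)
                     if rng.random() < 0.5 else v
                     for v in parent]
            fit = branch_distance(*child)
            if fit <= best:                  # accept equal-or-better mutants
                parent, best = child, fit
        return parent, best

    print(one_plus_one_ea())
    ```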

  • 48.
    Doganay, Kivanc
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Eldh, Sigrid
    Ericsson AB, Kista, Sweden.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bohlin, Markus
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Search-based Testing for Embedded Telecommunication Software with Complex Input Structures: An Industrial Case Study. 2014. Report (Other academic)
    Abstract [en]

    In this paper, we discuss the application of search-based software testing techniques for unit level testing of a real-world telecommunication middleware at Ericsson. Input data for the system under test consists of nested data structures, and includes non-trivial variables such as uninitialized pointers. Our current implementation analyzes the existing test cases to discover how to handle pointers, set global system parameters, and any other setup code that needs to run before the actual test case. Hill climbing (HC) and (1+1) evolutionary algorithm (EA) metaheuristic search algorithms are used to generate input data for branch coverage. We compare HC, (1+1) EA, and random search as a baseline of performance with respect to effectiveness, measured as branch coverage, and efficiency, measured as the number of executions needed. Difficulties arising from the specialized execution environment and the adaptations for handling these problems are also discussed.

  • 49.
    Fatima, R.
    et al.
    School of Software, Tsinghua University, Beijing, China.
    Yasin, A.
    School of Software, Tsinghua University, Beijing, China.
    Liu, L.
    School of Software, Tsinghua University, Beijing, China.
    Wang, J.
    School of Software, Tsinghua University, Beijing, China.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sharing information online rationally: An observation of user privacy concerns and awareness using serious game. 2019. In: Journal of Information Security and Applications, ISSN 2214-2134, E-ISSN 2214-2126, Vol. 48, article id 102351. Article in journal (Refereed)
    Abstract [en]

    Recent studies have shown that excessive online information disclosure is a major cause of privacy breaches, as it makes it easy for social engineers to gather information about their targets. The objective of this study is to gather the user privacy concerns reported in the literature and categorize them into themes, then design a serious game covering the categorized privacy concerns and evaluate the educational effect of the game regarding the dangers associated with excessive online information disclosure. We conducted a literature review and extracted user privacy concerns reported in 109+ publications. We then designed a serious game and empirically evaluated the game players' awareness of the dangers associated with excessive online information disclosure. We find that privacy awareness has a positive long-term impact on users' online behavior in terms of controlled information sharing. However, social networking needs drive users to share information online even when they know the potential risks. The proposed serious game shows a positive effect in improving the privacy awareness of participants.

  • 50.
    Fatima, Rubia
    et al.
    School of Software, Tsinghua University, Beijing, China.
    Yasin, Affan
    School of Software, Tsinghua University, Beijing, China.
    Liu, Lin
    School of Software, Tsinghua University, Beijing, China.
    Wang, Jianmin
    School of Software, Tsinghua University, Beijing, China.
    Afzal, Wasif
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Retrieving arXiv, SocArXiv, and SSRN metadata for initial review screening. 2023. In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 161, article id 107251. Article, review/survey (Refereed)
    Abstract [en]

    Context: Researchers around the globe invest a lot of time searching the literature when performing reviews (Systematic Literature Reviews (SLRs), Multivocal Literature Reviews (MLRs)). The steps of performing a review include covering the grey literature, preprints, and quality-assessed non-peer-reviewed literature, in order to minimize publication bias. The initial screening of the papers takes time, and the bibliographic information is only available to researchers online. Objective: The objective of our study is to propose, design, and develop a method that helps the research community download the basic information of papers (title, abstract, author) for a searched query from arXiv, SSRN, and SocArXiv (Social Science ArXiv). Method: We used web scraping to extract data from the servers and save it in an Excel file; a Python program retrieves the desired query results from the databases. Two methods for downloading the metadata of a searched query are discussed in the study. Results: We used different queries (such as "grey literature", "testing software", and "python") to exercise the proposed method, and cross-verified the results with the online search results of the databases. Conclusion: Initial results from preliminary pilot evaluations show that this is a viable method to search, download, and shortlist research article information (title, abstract, etc.) from arXiv, SSRN, and SocArXiv. More evaluations are needed for external validity.
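
    As a sketch of the arXiv part of such a method (the paper's own scripts, and its SSRN/SocArXiv handling, are not reproduced here), the public arXiv Atom API can be queried and the title/author/abstract fields written to a CSV file that opens in Excel:

    ```python
    # Retrieve screening metadata from the public arXiv Atom API and save it
    # as CSV. Only the arXiv endpoint is shown; SSRN and SocArXiv differ.
    import csv
    import requests      # pip install requests
    import feedparser    # pip install feedparser

    def fetch_arxiv(query, max_results=20, out="screening.csv"):
        resp = requests.get(
            "http://export.arxiv.org/api/query",
            params={"search_query": f'all:"{query}"', "start": 0,
                    "max_results": max_results},
            timeout=30,
        )
        feed = feedparser.parse(resp.text)   # the API returns an Atom feed
        with open(out, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["title", "authors", "abstract"])
            for e in feed.entries:
                authors = "; ".join(a.name for a in e.authors)
                writer.writerow([e.title, authors, e.summary])
        return len(feed.entries)

    print(fetch_arxiv("grey literature"), "records saved")
    ```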
