Publications (10 of 67)
Sadovykh, A., Bagnato, A., Truscan, D., Pierini, P., Bruneliere, H., Gómez, A., . . . Afzal, W. (2020). A Tool-Supported Approach for Building the Architecture and Roadmap in MegaM@Rt2 Project. In: Adv. Intell. Sys. Comput. Paper presented at 7 June 2018 through 8 June 2018 (pp. 265-274). Springer Verlag, 925
2020 (English). In: Adv. Intell. Sys. Comput., Springer Verlag, 2020, Vol. 925, p. 265-274. Conference paper, Published paper (Refereed)
Abstract [en]

MegaM@Rt2 is a large European project dedicated to providing a model-based methodology and supporting tooling for systems engineering at a large scale. It notably targets the continuous development and runtime validation of complex systems by developing the MegaM@Rt2 framework to address a large set of engineering processes and application domains. This collaborative project involves 27 partners from 6 countries, 9 industrial case studies, and over 30 different tools from project partners (and others). In the context of the project, we opted for a pragmatic model-driven approach to specify the case study requirements, design the high-level architecture of the MegaM@Rt2 framework, perform a gap analysis between the industrial needs and the current state of the art, and plan a first framework development roadmap accordingly. The present paper concentrates on concrete examples of the tooling approach for building the framework architecture. In particular, we discuss collaborative modeling, requirements definition tooling, the approach to component modeling, traceability, and document generation. The paper also provides a brief discussion of the practical lessons we have learned so far.

Place, publisher, year, edition, pages
Springer Verlag, 2020
Keywords
Architecture, Document generation, Model-driven engineering, Modelio, Requirement engineering, SysML, Traceability, UML, Computer programming, Computer science, Application programs
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-43179 (URN), 10.1007/978-3-030-14687-0_24 (DOI), 2-s2.0-85064182093 (Scopus ID), 9783030146863 (ISBN)
Conference
7 June 2018 through 8 June 2018
Available from: 2019-04-26 Created: 2019-04-26 Last updated: 2019-04-26. Bibliographically approved
Mehmood, M. A., Khan, M. N. & Afzal, W. (2019). Automating Test Data Generation for Testing Context-Aware Applications. In: Proceedings of the IEEE International Conference on Software Engineering and Service Sciences, ICSESS. Paper presented at 9th IEEE International Conference on Software Engineering and Service Science, ICSESS 2018, 23 November 2018 through 25 November 2018 (pp. 104-108). IEEE Computer Society
2019 (English). In: Proceedings of the IEEE International Conference on Software Engineering and Service Sciences, ICSESS, IEEE Computer Society, 2019, p. 104-108. Conference paper, Published paper (Refereed)
Abstract [en]

Context-aware applications are an emerging class of applications in modern computing. These applications can determine and adapt to their situational context to provide a better user experience. Testing them is not straightforward and poses several challenges, such as developing context-aware test cases and generating test data. However, by employing model-based testing techniques, the testing process for context-aware applications can be automated. To achieve the maximum degree of automation, it is necessary to automate model transformation, test data generation, and test case execution. Executing test cases requires test data, and developing test data for context-aware applications is a challenging task. The aim of this study is to address this issue; thus, we propose automated test data generation for functional testing of context-aware applications. Automated test data generation can reduce testing time and cost, enabling test engineers to execute more testing cycles and attain a higher degree of test coverage.
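The abstract does not detail the generation technique itself, but one simple way to derive test data from a context model is to enumerate combinations of context values. The sketch below illustrates that idea only; the context model and its values are hypothetical, not from the paper:

```python
from itertools import product

def generate_context_data(context_model):
    """Exhaustively enumerate context combinations as test data: a minimal
    sketch of automated test data generation for a context-aware application."""
    names = sorted(context_model)
    return [dict(zip(names, values))
            for values in product(*(context_model[n] for n in names))]

# Hypothetical context model for a location-aware application.
model = {"network": ["wifi", "4g"], "location": ["indoor", "outdoor"]}
data = generate_context_data(model)
print(len(data))  # 2 x 2 = 4 context combinations
```

In practice, exhaustive enumeration explodes combinatorially, which is why sampling or pairwise (combinatorial) selection is usually applied on top of such an enumeration.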

Place, publisher, year, edition, pages
IEEE Computer Society, 2019
Keywords
Context-Aware Applications, Software Testing, Test Data, Application programs, Automatic test pattern generation, Automation, Metadata, Model checking, Automated test data generation, Context aware applications, Emerging applications, Model based testing, Model transformation, Situational context, Test data generation
National Category
Computer Sciences; Software Engineering
Identifiers
urn:nbn:se:mdh:diva-43069 (URN), 10.1109/ICSESS.2018.8663920 (DOI), 2-s2.0-85063624322 (Scopus ID), 9781538665640 (ISBN)
Conference
9th IEEE International Conference on Software Engineering and Service Science, ICSESS 2018, 23 November 2018 through 25 November 2018
Available from: 2019-05-09 Created: 2019-05-09 Last updated: 2019-05-09
Strandberg, P. E., Enoiu, E. P., Afzal, W., Daniel, S. & Feldt, R. (2019). Information Flow in Software Testing: An Interview Study with Embedded Software Engineering Practitioners. IEEE Access, 7, 46434-46453
2019 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 46434-46453. Article in journal (Refereed), Published
Abstract [en]

Organizing the activities of software testing is a challenge for companies that develop embedded systems, where multiple functional teams and technologically difficult tasks are common. This study aims at exploring the information flow in software testing, together with the perceived challenges and good approaches for a more effective information flow. We conducted semi-structured interviews with twelve software practitioners working at five organizations in the embedded software industry in Sweden. The interviews were analyzed by means of thematic analysis. The data were classified into six themes that affect the information flow in software testing: testing and troubleshooting, communication, processes, technology, artifacts, and organization. We further identified a number of challenges, such as poor feedback and understanding exactly what has been tested, and approaches, such as fast feedback and custom automated test reporting, for achieving an improved information flow. Our results indicate that there are many opportunities to improve this information flow; a first mitigation step is to better understand the challenges and approaches. Future work is needed to realize this in practice, for example to shorten feedback cycles between roles and to enhance the exploration and visualization of test results.

National Category
Software Engineering
Identifiers
urn:nbn:se:mdh:diva-40930 (URN), 10.1109/ACCESS.2019.2909093 (DOI), 000465621200001 (), 2-s2.0-85064750453 (Scopus ID)
Funder
Knowledge Foundation, 20150277
Available from: 2018-09-13 Created: 2018-09-13 Last updated: 2019-05-09. Bibliographically approved
Tahvili, S., Pimentel, R., Afzal, W., Ahlberg, M., Fornander, E. & Bohlin, M. (2019). sOrTES: A Supportive Tool for Stochastic Scheduling of Manual Integration Test Cases. IEEE Access, 7, 12928-12946
2019 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 12928-12946. Article in journal (Refereed), Published
Abstract [en]

The main goal of software testing is to detect as many hidden bugs as possible in the final software product before release. Generally, a software product is tested by executing a set of test cases, either manually or automatically. The number of test cases required to test a software product depends on several parameters, such as the product's type, size, and complexity. Executing all test cases in no particular order can waste time and resources. Test optimization offers a partial solution for saving time and resources, which can lead to the final software product being released earlier. In this regard, test case selection, prioritization, and scheduling can be considered possible solutions for test optimization. Most companies do not provide direct support for ranking test cases on their own servers. In this paper, we introduce, apply, and evaluate sOrTES, our decision support system for scheduling manual integration test cases. sOrTES is a Python-based supportive tool which schedules manual integration test cases written in natural language text. The feasibility of sOrTES is studied through an empirical evaluation performed on a railway use case at Bombardier Transportation, Sweden. The empirical evaluation indicates that around 40% of testing failures can be avoided by using the execution schedules proposed by sOrTES, which increases requirements coverage by up to 9.6%.
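A core ingredient of any such scheduler is respecting dependencies between test cases, i.e., never executing a case before the cases it depends on. The sketch below shows only that dependency-respecting ordering step; sOrTES's actual stochastic ranking is more elaborate, and the test case names and dependencies here are hypothetical:

```python
def schedule(test_cases, depends_on):
    """Order test cases so each case runs only after all of its
    prerequisites: a simple topological-style schedule."""
    order, done = [], set()
    remaining = list(test_cases)
    while remaining:
        for tc in remaining:
            if set(depends_on.get(tc, ())) <= done:  # all prerequisites done
                order.append(tc)
                done.add(tc)
                remaining.remove(tc)
                break
        else:
            raise ValueError("cyclic dependency among test cases")
    return order

cases = ["tc_brake", "tc_power", "tc_doors"]
deps = {"tc_brake": ["tc_power"], "tc_doors": ["tc_power"]}
print(schedule(cases, deps))  # tc_power is scheduled first
```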

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Software testing, integration testing, test optimization, decision support systems, stochastic test scheduling, manual testing, scheduler algorithm, dependency
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-42989 (URN), 10.1109/ACCESS.2019.2893209 (DOI), 000458177800064 (), 2-s2.0-85061302207 (Scopus ID)
Available from: 2019-03-29 Created: 2019-03-29 Last updated: 2019-04-10. Bibliographically approved
Strandberg, P. E., Ostrand, T. J., Weyuker, E., Daniel, S. & Afzal, W. (2018). Automated test mapping and coverage for network topologies. In: ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis. Paper presented at 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018, 16 July 2018 through 21 July 2018 (pp. 73-83). Association for Computing Machinery, Inc
2018 (English). In: ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, Association for Computing Machinery, Inc, 2018, p. 73-83. Conference paper, Published paper (Refereed)
Abstract [en]

Communication devices such as routers and switches play a critical role in the reliable functioning of embedded system networks. Dozens of such devices may be part of an embedded system network, and they need to be tested in conjunction with various computational elements on actual hardware, in many different configurations that are representative of actual operating networks. An individual physical network topology can be used as the basis for a test system that can execute many test cases, by identifying the part of the physical network topology that corresponds to the configuration required by each individual test case. Given a set of available test systems and a large number of test cases, the problem is to determine for each test case, which of the test systems are suitable for executing the test case, and to provide the mapping that associates the test case elements (the logical network topology) with the appropriate elements of the test system (the physical network topology). We studied a real industrial environment where this problem was originally handled by a simple software procedure that was very slow in many cases, and also failed to provide thorough coverage of each network's elements. In this paper, we represent both the test systems and the test cases as graphs, and develop a new prototype algorithm that a) determines whether or not a test case can be mapped to a subgraph of the test system, b) rapidly finds mappings that do exist, and c) exercises diverse sets of network nodes when multiple mappings exist for the test case. The prototype has been implemented and applied to over 10,000 combinations of test cases and test systems, and reduced the computation time by a factor of more than 80 from the original procedure. In addition, relative to a meaningful measure of network topology coverage, the mappings achieved an increased level of thoroughness in exercising the elements of each test system.
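The core mapping step described above, deciding whether a test case's logical topology embeds into a test system's physical topology, is a subgraph monomorphism search. The following is a minimal brute-force sketch of that idea; the paper's prototype algorithm is far more sophisticated, and the adjacency-dict encoding and node names here are hypothetical illustrations:

```python
from itertools import permutations

def edges(adj):
    # Normalize an undirected adjacency dict into a set of sorted node pairs.
    return {tuple(sorted((a, b))) for a, nbrs in adj.items() for b in nbrs}

def find_mapping(logical, physical):
    """Return one injective node mapping that embeds the logical topology
    (the test case) into the physical topology (the test system), or None."""
    l_nodes, p_nodes = sorted(logical), sorted(physical)
    l_edges, p_edges = edges(logical), edges(physical)
    for cand in permutations(p_nodes, len(l_nodes)):
        m = dict(zip(l_nodes, cand))
        if all(tuple(sorted((m[a], m[b]))) in p_edges for a, b in l_edges):
            return m
    return None

# A test case needing one router linked to two devices, and a larger test system.
logical = {"router": ["dev1", "dev2"], "dev1": ["router"], "dev2": ["router"]}
physical = {"R1": ["A", "B", "C"], "A": ["R1"], "B": ["R1"], "C": ["R1"]}
print(find_mapping(logical, physical))
```

Brute force is exponential in the topology size; a practical implementation prunes candidates by node degree and type, which is also where coverage-driven diversification of the chosen mapping can be introduced.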

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc, 2018
Keywords
Network topology, Subgraph isomorphism, Test coverage, Testing, Embedded systems, Mapping, Test facilities, Topology, Communication device, Computational elements, Industrial environments, Physical network topologies, Prototype algorithms, Software testing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-40528 (URN), 10.1145/3213846.3213859 (DOI), 2-s2.0-85051515196 (Scopus ID), 9781450356992 (ISBN)
Conference
27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018, 16 July 2018 through 21 July 2018
Available from: 2018-08-23 Created: 2018-08-23 Last updated: 2018-10-02. Bibliographically approved
Tahvili, S., Hatvani, L., Felderer, M., Afzal, W., Saadatmand, M. & Bohlin, M. (2018). Cluster-Based Test Scheduling Strategies Using Semantic Relationships between Test Specifications. In: 5th International Workshop on Requirements Engineering and Testing RET'18. Paper presented at 5th International Workshop on Requirements Engineering and Testing RET'18, 02 Jun 2018, Gothenburg, Sweden (pp. 1-4), F137811
2018 (English). In: 5th International Workshop on Requirements Engineering and Testing RET'18, 2018, Vol. F137811, p. 1-4. Conference paper, Published paper (Refereed)
Abstract [en]

One of the challenging issues in improving test efficiency is achieving a balance between testing goals and testing resources. Test execution scheduling is one way of saving time and budget, where a set of test cases is grouped and tested at the same time. To obtain an optimal test execution schedule, all related information about a test case (e.g., execution time, functionality to be tested, dependency and similarity with other test cases) needs to be analyzed. The test scheduling problem becomes more complicated in high-level testing, such as integration testing, and especially in manual testing procedures. High-level test specifications are generally written in natural language by humans and usually contain ambiguity and uncertainty. Therefore, analyzing a test specification demands a strong learning algorithm. In this position paper, we propose a natural language processing (NLP) based approach that, given test specifications at the integration level, allows automatic detection of test cases' semantic dependencies. The proposed approach utilizes the Doc2Vec algorithm to convert each test case into a vector in n-dimensional space. These vectors are then grouped into semantic clusters using the HDBSCAN clustering algorithm. Finally, a set of cluster-based test scheduling strategies is proposed for execution. The proposed approach has been applied to a sub-system from the railway domain by analyzing an ongoing testing project at Bombardier Transportation AB, Sweden.
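The vectorize-then-cluster pipeline can be illustrated with plain building blocks. The sketch below substitutes a bag-of-words vector for Doc2Vec and greedy single-link grouping for HDBSCAN, so it only shows the shape of the pipeline, not the paper's actual algorithms; the sample specifications and the similarity threshold are hypothetical:

```python
import math
from collections import Counter

def vectorize(spec):
    # Toy stand-in for Doc2Vec: a term-frequency (bag-of-words) vector.
    return Counter(spec.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def cluster(specs, threshold=0.5):
    """Greedy single-link grouping by cosine similarity: a simple stand-in
    for HDBSCAN's density-based clustering of test-case vectors."""
    clusters = []
    for spec in specs:
        v = vectorize(spec)
        for group in clusters:
            if any(cosine(v, vectorize(s)) >= threshold for s in group):
                group.append(spec)
                break
        else:
            clusters.append([spec])
    return clusters

specs = [
    "verify brake signal on train startup",
    "verify brake signal after emergency stop",
    "check cabin door opens at station",
]
print(cluster(specs))  # the two brake specs land in one cluster
```

Each resulting cluster can then be scheduled as a unit, which is the basis of the cluster-based strategies the paper proposes.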

Keywords
Software testing, Test scheduling, NLP, Dependency, Clustering, Doc2Vec, Optimization, HDBSCAN
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-38953 (URN), 10.1145/3195538.3195540 (DOI), 2-s2.0-85051238162 (Scopus ID), 9781450357494 (ISBN)
Conference
5th International Workshop on Requirements Engineering and Testing RET'18, 02 Jun 2018, Gothenburg, Sweden
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems; TOCSYC - Testing of Critical System Characteristics (KKS); MegaMaRt2 - Megamodelling at Runtime (ECSEL/Vinnova); TESTOMAT Project - The Next Level of Test Automation
Available from: 2018-05-15 Created: 2018-05-15 Last updated: 2018-08-23. Bibliographically approved
Strandberg, P. E., Afzal, W. & Daniel, S. (2018). Decision Making and Visualizations Based on Test Results. In: Empirical Software Engineering and Measurement, 12th International Symposium on ESEM18. Paper presented at Empirical Software Engineering and Measurement, 12th International Symposium on ESEM18, 11 Oct 2018, Oulu, Finland, Article ID 34.
2018 (English). In: Empirical Software Engineering and Measurement, 12th International Symposium on ESEM18, 2018, article id 34. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Testing is one of the main methods for quality assurance in the development of embedded software, as well as in software engineering in general. Consequently, test results (and how they are reported and visualized) may substantially influence business decisions in software-intensive organizations. Aims: This case study examines the role of test results from automated nightly software testing and the visualizations for decision making they enable at an embedded systems company in Sweden. In particular, we want to identify how the visualizations support decisions from three aspects: in daily work, at feature branch merge, and at release time. Method: We conducted an embedded case study with multiple units of analysis, using interviews, questionnaires, archival data, and participant observations. Results: Several visualizations and reports built on top of the test results database are utilized to support daily work, merging a feature branch to the master, and releasing. Important visualizations include lists of failing test cases, easy access to log files, and heatmap trend plots. The industrial practitioners perceived the visualizations and reporting as valuable; however, they also mentioned several areas of improvement, such as better ways of visualizing test coverage in a functional area and better navigation between different views. Conclusions: We conclude that visualizations of test results are a vital decision-making tool for a variety of roles and tasks in embedded software development; however, the visualizations need to be continuously improved to keep their value for their stakeholders.

Keywords
Software Testing, Visualizations, Decision Making
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-40902 (URN), 10.1145/3239235.3268921 (DOI), 2-s2.0-85053207166 (Scopus ID), 978-1-4503-5823-1 (ISBN)
Conference
Empirical Software Engineering and Measurement, 12th International Symposium on ESEM18, 11 Oct 2018, Oulu, Finland
Projects
TOCSYC - Testing of Critical System Characteristics (KKS); The Volvo chair of vehicular electronics and software architecture; ITS ESS-H Industrial Graduate School in Reliable Embedded Sensor Systems; TESTMINE - Mining Test Evolution for Improved Software Regression Test Selection (KKS)
Available from: 2018-09-13 Created: 2018-09-13 Last updated: 2019-04-02. Bibliographically approved
Tahvili, S., Afzal, W., Saadatmand, M., Bohlin, M. & Hasan Ameerjan, S. (2018). ESPRET: A Tool for Execution Time Estimation of Manual Test Cases. Journal of Systems and Software, 146, 26-41
2018 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 146, p. 26-41. Article in journal (Refereed), Published
Abstract [en]

Manual testing is still a predominant and important approach for the validation of computer systems, particularly in certain domains such as safety-critical systems. Knowing the execution time of test cases is important for test scheduling, prioritization, and progress monitoring. In this work, we present, apply, and evaluate ESPRET (EStimation and PRediction of Execution Time), our tool for estimating and predicting the execution time of manual test cases based on their test specifications. Our approach works by extracting timing information for the various steps in a manual test specification. This information is then used to estimate the maximum time for test steps that have not previously been executed but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations and check whether timing information on the various test steps is already available. Since executing test cases on different machines may take different amounts of time, we predict the actual execution time of test cases with a set of regression models. Finally, an empirical evaluation of the approach and tool has been performed on a railway use case at Bombardier Transportation (BT) in Sweden.
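The prediction step can be illustrated with the simplest possible regression model. The sketch below fits an ordinary least-squares line from test-case size to observed execution time; it is only a stand-in for ESPRET's actual set of regression models, and the historical data points are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x: a minimal stand-in for
    the regression models fitted to historical execution times."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical history: (number of manual test steps, observed minutes).
steps = [4, 6, 8, 10]
minutes = [9, 13, 17, 21]
a, b = fit_line(steps, minutes)
print(round(a + b * 12))  # predicted minutes for an unexecuted 12-step case -> 25
```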

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-40905 (URN), 10.1016/j.jss.2018.09.003 (DOI), 000451488900004 (), 2-s2.0-85053193472 (Scopus ID)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems; MegaMaRt2 - Megamodelling at Runtime (ECSEL/Vinnova); TESTOMAT Project - The Next Level of Test Automation
Available from: 2018-09-11 Created: 2018-09-11 Last updated: 2019-01-16. Bibliographically approved
Flemström, D., Enoiu, E. P., Afzal, W., Daniel, S., Gustafsson, T. & Kobetski, A. (2018). From natural language requirements to passive test cases using guarded assertions. In: Proceedings - 2018 IEEE 18th International Conference on Software Quality, Reliability, and Security, QRS 2018. Paper presented at 18th IEEE International Conference on Software Quality, Reliability, and Security, QRS 2018, 16 July 2018 through 20 July 2018 (pp. 470-481). Institute of Electrical and Electronics Engineers Inc.
2018 (English). In: Proceedings - 2018 IEEE 18th International Conference on Software Quality, Reliability, and Security, QRS 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 470-481. Conference paper, Published paper (Refereed)
Abstract [en]

In large-scale embedded system development, requirements are often expressed in natural language. Translating these requirements into executable test cases, while keeping the test cases and requirements aligned, is a challenging task. While such a transformation typically requires extensive domain knowledge, we show that a systematic process in combination with passive testing facilitates both the translation and the linking of requirements to tests. Passive testing approaches observe the behavior of the system and check its correctness without interfering with its normal behavior. We use a specific approach to passive testing: guarded assertions (G/A). This paper presents a method for transforming system requirements expressed in natural language into G/As. We further present a proof-of-concept evaluation, performed at Bombardier Transportation Sweden AB, in which we show how the process would be used, together with practical advice on the reasoning behind the translation steps.
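The G/A idea is that whenever a guard condition holds in an observed system state, its assertion must hold too. A minimal sketch of such a passive checker, replaying a recorded trace, might look as follows; the requirement, state fields, and trace are hypothetical illustrations, not the paper's case study:

```python
def check_trace(trace, guarded_assertions):
    """Passively replay recorded system states; whenever a guard holds,
    its assertion must also hold. Returns (step, name) for each violation."""
    violations = []
    for i, state in enumerate(trace):
        for name, guard, assertion in guarded_assertions:
            if guard(state) and not assertion(state):
                violations.append((i, name))
    return violations

# Hypothetical requirement: "when the train is moving, the doors must be closed".
gas = [("doors-closed-while-moving",
        lambda s: s["speed"] > 0,          # guard
        lambda s: not s["doors_open"])]    # assertion
trace = [{"speed": 0, "doors_open": True},
         {"speed": 40, "doors_open": False},
         {"speed": 40, "doors_open": True}]
print(check_trace(trace, gas))  # flags the third state
```

Because the checker only reads states, it can run against logs or live telemetry without influencing the system under test, which is the defining property of passive testing.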

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Computer software selection and evaluation, Embedded systems, Natural language processing systems, Software reliability, Bombardier Transportation, Domain knowledge, Large scale embedded systems, Natural language requirements, Natural languages, Proof of concept, System requirements, Systematic process, Translation (languages)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-40744 (URN), 10.1109/QRS.2018.00060 (DOI), 2-s2.0-85052319900 (Scopus ID), 9781538677575 (ISBN)
Conference
18th IEEE International Conference on Software Quality, Reliability, and Security, QRS 2018, 16 July 2018 through 20 July 2018
Available from: 2018-09-07 Created: 2018-09-07 Last updated: 2018-10-31. Bibliographically approved
Tahvili, S., Ahlberg, M., Fornander, E., Afzal, W., Saadatmand, M., Bohlin, M. & Sarabi, M. (2018). Functional Dependency Detection for Integration Test Cases. In: Proceedings - 2018 IEEE 18th International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018. Paper presented at 18th IEEE International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018, 16 July 2018 through 20 July 2018 (pp. 207-214). Institute of Electrical and Electronics Engineers Inc.
2018 (English). In: Proceedings - 2018 IEEE 18th International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 207-214. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a natural language processing (NLP) based approach that, given a software requirements specification, allows the detection of functional dependencies between integration test cases. We analyze a set of internal signals of the implemented modules to detect dependencies between requirements, and thereby identify dependencies between test cases, such that module 2 depends on module 1 if an output internal signal from module 1 enters as an input internal signal to module 2. Consequently, all requirements (and thereby test cases) for module 2 depend on all the designed requirements (and test cases) for module 1. The dependency information between requirements (and thus corresponding test cases) can be utilized for test case prioritization and scheduling. We have implemented our approach as a tool, and its feasibility is evaluated through an industrial use case in the railway domain at Bombardier Transportation (BT), Sweden.
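The signal-flow rule above ("module B depends on module A when an output of A is an input of B") can be sketched directly. The module names and signals below are hypothetical, and the sketch assumes the input/output signal sets have already been extracted from the specification:

```python
def detect_dependencies(modules):
    """modules maps a module name to its (input_signals, output_signals).
    Module B depends on module A when some output of A is an input of B,
    so every test case for B depends on the test cases for A."""
    deps = []
    for a, (_, outs_a) in modules.items():
        for b, (ins_b, _) in modules.items():
            if a != b and outs_a & ins_b:  # shared signal flows from a to b
                deps.append((b, a))        # (dependent, prerequisite)
    return sorted(deps)

modules = {
    "m1": ({"cmd"}, {"sig_x"}),
    "m2": ({"sig_x"}, {"sig_y"}),
    "m3": ({"sig_y"}, set()),
}
print(detect_dependencies(modules))  # m2 depends on m1, m3 depends on m2
```

The resulting pairs form a dependency graph over modules, which then transfers to their test cases for prioritization and scheduling.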

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Dependency, Internal Signals, NLP, Optimization, Software Requirement, Software Testing, C (programming language), Computer software selection and evaluation, Integral equations, Natural language processing systems, Requirements engineering, Software reliability, Testing, Bombardier Transportation, Dependency informations, Functional dependency, Software requirements, Software requirements specifications, Test case prioritization
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-40742 (URN), 10.1109/QRS-C.2018.00047 (DOI), 000449555600034 (), 2-s2.0-85052305334 (Scopus ID), 9781538678398 (ISBN)
Conference
18th IEEE International Conference on Software Quality, Reliability, and Security Companion, QRS-C 2018, 16 July 2018 through 20 July 2018
Available from: 2018-09-07 Created: 2018-09-07 Last updated: 2019-02-15. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-0611-2655
