Publications (10 of 29)
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2019). Statistical Model Checking for Real-Time Database Management Systems: A Case Study. In: The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019. Paper presented at The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 10 Sep 2019, Zaragoza, Spain.
2019 (English). In: The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Many industrial control systems manage critical data using Database Management Systems (DBMS). The correctness of transactions, especially their atomicity, isolation and temporal correctness, is essential for the dependability of the entire system. Existing methods and techniques, however, either lack the ability to analyze the interplay of these properties, or do not scale well for systems with large numbers of transactions, large amounts of data, and complex transaction management mechanisms. In this paper, we propose to analyze large-scale real-time database systems using statistical model checking. We propose a pattern-based framework, extending our previous work, to model the real-time DBMS as a network of stochastic timed automata, which can be analyzed by the UPPAAL Statistical Model Checker. We present an industrial case study, in which we design a collision avoidance system for multiple autonomous construction vehicles, via concurrency control of a real-time DBMS. The desired properties of the designed system are analyzed using our proposed framework.
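Statistical model checking sidesteps exhaustive state-space exploration by estimating the probability of a property from many randomized simulation runs. As a rough, self-contained illustration of that idea (plain Python with hypothetical timing parameters, not the paper's UPPAAL SMC models), the sketch below estimates the probability that a transaction with stochastic execution and blocking times misses its deadline:

```python
import random

def simulate_transaction(rng):
    # Hypothetical stochastic model (illustrative numbers only):
    # execution time is uniform, and with some probability the
    # transaction is blocked by concurrency control, adding an
    # exponentially distributed blocking delay.
    exec_time = rng.uniform(2.0, 6.0)  # ms
    blocking = rng.expovariate(1.0) if rng.random() < 0.3 else 0.0
    return exec_time + blocking

def estimate_deadline_miss(deadline, runs=100_000, seed=42):
    """Monte Carlo estimate of P(response time > deadline)."""
    rng = random.Random(seed)
    misses = sum(simulate_transaction(rng) > deadline for _ in range(runs))
    return misses / runs

print(f"P(miss 8 ms deadline) ~ {estimate_deadline_miss(8.0):.4f}")
```

In UPPAAL SMC the corresponding query would be a probability estimate such as `Pr[<=T](<> miss)`; the Monte Carlo loop above is the essence of how such estimates are produced.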

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-45045 (URN)
Conference
The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 10 Sep 2019, Zaragoza, Spain
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2019-08-22. Created: 2019-08-22. Last updated: 2019-08-22. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D., Seceleanu, C. & Larsson, A. (2019). Tool-supported design of data aggregation processes in cloud monitoring systems. Journal of Ambient Intelligence and Humanized Computing, 10(7), 2519-2535
2019 (English). In: Journal of Ambient Intelligence and Humanized Computing, ISSN 1868-5137, E-ISSN 1868-5145, Vol. 10, no 7, p. 2519-2535. Article in journal (Refereed), Published
Abstract [en]

Efficient monitoring of a cloud system involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A thorough understanding and analysis of the characteristics of data aggregation processes can help to improve the software quality and reduce development cost. In this paper, we propose a systematic approach for designing data aggregation processes in cloud monitoring systems. Our approach applies a feature-oriented taxonomy called DAGGTAX (Data AGGregation TAXonomy) to systematically specify the features of the designed system, and SAT-based analysis to check the consistency of the specifications. Following our approach, designers first specify the data aggregation processes by selecting and composing the features from DAGGTAX. These specified features, as well as design constraints, are then formalized as propositional formulas, whose consistency is checked by the Z3 SAT solver. To support our approach, we propose a design tool called SAFARE (SAt-based Feature-oriented dAta aggREgation design), which implements DAGGTAX-based specification of data aggregation processes and design constraints, and integrates the state-of-the-art solver Z3 for automated analysis. We also propose a set of general design constraints, which are integrated by default in SAFARE. The effectiveness of our approach is demonstrated via a case study provided by industry, which aims to design a cloud monitoring system for video streaming. The case study shows that DAGGTAX and SAFARE can help designers to identify reusable features, eliminate infeasible design decisions, and derive crucial system parameters.
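The consistency check at the heart of this approach is propositional satisfiability: a feature selection is consistent iff some complete assignment satisfies all design constraints. The paper discharges this with the Z3 solver; the sketch below substitutes brute-force enumeration over a tiny invented feature set (the feature names and rules are illustrative, not DAGGTAX's actual ones):

```python
from itertools import product

# Invented features of a data aggregation process and invented design
# rules, for illustration only.
FEATURES = ["push", "pull", "real_time", "deadline", "lossy"]

CONSTRAINTS = [
    lambda f: f["push"] != f["pull"],                    # exactly one trigger mode
    lambda f: (not f["real_time"]) or f["deadline"],     # real_time -> deadline
    lambda f: (not f["lossy"]) or (not f["real_time"]),  # assumed exclusion rule
]

def consistent(selection):
    """True iff some total assignment extends the (partial) selection
    and satisfies every constraint -- i.e. the formula is satisfiable."""
    free = [v for v in FEATURES if v not in selection]
    for bits in product([False, True], repeat=len(free)):
        assignment = dict(selection, **dict(zip(free, bits)))
        if all(c(assignment) for c in CONSTRAINTS):
            return True
    return False

print(consistent({"push": True, "real_time": True}))       # -> True
print(consistent({"real_time": True, "deadline": False}))  # -> False
```

A real SAT solver such as Z3 performs the same satisfiability test without enumerating assignments, which is what makes the approach scale beyond toy feature sets.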

Place, publisher, year, edition, pages
Springer Verlag, 2019
Keywords
Cloud monitoring system design, Consistency checking, Data aggregation, Feature model, Computer software selection and evaluation, Design, Quality control, Specifications, Taxonomies, Based specification, Cloud monitoring, Efficient monitoring, Feature modeling, Large amounts of data, Propositional formulas, Monitoring
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-43881 (URN); 10.1007/s12652-018-0730-6 (DOI); 000469922500004 (); 2-s2.0-85049591829 (Scopus ID)
Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2019-06-18. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2018). Effective Test Suite Design for Detecting Concurrency Control Faults in Distributed Transaction Systems. In: 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018. Paper presented at 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 30 Oct 2018, Limassol, Cyprus (pp. 355-374).
2018 (English). In: 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 2018, p. 355-374. Conference paper, Published paper (Refereed)
Abstract [en]

Concurrency control faults may lead to unwanted interleavings and breach data consistency in distributed transaction systems. However, due to the unpredictable delays between sites, detecting concurrency control faults in distributed transaction systems is difficult. In this paper, we propose a methodology, relying on model-based testing and mutation testing, for designing test cases to detect such faults. The generated test inputs are designated delays between distributed operations, while the outputs are occurrences of the unwanted interleavings that are consequences of the concurrency control faults. We mutate the distributed transaction specification with common concurrency control faults, and model the mutants as UPPAAL timed automata, in which the designated delays are encoded as stopwatches. Test cases are generated via reachability analysis using the UPPAAL Model Checker, and are selected to form an effective test suite. Our methodology can reduce redundant test cases, and find the appropriate delays to detect concurrency control faults effectively.
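The key observation is that a concurrency control fault only manifests under particular inter-site delays, so the delays themselves become the test inputs. The toy simulation below (plain Python, not the paper's stopwatch-automata models) plays a mutant with locking omitted: injecting a delay between one transaction's read and write lets the other transaction interleave, producing a lost update:

```python
# Toy mutant: locking has been removed, so T1 and T2 both do
# read-then-write (+10 each) on a shared balance. The test input is
# how many of T2's two steps are injected between T1's read and write.

def run_schedule(delay_ops):
    """Return the final balance with delay_ops in {0, 1, 2} of T2's
    steps interleaved between T1's read and T1's write."""
    balance = 100
    t1_read = balance                      # T1: read
    t2_read, t2_written = None, False
    if delay_ops >= 1:
        t2_read = balance                  # T2: read (interleaved)
    if delay_ops >= 2:
        balance = t2_read + 10             # T2: write (interleaved)
        t2_written = True
    balance = t1_read + 10                 # T1: write
    if t2_read is None:
        t2_read = balance                  # T2: read (runs after T1)
    if not t2_written:
        balance = t2_read + 10             # T2: write (stale read => lost update)
    return balance

def exposes_fault(delay_ops):
    # Any serial execution of two +10 transactions must yield 120.
    return run_schedule(delay_ops) != 120

print([exposes_fault(d) for d in (0, 1, 2)])  # -> [False, True, True]
```

Delay 0 is a redundant test case (the mutant survives); either non-zero delay kills it, which is exactly the kind of selection the reachability analysis automates.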

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41709 (URN); 10.1007/978-3-030-03424-5_24 (DOI); 978-3-030-03424-5 (ISBN)
Conference
8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 30 Oct 2018, Limassol, Cyprus
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2018-12-20. Created: 2018-12-20. Last updated: 2018-12-20. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2018). Specification and Formal Verification of Atomic Concurrent Real-Time Transactions. In: 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018. Paper presented at The 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 04 Dec 2018, Taipei, Taiwan.
2018 (English). In: 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Although atomicity, isolation and temporal correctness are crucial to the dependability of many real-time database-centric systems, the mechanism selected to assure one property may breach another. Trading off these properties requires specifying and analyzing their dependencies, together with the selected supporting mechanisms (abort recovery, concurrency control, and scheduling), which is still insufficiently supported. In this paper, we propose a UML profile, called UTRAN, for specifying atomic concurrent real-time transactions, with explicit support for all three properties and their supporting mechanisms. We also propose a pattern-based modeling framework, called UPPCART, to formalize the transactions and the mechanisms specified in UTRAN as UPPAAL timed automata. Various mechanisms can be modeled flexibly using our reusable patterns, after which the desired properties can be verified by the UPPAAL model checker. Our techniques facilitate systematic analysis of atomicity, isolation and temporal correctness trade-offs with guarantees, thus contributing to dependable real-time database systems.
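For a concrete handle on what an isolation violation is, the classical criterion is conflict serializability: a schedule is acceptable iff its precedence graph is acyclic. This is the textbook check, sketched below; it is not the UPPCART framework itself, which encodes such properties over timed automata:

```python
def precedence_edges(schedule):
    """schedule: ordered list of (txn, op, item), op in {'r', 'w'}.
    An edge (t1, t2) means some op of t1 conflicts with a later op of t2."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if x1 == x2 and t1 != t2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def conflict_serializable(schedule):
    """True iff the precedence graph has no cycle (DFS with colors)."""
    edges = precedence_edges(schedule)
    nodes = {t for t, _, _ in schedule}
    graph = {n: {b for a, b in edges if a == n} for n in nodes}
    state = {}  # node -> "visiting" | "done"
    def acyclic_from(n):
        if state.get(n) == "visiting":
            return False  # back edge: cycle found
        if state.get(n) == "done":
            return True
        state[n] = "visiting"
        ok = all(acyclic_from(m) for m in graph[n])
        state[n] = "done"
        return ok
    return all(acyclic_from(n) for n in nodes)

serial_ok   = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "r", "x"), ("T2", "w", "x")]
lost_update = [("T1", "r", "x"), ("T2", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]
print(conflict_serializable(serial_ok))    # -> True
print(conflict_serializable(lost_update))  # -> False
```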

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41710 (URN); 000462280800012 (); 978-1-5386-5700-3 (ISBN)
Conference
The 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 04 Dec 2018, Taipei, Taiwan
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2018-12-18. Created: 2018-12-18. Last updated: 2019-04-11. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). Customized Real-Time Data Management for Automotive Systems: A Case Study. In: IECON 2017 - 43RD ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY. Paper presented at 43rd Annual Conference of the IEEE Industrial Electronics Society IECON 2017, 30 Oct 2017, Beijing, China (pp. 8397-8404).
2017 (English). In: IECON 2017 - 43RD ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2017, p. 8397-8404. Conference paper, Published paper (Refereed)
Abstract [en]

Real-time DataBase Management Systems (RTDBMS) are considered a promising means to manage data for data-centric automotive systems. During the design of an RTDBMS, one must carefully trade off data consistency and timeliness, in order to achieve an acceptable level of both properties. Previously, we have proposed a design process called DAGGERS to facilitate a systematic customization of transaction models and decisions on run-time mechanisms. In this paper, we evaluate the applicability of DAGGERS via an industrially relevant case study that aims to design the transaction management for an on-board diagnostic system, which should guarantee both timeliness and data consistency under concurrent access. To achieve this, we apply the pattern-based approach of DAGGERS to formalize the transactions, and derive the appropriate isolation level and concurrency control algorithm guided by model checking. We show by simulation that the implementation of our designed system satisfies the desired timeliness and the derived isolation, and demonstrate that DAGGERS helps to customize the desired real-time transaction management prior to implementation.
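The abstract does not name the concurrency control algorithm the case study derives, so the sketch below is a generic illustration only: strict two-phase locking, one of the lock-based schemes such a design process chooses among. Holding the lock across the whole read-modify-write removes the lost-update interleavings that unrestricted concurrent access would allow:

```python
import threading

# Generic strict 2PL sketch (textbook scheme, not the algorithm derived
# in the paper's case study): each transaction locks the item before its
# read and releases only at "commit", serializing conflicting accesses.

class LockManager:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()  # protects the lock table itself

    def lock(self, item):
        with self._guard:
            self._locks.setdefault(item, threading.Lock())
        self._locks[item].acquire()

    def unlock(self, item):
        self._locks[item].release()

def increment_txn(lm, db, item, amount):
    lm.lock(item)                  # growing phase
    try:
        value = db[item]           # read
        db[item] = value + amount  # write: no lost update possible
    finally:
        lm.unlock(item)            # shrinking phase at commit

db = {"balance": 100}
lm = LockManager()
threads = [threading.Thread(target=increment_txn,
                            args=(lm, db, "balance", 10)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db["balance"])  # -> 120
```

The trade-off the paper analyzes is exactly the cost of this serialization: blocking time added by the locks versus the isolation they provide.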

Series
IEEE Industrial Electronics Society, ISSN 1553-572X
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37061 (URN); 10.1109/IECON.2017.8217475 (DOI); 000427164808042 (); 2-s2.0-85046624820 (Scopus ID); 978-1-5386-1127-2 (ISBN)
Conference
43rd Annual Conference of the IEEE Industrial Electronics Society IECON 2017, 30 Oct 2017, Beijing, China
Projects
DAGGERS - Data aggregation for embedded real-time database systems
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2017-11-07. Created: 2017-11-07. Last updated: 2018-05-24. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). DAGGTAX: A Taxonomy of Data Aggregation Processes. Västerås
2017 (English). Report (Other academic)
Abstract [en]

Data aggregation processes are essential constituents in many data management applications. Due to their complexity, designing data aggregation processes often demands considerable effort. A study of the features of data aggregation processes provides a comprehensive view for designers and eases the design process. Existing works either propose application-specific aggregation solutions, or focus on particular aspects of aggregation processes such as aggregate functions; hence they do not offer a high-level, generic description. In this paper, we propose a taxonomy of data aggregation processes called DAGGTAX, which builds on the results of an extensive survey within various application domains. Our work focuses on the features of aggregation processes and their implications, especially on temporal data consistency and process timeliness. We present our taxonomy as a feature diagram, which is a visual notation with formal semantics. The taxonomy can then serve as the foundation of a design tool that enables designers to build an aggregation process by selecting and composing desired features. Based on the implications of the features, we formulate three design rules that eliminate infeasible feature combinations. We also provide a set of design heuristics that could help designers to decide the appropriate mechanisms for achieving the selected features.

Place, publisher, year, edition, pages
Västerås, 2017
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35366 (URN); MDH-MRTC-319/2017-1-SE (ISRN)
Projects
DAGGERS
Funder
Knowledge Foundation
Available from: 2017-05-22. Created: 2017-05-22. Last updated: 2017-05-29. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). DAGGTAX: A taxonomy of data aggregation processes. In: Lecture Notes in Computer Science, vol. 10563. Paper presented at 7th International Conference on Model and Data Engineering (MEDI), Barcelona, Spain, Oct 04-06, 2017 (pp. 324-339). Springer Verlag
2017 (English). In: Lecture Notes in Computer Science, vol. 10563, Springer Verlag, 2017, p. 324-339. Conference paper, Published paper (Refereed)
Abstract [en]

Data aggregation processes are essential constituents for data management in modern computer systems, such as decision support systems and Internet of Things (IoT) systems. Due to the heterogeneity and real-time constraints in such systems, designing appropriate data aggregation processes often demands considerable effort. A study on the characteristics of data aggregation processes is then desirable, as it provides a comprehensive view of such processes, potentially facilitating their design, as well as the development of tool support to aid designers. In this paper, we propose a taxonomy called DAGGTAX, which is a feature diagram that models the common and variable characteristics of data aggregation processes, with a special focus on the real-time aspect. The taxonomy can serve as the foundation of a design tool, which we also introduce, enabling designers to build an aggregation process by selecting and composing desired features, and to reason about the feasibility of the design. We apply DAGGTAX to industrial case studies, showing that DAGGTAX not only strengthens understanding, but also facilitates the model-driven design of data aggregation processes. © 2017, Springer International Publishing AG.

Place, publisher, year, edition, pages
Springer Verlag, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743; 10563 LNCS
Keywords
Data aggregation taxonomy, Feature model, Real-time data management, Artificial intelligence, Decision support systems, Internet of things, Real time systems, Taxonomies, Aggregation process, Data aggregation, Feature modeling, Industrial case study, Internet of Things (IOT), Modern computer systems, Real time constraints, Real time data management, Information management
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-38313 (URN); 10.1007/978-3-319-66854-3_25 (DOI); 000439935200025 (); 2-s2.0-85030711559 (Scopus ID); 9783319668536 (ISBN)
Conference
7th International Conference on Model and Data Engineering (MEDI), Barcelona, SPAIN, OCT 04-06, 2017
Available from: 2018-02-12. Created: 2018-02-12. Last updated: 2018-08-17. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D., Seceleanu, C. & Larsson, A. (2017). Design of Cloud Monitoring Systems via DAGGTAX: A Case Study. Paper presented at The 8th International Conference on Ambient Systems, Networks and Technologies ANT 2017, 16 May 2017, Madeira, Portugal. Procedia Computer Science, 109, 424-431
2017 (English). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 109, p. 424-431. Article in journal (Refereed), Published
Abstract [en]

Efficient auto-scaling of cloud resources relies on the monitoring of the cloud, which involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A systematic way of describing the data together with the possible aggregations is beneficial for designers to reason about the properties of these aspects as well as their implications on the design, thus improving quality and lowering development costs. In this paper, we propose to apply DAGGTAX, a feature-oriented taxonomy for organizing common and variable data and aggregation process properties, to the design of cloud monitoring systems. We demonstrate the effectiveness of DAGGTAX via a case study provided by industry, which aims to design a cloud monitoring system that serves auto-scaling for a video streaming system. We design the cloud monitoring system by selecting and composing DAGGTAX features, and reason about the feasibility of the selected features. The case study shows that the application of DAGGTAX can help designers to identify reusable features, analyze trade-offs between selected features, and derive crucial system parameters.

Keywords
data aggregation, information system design, cloud monitoring system design
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35493 (URN); 10.1016/j.procs.2017.05.412 (DOI); 000414533000053 (); 2-s2.0-85021817536 (Scopus ID)
Conference
The 8th International Conference on Ambient Systems, Networks and Technologies ANT 2017, 16 May 2017, Madeira, Portugal
Projects
DAGGERS - Data aggregation for embedded real-time database systems
Available from: 2017-06-08. Created: 2017-06-08. Last updated: 2017-11-23. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2016). Towards the verification of temporal data consistency in Real-Time Data Management. In: 2016 2nd International Workshop on Modelling, Analysis, and Control of Complex CPS, CPS Data 2016. Paper presented at 2nd International Workshop on Modelling, Analysis, and Control of Complex CPS, CPS Data 2016, 11 April 2016. Article ID 7496422.
2016 (English). In: 2016 2nd International Workshop on Modelling, Analysis, and Control of Complex CPS, CPS Data 2016, 2016, Article ID 7496422. Conference paper, Published paper (Refereed)
Abstract [en]

Many Cyber-Physical Systems (CPSs) require both timeliness of computation and temporal consistency of their data. Therefore, when using real-time databases in a real-time CPS application, the Real-Time Database Management Systems (RTDBMSs) must ensure both transaction timeliness and temporal data consistency. RTDBMSs prevent unwanted interferences of concurrent transactions via concurrency control, which in turn has a significant impact on the timeliness and temporal consistency of data. It is therefore important to verify, already at early design stages, that these properties are not breached by the concurrency control. However, such early guarantees of the properties under concurrency control are most often missing. In this paper we show how to verify transaction timeliness and temporal data consistency using model checking. We model the transaction work units, the data and the concurrency control mechanism as a network of timed automata, and specify the properties in TCTL. The properties are then checked exhaustively and automatically using the UPPAAL model checker.
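Temporal data consistency is, at bottom, a bound on data age: an item x with absolute validity interval avi(x) is temporally valid at time t iff t - timestamp(x) <= avi(x). The paper checks such properties exhaustively in UPPAAL; the sketch below merely evaluates the definition over a logged trace with invented numbers:

```python
# Hypothetical trace-based check of absolute temporal validity
# (illustrative only; the paper's check is exhaustive model checking,
# not trace evaluation).

def stale_reads(updates, avi, reads):
    """updates: {item: [update times]}, avi: {item: validity interval},
    reads: list of (time, item). Returns the reads that used stale data."""
    stale = []
    for t, item in reads:
        last = max((u for u in updates[item] if u <= t), default=None)
        if last is None or t - last > avi[item]:
            stale.append((t, item))
    return stale

updates = {"speed": [0, 10, 20], "position": [0, 25]}
avi = {"speed": 12, "position": 30}
reads = [(15, "speed"), (35, "speed"), (40, "position")]
print(stale_reads(updates, avi, reads))  # -> [(35, 'speed')]
```

At time 35 the freshest speed sample is 15 time units old, exceeding its 12-unit validity interval; this is the kind of violation the TCTL properties rule out for every reachable state, not just one trace.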

Keywords
Complex networks, Embedded systems, Information management, Model checking, Real time systems, Concurrent transactions, Cyber physical systems (CPSs), Early design stages, Real time data management, Real-time database, Real-time database management systems, Temporal consistency, Uppaal model checkers, Concurrency control
National Category
Embedded Systems
Identifiers
urn:nbn:se:mdh:diva-32523 (URN); 10.1109/CPSData.2016.7496422 (DOI); 000390778200005 (); 2-s2.0-84982976149 (Scopus ID); 9781509011544 (ISBN)
Conference
2nd International Workshop on Modelling, Analysis, and Control of Complex CPS, CPS Data 2016, 11 April 2016
Available from: 2016-08-18. Created: 2016-08-18. Last updated: 2019-01-28. Bibliographically approved.
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2015). Trading-off Data Consistency for Timeliness in Real-Time Database Systems. In: 27th Euromicro Conference on Real-Time Systems ECRTS'15. Paper presented at 27th Euromicro Conference on Real-Time Systems ECRTS'15, 7-10 Jul 2015, Lund, Sweden (pp. 13-16).
2015 (English). In: 27th Euromicro Conference on Real-Time Systems ECRTS'15, 2015, p. 13-16. Conference paper, Published paper (Refereed)
Abstract [en]

In order to guarantee transaction timeliness, Real-time Database Management Systems (RTDBMSs) often relax data consistency by relaxing the ACID transaction properties. Such relaxation varies depending on the application, and thus different transaction management mechanisms have to be decided for developing a tailored RTDBMS. However, current RTDBMS development does not include systematic verification of timeliness and the desired ACID properties. Consequently, the implemented transaction management mechanisms may breach the timeliness of transactions. In this paper, we propose a process called DAGGERS for developing a tailored RTDBMS that guarantees timeliness and desired data consistency for real-time systems, by employing model-checking techniques during the process. Based on the characteristics of the desired data manipulations, transaction models are designed and then formally verified iteratively, together with selected run-time mechanisms, in order to achieve the desired trade-offs between timeliness and data consistency. The outcome of DAGGERS is thus a tailored transaction management with guaranteed appropriate trade-offs, as well as model-checked worst-case execution times and blocking times of transactions under these mechanisms and the assumptions of the hardware architecture.

Keywords
RTDBMS, timeliness, ACID, formal verification
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-30494 (URN)
Conference
27th Euromicro Conference on Real-Time Systems ECRTS'15, 7-10 Jul 2015, Lund, Sweden
Projects
DAGGERS - Data aggregation for embedded real-time database systems
Available from: 2015-12-22. Created: 2015-12-21. Last updated: 2015-12-22. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2898-9570
