Cai, Simin
Publications (10 of 12)
Kunnappiilly, A., Cai, S., Marinescu, R. & Seceleanu, C. (2019). Architecture modelling and formal analysis of intelligent multi-agent systems. In: ENASE 2019 - Proceedings of the 14th International Conference on Evaluation of Novel Approaches to Software Engineering. Paper presented at 14th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2019, 4 May 2019 through 5 May 2019 (pp. 114-126). SciTePress
Architecture modelling and formal analysis of intelligent multi-agent systems
2019 (English). In: ENASE 2019 - Proceedings of the 14th International Conference on Evaluation of Novel Approaches to Software Engineering, SciTePress, 2019, p. 114-126. Conference paper, Published paper (Refereed)
Abstract [en]

Modern cyber-physical systems usually assume a certain degree of autonomy. Such systems, like Ambient Assisted Living systems aimed at assisting elderly people in their daily life, often need to perform safety-critical functions, for instance, fall detection, health deviation monitoring, and communication with caregivers. In many cases, the system users are geographically distributed and have different needs that must be serviced intelligently and simultaneously. These features call for intelligent, adaptive, scalable and fault-tolerant system design solutions, which are well embodied by multi-agent architectures. Analyzing such complex architectures at the design phase, to verify whether an abstraction of the system satisfies all critical requirements, is beneficial. In this paper, we start from an agent-based architecture for ambient assisted living systems, inspired by the literature, which we model in the popular Architecture Analysis and Design Language. Since the latter lacks the ability to specify autonomous agent behaviours, which are often intelligent, non-deterministic or probabilistic, we extend the architectural language with a sub-language called Agent Annex, which we formally encode as a Stochastic Transition System. This contribution allows us to specify the behaviours of agents involved in agent-based architectures of cyber-physical systems, which we show how to exhaustively verify with the state-of-the-art model checker PRISM. As a final step, we apply our framework to a distributed ambient assisted living system, whose critical requirements we verify with PRISM.
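
The Agent Annex behaviours described above are encoded as stochastic transition systems and verified with PRISM. As a purely illustrative sketch of the kind of probabilistic reachability query such a model checker answers (the states, probabilities and property below are invented, not taken from the paper), the following Python fragment computes, by fixed-point iteration over a small discrete-time Markov chain, the probability that a fall-detection agent eventually notifies a caregiver.

    # Hypothetical 4-state Markov chain for a fall-detection agent:
    # 0 = monitoring, 1 = fall suspected, 2 = caregiver notified, 3 = alarm missed.
    # P[s][t] is the one-step transition probability from s to t (rows sum to 1).
    P = [
        [0.90, 0.10, 0.00, 0.00],
        [0.00, 0.20, 0.75, 0.05],
        [0.00, 0.00, 1.00, 0.00],   # absorbing: notification delivered
        [0.00, 0.00, 0.00, 1.00],   # absorbing: alarm missed
    ]

    def reach_probability(P, target, steps=200):
        """Probability of eventually reaching `target`, approximated by
        iterating x_s = sum_t P[s][t] * x_t with x_target fixed to 1."""
        n = len(P)
        x = [1.0 if s == target else 0.0 for s in range(n)]
        for _ in range(steps):
            x = [1.0 if s == target else sum(P[s][t] * x[t] for t in range(n))
                 for s in range(n)]
        return x

    print("P(eventually caregiver notified) from state 0:",
          round(reach_probability(P, target=2)[0], 4))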

Place, publisher, year, edition, pages
SciTePress, 2019
Keywords
Ambient Assisted Living, Architecture Analysis and Design Language, Cyber-physical Systems, Model Checking, Multi-Agent Systems, PRISM, Assisted living, Autonomous agents, Biological systems, Computer architecture, Cyber Physical System, Embedded systems, Intelligent agents, Prisms, Safety engineering, Stochastic systems, Systems analysis, Agent based architectures, Architectural languages, Design languages, Intelligent multi agent systems, Multiagent architecture, Safety-critical functions, Stochastic transition systems, Multi agent systems
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-44882 (URN); 2-s2.0-85067426971 (Scopus ID); 9789897583759 (ISBN)
Conference
14th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2019, 4 May 2019 through 5 May 2019
Note

Conference code: 148432

Available from: 2019-07-11 Created: 2019-07-11 Last updated: 2019-07-11. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2019). Statistical Model Checking for Real-Time Database Management Systems: A Case Study. In: The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019. Paper presented at The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 10 Sep 2019, Zaragoza, Spain.
Statistical Model Checking for Real-Time Database Management Systems: A Case Study
2019 (English). In: The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Many industrial control systems manage critical data using Database Management Systems (DBMS). The correctness of transactions, especially their atomicity, isolation and temporal correctness, is essential for the dependability of the entire system. Existing methods and techniques, however, either lack the ability to analyze the interplay of these properties, or do not scale well for systems with many transactions, large amounts of data, and complex transaction management mechanisms. In this paper, we propose to analyze large-scale real-time database systems using statistical model checking. We propose a pattern-based framework, extending our previous work, to model the real-time DBMS as a network of stochastic timed automata, which can be analyzed by the UPPAAL Statistical Model Checker. We present an industrial case study, in which we design a collision avoidance system for multiple autonomous construction vehicles, via concurrency control of a real-time DBMS. The desired properties of the designed system are analyzed using our proposed framework.
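
UPPAAL SMC estimates the probability of a property from many simulated runs of the stochastic timed automata rather than by exhaustive exploration. The toy Python sketch below illustrates that statistical idea on a made-up example (all distributions, timing values and the deadline are assumptions, not taken from the case study): it estimates the probability that a transaction's response time stays within a deadline and reports an approximate 95% confidence interval.

    import math
    import random

    def one_run():
        """One simulated run: a transaction's execution time plus random
        blocking from concurrent transactions (all distributions hypothetical)."""
        exec_time = random.uniform(2.0, 4.0)                            # ms
        blocking = sum(random.expovariate(1 / 0.5)                      # mean 0.5 ms each
                       for _ in range(random.randint(0, 3)))
        return exec_time + blocking

    def estimate(deadline=6.0, runs=20000):
        hits = sum(one_run() <= deadline for _ in range(runs))
        p = hits / runs
        half_width = 1.96 * math.sqrt(p * (1 - p) / runs)               # ~95% CI
        return p, half_width

    p, hw = estimate()
    print(f"P(response time <= 6 ms) ~= {p:.3f} +/- {hw:.3f}")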

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-45045 (URN)
Conference
The 24th IEEE Conference on Emerging Technologies and Factory Automation ETFA2019, 10 Sep 2019, Zaragoza, Spain
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2019-08-22 Created: 2019-08-22 Last updated: 2019-08-22. Bibliographically approved
Cai, S., Gallina, B., Nyström, D., Seceleanu, C. & Larsson, A. (2019). Tool-supported design of data aggregation processes in cloud monitoring systems. Journal of Ambient Intelligence and Humanized Computing, 10(7), 2519-2535
Tool-supported design of data aggregation processes in cloud monitoring systems
2019 (English). In: Journal of Ambient Intelligence and Humanized Computing, ISSN 1868-5137, E-ISSN 1868-5145, Vol. 10, no 7, p. 2519-2535. Article in journal (Refereed), Published
Abstract [en]

Efficient monitoring of a cloud system involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A thorough understanding and analysis of the characteristics of data aggregation processes can help to improve the software quality and reduce development cost. In this paper, we propose a systematic approach for designing data aggregation processes in cloud monitoring systems. Our approach applies a feature-oriented taxonomy called DAGGTAX (Data AGGregation TAXonomy) to systematically specify the features of the designed system, and SAT-based analysis to check the consistency of the specifications. Following our approach, designers first specify the data aggregation processes by selecting and composing the features from DAGGTAX. These specified features, as well as design constraints, are then formalized as propositional formulas, whose consistency is checked by the Z3 SAT solver. To support our approach, we propose a design tool called SAFARE (SAt-based Feature-oriented dAta aggREgation design), which implements DAGGTAX-based specification of data aggregation processes and design constraints, and integrates the state-of-the-art solver Z3 for automated analysis. We also propose a set of general design constraints, which are integrated by default in SAFARE. The effectiveness of our approach is demonstrated via a case study provided by industry, which aims to design a cloud monitoring system for video streaming. The case study shows that DAGGTAX and SAFARE can help designers to identify reusable features, eliminate infeasible design decisions, and derive crucial system parameters.
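
The approach formalizes feature selections and design constraints as propositional formulas and checks their consistency with Z3. The fragment below is a minimal sketch of that idea using Z3's Python bindings (package z3-solver); the feature names and the three constraints are invented for illustration and are not the actual SAFARE rules.

    from z3 import Bools, Solver, Implies, Not, sat

    # Hypothetical features of one aggregation process.
    raw_shared, hard_deadline, lock_based_cc, time_triggered = Bools(
        "raw_shared hard_deadline lock_based_cc time_triggered")

    s = Solver()
    # Illustrative design constraints (not the actual SAFARE rules):
    s.add(Implies(raw_shared, lock_based_cc))          # shared raw data needs concurrency control
    s.add(Implies(hard_deadline, time_triggered))      # hard deadlines require time-triggered activation
    s.add(Implies(hard_deadline, Not(lock_based_cc)))  # say unbounded blocking is disallowed

    # A designer's feature selection:
    s.add(raw_shared, hard_deadline)

    print("consistent" if s.check() == sat else "inconsistent design")  # -> inconsistent design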

Place, publisher, year, edition, pages
Springer Verlag, 2019
Keywords
Cloud monitoring system design, Consistency checking, Data aggregation, Feature model, Computer software selection and evaluation, Design, Quality control, Specifications, Taxonomies, Based specification, Cloud monitoring, Efficient monitoring, Feature modeling, Large amounts of data, Propositional formulas, Monitoring
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-43881 (URN); 10.1007/s12652-018-0730-6 (DOI); 000469922500004 (); 2-s2.0-85049591829 (Scopus ID)
Available from: 2019-06-11 Created: 2019-06-11 Last updated: 2019-06-18. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2018). Effective Test Suite Design for Detecting Concurrency Control Faults in Distributed Transaction Systems. In: 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018. Paper presented at 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 30 Oct 2018, Limassol, Cyprus (pp. 355-374).
Effective Test Suite Design for Detecting Concurrency Control Faults in Distributed Transaction Systems
2018 (English). In: 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 2018, p. 355-374. Conference paper, Published paper (Refereed)
Abstract [en]

Concurrency control faults may lead to unwanted interleavings and breach data consistency in distributed transaction systems. However, due to the unpredictable delays between sites, detecting concurrency control faults in distributed transaction systems is difficult. In this paper, we propose a methodology, relying on model-based testing and mutation testing, for designing test cases to detect such faults. The generated test inputs are designated delays between distributed operations, while the outputs are occurrences of unwanted interleavings caused by the concurrency control faults. We mutate the distributed transaction specification with common concurrency control faults, and model the mutants as UPPAAL timed automata, in which the designated delays are encoded as stopwatches. Test cases are generated via reachability analysis using the UPPAAL model checker, and are selected to form an effective test suite. Our methodology can reduce redundant test cases, and find the appropriate delays to detect concurrency control faults effectively.
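
The test inputs here are designated delays between distributed operations, chosen so that a faulty concurrency control lets an unwanted interleaving become observable. The deliberately unsynchronized Python toy below (not the UPPAAL-based generator of the paper) shows how one such pair of delays forces a lost-update interleaving between two read-modify-write transactions.

    import threading
    import time

    balance = 100  # shared data item, intentionally without concurrency control

    def transaction(delay_before_write, amount):
        global balance
        local = balance                 # read
        time.sleep(delay_before_write)  # designated delay injected by the test
        balance = local + amount        # write based on the (now stale) read

    # Delays chosen so both transactions read before either writes:
    t1 = threading.Thread(target=transaction, args=(0.2, +50))
    t2 = threading.Thread(target=transaction, args=(0.2, -30))
    t1.start(); t2.start(); t1.join(); t2.join()

    # Serial execution would give 120; the forced interleaving loses one update.
    print("final balance:", balance)    # 150 or 70, exposing the lost update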

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41709 (URN); 10.1007/978-3-030-03424-5_24 (DOI); 978-3-030-03424-5 (ISBN)
Conference
8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation ISoLA 2018, 30 Oct 2018, Limassol, Cyprus
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2018-12-20 Created: 2018-12-20 Last updated: 2018-12-20. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2018). Specification and Formal Verification of Atomic Concurrent Real-Time Transactions. In: 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018. Paper presented at The 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 04 Dec 2018, Taipei, Taiwan.
Specification and Formal Verification of Atomic Concurrent Real-Time Transactions
2018 (English). In: 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Although atomicity, isolation and temporal correctness are crucial to the dependability of many real-time database-centric systems, the mechanism selected to assure one property may breach another. Trading off these properties requires specifying and analyzing their dependencies, together with the selected supporting mechanisms (abort recovery, concurrency control, and scheduling), which is still insufficiently supported. In this paper, we propose a UML profile, called UTRAN, for specifying atomic concurrent real-time transactions, with explicit support for all three properties and their supporting mechanisms. We also propose a pattern-based modeling framework, called UPPCART, to formalize the transactions and the mechanisms specified in UTRAN as UPPAAL timed automata. Various mechanisms can be modeled flexibly using our reusable patterns, after which the desired properties can be verified by the UPPAAL model checker. Our techniques facilitate systematic analysis of atomicity, isolation and temporal correctness trade-offs with guarantees, thus contributing to dependable real-time database systems.
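
UPPCART formalizes the transactions and mechanisms as UPPAAL timed automata and verifies isolation and temporal correctness with the UPPAAL model checker. As a very rough stand-in for the temporal-correctness side of that analysis (the transaction set, numbers and blocking bound below are assumptions, not UPPCART patterns), the sketch checks whether each transaction's worst-case response time under strict two-phase locking, taken as its execution time plus the longest lock-hold time of any conflicting transaction, still meets its deadline.

    # Hypothetical transaction set:
    # (name, execution time, deadline, locked items, lock hold time), times in ms.
    transactions = [
        ("T_sensor_update", 2.0,  4.5, {"speed"},           1.0),
        ("T_diagnostic",    4.0, 12.0, {"speed", "errors"}, 3.0),
        ("T_logger",        1.5, 10.0, {"errors"},          1.0),
    ]

    def worst_case_response(t, others):
        """Execution time plus the longest lock-hold time of any conflicting
        transaction (a crude upper bound on blocking under strict 2PL)."""
        name, exec_time, deadline, items, _ = t
        blocking = max((hold for (_, _, _, o_items, hold) in others
                        if o_items & items), default=0.0)
        return exec_time + blocking

    for t in transactions:
        others = [o for o in transactions if o is not t]
        wcrt = worst_case_response(t, others)
        status = "meets" if wcrt <= t[2] else "MISSES"
        print(f"{t[0]}: WCRT {wcrt:.1f} {status} deadline {t[2]:.1f}")
    # T_sensor_update misses its deadline here, exposing the isolation/timeliness trade-off.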

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41710 (URN); 000462280800012 (); 978-1-5386-5700-3 (ISBN)
Conference
The 23rd IEEE Pacific Rim International Symposium on Dependable Computing PRDC 2018, 04 Dec 2018, Taipei, Taiwan
Projects
Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2018-12-18 Created: 2018-12-18 Last updated: 2019-04-11. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). Customized Real-Time Data Management for Automotive Systems: A Case Study. In: IECON 2017 - 43RD ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY. Paper presented at 43rd Annual Conference of the IEEE Industrial Electronics Society IECON 2017, 30 Oct 2017, Beijing, China (pp. 8397-8404).
Customized Real-Time Data Management for Automotive Systems: A Case Study
2017 (English). In: IECON 2017 - 43RD ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2017, p. 8397-8404. Conference paper, Published paper (Refereed)
Abstract [en]

Real-time DataBase Management Systems (RTDBMS) have been considered a promising means of managing data for data-centric automotive systems. During the design of an RTDBMS, one must carefully trade off data consistency and timeliness, in order to achieve an acceptable level of both properties. Previously, we have proposed a design process called DAGGERS to facilitate systematic customization of transaction models and decisions on run-time mechanisms. In this paper, we evaluate the applicability of DAGGERS via an industrially relevant case study that aims to design the transaction management for an on-board diagnostic system, which should guarantee both timeliness and data consistency under concurrent access. To achieve this, we apply the pattern-based approach of DAGGERS to formalize the transactions, and derive the appropriate isolation level and concurrency control algorithm guided by model checking. We show by simulation that the implementation of our designed system satisfies the desired timeliness and the derived isolation level, and demonstrate that DAGGERS helps to customize the desired real-time transaction management prior to implementation.
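
DAGGERS derives an appropriate isolation level and concurrency control algorithm guided by model checking; the models themselves are not shown in the abstract. The small sketch below (history and encoding are hypothetical) illustrates the kind of isolation check involved: scanning an execution history for a non-repeatable read, i.e. a transaction that reads the same item twice with a write by another transaction in between.

    # A history as (transaction, operation, data item) triples, in execution order.
    history = [
        ("T1", "read",   "rpm"),
        ("T2", "write",  "rpm"),
        ("T2", "commit", None),
        ("T1", "read",   "rpm"),   # T1 may see a different value than its first read
        ("T1", "commit", None),
    ]

    def non_repeatable_reads(history):
        anomalies = []
        for i, (txn, op, item) in enumerate(history):
            if op != "read":
                continue
            for j in range(i + 1, len(history)):
                t2, op2, item2 = history[j]
                if t2 == txn and op2 == "read" and item2 == item:
                    between = history[i + 1:j]
                    if any(t3 != txn and op3 == "write" and item3 == item
                           for (t3, op3, item3) in between):
                        anomalies.append((txn, item))
        return anomalies

    print(non_repeatable_reads(history))   # -> [('T1', 'rpm')]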

Series
IEEE Industrial Electronics Society, ISSN 1553-572X
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-37061 (URN); 10.1109/IECON.2017.8217475 (DOI); 000427164808042 (); 2-s2.0-85046624820 (Scopus ID); 978-1-5386-1127-2 (ISBN)
Conference
43rd Annual Conference of the IEEE Industrial Electronics Society IECON 2017, 30 Oct 2017, Beijing, China
Projects
DAGGERS - Data aggregation for embedded real-time database systems; Adequacy-based Testing of Extra-Functional Properties of Embedded Systems (VR)
Available from: 2017-11-07 Created: 2017-11-07 Last updated: 2018-05-24. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). DAGGTAX: A Taxonomy of Data Aggregation Processes. Västerås
DAGGTAX: A Taxonomy of Data Aggregation Processes
2017 (English). Report (Other academic)
Abstract [en]

Data aggregation processes are essential constituents of many data management applications. Due to their complexity, designing data aggregation processes often demands considerable effort. A study of the features of data aggregation processes provides a comprehensive view for designers and eases the design process. Existing works either propose application-specific aggregation solutions, or focus on particular aspects of aggregation processes such as aggregate functions, and hence do not offer a high-level, generic description. In this paper, we propose a taxonomy of data aggregation processes called DAGGTAX, which builds on the results of an extensive survey across various application domains. Our work focuses on the features of aggregation processes and their implications, especially on temporal data consistency and process timeliness. We present our taxonomy as a feature diagram, a visual notation with formal semantics. The taxonomy can then serve as the foundation of a design tool that enables designers to build an aggregation process by selecting and composing desired features. Based on the implications of the features, we formulate three design rules that eliminate infeasible feature combinations. We also provide a set of design heuristics that can help designers to decide the appropriate mechanisms for achieving the selected features.

Place, publisher, year, edition, pages
Västerås, 2017
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35366 (URN); MDH-MRTC-319/2017-1-SE (ISRN)
Projects
DAGGERS
Funder
Knowledge Foundation
Available from: 2017-05-22 Created: 2017-05-22 Last updated: 2017-05-29. Bibliographically approved
Cai, S., Gallina, B., Nyström, D. & Seceleanu, C. (2017). DAGGTAX: A taxonomy of data aggregation processes. In: Lecture Notes in Computer Science, vol. 10563. Paper presented at 7th International Conference on Model and Data Engineering (MEDI), Barcelona, SPAIN, OCT 04-06, 2017 (pp. 324-339). Springer Verlag
DAGGTAX: A taxonomy of data aggregation processes
2017 (English). In: Lecture Notes in Computer Science, vol. 10563, Springer Verlag, 2017, p. 324-339. Conference paper, Published paper (Refereed)
Abstract [en]

Data aggregation processes are essential constituents of data management in modern computer systems, such as decision support systems and Internet of Things (IoT) systems. Due to the heterogeneity and real-time constraints in such systems, designing appropriate data aggregation processes often demands considerable effort. A study of the characteristics of data aggregation processes is therefore desirable, as it provides a comprehensive view of such processes, potentially facilitating their design as well as the development of tool support to aid designers. In this paper, we propose a taxonomy called DAGGTAX, a feature diagram that models the common and variable characteristics of data aggregation processes, with a special focus on the real-time aspect. The taxonomy can serve as the foundation of a design tool, which we also introduce, enabling designers to build an aggregation process by selecting and composing desired features, and to reason about the feasibility of the design. We apply DAGGTAX to industrial case studies, showing that it not only strengthens the understanding, but also facilitates the model-driven design of data aggregation processes.
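
DAGGTAX is presented as a feature diagram, and the accompanying tool lets designers compose features and reason about feasibility. The sketch below shows one simple way such a diagram can back a design tool: the tree is represented in plain Python and a designer's selection is validated against mandatory-child and requires relations. The feature names and rules are invented and far smaller than the real taxonomy.

    # A tiny, hypothetical slice of a feature diagram for an aggregation process.
    # Each feature lists its mandatory children; 'requires' adds cross-tree constraints.
    mandatory_children = {
        "AggregationProcess": ["RawData", "AggregateFunction"],
        "RawData": [],
        "AggregateFunction": [],
        "Triggering": [],
        "TimeTriggered": [],
    }
    requires = {
        "TimeTriggered": ["Triggering"],   # selecting this feature implies selecting Triggering
    }

    def validate(selection):
        errors = []
        for feature in selection:
            for child in mandatory_children.get(feature, []):
                if child not in selection:
                    errors.append(f"{feature} requires mandatory child {child}")
            for needed in requires.get(feature, []):
                if needed not in selection:
                    errors.append(f"{feature} requires {needed}")
        return errors

    print(validate(["AggregationProcess", "RawData", "TimeTriggered"]))
    # -> ['AggregationProcess requires mandatory child AggregateFunction',
    #     'TimeTriggered requires Triggering']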

Place, publisher, year, edition, pages
Springer Verlag, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743; 10563 LNCS
Keywords
Data aggregation taxonomy, Feature model, Real-time data management, Artificial intelligence, Decision support systems, Internet of things, Real time systems, Taxonomies, Aggregation process, Data aggregation, Feature modeling, Industrial case study, Internet of Things (IOT), Modern computer systems, Real time constraints, Real time data management, Information management
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-38313 (URN); 10.1007/978-3-319-66854-3_25 (DOI); 000439935200025 (); 2-s2.0-85030711559 (Scopus ID); 9783319668536 (ISBN)
Conference
7th International Conference on Model and Data Engineering (MEDI), Barcelona, SPAIN, OCT 04-06, 2017
Available from: 2018-02-12 Created: 2018-02-12 Last updated: 2018-08-17. Bibliographically approved
Cai, S., Gallina, B., Nyström, D., Seceleanu, C. & Larsson, A. (2017). Design of Cloud Monitoring Systems via DAGGTAX: A Case Study. Paper presented at The 8th International Conference on Ambient Systems, Networks and Technologies ANT 2017, 16 May 2017, Madeira, Portugal. Procedia Computer Science, 109, 424-431
Design of Cloud Monitoring Systems via DAGGTAX: A Case Study
2017 (English). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 109, p. 424-431. Article in journal (Refereed), Published
Abstract [en]

Efficient auto-scaling of cloud resources relies on the monitoring of the cloud, which involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A systematic way of describing the data together with the possible aggregations is beneficial for designers to reason about the properties of these aspects as well as their implications on the design, thus improving quality and lowering development costs. In this paper, we propose to apply DAGGTAX, a feature-oriented taxonomy for organizing common and variable data and aggregation process properties, to the design of cloud monitoring systems. We demonstrate the effectiveness of DAGGTAX via a case study provided by industry, which aims to design a cloud monitoring system that serves auto-scaling for a video streaming system. We design the cloud monitoring system by selecting and composing DAGGTAX features, and reason about the feasibility of the selected features. The case study shows that the application of DAGGTAX can help designers to identify reusable features, analyze trade-offs between selected features, and derive crucial system parameters.
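
One reported outcome is that applying DAGGTAX helps derive crucial system parameters from the selected features. As a purely hypothetical illustration (the formula and numbers are not from the paper), the snippet below derives a minimum sample-buffer size for a monitoring aggregator from a sampling period, an aggregation window and a jitter safety margin.

    import math

    def min_buffer_size(sampling_period_s, window_s, safety_margin=1.2):
        """Samples that must be retained to cover one aggregation window,
        padded by a safety margin for jitter (illustrative formula only)."""
        samples_per_window = window_s / sampling_period_s
        return math.ceil(samples_per_window * safety_margin)

    # e.g. 1 sample per second aggregated over a 60 s auto-scaling window:
    print(min_buffer_size(sampling_period_s=1.0, window_s=60.0))  # -> 72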

Keywords
data aggregation, information system design, cloud monitoring system design
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35493 (URN); 10.1016/j.procs.2017.05.412 (DOI); 000414533000053 (); 2-s2.0-85021817536 (Scopus ID)
Conference
The 8th International Conference on Ambient Systems, Networks and Technologies ANT 2017, 16 May 2017, Madeira, Portugal
Projects
DAGGERS - Data aggregation for embedded real-time database systems
Available from: 2017-06-08 Created: 2017-06-08 Last updated: 2017-11-23. Bibliographically approved
Cai, S. (2017). Systematic Design of Data Management for Real-Time Data-Intensive Applications. (Licentiate dissertation). Västerås: Mälardalen University
Systematic Design of Data Management for Real-Time Data-Intensive Applications
2017 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Modern real-time data-intensive systems generate large amounts of data that are processed using complex data-related computations such as data aggregation. In order to maintain the consistency of data, such computations must be both logically correct (producing correct and consistent results) and temporally correct (completing before specified deadlines). One solution to ensure logical and temporal correctness is to model these computations as transactions and manage them using a Real-Time Database Management System (RTDBMS). Ideally, depending on the particular system, the transactions are customized with the desired logical and temporal correctness properties, which are achieved by a customized RTDBMS with appropriate run-time mechanisms. However, developing such a data management solution with guarantees is not easy, partly due to inadequate support for systematic analysis during design. Firstly, designers lack the means to identify the characteristics of the computations, especially data aggregation, and to reason about their implications. Design flaws might not be discovered, and may thus propagate to the implementation. Secondly, trade-off analysis of conflicting properties, such as conflicts between transaction isolation and temporal correctness, is mainly performed ad hoc, which increases the risk of unpredictable behavior.

In this thesis, we propose a systematic approach to develop transaction-based data management with data aggregation support for real-time systems. Our approach includes the following contributions: (i) a taxonomy of data aggregation, (ii) a process for customizing transaction models and the RTDBMS, and (iii) a pattern-based method of modeling transactions in the timed automata framework, which we show how to verify with respect to transaction isolation and temporal correctness. Our proposed taxonomy of data aggregation processes helps in identifying their common and variable characteristics, based on which their implications can be reasoned about. Our proposed process allows designers to derive transaction models with the desired properties for the data-related computations from system requirements, and to decide the appropriate run-time mechanisms for the customized RTDBMS to achieve them. To perform systematic trade-off analysis between transaction isolation and temporal correctness specifically, we propose a method to create formal models of transactions with concurrency control, based on which the isolation and temporal correctness properties can be verified by model checking, using the UPPAAL tool. By applying the proposed approach to the development of an industrial demonstrator, we validate its applicability.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2017
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256; 258
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35369 (URN); 978-91-7485-334-6 (ISBN)
Presentation
2017-06-12, Kappa, Mälardalens högskola, Västerås, 13:30 (English)
Projects
DAGGERS
Funder
Knowledge Foundation
Available from: 2017-05-23 Created: 2017-05-22 Last updated: 2017-07-10. Bibliographically approved