Executing software processes in the cloud can bring several benefits to software development. In this paper, we discuss the benefits and considerations of cloud-based software processes. EXE-SPEM is our extension of the Software and Systems Process Engineering Meta-model (SPEM2.0) to support the creation of cloud-based executable software process models. Since SPEM2.0 is a visual modelling language, we introduce an XML notation meta-model and mapping rules from EXE-SPEM to this notation, which can be executed in a workflow engine. We demonstrate our approach by modelling an example software process in EXE-SPEM and mapping it to the XML notation.
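A minimal sketch of such a mapping, assuming invented element and attribute names rather than the actual EXE-SPEM/XML meta-model vocabulary, could serialize a process model into a workflow-style XML document:

```python
# Illustrative sketch only: a plain dictionary stands in for an EXE-SPEM
# model and is serialized into a workflow-style XML notation. The element
# and attribute names (process, activity, performer, vm_type) are
# assumptions, not the paper's actual meta-model vocabulary.
import xml.etree.ElementTree as ET

model = {
    "name": "ReleasePlanning",
    "activities": [
        {"name": "WriteSpec", "performer": "Analyst", "vm_type": "small"},
        {"name": "ReviewSpec", "performer": "Architect", "vm_type": "small"},
    ],
}

def to_workflow_xml(m):
    root = ET.Element("process", name=m["name"])
    for act in m["activities"]:
        # vm_type is a hypothetical cloud-resource hint for the workflow engine
        ET.SubElement(root, "activity", name=act["name"],
                      performer=act["performer"], vm_type=act["vm_type"])
    return ET.tostring(root, encoding="unicode")

print(to_workflow_xml(model))
```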
Engineering safety-critical systems is a complex task that involves multiple stakeholders. It requires shared and scalable computation to systematically involve geographically distributed teams. This paper proposes a model-driven, cloud-based enactment architecture for automating safety-critical processes. This work adapts our previous work on cloud-based software engineering by enriching the architecture with automated support for generating both product-based safety arguments, derived from failure logic analysis results, and process-based arguments, derived from the process model and the enactment data. The approach is demonstrated on a fragment of a process adapted from the aerospace domain.
Over the years, software development has evolved to meet the needs of new types of applications and to embrace new technological disruptions. Today, we witness the rise of mobility, where the role of the conventional high-end PC is declining. Some refer to this era as the Post-PC era. This technological shift, powered by a key enabling technology, cloud computing, has opened new opportunities for human advancement. Consequently, the evolving landscape of software systems drives the need for new methods for conceiving them. Such methods need to: (a) address the challenges and requirements of this era and (b) embrace the benefits of new technological breakthroughs. In this paper, we list the characteristics of the Post-PC era from the software development perspective and describe two motivating trends in software development processes. Then, we derive a list of requirements for future software development from the characteristics of the Post-PC era and from the motivating trends. Finally, we propose a reference architecture for cloud-based software process enactment as an enabler for Software Development as a Service. The architecture is a first step towards addressing the needs that we have identified.
Allocating tasks to distributed sites in Global Software Development (GSD) projects is often done unsystematically and based on the personal experience of project managers. Wrong allocation decisions increase project risks, as tasks have dependencies that are inherited by the distributed sites. Decision support can help make task allocation a more informed and systematic process. The challenges in allocating tasks to distributed sites stem from three distance dimensions between sites (geographical, temporal, and cultural). An informed task allocation decision needs to consider these distances. Therefore, in this paper, we propose to integrate and semi-automate the calculation of an existing Global Distance Metric (GDM) into an architecture that supports executing cloud-based software processes. We analyze the potential of integrating the GDM into this architecture and identify the needed extensions to the architecture.
Using cloud computing to execute software processes brings several benefits to software development. In a previous work, we proposed a reference architecture that treats software processes as workflows and uses cloud computing to execute them. Scheduling the execution in the cloud impacts the execution cost and the utilization of cloud resources. Existing workflow scheduling algorithms target business and scientific (data-driven) workflows, but not software process workflows. In this paper, we adapt three scheduling algorithms for our architecture and propose a fourth one: the Proportional Adaptive Task Schedule algorithm. We evaluate the algorithms in terms of their execution cost, makespan, and cloud resource utilization. Our results show that our proposed algorithm saves between 19.74% and 45.78% of the execution cost and provides the best resource (virtual machine) utilization compared to the adapted algorithms, while providing the second-best makespan.
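For illustration only, a naive greedy baseline over identical virtual machines shows how the three reported metrics relate to one another; this is not the Proportional Adaptive Task Schedule algorithm, and the task durations, VM count, and price are invented parameters:

```python
# Hedged sketch: a greedy list scheduler over identical VMs, reporting the
# three metrics compared in the paper (cost, makespan, utilization).
# NOT the Proportional Adaptive Task Schedule algorithm.
def greedy_schedule(durations, n_vms, price_per_hour):
    finish = [0.0] * n_vms                  # next free time per VM
    for d in sorted(durations, reverse=True):
        i = finish.index(min(finish))       # earliest-available VM
        finish[i] += d
    makespan = max(finish)
    busy = sum(finish)
    # Assume all VMs are billed until the last task finishes.
    cost = n_vms * (makespan / 60.0) * price_per_hour
    utilization = busy / (n_vms * makespan)
    return makespan, cost, utilization

# Task durations in minutes for a small process workflow (made-up values).
print(greedy_schedule([30, 20, 20, 10, 5], n_vms=2, price_per_hour=0.10))
```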
For the purpose of certification, manufacturers of today's highly connected safety-critical systems are expected to engineer their systems according to well-defined engineering processes in compliance with safety and security standards. Certification is an extremely expensive and time-consuming process. Since safety and security standards exhibit a certain degree of commonality, certification-related artifacts (e.g., process models) should to some extent be reusable. To enable systematic reuse and customization of process information, in this paper we further develop security-informed safety-oriented process line engineering (i.e., engineering of sets of processes including security and safety concerns). More specifically, we first consider three tool-supported approaches for process-related commonality and variability management and apply them to limited but meaningful portions of safety and security standards within airworthiness. Then, we discuss our findings. Finally, we draw our conclusions and sketch future work.
Earth-moving vehicles (EMVs) are vital in numerous industries, including construction, forestry, mining, cleaning, and agriculture. The changing nature of the off-road environment in which they operate requires a high level of controllability and makes situational awareness, and consequently mental stress, crucial for driver readiness. Therefore, the monitoring of drivers' acute stress patterns may be used as an input for identifying various levels of attentiveness. This research presents an experimental evaluation of a physiological-based system that can be used to evaluate the readiness of a driver under different conditions. For the experimental validation, physiological signals such as electrocardiogram (ECG), galvanic skin response (GSR), and speech data were collected from nine participants throughout driving experiments of increasing complexity on a dedicated simulator. The experimental results show that the identified parameters derived from the acquired physiological signals can help us understand the driver's status when performing different tasks, the engagement of which is related to different road environments. This multi-parameter approach can provide more reliable information than single-parameter approaches (e.g., eye monitoring with a camera) and identify driver status variations, from relaxed to stressed or drowsy. The use of these signals allows for the development of a smart driving cockpit that could communicate the driver's status to the vehicle, to set up an innovative protection system aiming to increase road safety.
Nowadays, systems are becoming more and more connected. Consequently, the co-engineering of (cyber)security and safety life cycles becomes paramount. Currently, no standard provides a structured co-engineering process to facilitate the communication between safety and security engineers. In this paper, we propose a process for co-engineering safety and security via the explicit systematization and management of commonalities and variabilities implicitly stated in the requirements of the different standards. Our process treats the safety and security life cycles as members of a security-informed safety-oriented process line, and thus forces safety and security engineers to come together and brainstorm on what might be considered a commonality and what might be considered a variability. We illustrate the usage of our process by systematizing commonalities and variabilities at the risk analysis phase in the context of ISO 26262 and SAE J3061. We then draw lessons learnt. Finally, we sketch some directions for future work.
Traditional Concurrency Control (CC) mechanisms ensure the absence of undesired interference in transaction-based systems and enforce isolation. However, CC may introduce unpredictable delays that could lead to breached timeliness, which is unwanted for real-time transactions. To avoid deadline misses, some CC algorithms relax isolation in favor of timeliness, whereas others limit possible interleavings by leveraging real-time constraints and preserve isolation. Selecting an appropriate CC algorithm that can guarantee timeliness at an acceptable level of isolation thus becomes an essential concern for system designers. However, trading off isolation for timeliness is not easy with existing analysis techniques in the database and real-time communities. In this paper, we propose to use model checking of a timed automata model of the transaction system in order to check the traded-off timeliness and isolation. Our solution provides modularization of the basic transactional constituents, which enables flexible modeling and composition of various candidate CC algorithms, and thus reduces the effort of selecting the appropriate CC algorithm.
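As a hedged illustration of the kind of isolation violation at stake (the paper itself verifies such properties on timed-automata models in UPPAAL), the following discrete sketch flags dirty reads in an interleaved schedule; the transaction and data item names are invented:

```python
# Toy sketch: detect dirty reads (reading a value written by a transaction
# that has not yet committed) in an interleaved schedule. This only conveys
# what an "unwanted interleaving" looks like; it is not the paper's
# timed-automata analysis.
def dirty_reads(schedule):
    uncommitted = {}   # data item -> transaction with a pending write
    violations = []
    for txn, op, item in schedule:
        if op == "w":
            uncommitted[item] = txn
        elif op == "r" and uncommitted.get(item) not in (None, txn):
            violations.append((txn, item, uncommitted[item]))
        elif op == "c":  # commit clears the transaction's pending writes
            uncommitted = {k: v for k, v in uncommitted.items() if v != txn}
    return violations

# T2 reads x after T1 wrote it but before T1 commits -> dirty read.
s = [("T1", "w", "x"), ("T2", "r", "x"), ("T1", "c", None), ("T2", "c", None)]
print(dirty_reads(s))   # [('T2', 'x', 'T1')]
```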
Real-time DataBase Management Systems (RTDBMS) have been considered as a promising means to manage data for data-centric automotive systems. During the design of an RTDBMS, one must carefully trade off data consistency and timeliness, in order to achieve an acceptable level of both properties. Previously, we have proposed a design process called DAGGERS to facilitate a systematic customization of transaction models and decision on the run-time mechanisms. In this paper, we evaluate the applicability of DAGGERS via an industrially relevant case study that aims to design the transaction management for an on-board diagnostic system, which should guarantee both timeliness and data consistency under concurrent access. To achieve this, we apply the pattern-based approach of DAGGERS to formalize the transactions, and derive the appropriate isolation level and concurrency control algorithm guided by model checking. We show by simulation that the implementation of our designed system satisfies the desired timeliness and derived isolation, and demonstrate that DAGGERS helps to customize desired real-time transaction management prior to implementation.
Data aggregation processes are essential constituents in many data management applications. Due to their complexity, designing data aggregation processes often demands considerable effort. A study of the features of data aggregation processes provides a comprehensive view for designers and eases the design process. Existing works either propose application-specific aggregation solutions or focus on particular aspects of aggregation processes, such as aggregate functions, and hence do not offer a high-level, generic description. In this paper, we propose a taxonomy of data aggregation processes called DAGGTAX, which builds on the results of an extensive survey within various application domains. Our work focuses on the features of aggregation processes and their implications, especially on temporal data consistency and process timeliness. We present our taxonomy as a feature diagram, which is a visual notation with formal semantics. The taxonomy can then serve as the foundation of a design tool that enables designers to build an aggregation process by selecting and composing desired features. Based on the implications of the features, we formulate three design rules that eliminate infeasible feature combinations. We also provide a set of design heuristics that could help designers decide on the appropriate mechanisms for achieving the selected features.
Data aggregation processes are essential constituents for data management in modern computer systems, such as decision support systems and Internet of Things (IoT) systems. Due to the heterogeneity and real-time constraints in such systems, designing appropriate data aggregation processes often demands considerable effort. A study on the characteristics of data aggregation processes is then desirable, as it provides a comprehensive view of such processes, potentially facilitating their design, as well as the development of tool support to aid designers. In this paper, we propose a taxonomy called DAGGTAX, which is a feature diagram that models the common and variable characteristics of data aggregation processes, with a special focus on the real-time aspect. The taxonomy can serve as the foundation of a design tool, which we also introduce, enabling designers to build an aggregation process by selecting and composing desired features, and to reason about the feasibility of the design. We apply DAGGTAX on industrial case studies, showing that DAGGTAX not only strengthens the understanding, but also facilitates the model-driven design of data aggregation processes.
Data aggregation processes are essential constituents for data management in modern computer systems, such as decision support systems and Internet of Things (IoT) systems, many with timing constraints. Understanding the common and variable features of data aggregation processes, especially their implications for the time-related properties, is key to improving the quality of the designed system and reducing design effort. In this paper, we present a survey of data aggregation processes in a variety of application domains from the literature. We investigate their common and variable features, which serve as the basis of our previously proposed taxonomy called DAGGTAX. By studying the implications of the DAGGTAX features, we formulate a set of constraints to be satisfied during design, which helps to check the correctness of the specifications and reduce the design space. We also provide a set of design heuristics that could help designers decide on the appropriate mechanisms for achieving the selected features. We apply DAGGTAX on industrial case studies, showing that DAGGTAX not only strengthens the understanding, but also serves as the foundation of a design tool which facilitates the model-driven design of data aggregation processes.
Concurrency control faults may lead to unwanted interleavings, and breach data consistency in distributed transaction systems. However, due to the unpredictable delays between sites, detecting concurrency control faults in distributed transaction systems is difficult. In this paper, we propose a methodology, relying on model-based testing and mutation testing, for designing test cases in order to detect such faults. The generated test inputs are designated delays between distributed operations, while the outputs are the occurrence of unwanted interleavings that are consequences of the concurrency control faults. We mutate the distributed transaction specification with common concurrency control faults, and model them as UPPAAL timed automata, in which designated delays are encoded as stopwatches. Test cases are generated via reachability analysis using UPPAAL Model Checker, and are selected to form an effective test suite. Our methodology can reduce redundant test cases, and find the appropriate delays to detect concurrency control faults effectively.
Many database management systems (DBMS) need to ensure atomicity and isolation of transactions for logical data consistency, as well as to guarantee temporal correctness of the executed transactions. Since the mechanisms for atomicity and isolation may lead to breaching temporal correctness, trade-offs between these properties are often required during DBMS design. To address this concern, we have previously proposed the pattern-based UPPCART framework, which models the transactions and the DBMS mechanisms as timed automata, and verifies the trade-offs with provable guarantees. However, the manual construction of UPPCART models can require considerable effort and is prone to errors. In this paper, we advance the formal analysis of atomic concurrent real-time transactions with tool-automated construction of UPPCART models. The latter are generated automatically from our previously proposed UTRAN specifications, which are high-level, UML-based specifications familiar to designers. To achieve this, we first propose formal definitions for the modeling patterns in UPPCART, as well as for the pattern-based construction of DBMS models. Based on this, we establish a translational semantics from UTRAN specifications to UPPCART models, providing the former with a formal semantics relying on timed automata, and develop a tool that implements the automated transformation. We also extend the expressiveness of UTRAN and UPPCART to incorporate transaction sequences and their timing properties. We demonstrate the specification in UTRAN, the automated transformation to UPPCART, and the verification of the traded-off properties via an industrial use case.
Although atomicity, isolation, and temporal correctness are crucial to the dependability of many real-time database-centric systems, the mechanism selected to assure one property may breach another. Trading off these properties requires specifying and analyzing their dependencies, together with the selected supporting mechanisms (abort recovery, concurrency control, and scheduling), which is still insufficiently supported. In this paper, we propose a UML profile, called UTRAN, for specifying atomic concurrent real-time transactions, with explicit support for all three properties and their supporting mechanisms. We also propose a pattern-based modeling framework, called UPPCART, to formalize the transactions and the mechanisms specified in UTRAN as UPPAAL timed automata. Various mechanisms can be modeled flexibly using our reusable patterns, after which the desired properties can be verified by the UPPAAL model checker. Our techniques facilitate systematic analysis of atomicity, isolation, and temporal correctness trade-offs with guarantees, thus contributing to dependable real-time database systems.
Many industrial control systems manage critical data using Database Management Systems (DBMS). The correctness of transactions, especially their atomicity, isolation and temporal correctness, is essential for the dependability of the entire system. Existing methods and techniques, however, either lack the ability to analyze the interplay of these properties, or do not scale well for systems with large amounts of transactions and data, and complex transaction management mechanisms. In this paper, we propose to analyze large scale real-time database systems using statistical model checking. We propose a pattern-based framework, by extending our previous work, to model the real-time DBMS as a network of stochastic timed automata, which can be analyzed by UPPAAL Statistical Model Checker. We present an industrial case study, in which we design a collision avoidance system for multiple autonomous construction vehicles, via concurrency control of a real-time DBMS. The desired properties of the designed system are analyzed using our proposed framework.
Many Cyber-Physical Systems (CPSs) require both timeliness of computation and temporal consistency of their data. Therefore, when using real-time databases in a real-time CPS application, the Real-Time Database Management Systems (RTDBMSs) must ensure both transaction timeliness and temporal data consistency. RTDBMSs prevent unwanted interference of concurrent transactions via concurrency control, which in turn has a significant impact on the timeliness and temporal consistency of data. It is therefore important to verify, already at early design stages, that these properties are not breached by the concurrency control. However, such early guarantees of properties under concurrency control are most often missing. In this paper, we show how to verify transaction timeliness and temporal data consistency using model checking. We model the transaction work units, the data, and the concurrency control mechanism as a network of timed automata, and specify the properties in TCTL. The properties are then checked exhaustively and automatically using the UPPAAL model checker.
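Illustratively, such TCTL properties could take the following shapes, where the location and clock names (executing, read, x_i, age_d, avi_d) are assumptions rather than the paper's actual model:

```latex
% Hedged property shapes only, not the paper's verified queries.
% Timeliness: transaction T_i never executes past its relative deadline D_i.
\[ A\Box\, \big( T_i.\mathit{executing} \implies x_i \le D_i \big) \]
% Temporal data consistency: whenever T_i reads data item d, the age of d
% lies within its absolute validity interval avi_d.
\[ A\Box\, \big( T_i.\mathit{read}_d \implies \mathit{age}_d \le \mathit{avi}_d \big) \]
```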
In order to guarantee transaction timeliness, Real-time Database Management Systems (RTDBMSs) often relax data consistency by relaxing the ACID transaction properties. Such relaxation varies depending on the application, and thus different transaction management mechanisms have to be decided upon when developing a tailored RTDBMS. However, current RTDBMS development does not include systematic verification of timeliness and the desired ACID properties. Consequently, the implemented transaction management mechanisms may breach the timeliness of transactions. In this paper, we propose a process called DAGGERS for developing a tailored RTDBMS that guarantees timeliness and the desired data consistency for real-time systems, by employing model-checking techniques during the process. Based on the characteristics of the desired data manipulations, transaction models are designed and then formally verified iteratively, together with selected run-time mechanisms, in order to achieve the desired/necessary trade-offs between timeliness and data consistency. The outcome of DAGGERS is thus a tailored transaction management with guaranteed appropriate trade-offs, as well as model-checking-based worst-case execution times and blocking times of transactions under the selected mechanisms and the assumptions on the hardware architecture.
Efficient monitoring of a cloud system involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A thorough understanding and analysis of the characteristics of data aggregation processes can help to improve the software quality and reduce development cost. In this paper, we propose a systematic approach for designing data aggregation processes in cloud monitoring systems. Our approach applies a feature-oriented taxonomy called DAGGTAX (Data AGGregation TAXonomy) to systematically specify the features of the designed system, and SAT-based analysis to check the consistency of the specifications. Following our approach, designers first specify the data aggregation processes by selecting and composing the features from DAGGTAX. These specified features, as well as design constraints, are then formalized as propositional formulas, whose consistency is checked by the Z3 SAT solver. To support our approach, we propose a design tool called SAFARE (SAt-based Feature-oriented dAta aggREgation design), which implements DAGGTAX-based specification of data aggregation processes and design constraints, and integrates the state-of-the-art solver Z3 for automated analysis. We also propose a set of general design constraints, which are integrated by default in SAFARE. The effectiveness of our approach is demonstrated via a case study provided by industry, which aims to design a cloud monitoring system for video streaming. The case study shows that DAGGTAX and SAFARE can help designers to identify reusable features, eliminate infeasible design decisions, and derive crucial system parameters.
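A minimal sketch of such a SAT-based consistency check, using the z3-solver Python API but with invented stand-ins for the DAGGTAX features and the design rule, could look as follows:

```python
# Minimal sketch of a SAT-based feature-consistency check (requires the
# z3-solver package). The three features and the design rule are invented
# stand-ins, not actual DAGGTAX features or SAFARE constraints.
from z3 import Bools, Solver, Implies, Not, And, sat

raw_data_push, triggered_aggregation, deadline = Bools(
    "raw_data_push triggered_aggregation deadline")

s = Solver()
# Specification: the designer selected these three features.
s.add(raw_data_push, triggered_aggregation, deadline)
# Hypothetical design rule: a process with a hard deadline must not combine
# push-based raw data arrival with purely event-triggered aggregation.
s.add(Implies(deadline, Not(And(raw_data_push, triggered_aggregation))))

print("consistent" if s.check() == sat else "infeasible feature combination")
```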
Efficient auto-scaling of cloud resources relies on the monitoring of the cloud, which involves multiple aggregation processes and large amounts of data with various and interdependent requirements. A systematic way of describing the data together with the possible aggregations is beneficial for designers to reason about the properties of these aspects as well as their implications on the design, thus improving quality and lowering development costs. In this paper, we propose to apply DAGGTAX, a feature-oriented taxonomy for organizing common and variable data and aggregation process properties, to the design of cloud monitoring systems. We demonstrate the effectiveness of DAGGTAX via a case study provided by industry, which aims to design a cloud monitoring system that serves auto-scaling for a video streaming system. We design the cloud monitoring system by selecting and composing DAGGTAX features, and reason about the feasibility of the selected features. The case study shows that the application of DAGGTAX can help designers to identify reusable features, analyze trade-offs between selected features, and derive crucial system parameters.
This volume contains the proceedings of the International Conference on Software Reuse (ICSR 2018), held during May 21–23, 2018, in Madrid, Spain. The International Conference on Software Reuse is the premier international event in the software reuse community. The main goal of ICSR is to present the most recent advances and breakthroughs in the area of software reuse and to promote an intensive and continuous exchange among researchers and practitioners. The conference featured two keynotes, by John Favaro of Intecs SpA (Italy) and Alberto Abella of MELODA (Spain). We received 29 submissions (excluding withdrawn submissions). Each submission was reviewed by three Program Committee members. The Program Committee decided to accept 11 papers (nine full papers and two short ones), resulting in an acceptance rate of 37.9%. The program also included one full-day tutorial, one invited talk, and a panel on the future of software reuse. This conference was a collaborative effort that could only be realized through many dedicated contributions. We would like to thank all the colleagues who made the success of ICSR 2018 possible: Barbara Gallina, Carlos Cetina, Mathieu Acher, Tewfik Ziadi, Roberto E. López Herrejón, Gregorio Robles, Jens Knodel, Carlos Carrillo, and Alejandro Valdezate. We also thank the ICSR Steering Committee for the approval to organize this edition in Madrid. Last but not least, we would like to sincerely thank all the authors who submitted papers to the conference for their contributions and interest in ICSR 2018. We also thank the members of the Program Committee and the additional reviewers for their accurate reviews, as well as their participation in the discussions of the submissions. Finally, we thank Danilo Beuche for his tutorial and the panelists, including the supporting people from The Reuse Company (Spain).
Much has been investigated about software reuse since the software crisis. The development of software reuse methods, implementation techniques, and cost models has generated a significant amount of research over the years. Nevertheless, the increasing adoption of reuse techniques, many of them subsumed under higher-level software engineering processes, and of advanced programming techniques that ease the reuse of software assets, has somewhat hidden new research trends on the practice of reuse in recent years and contributed to the disappearance of several reuse conferences. Also, new forms of reuse, such as open data and feature models, have brought new opportunities for reuse beyond traditional software components. From past to present, we summarize in this work the recent history of software reuse, and we report new research areas and forms of reuse according to current needs in industry and application domains, as well as promising research trends for the upcoming years.
In the automotive domain, the adoption of agile development is currently hindered by the fact that the safety lifecycle, which implies the creation and maintenance of safety work products, is executed manually, making it a complex and expensive process. Given a change in the system under consideration, ISO 26262 recommends that the impact of that change on the safety case of the system be assessed and that the safety case be updated correspondingly. To this end, in this paper, while assuming a model-based system and safety engineering context, we propose checkable safety case models, i.e., semantically rich safety case models integrated with system and safety engineering models (work products of a model-based safety lifecycle). The semantically rich specification and the model integration allow for automated consistency checks between the safety case and the system, specifically its engineering models. We exemplify our contributions via an in-vehicle driver assistance system for driving through intersections.
The new generation of safety-critical systems will be interconnected, having other systems as collaborating partners for achieving common goals (e.g., interconnected cyber-physical systems such as connected cars, or collaborative embedded systems such as an advanced driver assistance system connected with different sensors). A frequent new business goal of such systems is to enable new collaborations with new types of technical systems, thus changing their operating context, which triggers the need for agile development in the automotive domain. In safety-critical domains, a change in the operating context triggers the need for an impact analysis on the artefacts generated during the safety lifecycle. Impact analyses are time- and resource-consuming, hindering agile development; hence the need for automation. Safety cases comprise safety arguments explicitly specifying the traces among the artefacts generated during the safety lifecycle. Our longer-term goal is to support the automated identification of the artefacts affected by changes in the system's operating context, by proposing an automated change impact analysis executed on the system's safety case. To ensure completeness of the results of such an analysis, in this work we enhance state-of-the-art safety case patterns by referencing all artefacts generated during the safety lifecycle. Further, we enable the explicit specification of the properties of the operating context for which we foresee certain changes. We evaluate our patterns by using them for the construction of the safety case of a simplified airbag system.
Manually checking the compliance of process plans against the requirements of applicable standards is a common practice in the safety-critical context. We hypothesize that automating this task could be of interest. To test our hypothesis, we conducted a personal opinion survey among practitioners who participate in safety-related process compliance checking. In this paper, we present the results of this survey. Practitioners indicated the methods they use and the associated challenges, as well as their interest in a novel method that could permit them to move from manual to automated compliance checking practices.
ISO 26262 demands a confirmation review of the safety plan, which includes the compliance checking of planned processes against safety requirements. Formal Contract Logic (FCL), a logic-based language stemming from business compliance, provides means to formalize normative requirements, enabling automatic compliance checking. However, formalizing safety requirements in FCL requires skills that cannot be taken for granted. In this paper, we provide a set of ISO 26262-specific FCL compliance patterns to facilitate rule formalization. First, we identify and define the patterns, based on the style of Dwyer et al.'s specification patterns. Then, we instantiate the patterns to illustrate their applicability. Finally, we sketch conclusions and future work.
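As a toy, hedged illustration of the intuition behind one such pattern (this is not FCL and not the paper's pattern catalogue; the task and work product names are invented), an achievement obligation can be read as "once the trigger has occurred, the required element must follow":

```python
# Toy checker for an "achievement obligation" pattern: once `trigger`
# appears in the planned process, `required` must appear after it.
# Illustrative only; real FCL rules also handle defeasibility, violations,
# and compensations.
def check_obligation(process_log, trigger, required):
    if trigger in process_log:
        after = process_log[process_log.index(trigger) + 1:]
        return required in after
    return True  # obligation never triggered, hence not violated

plan = ["initiate_safety_planning", "define_safety_plan", "confirm_safety_plan"]
print(check_obligation(plan, "define_safety_plan", "confirm_safety_plan"))  # True
```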
Revisions of safety-related standards lead to the release of new versions. Consequently, products and processes need to be recertified. To support that need, product line-oriented best practices have been adopted to systematize reuse at various levels, including the engineering process itself. As a result, Safety-oriented Process Line Engineering (SoPLE) is introduced to systematize reuse of safety-oriented process-related artifacts. To systematize reuse of artifacts during automated process compliance checking, SoPLE was conceptually combined with a logic-based framework. However, no integrated and tool-supported solution was provided. In this paper, we focus on process recertification (interpreted as the need to show process plan adherence with the new version of the standard) and propose a concrete technical and tool-supported methodological framework for reusing (safety-oriented) compliance artifacts while recertifying. We illustrate the benefits of our methodological framework by considering ISO 14971 versions, and measuring the enabled reuse.
In the safety-critical context, part of the software process improvement effort is expended on process-based compliance. To facilitate this task, we proposed a method for automated process-based compliance checking, which can be used as a basis for decision making. Our method requires users to create a knowledge base that contains formalized requirements and processes checkable for compliance. Such a task may have some degree of complexity. Thus, in this paper, we exploit the natural separation of concerns in the state of practice to offer adequate means to facilitate the creation of the required concepts by using a divide-and-conquer strategy. To this end, we discuss the impact of process factors on compliance assessment and provide a separation of concerns based on SPEM 2.0 (Systems and Software Process Engineering Metamodel). Then, we illustrate the defined concerns and discuss our findings.
The growing connectivity of the systems that we rely on (e.g., transportation vehicles) is pushing towards the introduction of new standards aimed at providing a baseline for addressing cybersecurity besides safety. If the interplay of the two normative spaces is not mastered, compliance management might become more time-consuming and costly, preventing engineers from dedicating their energies to system engineering. In this paper, we build on top of previous work aimed at increasing efficiency and confidence in compliance management. More specifically, we contribute to building the terminological framework needed to enable the systematization of commonalities and variabilities within ISO 26262 and SAE J3061. Then, we focus our attention on the requirements for software design and implementation, and we use defeasible logic to prove compliance. Based on the compliance checking results, we reveal reuse opportunities. Finally, we draw our conclusions and sketch future research directions.
Nowadays, the engineering of (software) systems has to comply with different standards, which often exhibit common requirements or at least a significant potential for synergy. Compliance management is a delicate, time-consuming, and costly activity, which would benefit from increased confidence, automation, and systematic reuse. In this paper, we introduce a new approach, called SoPLE&Logic-basedCM. SoPLE&Logic-basedCM combines (safety-oriented) process line engineering with defeasible logic-based approaches for formal compliance checking. As a result of this combination, SoPLE&Logic-basedCM enables automation of compliance checking and systematic reuse of process elements as well as compliance proofs. To illustrate SoPLE&Logic-basedCM, we apply it to the automotive domain and we draw our lessons learnt.
Safety-critical systems manufacturers have a duty of care, i.e., they should take correct steps while performing acts that could foreseeably harm others. Commonly, industry standards prescribe reasonable steps in their process requirements, which regulatory bodies trust. Manufacturers carefully document compliance with each requirement to show that they act under acceptable criteria. To facilitate this task, a safety-centered, planning-time framework called ACCEPT has been proposed. Based on compliance-by-design, ACCEPT capabilities (i.e., process and standard modeling, and automatic compliance checking) permit the design of Compliance-aware Engineering Process Plans (CaEPP), which are able to show the planning-time allocation of standard demands, i.e., whether the elements set down by the standard requirements are present at given points in the engineering process plan. In this paper, we perform a case study to understand whether the models produced by ACCEPT could support the planning of space software engineering processes. Space software is safety- and mission-critical, and it is often the result of industrial cooperation. Such cooperation is coordinated through compliance with relevant standards. In the European context, ECSS-E-ST-40C is the de-facto standard for space software production. The planning of processes in compliance with project-specific ECSS-E-ST-40C applicable requirements is mandatory during contractual agreements. Our analysis is based on qualitative criteria targeting the effort dictated by the task demands required to create a CaEPP for software development with ACCEPT. Initial observations show that the effort required to model compliance and process artifacts is significant. However, such an effort pays off in the long term, since models are, to some extent, reusable and flexible. The coverage level of the models is also analyzed based on design decisions. In our opinion, such a level is adequate, since it responds to the information needs required by the ECSS-E-ST-40C framework.
A confirmation review of the safety plan is required during compliance assessment with ISO 26262. Its production could be facilitated by creating a specification of the standard's requirements in FCL (Formal Contract Logic), a language that can be used to automatically check compliance. However, we have learned from previous experience that interpreting ISO 26262 requirements and specifying them in FCL is complex. Thus, we performed a formalization-oriented pre-processing of ISO 26262 to find effective ways to proceed with this task. In this paper, we present the lessons learned from this pre-processing, which include the identification of the essential normative parts to be formalized, the identification of SCP (Safety Compliance Patterns) and their subsequent documentation as templates, and the definition of a methodological guideline to facilitate the formalization of normative clauses. Finally, we illustrate the defined methodology by formalizing ISO 26262 part 3 and discuss our findings.
The processes used to develop software need to comply with normative requirements (e.g., standards and regulations) to align with the market and the law. Manual compliance checking is challenging because there are numerous requirements with a changing nature and different purposes. Despite the importance of automated techniques, there is no systematic study of this field. This lack may hinder organizations from moving toward automated compliance checking practices. In this paper, we characterize the methods for automated compliance checking of software processes, including the techniques used, potential impacts, and challenges. For this, we undertake a systematic literature review (SLR) of studies reporting methods in this field. As a result, we identify solutions that use different techniques (e.g., ontologies and metamodels) to represent processes and their artifacts (e.g., tasks and roles). Various languages, which have diverse capabilities for managing competing and changing norms and agile strategies, are also used to represent normative requirements. Most solutions require tool-support concretization and enhanced capabilities to handle process and normative diversity. Our findings outline compelling areas for future research. In particular, there is a need to select suitable languages for consolidating a generic and normative-agnostic solution, to increase automation levels and tool support, and to boost the application in practice by improving usability aspects.
Compliance with process-based safety standards may imply the provision of a safety plan and its corresponding compliance justification. The provision of this justification is time-consuming, since it requires that the process engineer check the fulfillment of hundreds of requirements, taking into account the evidence provided by the process entities. Available methodologies and their implemented tools can be used to automate this checking and provide a compliance report that can be part of the justification to be scrutinized by the safety auditor. In this paper, we explain our compliance checking vision for supporting the process engineer, in which an interaction between SPEM 2.0 (Software & Systems Process Engineering Metamodel) and Regorous (a tool-supported methodology for compliance checking) is established. Then, we focus on SPEM 2.0 to identify mechanisms that provide the minimal set of elements required to be processed by Regorous, and describe how to implement them in EPF Composer. We also illustrate these mechanisms by modeling a simple example from ISO 26262 and show how a compliance report can be used to trace unfulfilled requirements.
In some domains, the applicable safety standards prescribe process-related requirements. Essential pieces of evidence for compliance assessment with such standards are the compliance justifications of the process plans used to engineer systems. These justifications should show that the process plans are produced in accordance with the prescribed requirements. However, providing the required evidence may be time-consuming and error-prone, since safety standards are large, natural language-based documents with hundreds of requirements. Besides, a company may have many safety-critical-related processes to be examined. In this paper, we propose a novel approach that combines process modeling and compliance checking capabilities. Our approach aims at facilitating the analysis required to conclude whether the model of a process plan corresponds to a model with compliant states. Hitherto, our proposed methodology has been evaluated with academic examples that show the potential benefits of its use.
Context: Software processes have increased demands coming from normative requirements. Organizations developing software comply with such demands to be in line with the market and the law. The state-of-the-art provides means to automatically check whether a software process complies with a set of normative requirements. However, no comprehensive and systematic review has been conducted to characterize such works. Objective: We characterize the current research on this topic, including an account of the used techniques, their potential impacts, and challenges. Method: We undertake a Systematic Literature Review (SLR) of primary studies reporting techniques for automated compliance checking of software processes. Results: We identified 41 papers reporting solutions focused on limited normative frameworks. Such solutions use specific languages for the processes and normative representation. Thus, the artifacts represented vary from one solution to the other. The level of automation, which in most methods requires tool-support concretization, focuses mostly on the reasoning process and requires human intervention, e.g., for creating the inputs for such reasoning. In addition, only a few contemplate agile environments and standards evolution. Conclusions: Our findings outline compelling areas for future research. In particular, there is a need to consolidate existing languages for process and normative representation, compile efforts in a generic and normative-agnostic solution, increase automation and tool support, and incorporate a layer of trust to guarantee that rules are correctly derived from the normative requirements.
Manual compliance with process-based standards is time-consuming and error-prone. No ready-to-use solution is currently available for increasing efficiency and confidence. In our previous work, we presented our automated compliance checking vision to support the process engineer's work. This vision includes the creation of a process model, using a SPEM 2.0 (Systems & Software Process Engineering Metamodel) reference implementation, to be checked by Regorous, a compliance checker used in the business context. In this paper, we move a step further towards the concretization of our vision by defining the transformation necessary to automatically generate the models required by Regorous. Then, we apply our transformation to a small portion of the design phase recommended in the rail sector. Finally, we discuss our findings and present conclusions and future work.
In this paper, we investigate the pondered selection of innovative software verification technology in the safety-critical domain and its implications. Verification tools perform analyses, testing, or simulation activities. The compliance of the techniques implemented by these tools to fulfill standard-mandated objectives (i.e., to be means of compliance in the context of DO-178C and related supplements) should be explained to the certification body. It is thereby difficult for practitioners to use novel techniques without a systematic method for arguing their appropriateness. Thus, we offer a method for arguing, via safety cases, the appropriate application of a certain verification technique (potentially in combination with other techniques) to produce the evidence needed to satisfy certification objectives regarding fault detection and mitigation in a realistic avionics application. We use this method for the choice of an appropriate compiler to support the development of a drone.
Safety standards from different domains recommend the execution of a process for keeping the system safety case up to date whenever the system undergoes a change, yet without providing more specific guidelines on how to do so. Even though several (semi-)automated safety case maintenance approaches have been proposed in the literature, in industry the execution of this process is currently still manual, being error-prone and expensive. To this end, we present in this paper the results of what is, to the best of our knowledge, the first Systematic Literature Review (SLR) conducted with the goal of providing a holistic overview of state-of-the-art safety case maintenance approaches. For each identified approach, we analyze its strengths and weaknesses. We observe that existing approaches are pessimistic, identifying a larger number of safety case elements as impacted by a change than the number of actually impacted elements. Also, there is limited quantitative impact assessment. Further, existing approaches only address a few system change scenarios when providing guidelines for updating the safety case.
The need to make sense of complex input data within a vast variety of unpredictable scenarios has been a key driver for the use of machine learning (ML), for example in Automated Driving Systems (ADS). Such systems are usually safety-critical and therefore need to be safety assured. In order to consider the results of the safety assurance activities (e.g., uncovering previously unknown hazardous scenarios), a continuous approach to arguing safety is required, whilst iteratively improving ML-specific safety-relevant properties, such as robustness and prediction certainty. Such a continuous safety life cycle will only be practical with an efficient and effective approach to analyzing the impact of system changes on the safety case. In this paper, we propose a semi-automated approach for accurately identifying the impact of changes on safety arguments. We focus on arguments that reason about the sufficiency of the data used for the development of ML components. The approach qualitatively and quantitatively analyzes the impact of changes in the input space of the considered ML component on other artifacts created during the execution of the safety life cycle, such as datasets and performance requirements, and makes recommendations to safety engineers for handling the identified impact. We implement the proposed approach in a model-based safety engineering environment called FASTEN, and we demonstrate its application for an ML-based pedestrian detection component of an ADS.
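A hedged sketch of the quantitative step, under invented names and a deliberately simplified one-dimensional input space (a real pipeline such as the one in FASTEN would operate on richer input space models), could measure how much of an existing dataset a changed operating range invalidates:

```python
# Illustrative quantification only: given an updated valid range for one
# input dimension, measure the share of an existing dataset that falls
# outside it. Variable names and values are invented.
def coverage_after_change(samples, new_range):
    lo, hi = new_range
    inside = sum(1 for s in samples if lo <= s <= hi)
    return inside / len(samples)

# Hypothetical pedestrian distances (in meters) covered by the dataset.
pedestrian_distances_m = [5, 12, 18, 25, 33, 41, 48, 55, 62, 70]
# Change: detection is now required only up to 60 m.
impacted_share = 1.0 - coverage_after_change(pedestrian_distances_m, (0, 60))
print(f"{impacted_share:.0%} of samples lie outside the updated range")
```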
The ISO 26262 functional safety standard provides appropriate development processes, requirements, and safety integrity levels specific to the automotive domain. One crucial requirement is the creation of a safety case, a structured argument inter-relating evidence and claims, needed to show that safety-critical systems are acceptably safe. Applying the standard to safety-critical systems installed in heavy trucks is currently not mandatory; however, this is likely to change by 2016. This paper describes the experience gathered by applying the standard to the Fuel Level Estimation and Display System, a subsystem that, together with other subsystems, plays a significant role in the global system safety of heavy trucks manufactured by Scania. More specifically, the exploratory and laborious work related to the creation of a safety case in compliance with ISO 26262 in an inexperienced industrial setting is described, and the paper ends by presenting some lessons learned, together with guidelines to facilitate the adoption of ISO 26262.
Most safety-critical systems must undergo assurance and certification processes. The associated activities can be complex and labour-intensive, thus practitioners need suitable means to execute them. The activities are further becoming more challenging as a result of the evolution of the systems towards cyber-physical ones, as these systems have new assurance and certification needs. The AMASS project (Architecture-driven, Multi-concern and Seamless Assurance and Certification of Cyber-Physical Systems) tackled these issues by creating and consolidating the de-facto European-wide open tool platform, ecosystem, and self-sustainable community for assurance and certification of cyber-physical systems. The project defined a novel holistic approach for architecture-driven assurance, multi-concern assurance, seamless interoperability, and cross- and intra-domain reuse of assurance assets. AMASS results were applied in 11 industrial case studies to demonstrate the reduction of effort in assurance and certification, the reduction of (re)certification cost, the reduction of assurance and certification risks, and the increase in technology harmonisation and interoperability.
Cyber-physical systems are usually subject to assurance and certification processes, including thorough requirements engineering tasks, to ensure that they are acceptably dependable. The underlying activities can be complex and labour-intensive, thus practitioners need tools that facilitate them. We present the AMASS Tool Platform as an example of such tools. The Platform is an open source solution that supports the main activities for assurance and certification. It also provides advanced features such as argument fragment composition and automated assurance evidence generation and collection. In addition, we present the main insights gained from tool usage. Among them, practitioners expect improvements in relation to usability, performance, and ease of configuration. Videos showing tool usage are available online, including general usage scenarios.