Today, well-established hazard analysis techniques are available and widely used to identify hazards for single systems in various industries. However, hazard analysis techniques for a System of Systems (SoS) have not been properly investigated. An SoS is a complex system in which multiple constituent systems work together to achieve a common goal. Their collaboration, however, may lead to unforeseen interactions and interdependencies between systems, which increases the difficulty of identifying and assessing system failures and potential safety hazards. In this paper, we explore whether the Hazard Ontology (HO) can be applied to an SoS and whether it can identify emergent hazards, their causes, sources, and consequences. To conduct our exploration, we apply the HO to a quarry automation site (an SoS) from the construction equipment domain. The results indicate that the HO is a promising technique that facilitates the identification of emergent hazards and their components.
While ontology comparison and alignment have been extensively researched in the last decade, these disciplines still face challenges, such as incomplete ontologies that cover only a portion of a domain, and differences in domain modeling due to varying viewpoints. Although the literature has compared ontological concepts from the same domain, comparisons of concepts from different domains (e.g., security and safety) remain unexplored. To compare the concepts of the security and safety domains, a security ontology must first be created to bridge the gap between them. Therefore, this paper presents a Combined Security Ontology (CSO), based on the Unified Foundational Ontology (UFO), that can be compared to or aligned with other ontologies. The CSO includes the core ontological concepts and their respective relationships that were extracted through a previous systematic literature review. The CSO concepts and their relationships were mapped to the UFO to obtain a common terminology that facilitates bridging the gap between the security and safety domains. Since the proposed CSO is based on the UFO, it can be compared to or aligned with other ontologies from different domains.
Safety and security ontologies are quickly becoming essential support for integrating heterogeneous knowledge from various sources. Today, there is little standardization of ontologies and almost no discussion of how to compare concepts and their relationships, establish a general approach to creating relationships, or model them in general. Moreover, concepts with similar names are not necessarily semantically similar or compatible. In such cases, the problem of correspondence arises among the concepts and relationships found in the ontologies. To address this problem, a comparison between the Hazard Ontology (HO) and the Combined Security Ontology (CSO) is proposed, in which the degree of equivalence between their concepts and relationships was extracted and analyzed. Although the HO covers concepts related to the safety domain and the CSO includes security-related concepts, both are based on the Unified Foundational Ontology (UFO). For this study, the HO and CSO were compared, and the results were summarized in the form of comparison tables. Our main contribution is the comparison of the concepts in the HO and CSO to identify equivalences and differences between the two. Given the increasing number of ontologies, their mapping, merging, and alignment are primary challenges in bridging the gaps that exist between the safety and security domains.
Security ontologies have been developed to facilitate the organization and management of security knowledge. A comparison and evaluation of how these ontologies relate to one another is challenging due to their structure, size, complexity, and level of expressiveness. Differences between ontologies can be found on both the ontological and linguistic levels, resulting in errors and inconsistencies (i.e., different concept hierarchies, types of concepts, definitions) when comparing and aligning them. Moreover, many concepts related to security ontologies have not been thoroughly explored and do not fully meet security standards. By using standards, we can ensure that concepts and definitions are unified and coherent. In this study, we address these deficiencies by reviewing existing security ontologies to identify core concepts and relationships. The primary objective of the systematic literature review is to identify core concepts and relationships that are used to describe security issues. We further analyse and map these core concepts and relationships to five security standards (i.e., NIST SP 800-160, NIST SP 800-30 rev.1, NIST SP 800-27 rev.A, ISO/IEC 27001 and NISTIR 8053). As a contribution, this paper provides a set of core concepts and relationships that comply with the standards mentioned above and allow for a new security ontology to be developed.
This study introduces a Mitigation Ontology (MO) designed for the analysis of safety-critical systems. Recognizing the paramount importance of systematically addressing potential risks and hazards in complex systems, the proposed ontology serves as a structured framework for comprehensively modeling and analyzing mitigation strategies. Leveraging ontological principles, the framework enables a precise representation of safety-critical information, emphasizing the relationships and dependencies among various mitigation elements. To encapsulate the essence of safety-critical systems and support understanding of the mechanisms of situations, events, and associated hazards, we propose a hazard and mitigation domain ontology, i.e., the MO, to provide a combined ontological interpretation of hazard and mitigation strategies. The MO facilitates a more thorough and standardized analysis of safety measures, contributing to enhanced understanding, communication, and implementation of mitigation strategies at the software and hardware levels of safety-critical systems. The MO is grounded in the Unified Foundational Ontology (UFO) and based on widely accepted standards and scientific guides. We demonstrate our proposed ontology in the autonomous vehicle domain to examine how it can help analyze the safety of real-world safety-critical systems. Through the ontology instantiation process for a case study from the autonomous vehicle domain, we have verified that safety-critical hazards, their causes and consequences, and other entities contributing to hazards were well identified. We have seen that the MO offers a shared vocabulary that facilitates communication among diverse communities, preventing misunderstandings among engineers and stakeholders involved in safety-critical systems.
Additionally, the conceptual model serves as a reference point for developers of safety-critical systems, enabling them to systematically extract and analyze safety requirements specifications and provide safety mechanisms.
The Ravenscar profile for high-integrity systems using Ada 95 is well defined in all real-time aspects. The complexity of the run-time system has been reduced to allow full utilization of formal methods for applications using the Ravenscar profile. In the Mana project, a tool set is being developed that includes a formal model of a Ravenscar-compliant run-time system, a GNAT-compatible run-time system, and an ASIS-based tool to allow for the verification of a system that includes both COTS and reused code.
The paper contributes three methods that together form a complete tool-set for the verification of mission-critical applications. The first method is the transformation of existing Ada or VHDL code into an intermediate form. This form is used for verification by numerous different model checkers. The second method is a predictable runtime kernel that both has a verifiable formal model and is implemented in hardware to achieve full predictability. Finally, a method is presented for transforming the intermediate form of the complete system into a hardware unit, the SafetyChip, which performs runtime control of the system. This SafetyChip can catch 'out-of-state' behaviors.
The quality of development of control software for autonomous vehicles is progressing rapidly, so that the control units in the field generally perform very reliably. Nevertheless, fatal misjudgments occasionally occur, putting people at risk, such as the recent accident in which a Tesla vehicle in Autopilot mode rammed a police vehicle. Since the object-recognition software, which is part of the control software, is based on machine learning (ML) algorithms at its core, one can distinguish a training phase from a deployment phase of the software. In this paper, we investigate to what extent the deployment phase has an impact on the robustness and reliability of the software, because, just like traditional software, software based on ML degrades with time. A widely known effect is so-called concept drift: the deployment conditions in the field have changed, and the software, based on the outdated training data, no longer responds adequately to the current field situation. In a previous research paper, we developed the SafeML approach with colleagues from the University of Hull, in which datasets are compared using statistical distance measures. In doing so, we found that, for simple benchmark data, the statistical distance correlates with the classification accuracy in the field. The contribution of this paper is to analyze the applicability of the SafeML approach to the complex, multidimensional data used in autonomous driving. In our analysis, we found that the SafeML approach can be used for this data as well. In practice, this would mean that a vehicle could constantly check itself and detect concept-drift situations early. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
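The drift-detection idea described above, comparing training-time and deployment-time data via a statistical distance, can be illustrated with a minimal sketch. The example below computes a two-sample Kolmogorov-Smirnov statistic on a single synthetic feature; the alarm threshold and the data are illustrative assumptions, not values or code from SafeML.

```python
import bisect
import random

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two samples."""
    sa, sb = sorted(sample_a), sorted(sample_b)
    return max(abs(bisect.bisect_right(sa, x) / len(sa)
                   - bisect.bisect_right(sb, x) / len(sb))
               for x in sa + sb)

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(2000)]        # training data
field_same = [random.gauss(0.0, 1.0) for _ in range(2000)]   # deployment, unchanged conditions
field_drift = [random.gauss(1.5, 1.0) for _ in range(2000)]  # deployment after a mean shift

THRESHOLD = 0.1  # hypothetical alarm level; in practice tuned per feature
print(ks_distance(train, field_same) < THRESHOLD)    # distance stays small: no alarm
print(ks_distance(train, field_drift) >= THRESHOLD)  # distance jumps: drift flagged
```

For the multidimensional data of the autonomous-driving setting, such a per-feature distance would have to be combined across features or replaced by a multivariate distance measure.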
The Architecture Analysis and Design Language (AADL) is a popular language for the architectural modeling and analysis of software-intensive systems in application domains such as automotive, avionics, railway, and medical systems. These systems often have stringent real-time requirements. This paper presents an extension to AADL's behavior model using time annotations in order to improve the evaluation of timing properties in AADL. The translational semantics of this extension is based on mappings to the Timed Abstract State Machines (TASM) language. As a result, timing analysis with timed simulation or timed model checking is possible. The translation is supported by an Eclipse-based plug-in, and the approach is validated with a case study of an industrial production cell system.
For a large and complex safety-critical system, where safety is ensured by a strict control over many properties, the safety information is structured into a safety case. As a small change to the system design may potentially affect a large section of the safety argumentation, a systematic method for evaluating the impact of system changes on the safety argumentation would be valuable. We have chosen two of the most common notations: the Goal Structuring Notation (GSN) for the safety argumentation and the Architecture Analysis and Design Language (AADL) for the system architecture model. In this paper, we address the problem of impact analysis by introducing the GSN and AADL Graph Evaluation (GAGE) method, which maps the safety argumentation structure against the system architecture and is also a prerequisite for successful composition of modular safety cases. In order to validate the method, we have implemented the GAGE tool, which supports the mapping between the GSN and AADL notations and highlights the impact of changes on the argumentation. © 2012 IEEE.
As system failure of mission-critical embedded systems may result in serious consequences, the development process should include verification techniques already at the architectural design stage, in order to provide evidence that the architecture fulfils its requirements. The Architecture Analysis and Design Language (AADL) is a language designed for modeling embedded systems, and its Behavior Annex defines the behavior of the system. However, even though it is an internationally used industry standard, AADL still lacks a formal semantics and is not executable, which limits the possibility to perform formal verification. In this paper, we introduce a formal analysis framework for a subset of AADL and its Behavior Annex, which includes the following: a denotational semantics, its implementation in Standard ML, and a graphical Eclipse-based tool encapsulating the implementation. We also show how to perform model checking of AADL properties defined in the Computation Tree Logic (CTL).
We present a denotational semantics for the Architecture Analysis and Design Language with Behavior Annex and the Computation Tree Logic. We also present tool support in the form of an OSATE plug-in, as well as the Production Cell case study.
The requirements and specification of a protocol for low-level communication between the run-time systems in a distributed Ada environment are presented. This allows an Ada system to be separated into software resources and run-time controllers. Calls to the local run-time system of a node, concerning task management, are transformed into remote calls to the controller, which schedules all tasks in the application. The calls to the run-time system, together with all messages, requests, and replies that are triggered as a consequence, are described. The controller will be implemented in hardware separate from the processors. Communication between processors and controllers is by means of high-speed (Gigabit) networks. In the proposed system, partitioning and distribution of Ada programs can fully utilize the inherent and strong type checking in Ada.
Distribution of a single Ada program on a local area network is accomplished by partitioning the run-time system into two parts. A central scheduling module is responsible for task management. Distributed run-time executives handle context switches and remote entry calls; however, all activities are supervised by the scheduler. The scheduler can be implemented in hardware in order to achieve high efficiency. A network based on optical fibers is necessary due to the high speed required for system calls. Asynchronous Transfer Mode is suggested as the protocol for the communication. We describe an implementation of the divided run-time system on an Ethernet network, using MC68030-based microcomputers as targets and an Ada program executing on a Rational host as the scheduler.
This paper describes the methodology used to add non-intrusive system-level fault tolerance to an electronic throttle controller. The original model of the throttle controller is a hybrid system created at a major automotive company. We use Gurkh as a framework within which we translate the hybrid model into a set of timed automata and perform analysis using formal methods. The first step of the translation process is to transform the hybrid model and its static schedule into Gurkh's preemptive tasking paradigm. Using the UPPAAL tool, we then check the correctness of the resulting set of timed automata by formally verifying reachability and timing properties. We also propose a method for quantifying the quality of the translation by estimating the amount of jitter thereby introduced. The final step is the implementation of a Monitoring Chip based on the formal system model. The chip provides non-intrusive "out-of-path" and timing error detection, which in turn allows for fault tolerance at the system level.
The ISO 26262 functional safety standard provides appropriate development processes, requirements, and safety integrity levels specific to the automotive domain. One crucial requirement is the creation of a safety case: a structured argument, inter-relating evidence and claims, needed to show that safety-critical systems are acceptably safe. Application of the standard to safety-critical systems installed in heavy trucks is currently not mandatory; however, this is likely to change by 2016. This paper describes the experience gathered by applying the standard to the Fuel Level Estimation and Display System, a subsystem that together with other subsystems plays a significant role in the overall system safety of heavy trucks manufactured by Scania. More specifically, the exploratory and laborious work related to the creation of a safety case in compliance with ISO 26262 in an inexperienced industrial setting is described, and the paper ends by presenting some lessons learned together with guidelines to facilitate the adoption of ISO 26262.
The five-year Master of Engineering Programme in Dependable Aerospace Systems, with dependability as its silver thread, started at Mälardalen University (MDH) in 2015. This paper presents selected ideas behind the creation of the programme, together with some preliminary analysis of current results and suggested enhancements for the programme’s fourth and fifth years.
This paper describes the methods we used to improve our Master of Engineering programme in Dependable Aerospace Systems together with industry. The target audience is mainly programme coordinators/managers who are in the process of developing their programmes for future demands. The two main questions we address are: Q1 – How do we ensure good progression within a programme so as to meet the industry's current and future needs in engineering skills? and Q2 – How do we ensure students become acquainted with research during their studies? The results indicate that our suggested method of analysing programme progression through subject abilities supports developers of engineering programmes, and that our approach to undergraduate research opportunities is a way forward for introducing students to research early.
An assurance strategy for new computing platforms in safety-critical avionics has to be flexible and take into account different types of commercial off-the-shelf (COTS) hardware technologies. Completely new COTS technologies are already being introduced and successfully used in other domains; good examples are heterogeneous platforms, hardware-based machine learning, and approximate computing. Current avionics certification guidance material cannot cope with the next generation of devices. We suggest using the generic assurance approach of the Overarching Properties (OPs) together with assurance cases to argue that COTS assurance objectives are met and to achieve the flexibility required for future computing platforms. We introduced a novel assurance case-based OP approach in [1] and refined the work into a framework in [2]. Within this framework, we are able to integrate COTS technology-specific assurance objectives using a five-step process. In this paper, we show through some representative examples of emerging computing platforms that our strategy is a way forward for new platforms in safety-critical avionics.
According to the aircraft standard ARP4754A, requirements should be carefully traced and validated. A systematic methodology for safety requirements elaboration (refinement/decomposition as well as allocation management) is lacking. To address this lack, an ARP-aligned and DOORS-implementable approach called RAP (Requirements Allocation Process) is proposed. RAP offers textual as well as graphical means for managing safety requirements. Besides supporting requirements decomposition and allocation, RAP also supports design decisions. The usefulness of RAP is illustrated by an example in which the approach is applied to a High Lift System.
The fast development of sensing devices and radios enables more powerful and flexible remote health monitoring systems. Considering the future vision of the Internet of Things (IoT), many requirements and challenges arise in the design and implementation of such systems. Bridging the gap between sensor nodes on the human body and the Internet becomes a challenging task in terms of reliable communications. Additionally, such systems will not only have to provide functionality, but also be highly secure. In this paper, we provide a survey of existing communication protocols and security issues related to pervasive health monitoring, describing their limitations, challenges, and possible solutions. We propose a generic protocol stack design as a first step toward handling interoperability in heterogeneous low-power wireless body area networks.
Certification of safety-critical avionics systems is an expensive and time-consuming activity due to the necessity of providing numerous deliverables. Some of these deliverables are process-related. To reduce the time and cost related to the provision of process-related deliverables, in this paper we propose to combine three approaches: the safety-oriented process line engineering approach, the process-based argumentation line approach, and the model-driven certification-oriented approach. More specifically, we focus on safety-related processes for the development of avionics systems, and we define how these three approaches are combined and which techniques, tools, and guidelines should be used to implement the resulting approach, called THRUST. Advantages and disadvantages of possible existing techniques and tools are discussed, and proposals as well as conceptual solutions for new techniques are sketched. Based on the sketched conceptual solutions, we then apply THRUST to speed up the creation of process-related deliverables in compliance with DO-178B/C.
Prescriptive process-based safety standards (e.g., EN 50128, DO-178B, etc.) incorporate best practices to be adopted when developing safety-critical systems or software. In some domains, compliance with the standards is required to obtain a certificate from the certification authorities. Thus, a well-defined interpretation of the processes to be adopted is essential for certification purposes. Currently, no satisfactory means allows process engineers and safety managers to model and exchange safety-oriented processes. To overcome this limitation, this paper proposes S-TunExSPEM, an extension of the Software & Systems Process Engineering Meta-Model 2.0 (SPEM 2.0) that allows users to specify safety-oriented processes for the development of safety-critical systems in the context of safety standards, according to the required safety level. Moreover, to enable exchange for simulation, monitoring, and execution purposes, S-TunExSPEM concepts are mapped onto XML Process Definition Language 2.2 (XPDL 2.2) concepts. Finally, a case study from the avionics domain illustrates the usage and effectiveness of the proposed extension.
Mission planning for multi-agent autonomous systems aims to generate feasible and optimal mission plans that satisfy the given requirements. In this article, we propose a mission-planning methodology that combines (i) a path-planning algorithm for synthesizing path plans that are safe in environments with complex road conditions, and (ii) a task-scheduling method for synthesizing task plans that schedule the tasks in the right and fastest order, taking into account the planned paths. The task-scheduling method is based on model checking, which provides means of automatically generating task execution orders that satisfy the requirements and ensure the correctness and efficiency of the plans by construction. We implement our approach in a tool named MALTA, which offers a user-friendly GUI for configuring mission requirements, a module for path planning, an integration with the model checker UPPAAL, and functions for the automatic generation of formal models and the parsing of their execution traces. Experiments with the tool demonstrate its applicability and performance in various configurations of an industrial case study of an autonomous quarry. We also show the adaptability of our tool by employing it on a special case of the industrial case study.
The problem of mission planning for multiple autonomous agents, including path planning and task scheduling, is often complex. Recent efforts aiming at solving this problem have explored ways of using formal methods to synthesize mission plans that satisfy various requirements. However, when the number of agents grows or requirements include real-time constraints, the complexity of the problem increases to the extent that current algorithmic formal methods cannot handle it. In this paper, we propose a novel approach called MCRL, which overcomes this shortcoming by integrating model checking and reinforcement learning techniques. Our approach employs timed automata and timed computation tree logic to describe the autonomous agents' behavior and requirements, and trains the model by a reinforcement learning algorithm, namely Q-learning, to populate a table used to restrict the state space of the model. MCRL combines the ability of model checking to synthesize verifiable mission plans with the exploration and exploitation capabilities of Q-learning to alleviate the state-space-explosion problem of exhaustive model checking. Our method provides a means not only to synthesize mission plans for autonomous systems whose complexity exceeds the scalability boundaries of exhaustive model checking, but also to analyze and verify synthesized mission plans in order to ensure given requirements. We evaluate the proposed method on various relevant scenarios involving autonomous agents, and also present and discuss comparisons with other methods and tools.
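The Q-learning ingredient described above, training a table of state-action values by interacting with a model of the environment, can be sketched on a toy example. The grid world, rewards, and hyperparameters below are illustrative assumptions for exposition only, not the actual MCRL models or tooling.

```python
import random

# Toy 3x3 grid world: states are (row, col); the agent must reach GOAL.
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
GOAL = (2, 2)

def step(state, action):
    """Deterministic transition with reward: -1 per move, +10 at the goal."""
    r, c = state
    dr, dc = ACTIONS[action]
    ns = (max(0, min(2, r + dr)), max(0, min(2, c + dc)))
    return ns, (10.0 if ns == GOAL else -1.0)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(1)
    states = [(r, c) for r in range(3) for c in range(3)]
    q = {(s, a): 0.0 for s in states for a in ACTIONS}
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            if random.random() < eps:
                a = random.choice(list(ACTIONS))
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            ns, reward = step(s, a)
            best_next = max(q[(ns, b)] for b in ACTIONS)
            # Standard Q-learning update rule.
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = ns
    return q

def greedy_path(q, start=(0, 0), limit=20):
    """Roll out the greedy policy induced by the learned table."""
    s, path = start, [start]
    while s != GOAL and len(path) < limit:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
        s, _ = step(s, a)
        path.append(s)
    return path

q_table = train()
print(greedy_path(q_table))  # the learned policy should end at GOAL
```

In MCRL, a table of this kind is used not as the final plan but to prune the timed-automata model's state space before exhaustive verification.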
Mission planning is one of the crucial problems in the design of autonomous Multi-Agent Systems (MAS), requiring the agents to calculate collision-free paths and efficiently schedule their tasks. The complexity of this problem increases greatly when the number of agents grows, and when timing requirements and the stochastic behavior of agents are considered. In this paper, we propose a novel method that integrates statistical model checking and reinforcement learning for mission planning in such a context. Additionally, in order to synthesise mission plans that are statistically optimal, we employ hybrid automata to model the continuous movement of agents and moving obstacles, and estimate the possible delay in the agents' travelling time when facing unpredictable obstacles. We show the results of synthesising mission plans, analyze bottlenecks of the mission plans, and re-plan when pedestrians suddenly appear, by modeling and verifying a real industrial use case in UPPAAL SMC.
The problem of synthesizing mission plans for multiple autonomous agents, including path planning and task scheduling, is often complex. Employing model checking alone to solve the problem might not be feasible, especially when the number of agents grows or requirements include real-time constraints. In this paper, we propose a novel approach called MCRL that integrates model checking and reinforcement learning to overcome this limitation. Our approach employs timed automata and timed computation tree logic to describe the autonomous agents' behavior and requirements, and trains the model by a reinforcement learning algorithm, namely Q-learning, to populate a table used to restrict the state space of the model. Our method provides a means not only to synthesize mission plans for multi-agent systems whose complexity exceeds the scalability boundaries of exhaustive model checking, but also to analyze and verify synthesized mission plans to ensure given requirements. We evaluate the proposed method on various scenarios involving autonomous agents, and present comparisons with two similar approaches, TAMAA and UPPAAL STRATEGO. The evaluation shows that MCRL performs better when the number of agents is greater than three.
Path planning and task scheduling are two challenging problems in the design of multiple autonomous agents. Both problems can be solved by the use of exhaustive search techniques such as model checking and algorithmic game theory. However, model checking suffers from the infamous state-space explosion problem, which makes it inefficient at solving the problems when the number of agents is large, as is often the case in realistic scenarios. In this paper, we propose a new version of our novel approach called MCRL that integrates model checking and reinforcement learning to alleviate this scalability limitation. We apply this new technique to synthesize path planning and task scheduling strategies for multiple autonomous agents. Our method is capable of handling a larger number of agents compared to what can feasibly be handled by the model-checking technique alone. Additionally, MCRL also guarantees the correctness of the synthesis results via post-verification. The method is implemented in UPPAAL STRATEGO and leverages our tool MALTA for model generation, such that one can use the method with less effort in model construction and higher learning efficiency than with the original MCRL. We demonstrate the feasibility of our approach on an industrial case study, an autonomous quarry, and discuss the strengths and weaknesses of the methods.
Planning is a critical function of multi-agent autonomous systems, which includes path finding and task scheduling. Exhaustive search-based methods such as model checking and algorithmic game theory can solve simple instances of multi-agent planning. However, these methods suffer from state-space explosion when the number of agents is large. Learning-based methods can alleviate this problem but lack a guarantee of the correctness of the results. In this paper, we introduce MoCReL, a new version of our previously proposed method that combines model checking with reinforcement learning in solving the planning problem. The approach takes advantage of reinforcement learning to synthesize path plans and task schedules for large numbers of autonomous agents, and of model checking to verify the correctness of the synthesized strategies. Further, MoCReL can compress large strategies into smaller ones that have down to 0.05% of their original sizes, while preserving their correctness, as we show in this paper. MoCReL is integrated into a new version of UPPAAL Stratego that supports calling external libraries when running learning and verification of timed games models.
In an attempt to increase productivity and worker safety, the construction industry is moving towards autonomous construction sites, where various construction machines operate without human intervention. In order to perform their tasks autonomously, the machines are equipped with different features, such as position localization, human and obstacle detection, collision avoidance, etc. Such systems are safety critical and should operate autonomously with very high dependability (e.g., by meeting task deadlines, avoiding (fatal) accidents at all costs, etc.). An Autonomous Wheel Loader is a machine that transports materials within the construction site without a human in the cab. To check the dependability of the loader, in this paper we provide a timed automata description of the vehicle's control system, including the abstracted path planning and collision avoidance algorithms used to navigate the loader, and we model check the encoding in UPPAAL against various functional, timing, and safety requirements. The complex nature of the navigation algorithms makes the loader's abstract modeling and the verification very challenging. Our work shows that exhaustive verification techniques can be applied early in the development of autonomous systems, to enable finding potential design errors that would incur increased costs if discovered later.
Autonomous vehicles rely heavily on intelligent algorithms for path planning and collision avoidance, and their functionality and dependability could be ensured through formal verification. To facilitate the verification, it is beneficial to decouple the static high-level planning from the dynamic functions like collision avoidance. In this paper, we propose a conceptual two-layer framework for verifying autonomous vehicles, which consists of a static layer and a dynamic layer. We focus concretely on modeling and verifying the dynamic layer using hybrid automata and UPPAAL SMC, where a continuous movement of the vehicle as well as collision avoidance via a dipole flow field algorithm are considered. This framework achieves decoupling by separating the verification of the vehicle's autonomous path planning from that of the vehicle autonomous operation in a continuous dynamic environment. To simplify the modeling process, we propose a pattern-based design method, where patterns are expressed as hybrid automata. We demonstrate the applicability of the dynamic layer of our framework on an industrial prototype of an autonomous wheel loader.
Autonomous vehicles are expected to automatically avoid static and dynamic obstacles along their way. However, most collision-avoidance functionality is not formally verified, which hinders ensuring such systems' safety. In this paper, we introduce formal definitions of the vehicle's movement and trajectory, based on hybrid transition systems. Since algorithmic verification of hybrid systems is undecidable, we reduce the verification of nonlinear vehicle behavior to verifying discrete-time overapproximations of that behavior. Using this result, we propose a generic approach to formally verify autonomous vehicles with nonlinear behavior against reach-avoid requirements. The approach provides a Uppaal timed-automata model of vehicle behavior and uses Uppaal STRATEGO to verify the model with user-programmed libraries of collision-avoidance algorithms. Our experiments show the approach's effectiveness in discovering bugs in a state-of-the-art version of a selected collision-avoidance algorithm, as well as in proving the absence of bugs in the algorithm's improved version.
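The key reduction in the abstract above is from continuous behavior to a discrete-time overapproximation. A minimal sketch of that soundness argument, under the assumption that the vehicle's speed is bounded by `v_max`: between consecutive samples the continuous position can deviate at most `v_max * dt` from a sample, so bloating each sample by that margin and checking clearance gives a conservative avoidance check. The trajectory, obstacle, and parameters below are hypothetical, not taken from the paper's case study.

```python
import math

def simulate(x0, v, dt, steps):
    """Discrete-time samples of a vehicle moving with constant velocity v."""
    xs = [x0]
    for _ in range(steps):
        x0 = (x0[0] + v[0] * dt, x0[1] + v[1] * dt)
        xs.append(x0)
    return xs

def avoids(samples, obstacle, radius, v_max, dt):
    """Sound avoidance check on the sampled trajectory.

    Between consecutive samples the continuous position stays within
    v_max * dt of some sample, so each sample is bloated by that margin.
    If every bloated sample clears the obstacle, so does the continuous
    trajectory (the overapproximation argument)."""
    margin = v_max * dt
    return all(math.hypot(p[0] - obstacle[0], p[1] - obstacle[1])
               > radius + margin for p in samples)

# Straight run along y = 0 past a disc obstacle of radius 0.5.
traj = simulate((0.0, 0.0), (1.0, 0.0), dt=0.1, steps=50)
safe = avoids(traj, obstacle=(2.0, 1.0), radius=0.5, v_max=1.0, dt=0.1)
unsafe = avoids(traj, obstacle=(2.0, 0.0), radius=0.5, v_max=1.0, dt=0.1)
```

Note the asymmetry of the check: a `True` answer is a proof of avoidance for the continuous trajectory, while a `False` answer only flags a *potential* violation, since the bloating is conservative.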
This is a textbook developed for use in the Master Programme Module E-M.6 "Real-Time Systems" as part of the postgraduate distance studies organized by Fraunhofer IESE and the Distance and International Studies Center at the Technical University of Kaiserslautern, Germany.
Presentation of the achievements and activities within the PROGRESS national strategic research centre, 2006–2008.
Functional safety of a system is the part of its overall safety that depends on the system operating correctly in response to its inputs. Safety is defined as the absence of unacceptable/unreasonable risk by functional safety standards, which enforce safety requirements in each phase of the development process of safety-critical software and hardware systems. Acceptability of risks is judged within a framework of analysis with contextual and cultural aspects by individuals who may introduce subjectivity and misconceptions into the assessment. While functional safety standards elaborate extensively on the avoidance of unreasonable risk in the development of safety-critical software and hardware systems, they say little about avoiding unreasonable judgments of risk. Through studies of common fallacies in risk perception and ethics, we present a moral-psychological analysis of functional safety standards and propose plausible improvements to the risk-related decision-making processes involved, with a focus on the notion of an acceptable residual risk. As a functional safety reference model, we use the functional safety standard ISO 26262, which addresses potential hazards caused by malfunctions of software and hardware systems within road vehicles and defines safety measures required to achieve an acceptable level of safety. The analysis points out the critical importance of a robust safety culture with developed countermeasures to the common fallacies in risk perception, which are not addressed by contemporary functional safety standards. We argue that functional safety standards should be complemented with the analysis of potential hazards caused by fallacies in risk perception, their countermeasures, and the requirement that residual risks must be explicated, motivated, and accompanied by a plan for their continuous reduction.
This approach becomes especially important in contemporary autonomous vehicles, where computational control is increasingly exercised by ever more intelligent software applications.
Design artifacts of embedded systems are subjected to a number of modifications during the development process. Verified artifacts that are subsequently modified must necessarily be re-verified to ensure that no faults have been introduced by the modification. We collectively refer to this type of verification as regression verification. In this paper, we contribute a technique for selective regression verification of embedded systems modeled in the Architecture Analysis and Design Language (AADL). The technique can be used with any AADL-based verification technique to efficiently perform regression verification by selecting for re-execution only those verification sequences that cover parts affected by the modification. This allows unnecessary re-verification, and thereby unnecessary costs, to be avoided. The selection is based on the concept of specification slicing through system dependence graphs (SDGs), such that the effect of a modification can be identified.
Dependable software-intensive systems, such as embedded systems for avionics and vehicles, are often developed under severe quality, schedule, and budget constraints. As the size and complexity of these systems increase dramatically, the architecture design phase becomes ever more significant in meeting these constraints. The use of Architecture Description Languages (ADLs) provides an important basis for mutual communication, analysis, and evaluation activities. Hence, selecting an ADL suitable for such activities is of great importance. In this paper we compare and investigate two ADLs, AADL and EAST-ADL. The level of support provided to developers of dependable software-intensive systems is compared, and several critical areas of the ADLs are highlighted. Results of using an extended comparison framework show many similarities, but also one clear distinction between the languages regarding the perspectives and levels of abstraction at which systems are modeled.
Architectural engineering of embedded systems comprehensively affects both the development processes and the abilities of the systems. Verification of architectural engineering is consequently essential in the development of safety- and mission-critical embedded systems to avoid costly and hazardous faults. In this paper, we present the Architecture Quality Assurance Tool (AQAT), an application program developed to provide a holistic, formal, and automatic verification process for architectural engineering of critical embedded systems. AQAT includes architectural model checking, model-based testing, and selective regression verification features to effectively and efficiently detect design faults, implementation faults, and faults created by maintenance modifications. Furthermore, the tool includes a feature that analyzes architectural dependencies, which, in addition to providing essential information for impact analyses of architectural design changes, may be used for hazard analysis, such as the identification of potential error propagations, common-cause failures, and single-point failures. Overviews of both the graphical user interface and the back-end processes of AQAT are presented with a sensor-to-actuator system example.
Architecture engineering is essential to achieve dependability of critical embedded systems and affects large parts of the system life cycle. There is consequently little room for faults, which may cause substantial costs and devastating harm. Verification in architecture engineering should therefore be holistically and systematically managed in the development of critical embedded systems, from requirements analysis and design to implementation and maintenance. In this paper, we address this problem by presenting AQAF: an Architecture Quality Assurance Framework for critical embedded systems modeled in the Architecture Analysis and Design Language (AADL). The framework provides a holistic set of verification techniques with a common formalism and semantic domain, architecture flow graphs and timed automata, enabling completely formal and automated verification processes covering virtually the entire life cycle. The effectiveness and efficiency of the framework are validated in a case study comprising a safety-critical train control system.
Design artifacts of dependable embedded systems, and the systems themselves, are subjected to a number of modifications during the development process. Verified artifacts that are subsequently modified must necessarily be re-verified to ensure that no faults have been introduced by the modification. We collectively refer to this type of verification as regression verification. Studies show that regression testing alone consumes a large share of the total development cost, likely as a result of unnecessary verification of parts that are not affected by the modification. In this paper, we propose an architecture-based selective regression verification technique for the development process of dependable embedded systems specified in the Architecture Analysis and Design Language (AADL). The selection of necessary regression verification sequences is based on the concept of specification slicing through System Dependence Graphs (SDGs). This allows unnecessary re-verification, and thereby unnecessary costs, to be avoided.
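The selection step described above rests on a simple graph idea: slice the dependence graph forward from the modified elements, then re-run only the verification sequences that touch the slice. A minimal sketch, with a hypothetical toy dependence graph and sequence set (the actual SDG construction and AADL mapping in the papers are far richer):

```python
from collections import deque

# Toy system dependence graph: edges point from an element to the
# elements that depend on it (forward dependence). Names are invented.
sdg = {
    "sensor":     ["filter"],
    "filter":     ["controller"],
    "controller": ["actuator", "logger"],
    "actuator":   [],
    "logger":     [],
    "watchdog":   ["logger"],
}

def forward_slice(graph, modified):
    """All elements transitively affected by the modified elements (BFS)."""
    affected, frontier = set(modified), deque(modified)
    while frontier:
        node = frontier.popleft()
        for succ in graph.get(node, ()):
            if succ not in affected:
                affected.add(succ)
                frontier.append(succ)
    return affected

def select_sequences(sequences, affected):
    """Keep only verification sequences that exercise an affected element."""
    return [s for s in sequences if affected.intersection(s)]

sequences = [
    ["sensor", "filter", "controller", "actuator"],  # main control chain
    ["watchdog", "logger"],                          # monitoring chain
]
# Only the watchdog was modified: the main control chain is unaffected
# and its (expensive) verification sequence need not be re-executed.
affected = forward_slice(sdg, {"watchdog"})
rerun = select_sequences(sequences, affected)
```

The saving comes from sequences excluded by the slice; in the example above, modifying only `watchdog` leaves the entire main control chain out of the re-verification set.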
The Architecture Quality Assurance Framework (AQAF) is a theory developed to provide a holistic and formal verification process for architectural engineering of critical embedded systems. AQAF encompasses integrated architectural model checking, model-based testing, and selective regression verification techniques to achieve this goal. The Architecture Quality Assurance Tool (AQAT) implements the theory of AQAF and enables automated application of the framework. In this paper, we present an evaluation of AQAT and the underlying AQAF theory by means of an industrial case study, where resource efficiency and fault detection effectiveness are the targeted properties of evaluation. The method of fault injection is utilized to guarantee coverage of fault types and to generate a data sample size adequate for statistical analysis. We discovered important areas of improvement in this study, which required further development of the framework before satisfactory results could be achieved. The final results present a 100% fault detection rate at the design level, a 98.5% fault detection rate at the implementation level, and an average increased efficiency of 6.4% with the aid of the selective regression verification technique.
The Architecture Analysis and Design Language (AADL) is used to represent architecture design decisions of safety-critical and real-time embedded systems. Due to the far-reaching effects these decisions have on the development process, an architecture design fault is likely to have a significant deteriorating impact on the complete process. Automated fault avoidance for architecture design decisions therefore has the potential to significantly reduce the cost of development while increasing the dependability of the end product. To provide means for automated fault avoidance when developing systems specified in AADL, a formal verification technique has been developed to ensure completeness and consistency of an AADL specification as well as its conformity with the end product. The approach requires the semantics of AADL to be formalized and implemented. We use the methodology of semantic anchoring to contribute a formal and implemented semantics of a subset of AADL through a set of transformation rules to timed automata constructs. In addition, the verification technique, including the transformation rules, is validated using a case study of a safety-critical fuel-level system developed by a major vehicle manufacturer.
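To give a feel for what a transformation rule from AADL to timed automata might look like, here is a hedged sketch of one such rule: mapping a periodic AADL thread to a timed automaton with a clock, a dispatch edge fired at each period boundary, and a deadline invariant on the executing location. The rule shape, location names, and encoding are illustrative assumptions, not the paper's actual rule set.

```python
def periodic_thread_to_ta(name, period, deadline):
    """Illustrative rule: a periodic AADL thread becomes a timed
    automaton with one clock `c`, dispatched every `period` time
    units and required to complete within `deadline`."""
    assert deadline <= period, "deadline must not exceed the period"
    clock = "c"
    return {
        "name": name,
        "clocks": [clock],
        "locations": {
            "Waiting":   {"invariant": f"{clock} <= {period}"},
            "Executing": {"invariant": f"{clock} <= {deadline}"},
        },
        "initial": "Waiting",
        "edges": [
            # Dispatch: fires exactly at the period boundary,
            # resetting the clock to start the new frame.
            {"from": "Waiting", "to": "Executing",
             "guard": f"{clock} == {period}", "reset": [clock]},
            # Completion: must occur before the deadline expires;
            # the clock keeps running to measure the current frame.
            {"from": "Executing", "to": "Waiting",
             "guard": f"{clock} <= {deadline}", "reset": []},
        ],
    }

# A hypothetical 20 ms control thread with a 5 ms deadline.
ta = periodic_thread_to_ta("speed_ctrl", period=20, deadline=5)
```

In a real semantic-anchoring pipeline, a rule like this would emit the target model checker's input format (e.g., UPPAAL XML) rather than a dictionary, and many more AADL constructs (ports, connections, modes) would each get their own rule.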